SQL language

This page describes the SQL dialect (Calcite) recognized by the default LeanXcale SQL parser. The LeanXcale query engine is a fork of the Apache Calcite project, so the grammar has a lot in common with Calcite's, though there have been a few major changes regarding transaction management, DML extensions, and optimization for LeanXcale's KiVi datastore.

1. Grammar

SQL grammar in BNF-like form.

statement:
      setStatement
  |   resetStatement
  |   explain
  |   describe
  |   insert
  |   update
  |   merge
  |   delete
  |   query

statementList:
      statement [ ';' statement ]* [ ';' ]

explain:
      EXPLAIN PLAN
      [ WITH TYPE | WITH IMPLEMENTATION | WITHOUT IMPLEMENTATION ]
      [ EXCLUDING ATTRIBUTES | INCLUDING [ ALL ] ATTRIBUTES ]
      [ AS JSON | AS XML ]
      FOR ( query | insert | update | delete )

insert:
      ( INSERT | UPSERT ) INTO tablePrimary
      [ '(' column [, column ]* ')' ]
      query

update:
      UPDATE tablePrimary
      SET assign [, assign ]*
      [ WHERE booleanExpression ]

assign:
      identifier '=' expression

delete:
      DELETE FROM tablePrimary [ [ AS ] alias ]
      [ WHERE booleanExpression ]

query:
      values
  |   WITH withItem [ , withItem ]* query
  |   {
          select
      |   selectWithoutFrom
      |   query UNION [ ALL | DISTINCT ] query
      |   query EXCEPT [ ALL | DISTINCT ] query
      |   query INTERSECT [ ALL | DISTINCT ] query
      }
      [ ORDER BY orderItem [, orderItem ]* ]
      [ LIMIT { count | ALL } ]
      [ OFFSET start { ROW | ROWS } ]
      [ FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } ONLY ]

withItem:
      name
      [ '(' column [, column ]* ')' ]
      AS '(' query ')'

orderItem:
      expression [ ASC | DESC ] [ NULLS FIRST | NULLS LAST ]

select:
      SELECT [ ALL | DISTINCT ]
          { * | projectItem [, projectItem ]* }
      FROM tableExpression
      [ WHERE booleanExpression ]
      [ GROUP BY { groupItem [, groupItem ]* } ]
      [ HAVING booleanExpression ]
      [ WINDOW windowName AS windowSpec [, windowName AS windowSpec ]* ]

selectWithoutFrom:
      SELECT [ ALL | DISTINCT ]
          { * | projectItem [, projectItem ]* }

projectItem:
      expression [ [ AS ] columnAlias ]
  |   tableAlias . *

tableExpression:
      tableReference [, tableReference ]*
  |   tableExpression [ NATURAL ] [ ( LEFT | RIGHT | FULL ) [ OUTER ] ] JOIN tableExpression [ joinCondition ]
  |   tableExpression CROSS JOIN tableExpression

joinCondition:
      ON booleanExpression
  |   USING '(' column [, column ]* ')'

tableReference:
      tablePrimary
      [ matchRecognize ]
      [ [ AS ] alias [ '(' columnAlias [, columnAlias ]* ')' ] ]

tablePrimary:
      [ [ catalogName . ] schemaName . ] tableName
  |   '(' TABLE [ [ catalogName . ] schemaName . ] tableName ')'
  |   [ LATERAL ] '(' query ')'
  |   UNNEST '(' expression ')' [ WITH ORDINALITY ]
  |   [ LATERAL ] TABLE '(' [ SPECIFIC ] functionName '(' expression [, expression ]* ')' ')'

columnDecl:
      column type [ NOT NULL ]

optionValue:
      stringLiteral
  |   numericLiteral

values:
      VALUES expression [, expression ]*

groupItem:
      expression
  |   '(' ')'
  |   '(' expression [, expression ]* ')'
  |   CUBE '(' expression [, expression ]* ')'
  |   ROLLUP '(' expression [, expression ]* ')'
  |   GROUPING SETS '(' groupItem [, groupItem ]* ')'

window:
      windowName
  |   windowSpec

windowSpec:
      '('
      [ windowName ]
      [ ORDER BY orderItem [, orderItem ]* ]
      [ PARTITION BY expression [, expression ]* ]
      [
          RANGE numericOrIntervalExpression { PRECEDING | FOLLOWING }
      |   ROWS numericExpression { PRECEDING | FOLLOWING }
      ]
      ')'

In insert, if the INSERT or UPSERT statement does not specify a list of target columns, the query must have the same number of columns as the target table and the order of the fields in the VALUES clause must be the same as the order of the fields in the table.
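For example, for a hypothetical table T with exactly two columns F1 and F2:

INSERT INTO T (F1, F2) VALUES (1, 10)

UPSERT INTO T VALUES (1, 10)

The first statement names its target columns; the second gives no column list, so the VALUES row must supply values for both columns of T, in table order.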

In orderItem, if expression is a positive integer n, it denotes the nth item in the SELECT clause. As an example:

SELECT F1, F2, F3, F4 FROM T ORDER BY 1,2

This orders by F1 and then F2, which are the first and second items in the SELECT clause.

In query, count and start may each be either an unsigned integer literal or a dynamic parameter whose value is an integer.

An aggregate query is a query that contains a GROUP BY or a HAVING clause, or aggregate functions in the SELECT clause. In the SELECT, HAVING and ORDER BY clauses of an aggregate query, all expressions must be constant within the current group (that is, grouping constants as defined by the GROUP BY clause, or constants), or aggregate functions, or a combination of constants and aggregate functions. Aggregate and grouping functions may only appear in an aggregate query, and only in a SELECT, HAVING or ORDER BY clause.
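As an illustrative example of an aggregate query (table and column names are hypothetical):

SELECT F1, COUNT(*), MAX(F2) FROM T GROUP BY F1 HAVING COUNT(*) > 10

Here F1 is a grouping constant and COUNT(*) and MAX(F2) are aggregate functions, so every expression in the SELECT and HAVING clauses is valid.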

A scalar sub-query is a sub-query used as an expression. If the sub-query returns no rows, the value is NULL; if it returns more than one row, it is an error.

IN, EXISTS and scalar sub-queries can occur in any place where an expression can occur (such as the SELECT clause, WHERE clause, ON clause of a JOIN, or as an argument to an aggregate function).

An IN, EXISTS or scalar sub-query may be correlated; that is, it may refer to tables in the FROM clause of an enclosing query.
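For example, the EXISTS sub-query below is correlated: it refers to the alias D from the FROM clause of the enclosing query (table and column names are illustrative):

SELECT * FROM DEPT D WHERE EXISTS (SELECT 1 FROM EMP E WHERE E.DEPTNO = D.DEPTNO)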

selectWithoutFrom is equivalent to VALUES, but is not standard SQL. An example follows:

SELECT 1, CURRENT_TIMESTAMP;

This just yields the value 1 and the current timestamp of the database server.

MINUS is not supported. EXCEPT is supported and is completely equivalent to MINUS.

“LIMIT start, count” is not supported, but you can use the equivalent “LIMIT count OFFSET start”.
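For example, to skip the first 10 rows and return the next 5 (T is a hypothetical table; ordering first makes the result deterministic):

SELECT F1 FROM T ORDER BY F1 LIMIT 5 OFFSET 10

SELECT F1 FROM T ORDER BY F1 OFFSET 10 ROWS FETCH NEXT 5 ROWS ONLY

Both queries are equivalent.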

2. Keywords

The following is a list of SQL keywords. Reserved keywords are bold.

A, ABS, ABSENT, ABSOLUTE, ACTION, ADA, ADD, ADMIN, AFTER, ALL, ALLOCATE, ALLOW, ALTER, ALWAYS, AND, ANY, APPLY, ARE, ARRAY, ARRAY_MAX_CARDINALITY, AS, ASC, ASENSITIVE, ASSERTION, ASSIGNMENT, ASYMMETRIC, AT, ATOMIC, ATTRIBUTE, ATTRIBUTES, AUTHORIZATION, AVG, BEFORE, BEGIN, BEGIN_FRAME, BEGIN_PARTITION, BERNOULLI, BETWEEN, BIGINT, BINARY, BIT, BLOB, BOOLEAN, BOTH, BREADTH, BY, C, CALL, CALLED, CARDINALITY, CASCADE, CASCADED, CASE, CAST, CATALOG, CATALOG_NAME, CEIL, CEILING, CENTURY, CHAIN, CHAR, CHARACTER, CHARACTERISTICS, CHARACTERS, CHARACTER_LENGTH, CHARACTER_SET_CATALOG, CHARACTER_SET_NAME, CHARACTER_SET_SCHEMA, CHAR_LENGTH, CHECK, CLASSIFIER, CLASS_ORIGIN, CLOB, CLOSE, COALESCE, COBOL, COLLATE, COLLATION, COLLATION_CATALOG, COLLATION_NAME, COLLATION_SCHEMA, COLLECT, COLUMN, COLUMN_NAME, COMMAND_FUNCTION, COMMAND_FUNCTION_CODE, COMMIT, COMMITTED, CONDITION, CONDITIONAL, CONDITION_NUMBER, CONNECT, CONNECTION, CONNECTION_NAME, CONSTRAINT, CONSTRAINTS, CONSTRAINT_CATALOG, CONSTRAINT_NAME, CONSTRAINT_SCHEMA, CONSTRUCTOR, CONTAINS, CONTINUE, CONVERT, CORR, CORRESPONDING, COUNT, COVAR_POP, COVAR_SAMP, CREATE, CROSS, CUBE, CUME_DIST, CURRENT, CURRENT_CATALOG, CURRENT_DATE, CURRENT_DEFAULT_TRANSFORM_GROUP, CURRENT_PATH, CURRENT_ROLE, CURRENT_ROW, CURRENT_SCHEMA, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_TRANSFORM_GROUP_FOR_TYPE, CURRENT_USER, CURSOR, CURSOR_NAME, CYCLE, DATA, DATABASE, DATE, DATETIME_INTERVAL_CODE, DATETIME_INTERVAL_PRECISION, DAY, DAYS, DEALLOCATE, DEC, DECADE, DECIMAL, DECLARE, DEFAULT, DEFAULTS, DEFERRABLE, DEFERRED, DEFINE, DEFINED, DEFINER, DEGREE, DELETE, DENSE_RANK, DEPTH, DEREF, DERIVED, DESC, DESCRIBE, DESCRIPTION, DESCRIPTOR, DETERMINISTIC, DIAGNOSTICS, DISALLOW, DISCONNECT, DISPATCH, DISTINCT, DOMAIN, DOUBLE, DOW, DOY, DROP, DYNAMIC, DYNAMIC_FUNCTION, DYNAMIC_FUNCTION_CODE, EACH, ELEMENT, ELSE, EMPTY, ENCODING, END, END-EXEC, END_FRAME, END_PARTITION, EPOCH, EQUALS, ERROR, ESCAPE, EVERY, EXCEPT, EXCEPTION, EXCLUDE, EXCLUDING, EXEC, EXECUTE, 
EXISTS, EXP, EXPLAIN, EXTEND, EXTERNAL, EXTRACT, FALSE, FETCH, FILTER, FINAL, FIRST, FIRST_VALUE, FLOAT, FLOOR, FOLLOWING, FOR, FOREIGN, FORMAT, FORTRAN, FOUND, FRAC_SECOND, FRAME_ROW, FREE, FROM, FULL, FUNCTION, FUSION, G, GENERAL, GENERATED, GEOMETRY, GET, GLOBAL, GO, GOTO, GRANT, GRANTED, GROUP, GROUPING, GROUPS, HAVING, HIERARCHY, HOLD, HOUR, HOURS, IDENTITY, IGNORE, IMMEDIATE, IMMEDIATELY, IMPLEMENTATION, IMPORT, IN, INCLUDING, INCREMENT, INDICATOR, INITIAL, INITIALLY, INNER, INOUT, INPUT, INSENSITIVE, INSERT, INSTANCE, INSTANTIABLE, INT, INTEGER, INTERSECT, INTERSECTION, INTERVAL, INTO, INVOKER, IS, ISODOW, ISOLATION, ISOYEAR, JAVA, JOIN, JSON, JSON_ARRAY, JSON_ARRAYAGG, JSON_EXISTS, JSON_OBJECT, JSON_OBJECTAGG, JSON_QUERY, JSON_VALUE, K, KEY, KEY_MEMBER, KEY_TYPE, LABEL, LAG, LANGUAGE, LARGE, LAST, LAST_VALUE, LATERAL, LEAD, LEADING, LEFT, LENGTH, LEVEL, LIBRARY, LIKE, LIKE_REGEX, LIMIT, LN, LOCAL, LOCALTIME, LOCALTIMESTAMP, LOCATOR, LOWER, M, MAP, MATCH, MATCHED, MATCHES, MATCH_NUMBER, MATCH_RECOGNIZE, MAX, MAXVALUE, MEASURES, MEMBER, MERGE, MESSAGE_LENGTH, MESSAGE_OCTET_LENGTH, MESSAGE_TEXT, METHOD, MICROSECOND, MILLENNIUM, MILLISECOND, MIN, MINUS, MINUTE, MINUTES, MINVALUE, MOD, MODIFIES, MODULE, MONTH, MONTHS, MORE, MULTISET, MUMPS, NAME, NAMES, NANOSECOND, NATIONAL, NATURAL, NCHAR, NCLOB, NESTING, NEW, NEXT, NO, NONE, NORMALIZE, NORMALIZED, NOT, NTH_VALUE, NTILE, NULL, NULLABLE, NULLIF, NULLS, NUMBER, NUMERIC, OBJECT, OCCURRENCES_REGEX, OCTETS, OCTET_LENGTH, OF, OFFSET, OLD, OMIT, ON, ONE, ONLY, OPEN, OPTION, OPTIONS, OR, ORDER, ORDERING, ORDINALITY, OTHERS, OUT, OUTER, OUTPUT, OVER, OVERLAPS, OVERLAY, OVERRIDING, PAD, PARAMETER, PARAMETER_MODE, PARAMETER_NAME, PARAMETER_ORDINAL_POSITION, PARAMETER_SPECIFIC_CATALOG, PARAMETER_SPECIFIC_NAME, PARAMETER_SPECIFIC_SCHEMA, PARTIAL, PARTITION, PASCAL, PASSING, PASSTHROUGH, PAST, PATH, PATTERN, PER, PERCENT, PERCENTILE_CONT, PERCENTILE_DISC, PERCENT_RANK, PERIOD, PERMUTE, PLACING, PLAN, PLI, PORTION, POSITION, 
POSITION_REGEX, POWER, PRECEDES, PRECEDING, PRECISION, PREPARE, PRESERVE, PREV, PRIMARY, PRIOR, PRIVILEGES, PROCEDURE, PUBLIC, QUARTER, RANGE, RANK, READ, READS, REAL, RECURSIVE, REF, REFERENCES, REFERENCING, REGR_AVGX, REGR_AVGY, REGR_COUNT, REGR_INTERCEPT, REGR_R2, REGR_SLOPE, REGR_SXX, REGR_SXY, REGR_SYY, RELATIVE, RELEASE, REPEATABLE, REPLACE, RESET, RESPECT, RESTART, RESTRICT, RESULT, RETURN, RETURNED_CARDINALITY, RETURNED_LENGTH, RETURNED_OCTET_LENGTH, RETURNED_SQLSTATE, RETURNING, RETURNS, REVOKE, RIGHT, ROLE, ROLLBACK, ROLLUP, ROUTINE, ROUTINE_CATALOG, ROUTINE_NAME, ROUTINE_SCHEMA, ROW, ROWS, ROW_COUNT, ROW_NUMBER, RUNNING, SAVEPOINT, SCALAR, SCALE, SCHEMA, SCHEMA_NAME, SCOPE, SCOPE_CATALOGS, SCOPE_NAME, SCOPE_SCHEMA, SCROLL, SEARCH, SECOND, SECONDS, SECTION, SECURITY, SEEK, SELECT, SELECT_FOR_UPDATE, SELF, SENSITIVE, SEQUENCE, SERIALIZABLE, SERVER, SERVER_NAME, SESSION, SESSION_USER, SET, SETS, SHOW, SIMILAR, SIMPLE, SIZE, SKIP, SMALLINT, SOME, SOURCE, SPACE, SPECIFIC, SPECIFICTYPE, SPECIFIC_NAME, SQL, SQLEXCEPTION, SQLSTATE, SQLWARNING, SQL_BIGINT, SQL_BINARY, SQL_BIT, SQL_BLOB, SQL_BOOLEAN, SQL_CHAR, SQL_CLOB, SQL_DATE, SQL_DECIMAL, SQL_DOUBLE, SQL_FLOAT, SQL_INTEGER, SQL_INTERVAL_DAY, SQL_INTERVAL_DAY_TO_HOUR, SQL_INTERVAL_DAY_TO_MINUTE, SQL_INTERVAL_DAY_TO_SECOND, SQL_INTERVAL_HOUR, SQL_INTERVAL_HOUR_TO_MINUTE, SQL_INTERVAL_HOUR_TO_SECOND, SQL_INTERVAL_MINUTE, SQL_INTERVAL_MINUTE_TO_SECOND, SQL_INTERVAL_MONTH, SQL_INTERVAL_SECOND, SQL_INTERVAL_YEAR, SQL_INTERVAL_YEAR_TO_MONTH, SQL_LONGVARBINARY, SQL_LONGVARCHAR, SQL_LONGVARNCHAR, SQL_NCHAR, SQL_NCLOB, SQL_NUMERIC, SQL_NVARCHAR, SQL_REAL, SQL_SMALLINT, SQL_TIME, SQL_TIMESTAMP, SQL_TINYINT, SQL_TSI_DAY, SQL_TSI_FRAC_SECOND, SQL_TSI_HOUR, SQL_TSI_MICROSECOND, SQL_TSI_MINUTE, SQL_TSI_MONTH, SQL_TSI_QUARTER, SQL_TSI_SECOND, SQL_TSI_WEEK, SQL_TSI_YEAR, SQL_VARBINARY, SQL_VARCHAR, SQRT, START, STATE, STATEMENT, STATIC, STDDEV_POP, STDDEV_SAMP, STREAM, STRUCTURE, STYLE, SUBCLASS_ORIGIN, SUBMULTISET, SUBSET, 
SUBSTITUTE, SUBSTRING, SUBSTRING_REGEX, SUCCEEDS, SUM, SYMMETRIC, SYSTEM, SYSTEM_TIME, SYSTEM_USER, TABLE, TABLESAMPLE, TABLE_NAME, TEMPORARY, THEN, TIES, TIME, TIMESTAMP, TIMESTAMPADD, TIMESTAMPDIFF, TIMEZONE_HOUR, TIMEZONE_MINUTE, TINYINT, TO, TOP_LEVEL_COUNT, TRAILING, TRANSACTION, TRANSACTIONS_ACTIVE, TRANSACTIONS_COMMITTED, TRANSACTIONS_ROLLED_BACK, TRANSFORM, TRANSFORMS, TRANSLATE, TRANSLATE_REGEX, TRANSLATION, TREAT, TRIGGER, TRIGGER_CATALOG, TRIGGER_NAME, TRIGGER_SCHEMA, TRIM, TRIM_ARRAY, TRUE, TRUNCATE, TUMBLE, TYPE, UESCAPE, UNBOUNDED, UNCOMMITTED, UNCONDITIONAL, UNDER, UNIFORM, UNION, UNIQUE, UNKNOWN, UNNAMED, UNNEST, UPDATE, UPPER, UPSERT, USAGE, USER, USER_DEFINED_TYPE_CATALOG, USER_DEFINED_TYPE_CODE, USER_DEFINED_TYPE_NAME, USER_DEFINED_TYPE_SCHEMA, USING, UTF16, UTF32, UTF8, VALUE, VALUES, VALUE_OF, VARBINARY, VARCHAR, VARYING, VAR_POP, VAR_SAMP, VERSION, VERSIONING, VIEW, WEEK, WHEN, WHENEVER, WHERE, WIDTH_BUCKET, WINDOW, WITH, WITHIN, WITHOUT, WORK, WRAPPER, WRITE, XML, YEAR, YEARS, ZONE.

If you want to use any of these keywords as an identifier, for example as a column or table name, you must double-quote it: "". As an example:

CREATE TABLE T1 (k1 BIGINT PRIMARY KEY, "values" BIGINT)

3. Identifiers

Identifiers are the names of tables, columns and other metadata elements used in a SQL query.

Unquoted identifiers, such as emp, must start with a letter and can only contain letters, digits, and underscores. They are implicitly converted to upper case.

Quoted identifiers, such as "Employee Name", start and end with double quotes. They may contain virtually any character, including spaces and other punctuation. If you wish to include a double quote in an identifier, use another double quote to escape it, like this: "An employee called ""Fred"".".

Matching identifiers to the name of the referenced object is case-sensitive. But remember that unquoted identifiers are implicitly converted to upper case before matching, and if the object it refers to was created using an unquoted identifier for its name, then its name will have been converted to upper case also.
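For example, given a hypothetical table created as:

CREATE TABLE T2 (id BIGINT PRIMARY KEY, "Name" VARCHAR(30))

The unquoted id is stored as ID, so it can be referenced as id, ID or "ID". The quoted "Name" keeps its case, so it can only be referenced as "Name"; the unquoted references name and NAME both convert to NAME and will not match it.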

4. Data-types

4.1. Scalar-types

Data type Description Range and example literals

BOOLEAN

Logical values

Values: TRUE, FALSE, UNKNOWN

SMALLINT

2 byte signed integer

Range is -32768 to 32767

INTEGER, INT

4 byte signed integer

Range is -2147483648 to 2147483647

BIGINT

8 byte signed integer

Range is -9223372036854775808 to 9223372036854775807

DECIMAL(p, s)

Fixed point

Example: 123.45 is a DECIMAL(5, 2) value.

NUMERIC

Fixed point

REAL, FLOAT

4 byte floating point

6 decimal digits precision

DOUBLE

8 byte floating point

15 decimal digits precision

CHAR(n), CHARACTER(n)

Fixed-width character string

'Hello', '' (empty string), _latin1'Hello', n'Hello', _UTF16'Hello', 'Hello' 'there' (literal split into multiple parts)

VARCHAR(n), CHARACTER VARYING(n)

Variable-length character string

As CHAR(n)

BINARY(n)

Fixed-width binary string

x'45F0AB', x'' (empty binary string), x'AB' 'CD' (multi-part binary string literal)

VARBINARY(n), BINARY VARYING(n)

Variable-length binary string

As BINARY(n)

DATE

Date

Example: DATE '1969-07-20'

TIME

Time of day

Example: TIME '20:17:40'

TIMESTAMP

Date and time

Example: TIMESTAMP '1969-07-20 20:17:40'

INTERVAL timeUnit [ TO timeUnit ]

Date time interval

Examples: INTERVAL '1-5' YEAR TO MONTH, INTERVAL '45' DAY, INTERVAL '1 2:34:56.789' DAY TO SECOND

You can configure the database to handle TIMESTAMP in either of two ways:

  • By default, TIMESTAMP values have no implicit time zone: they are read back exactly as written, regardless of where the client is located. For example, if a client in GMT+6 inserts:

INSERT INTO T VALUES(TIMESTAMP '1969-07-20 20:17:40')

another client located in a different time zone (say GMT-6) will see:

SELECT TS FROM T
1969-07-20 20:17:40

The timestamp is stored and returned verbatim; no conversion is performed. It is up to the end user to decide whether the TIMESTAMP represents UTC or any other zone.

  • You may configure TIMESTAMP to always behave as TIMESTAMP WITH LOCAL TIME ZONE. In this mode, the TIMESTAMP you INSERT is converted to UTC using your local time zone settings, and each reader sees the TIMESTAMP converted to its own local settings. In the case above:

Writer (GMT+6)
INSERT INTO T VALUES(TIMESTAMP '1969-07-20 20:17:40')

Database: UTC: 1969-07-20 14:17:40

Reader (GMT-6)
SELECT TS FROM T
1969-07-20 08:17:40

Where:

timeUnit:
  MILLENNIUM | CENTURY | DECADE | YEAR | QUARTER | MONTH | WEEK | DOY | DOW | DAY | HOUR | MINUTE | SECOND | EPOCH

Note:

  • As stated above, by default, DATE, TIME and TIMESTAMP have no time zone. For those types, there is not even an implicit time zone, such as UTC (as in Java) or the local time zone. It is left to the user or application to supply a time zone. In turn, TIMESTAMP WITH LOCAL TIME ZONE does not store the time zone internally, times will be stored UTC and it will rely on the supplied time zone to provide correct semantics.

  • Interval literals may only use time units YEAR, MONTH, DAY, HOUR, MINUTE and SECOND.
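For example, an interval literal can be added to a datetime value:

SELECT TIMESTAMP '1969-07-20 20:17:40' + INTERVAL '45' DAY;

This returns TIMESTAMP '1969-09-03 20:17:40'.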

4.1.1. Convert Timestamps between Timezones

For example, to convert a timestamp expressed in a named time zone into a plain TIMESTAMP:

SELECT CAST(TZ_TIMESTAMP('1969-07-20 20:17:40 America/Montreal') AS TIMESTAMP);

4.2. Non-scalar-types

Type Description Example literals

MAP

Collection of keys mapped to values

MULTISET

Unordered collection that may contain duplicates

Example: int multiset

ARRAY

Ordered, contiguous collection that may contain duplicates

Example: varchar(10) array

CURSOR

Cursor over the result of executing a query

Note:

  • Every ROW column type can have an optional [ NULL | NOT NULL ] suffix to indicate whether the column type is nullable; the default is not nullable.

The MAP format is mapped to the VARBINARY type internally and can be used as a table column to support key-value maps. The MAP type cannot be used as part of the primary key. An example follows:

CREATE TABLE TMAP (c1 BIGINT, mapfield MAP, PRIMARY KEY (c1));

UPSERT INTO TMAP VALUES(1, MAP['key1', 121]);

SELECT mapfield['key1'] FROM TMAP;

The result of the SELECT above is 121.

5. Operators and functions

5.1. Operator precedence

The operator precedence and associativity, highest to lowest.

Operator Associativity

.

left

::

left

[ ] (array element)

left

+ - (unary plus, minus)

right

* / %

left

+ -

left

BETWEEN, IN, LIKE, SIMILAR, OVERLAPS, CONTAINS etc.

-

< > = <= >= <> !=

left

IS NULL, IS FALSE, IS NOT TRUE etc.

-

NOT

right

AND

left

OR

left

Note that :: is dialect-specific, but is shown in this table for completeness.

5.2. Comparison operators

Operator syntax Description

value1 = value2

Equals

value1 <> value2

Not equal

value1 > value2

Greater than

value1 >= value2

Greater than or equal

value1 < value2

Less than

value1 <= value2

Less than or equal

value IS NULL

Whether value is null

value IS NOT NULL

Whether value is not null

value1 IS DISTINCT FROM value2

Whether two values are not equal, treating null values as the same

value1 IS NOT DISTINCT FROM value2

Whether two values are equal, treating null values as the same

value1 BETWEEN value2 AND value3

Whether value1 is greater than or equal to value2 and less than or equal to value3

value1 NOT BETWEEN value2 AND value3

Whether value1 is less than value2 or greater than value3

string1 LIKE string2 [ ESCAPE string3 ]

Whether string1 matches pattern string2

string1 NOT LIKE string2 [ ESCAPE string3 ]

Whether string1 does not match pattern string2

string1 SIMILAR TO string2 [ ESCAPE string3 ]

Whether string1 matches regular expression string2

string1 NOT SIMILAR TO string2 [ ESCAPE string3 ]

Whether string1 does not match regular expression string2

value IN (value [, value]*)

Whether value is equal to a value in a list

value NOT IN (value [, value]*)

Whether value is not equal to every value in a list

value IN (sub-query)

Whether value is equal to a row returned by sub-query

value NOT IN (sub-query)

Whether value is not equal to every row returned by sub-query

value comparison SOME (sub-query)

Whether the comparison is true for at least one row returned by sub-query

value comparison ANY (sub-query)

Synonym for SOME

value comparison ALL (sub-query)

Whether the comparison is true for every row returned by sub-query

EXISTS (sub-query)

Whether sub-query returns at least one row

comp:
      =
  |   <>
  |   >
  |   >=
  |   <
  |   <=

5.2.1. Regular expressions

The LIKE and SIMILAR TO operators accept regular expressions. You can use traditional SQL patterns with the special characters '%' and '_', and you can also use perl-like regular expressions. Specifically, regular expressions may contain:

  • A rune, standing for itself.

  • '.', standing for any rune.

  • '^', start of text.

  • '$', end of text.

  • '[a-z]', set of runes. Multiple ranges and single runes are allowed.

  • '[^a-z]', set of excluded runes.

  • '\a', alpha runes (also allowed within sets). Includes ''.

  • '\u', upper-case runes (also allowed within sets). Includes ''.

  • '\l', lower-case runes (also allowed within sets). Includes ''.

  • '\b', blank runes (also allowed within sets).

  • '\w', word runes (also allowed within sets). Includes ''.

They may also contain the following operators (from low to high precedence):

  • '(re)', for grouping

  • 're|re', alternative

  • concatenation (implicit, no operator)

  • 're*', zero to any number of times the left regexp.

  • 're+', one or any further number of times the left regexp.

  • 're?', zero or one time the left regexp.

The main difference between LIKE and SIMILAR TO is that LIKE implies the pattern is anchored at the start of the text, while SIMILAR TO does not imply '^'.
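For example (table, column and patterns are illustrative):

SELECT * FROM T WHERE F1 LIKE 'ab%'

SELECT * FROM T WHERE F1 SIMILAR TO '(ab|cd)+x?'

The first query uses a traditional SQL pattern; the second uses the perl-like operators for grouping, alternation and repetition described above.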

5.3. Logical operators

Operator syntax Description

boolean1 OR boolean2

Whether boolean1 is TRUE or boolean2 is TRUE

boolean1 AND boolean2

Whether boolean1 and boolean2 are both TRUE

NOT boolean

Whether boolean is not TRUE; returns UNKNOWN if boolean is UNKNOWN

boolean IS FALSE

Whether boolean is FALSE; returns FALSE if boolean is UNKNOWN

boolean IS NOT FALSE

Whether boolean is not FALSE; returns TRUE if boolean is UNKNOWN

boolean IS TRUE

Whether boolean is TRUE; returns FALSE if boolean is UNKNOWN

boolean IS NOT TRUE

Whether boolean is not TRUE; returns TRUE if boolean is UNKNOWN

boolean IS UNKNOWN

Whether boolean is UNKNOWN

boolean IS NOT UNKNOWN

Whether boolean is not UNKNOWN

5.4. Arithmetic operators and functions

Operator syntax Description

+ numeric

Returns numeric

- numeric

Returns negative numeric

numeric1 + numeric2

Returns numeric1 plus numeric2

numeric1 - numeric2

Returns numeric1 minus numeric2

numeric1 * numeric2

Returns numeric1 multiplied by numeric2

numeric1 / numeric2

Returns numeric1 divided by numeric2

POWER(numeric1, numeric2)

Returns numeric1 raised to the power of numeric2

ABS(numeric)

Returns the absolute value of numeric

MOD(numeric1, numeric2)

Returns the remainder (modulus) of numeric1 divided by numeric2. The result is negative only if numeric1 is negative

SQRT(numeric)

Returns the square root of numeric

LN(numeric)

Returns the natural logarithm (base e) of numeric

LOG10(numeric)

Returns the base 10 logarithm of numeric

EXP(numeric)

Returns e raised to the power of numeric

CEIL(numeric)

Rounds numeric up, returning the smallest integer that is greater than or equal to numeric

FLOOR(numeric)

Rounds numeric down, returning the largest integer that is less than or equal to numeric

RAND([seed])

Generates a random double between 0 and 1 inclusive, optionally initializing the random number generator with seed

RAND_INTEGER([seed, ] numeric)

Generates a random integer between 0 and numeric - 1 inclusive, optionally initializing the random number generator with seed

ACOS(numeric)

Returns the arc cosine of numeric

ASIN(numeric)

Returns the arc sine of numeric

ATAN(numeric)

Returns the arc tangent of numeric

ATAN2(numeric, numeric)

Returns the arc tangent of the numeric coordinates

CBRT(numeric)

Returns the cube root of numeric

COS(numeric)

Returns the cosine of numeric

COT(numeric)

Returns the cotangent of numeric

DEGREES(numeric)

Converts numeric from radians to degrees

PI()

Returns a value that is closer than any other value to pi

RADIANS(numeric)

Converts numeric from degrees to radians

ROUND(numeric1 [, numeric2])

Rounds numeric1 to optionally numeric2 (if not specified 0) places right to the decimal point

SIGN(numeric)

Returns the signum of numeric

SIN(numeric)

Returns the sine of numeric

TAN(numeric)

Returns the tangent of numeric

TRUNCATE(numeric1 [, numeric2])

Truncates numeric1 to optionally numeric2 (if not specified 0) places right to the decimal point

5.5. Character string operators and functions

Operator syntax Description

string || string

Concatenates two character strings

CHAR_LENGTH(string)

Returns the number of characters in a character string

CHARACTER_LENGTH(string)

As CHAR_LENGTH(string)

UPPER(string)

Returns a character string converted to upper case

LOWER(string)

Returns a character string converted to lower case

POSITION(string1 IN string2)

Returns the position of the first occurrence of string1 in string2

POSITION(string1 IN string2 FROM integer)

Returns the position of the first occurrence of string1 in string2 starting at a given point (not standard SQL)

TRIM( { BOTH | LEADING | TRAILING } string1 FROM string2 )

Removes the longest string containing only the characters in string1 from the start/end/both ends of string2

OVERLAY(string1 PLACING string2 FROM integer [ FOR integer2 ])

Replaces a substring of string1 with string2

SUBSTRING(string FROM integer)

Returns a substring of a character string starting at a given point

SUBSTRING(string FROM integer FOR integer)

Returns a substring of a character string starting at a given point with a given length

INITCAP(string)

Returns string with the first letter of each word converted to upper case and the rest to lower case. Words are sequences of alphanumeric characters separated by non-alphanumeric characters.
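A few illustrative calls combining the operators above (the argument values are arbitrary):

SELECT 'Hello' || ' ' || 'world', TRIM(BOTH ' ' FROM '  abc  '), SUBSTRING('Hello world' FROM 7 FOR 5);

This returns 'Hello world', 'abc' and 'world'.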

Not implemented:

  • SUBSTRING(string FROM regexp FOR regexp)

  • You may miss functions like LPAD and RPAD in the list above. Their behaviour can be achieved through OVERLAY, though LPAD and RPAD are more convenient. For example, the following query pads at the end:

SELECT OVERLAY('MYTEXT' PLACING 'PAD' FROM CHAR_LENGTH('MYTEXT')+1 FOR 0)

5.6. Binary string operators and functions

Operator syntax Description

binary || binary

Concatenates two binary strings

POSITION(binary1 IN binary2)

Returns the position of the first occurrence of binary1 in binary2

POSITION(binary1 IN binary2 FROM integer)

Returns the position of the first occurrence of binary1 in binary2 starting at a given point (not standard SQL)

OVERLAY(binary1 PLACING binary2 FROM integer [ FOR integer2 ])

Replaces a substring of binary1 with binary2

SUBSTRING(binary FROM integer)

Returns a substring of binary starting at a given point

SUBSTRING(binary FROM integer FOR integer)

Returns a substring of binary starting at a given point with a given length

5.7. Date/time functions

Operator syntax Description

LOCALTIME

Returns the current date and time in the session time zone in a value of datatype TIME

LOCALTIME(precision)

Returns the current date and time in the session time zone in a value of datatype TIME, with precision digits of precision

LOCALTIMESTAMP

Returns the current date and time in the session time zone in a value of datatype TIMESTAMP

LOCALTIMESTAMP(precision)

Returns the current date and time in the session time zone in a value of datatype TIMESTAMP, with precision digits of precision

CURRENT_TIME

Returns the current time in the session time zone, in a value of datatype TIMESTAMP

CURRENT_DATE

Returns the current date in the session time zone, in a value of datatype DATE

CURRENT_TIMESTAMP

Returns the current date and time in the session time zone, in a value of datatype TIMESTAMP

EXTRACT(timeUnit FROM datetime)

Extracts and returns the value of a specified datetime field from a datetime value expression

FLOOR(datetime TO timeUnit)

Rounds datetime down to timeUnit

CEIL(datetime TO timeUnit)

Rounds datetime up to timeUnit

YEAR(date)

Equivalent to EXTRACT(YEAR FROM date). Returns an integer.

QUARTER(date)

Equivalent to EXTRACT(QUARTER FROM date). Returns an integer between 1 and 4.

MONTH(date)

Equivalent to EXTRACT(MONTH FROM date). Returns an integer between 1 and 12.

WEEK(date)

Equivalent to EXTRACT(WEEK FROM date). Returns an integer between 1 and 53.

DAYOFYEAR(date)

Equivalent to EXTRACT(DOY FROM date). Returns an integer between 1 and 366.

DAYOFMONTH(date)

Equivalent to EXTRACT(DAY FROM date). Returns an integer between 1 and 31.

DAYOFWEEK(date)

Equivalent to EXTRACT(DOW FROM date). Returns an integer between 1 and 7.

HOUR(date)

Equivalent to EXTRACT(HOUR FROM date). Returns an integer between 0 and 23.

MINUTE(date)

Equivalent to EXTRACT(MINUTE FROM date). Returns an integer between 0 and 59.

SECOND(date)

Equivalent to EXTRACT(SECOND FROM date). Returns an integer between 0 and 59.

TIMESTAMPADD(timeUnit, integer, datetime)

Returns datetime with an interval of (signed) integer timeUnits added. Equivalent to datetime + INTERVAL 'integer' timeUnit

TIMESTAMPDIFF(timeUnit, datetime, datetime2)

Returns the (signed) number of timeUnit intervals between datetime and datetime2.

LAST_DAY(date)

Returns the date of the last day of the month in a value of datatype DATE; for example, it returns DATE '2020-02-29' for both DATE '2020-02-10' and TIMESTAMP '2020-02-10 10:10:10'

For example, the following queries combine these functions with interval arithmetic:

SELECT * FROM TABLEDELTA WHERE timest > LOCALTIMESTAMP - INTERVAL '1' DAY

SELECT (LOCALTIMESTAMP - timest) HOUR FROM TABLEDELTA WHERE timest > LOCALTIMESTAMP - INTERVAL '4' DAY AND keyid = '10';
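TIMESTAMPADD and TIMESTAMPDIFF can be sketched with fixed literals so the arithmetic is easy to check:

SELECT TIMESTAMPADD(HOUR, 2, TIMESTAMP '1969-07-20 20:17:40');

SELECT TIMESTAMPDIFF(DAY, TIMESTAMP '1969-07-20 20:17:40', TIMESTAMP '1969-07-25 20:17:40');

The first returns TIMESTAMP '1969-07-20 22:17:40' and the second returns 5.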

Calls to niladic functions such as CURRENT_DATE do not accept parentheses (neither in LeanXcale nor in standard SQL).

Not implemented:

  • CEIL(interval)

  • FLOOR(interval)

  • + interval

  • - interval

  • interval + interval

  • interval - interval

  • interval / interval

5.8. Conditional functions and operators

Operator syntax Description

CASE value
WHEN value1 [, value11 ]* THEN result1
[ WHEN valueN [, valueN1 ]* THEN resultN ]*
[ ELSE resultZ ]
END

Simple case

CASE
WHEN condition1 THEN result1
[ WHEN conditionN THEN resultN ]*
[ ELSE resultZ ]
END

Searched case

NULLIF(value, value)

Returns NULL if the values are the same.

For example, NULLIF(5, 5) returns NULL; NULLIF(5, 0) returns 5.

COALESCE(value, value [, value ]*)

Provides a value if the first value is null.

For example, COALESCE(NULL, 5) returns 5.

5.9. Type conversion

Generally an expression cannot contain values of different datatypes. For example, an expression cannot multiply 5 by 10 and then add 'JULIAN'. However, LeanXcale supports both implicit and explicit conversion of values from one datatype to another.

5.9.1. Implicit and Explicit Type Conversion

Calcite recommends that you specify explicit conversions, rather than rely on implicit or automatic conversions, for these reasons:

  • SQL statements are easier to understand when you use explicit datatype conversion functions.

  • Implicit datatype conversion can have a negative impact on performance, especially if the datatype of a column value is converted to that of a constant rather than the other way around.

  • Implicit conversion depends on the context in which it occurs and may not work the same way in every case. For example, implicit conversion from a datetime value to a VARCHAR value may return an unexpected format.

Algorithms for implicit conversion are subject to change across Calcite releases. Behavior of explicit conversions is more predictable.

5.9.2. Explicit type conversion

Operator syntax Description

CAST(value AS type)

Converts a value to a given type.
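For example:

SELECT CAST('123' AS INTEGER), CAST(DATE '1969-07-20' AS VARCHAR(10));

This returns the integer 123 and the character string '1969-07-20'.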

Supported data types syntax:

type:
      typeName
      [ collectionsTypeName ]*

typeName:
      sqlTypeName
  |   rowTypeName
  |   compoundIdentifier

sqlTypeName:
      char [ precision ] [ charSet ]
  |   varchar [ precision ] [ charSet ]
  |   DATE
  |   time
  |   timestamp
  |   GEOMETRY
  |   decimal [ precision [, scale] ]
  |   BOOLEAN
  |   integer
  |   BINARY [ precision ]
  |   varbinary [ precision ]
  |   TINYINT
  |   SMALLINT
  |   BIGINT
  |   REAL
  |   double
  |   FLOAT
  |   ANY [ precision [, scale] ]

collectionsTypeName:
      ARRAY | MULTISET

rowTypeName:
      ROW '('
      fieldName1 fieldType1 [ NULL | NOT NULL ]
      [ , fieldName2 fieldType2 [ NULL | NOT NULL ] ]*
      ')'

char:
      CHARACTER | CHAR

varchar:
      char VARYING | VARCHAR

decimal:
      DECIMAL | DEC | NUMERIC

integer:
      INTEGER | INT

varbinary:
      BINARY VARYING | VARBINARY

double:
      DOUBLE [ PRECISION ]

time:
      TIME [ precision ] [ timeZone ]

timestamp:
      TIMESTAMP [ precision ] [ timeZone ]

charSet:
      CHARACTER SET charSetName

timeZone:
      WITHOUT TIME ZONE
  |   WITH LOCAL TIME ZONE

5.9.3. Implicit type conversion

Calcite automatically converts a value from one datatype to another when such a conversion makes sense. The table below is a matrix of Calcite type conversions. The table shows all possible conversions, without regard to the context in which they occur. The rules governing these details follow the table.

FROM \ TO          NULL  BOOLEAN  TINYINT  SMALLINT  INT  BIGINT  DECIMAL  FLOAT/REAL  DOUBLE  INTERVAL  DATE  TIME  TIMESTAMP  CHAR/VARCHAR  BINARY/VARBINARY
NULL               i     i        i        i         i    i       i        i           i       i         i     i     i          i             i
BOOLEAN            x     i        e        e         e    e       e        e           e       x         x     x     x          i             x
TINYINT            x     e        i        i         i    i       i        i           i       e         x     x     e          i             x
SMALLINT           x     e        i        i         i    i       i        i           i       e         x     x     e          i             x
INT                x     e        i        i         i    i       i        i           i       e         x     x     e          i             x
BIGINT             x     e        i        i         i    i       i        i           i       e         x     x     e          i             x
DECIMAL            x     e        i        i         i    i       i        i           i       e         x     x     e          i             x
FLOAT/REAL         x     e        i        i         i    i       i        i           i       x         x     x     e          i             x
DOUBLE             x     e        i        i         i    i       i        i           i       x         x     x     e          i             x
INTERVAL           x     x        e        e         e    e       e        x           x       i         x     x     x          e             x
DATE               x     x        x        x         x    x       x        x           x       x         i     x     i          i             x
TIME               x     x        x        x         x    x       x        x           x       x         x     i     e          i             x
TIMESTAMP          x     x        e        e         e    e       e        e           e       x         i     e     i          i             x
CHAR/VARCHAR       x     e        i        i         i    i       i        i           i       i         i     i     i          i             i
BINARY/VARBINARY   x     x        x        x         x    x       x        x           x       x         e     e     e          i             i

i: implicit cast / e: explicit cast / x: not allowed

Conversion Contexts and Strategies
  • Set operation (UNION, EXCEPT, INTERSECT): compare every branch row data type and find the common type of each fields pair;

  • Binary arithmetic expression (+, -, &, ^, /, %): promote string operand to data type of the other numeric operand;

  • Binary comparison (=, <, <=, <>, >, >=): if operands are STRING and TIMESTAMP, promote to TIMESTAMP; make 1 = true and 0 = false always evaluate to TRUE; if one operand is a numeric type, find a common type for both operands.

  • IN sub-query: compare type of LHS and RHS, and find the common type; if it is struct type, find wider type for every field;

  • IN expression list: compare every expression to find the common type;

  • CASE WHEN expression or COALESCE: find the common wider type of the THEN and ELSE operands;

  • Character + INTERVAL or character - INTERVAL: promote character to TIMESTAMP;

  • Built-in function: look up the type families registered in the checker, find the family default type if checker rules allow it;

  • User-defined function (UDF): coerce based on the declared argument types of the eval() method;

  • INSERT and UPDATE: coerce a source field to counterpart target table field’s type if the two fields differ with type name or precision(scale).

Note:

Implicit type coercion is ignored in the following cases:

  • One of the types is ANY;

  • Type coercion within CHARACTER types is always ignored, e.g. from CHAR(20) to VARCHAR(30);

  • Type coercion from a numeric type to another with higher precedence is ignored, e.g. from INT to LONG.

Strategies for Finding Common Type
  • If the operator has expected data types, take them as the desired ones (e.g. a UDF has an eval() method whose argument types are known through reflection);

  • If there is no expected data type but the data type families are registered, try to coerce the arguments to the family’s default data type, e.g. the String family has a VARCHAR default type;

  • If neither expected data types nor families are specified, try to find the tightest common type of the node types, e.g. INTEGER and DOUBLE will return DOUBLE; no numeric precision is lost in this case;

  • If no tightest common type is found, try to find a wider type, e.g. VARCHAR and INTEGER will return INTEGER; some precision loss is allowed when widening decimal to fractional types, or when promoting to VARCHAR.
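For instance, mixing INTEGER with a fractional numeric falls under the tightest-common-type rule, while comparing a character string with a numeric coerces the string. This is only a sketch of the rules above; explicit CAST remains the predictable option:

SELECT 1 + 2.5 AS mixed,     -- evaluated in the wider numeric type, giving 3.5
       '10' = 10 AS coerced  -- the character operand is implicitly coerced to a numeric
FROM (VALUES (0)) AS t(x);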

5.10. Value constructors

Operator syntax Description

ROW (value [, value ]*)

Creates a row from a list of values.

(value [, value ]* )

Creates a row from a list of values.

map ‘[’ key ‘]’

Returns the element of a map with a particular key.

array ‘[’ index ‘]’

Returns the element at a particular location in an array.

ARRAY ‘[’ value [, value ]* ‘]’

Creates an array from a list of values.

MAP ‘[’ key, value [, key, value ]* ‘]’

Creates a map from a list of key-value pairs.
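As a sketch of the constructors and element accessors above (array indexing is 1-based):

SELECT ROW(1, 'a') AS r,
       ARRAY[10, 20, 30] AS arr,
       ARRAY[10, 20, 30][2] AS second_element,   -- element access; returns 20
       MAP['k1', 'v1', 'k2', 'v2']['k2'] AS v    -- map lookup; returns 'v2'
FROM (VALUES (0)) AS t(x);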

5.11. Collection functions

Operator syntax Description

ELEMENT(value)

Returns the sole element of an array or multiset; null if the collection is empty; throws if it has more than one element.

CARDINALITY(value)

Returns the number of elements in an array or multiset.

value MEMBER OF multiset

Returns whether the value is a member of multiset.

multiset IS A SET

Whether multiset is a set (has no duplicates).

multiset IS NOT A SET

Whether multiset is not a set (has duplicates).

multiset IS EMPTY

Whether multiset contains zero elements.

multiset IS NOT EMPTY

Whether multiset contains one or more elements.

multiset SUBMULTISET OF multiset2

Whether multiset is a submultiset of multiset2.

multiset NOT SUBMULTISET OF multiset2

Whether multiset is not a submultiset of multiset2.

multiset MULTISET UNION [ ALL | DISTINCT ] multiset2

Returns the union of multiset and multiset2, eliminating duplicates if DISTINCT is specified (ALL is the default).

multiset MULTISET INTERSECT [ ALL | DISTINCT ] multiset2

Returns the intersection of multiset and multiset2, eliminating duplicates if DISTINCT is specified (ALL is the default).

multiset MULTISET EXCEPT [ ALL | DISTINCT ] multiset2

Returns the difference of multiset and multiset2, eliminating duplicates if DISTINCT is specified (ALL is the default).

See also: the UNNEST relational operator converts a collection to a relation.
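For example, CARDINALITY and ELEMENT over array literals, plus UNNEST to turn a collection into rows (a sketch):

SELECT CARDINALITY(ARRAY[1, 2, 3]) AS n,   -- 3
       ELEMENT(ARRAY[42]) AS only_one      -- 42
FROM (VALUES (0)) AS t(x);

SELECT * FROM UNNEST(ARRAY[1, 2, 3]) AS t(elem);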

5.12. Period predicates

Operator syntax Description

period1 CONTAINS datetime

period1 CONTAINS period2

period1 OVERLAPS period2

period1 EQUALS period2

period1 PRECEDES period2

period1 IMMEDIATELY PRECEDES period2

period1 SUCCEEDS period2

period1 IMMEDIATELY SUCCEEDS period2

Where period1 and period2 are period expressions:

period:
      (datetime, datetime)
  |   (datetime, interval)
  |   PERIOD (datetime, datetime)
  |   PERIOD (datetime, interval)

5.13. JDBC function escape

5.13.1. Numeric

Operator syntax Description

{fn ABS(numeric)}

Returns the absolute value of numeric

{fn ACOS(numeric)}

Returns the arc cosine of numeric

{fn ASIN(numeric)}

Returns the arc sine of numeric

{fn ATAN(numeric)}

Returns the arc tangent of numeric

{fn ATAN2(numeric, numeric)}

Returns the arc tangent of the numeric coordinates

{fn CBRT(numeric)}

Returns the cube root of numeric

{fn CEILING(numeric)}

Rounds numeric up, and returns the smallest number that is greater than or equal to numeric

{fn COS(numeric)}

Returns the cosine of numeric

{fn COT(numeric)}

Returns the cotangent of numeric

{fn DEGREES(numeric)}

Converts numeric from radians to degrees

{fn EXP(numeric)}

Returns e raised to the power of numeric

{fn FLOOR(numeric)}

Rounds numeric down, and returns the largest number that is less than or equal to numeric

{fn LOG(numeric)}

Returns the natural logarithm (base e) of numeric

{fn LOG10(numeric)}

Returns the base-10 logarithm of numeric

{fn MOD(numeric1, numeric2)}

Returns the remainder (modulus) of numeric1 divided by numeric2. The result is negative only if numeric1 is negative

{fn PI()}

Returns a value that is closer than any other value to pi

{fn POWER(numeric1, numeric2)}

Returns numeric1 raised to the power of numeric2

{fn RADIANS(numeric)}

Converts numeric from degrees to radians

{fn RAND(numeric)}

Returns a random double using numeric as the seed value

{fn ROUND(numeric1, numeric2)}

Rounds numeric1 to numeric2 places to the right of the decimal point

{fn SIGN(numeric)}

Returns the signum of numeric

{fn SIN(numeric)}

Returns the sine of numeric

{fn SQRT(numeric)}

Returns the square root of numeric

{fn TAN(numeric)}

Returns the tangent of numeric

{fn TRUNCATE(numeric1, numeric2)}

Truncates numeric1 to numeric2 places to the right of the decimal point
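Here is a sketch of the escape syntax in use; a JDBC driver rewrites the {fn …} escapes into the equivalent native calls:

SELECT {fn POWER(2, 10)} AS p,   -- 1024
       {fn MOD(10, 3)} AS m      -- 1
FROM (VALUES (0)) AS t(x);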

5.13.2. String

Operator syntax Description

{fn ASCII(string)}

Returns the ASCII code of the first character of string; if the first character is a non-ASCII character, returns its Unicode code point; returns 0 if string is empty

{fn CONCAT(character, character)}

Returns the concatenation of character strings

{fn INSERT(string1, start, length, string2)}

Inserts string2 into a slot in string1

{fn LCASE(string)}

Returns a string in which all alphabetic characters in string have been converted to lower case

{fn LENGTH(string)}

Returns the number of characters in a string

{fn LOCATE(string1, string2 [, integer])}

Returns the position in string2 of the first occurrence of string1. Searches from the beginning of string2, unless integer is specified.

{fn LEFT(string, length)}

Returns the leftmost length characters from string

{fn LTRIM(string)}

Returns string with leading space characters removed

{fn REPLACE(string, search, replacement)}

Returns a string in which all the occurrences of search in string are replaced with replacement; if replacement is the empty string, the occurrences of search are removed

{fn REVERSE(string)}

Returns string with the order of the characters reversed

{fn RIGHT(string, integer)}

Returns the rightmost integer characters from string

{fn RTRIM(string)}

Returns string with trailing space characters removed

{fn SUBSTRING(string, offset, length)}

Returns a character string that consists of length characters from string starting at the offset position

{fn UCASE(string)}

Returns a string in which all alphabetic characters in string have been converted to upper case

Not implemented:

  • {fn CHAR(string)}

5.13.3. Date/time

Operator syntax Description

{fn CURDATE()}

Equivalent to CURRENT_DATE

{fn CURTIME()}

Equivalent to LOCALTIME

{fn NOW()}

Equivalent to LOCALTIMESTAMP

{fn YEAR(date)}

Equivalent to EXTRACT(YEAR FROM date). Returns an integer.

{fn QUARTER(date)}

Equivalent to EXTRACT(QUARTER FROM date). Returns an integer between 1 and 4.

{fn MONTH(date)}

Equivalent to EXTRACT(MONTH FROM date). Returns an integer between 1 and 12.

{fn WEEK(date)}

Equivalent to EXTRACT(WEEK FROM date). Returns an integer between 1 and 53.

{fn DAYOFYEAR(date)}

Equivalent to EXTRACT(DOY FROM date). Returns an integer between 1 and 366.

{fn DAYOFMONTH(date)}

Equivalent to EXTRACT(DAY FROM date). Returns an integer between 1 and 31.

{fn DAYOFWEEK(date)}

Equivalent to EXTRACT(DOW FROM date). Returns an integer between 1 and 7.

{fn HOUR(date)}

Equivalent to EXTRACT(HOUR FROM date). Returns an integer between 0 and 23.

{fn MINUTE(date)}

Equivalent to EXTRACT(MINUTE FROM date). Returns an integer between 0 and 59.

{fn SECOND(date)}

Equivalent to EXTRACT(SECOND FROM date). Returns an integer between 0 and 59.

{fn TIMESTAMPADD(timeUnit, count, datetime)}

Adds an interval of count timeUnits to a datetime

{fn TIMESTAMPDIFF(timeUnit, timestamp1, timestamp2)}

Subtracts timestamp1 from timestamp2 and returns the result in timeUnits
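For example, with literal timestamps (a sketch):

SELECT {fn TIMESTAMPADD(HOUR, 2, TIMESTAMP '2020-09-27 17:00:00')} AS later,  -- 2020-09-27 19:00:00
       {fn TIMESTAMPDIFF(DAY, TIMESTAMP '2020-09-01 00:00:00',
                         TIMESTAMP '2020-09-27 00:00:00')} AS days            -- 26
FROM (VALUES (0)) AS t(x);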

5.13.4. System

Operator syntax Description

{fn DATABASE()}

Equivalent to CURRENT_CATALOG

{fn IFNULL(value1, value2)}

Returns value2 if value1 is null

{fn USER()}

Equivalent to CURRENT_USER

5.13.5. Conversion

Operator syntax Description

{fn CONVERT(value, type)}

Casts value into type

5.14. Aggregate functions

Syntax:

aggregateCall:
        agg( [ ALL | DISTINCT ] value [, value ]*)
        [ WITHIN GROUP (ORDER BY orderItem [, orderItem ]*) ]
        [ FILTER (WHERE condition) ]
    |   agg(*) [ FILTER (WHERE condition) ]

where agg is one of the operators in the following table, or a user-defined aggregate function.

If FILTER is present, the aggregate function only considers rows for which condition evaluates to TRUE.

If DISTINCT is present, duplicate argument values are eliminated before being passed to the aggregate function.

If WITHIN GROUP is present, the aggregate function sorts the input rows according to the ORDER BY clause inside WITHIN GROUP before aggregating values. WITHIN GROUP is only allowed for hypothetical set functions (RANK, DENSE_RANK, PERCENT_RANK and CUME_DIST), inverse distribution functions (PERCENTILE_CONT and PERCENTILE_DISC) and collection functions (COLLECT and LISTAGG).
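For example, FILTER and WITHIN GROUP could be combined as follows; the ORDERS table and its columns are hypothetical:

SELECT COUNT(*) FILTER (WHERE amount > 100) AS big_orders,
       LISTAGG(customer, ', ') WITHIN GROUP (ORDER BY customer) AS customers
FROM orders;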

Operator syntax Description

COLLECT( [ ALL | DISTINCT ] value)

Returns a multiset of the values

LISTAGG( [ ALL | DISTINCT ] value [, separator])

Returns values concatenated into a string, delimited by separator (default ‘,’)

COUNT( [ ALL | DISTINCT ] value [, value ]*)

Returns the number of input rows for which value is not null (wholly not null if value is composite)

COUNT(*)

Returns the number of input rows

FUSION(multiset)

Returns the multiset union of multiset across all input values

APPROX_COUNT_DISTINCT(value [, value ]*)

Returns the approximate number of distinct values of value; the database is allowed to use an approximation but is not required to

AVG( [ ALL | DISTINCT ] numeric)

Returns the average (arithmetic mean) of numeric across all input values

SUM( [ ALL | DISTINCT ] numeric)

Returns the sum of numeric across all input values

MAX( [ ALL | DISTINCT ] value)

Returns the maximum value of value across all input values

MIN( [ ALL | DISTINCT ] value)

Returns the minimum value of value across all input values

ANY_VALUE( [ ALL | DISTINCT ] value)

Returns one of the values of value across all input values; this is NOT specified in the SQL standard

BIT_AND( [ ALL | DISTINCT ] value)

Returns the bitwise AND of all non-null input values, or null if none

BIT_OR( [ ALL | DISTINCT ] value)

Returns the bitwise OR of all non-null input values, or null if none

BIT_XOR( [ ALL | DISTINCT ] value)

Returns the bitwise XOR of all non-null input values, or null if none

STDDEV_POP( [ ALL | DISTINCT ] numeric)

Returns the population standard deviation of numeric across all input values

STDDEV_SAMP( [ ALL | DISTINCT ] numeric)

Returns the sample standard deviation of numeric across all input values

STDDEV( [ ALL | DISTINCT ] numeric)

Synonym for STDDEV_SAMP

VAR_POP( [ ALL | DISTINCT ] numeric)

Returns the population variance (square of the population standard deviation) of numeric across all input values

VAR_SAMP( [ ALL | DISTINCT ] numeric)

Returns the sample variance (square of the sample standard deviation) of numeric across all input values

COVAR_POP(numeric1, numeric2)

Returns the population covariance of the pair (numeric1, numeric2) across all input values

COVAR_SAMP(numeric1, numeric2)

Returns the sample covariance of the pair (numeric1, numeric2) across all input values

REGR_COUNT(numeric1, numeric2)

Returns the number of rows where both dependent and independent expressions are not null

REGR_SXX(numeric1, numeric2)

Returns the sum of squares of the dependent expression in a linear regression model

REGR_SYY(numeric1, numeric2)

Returns the sum of squares of the independent expression in a linear regression model

Not implemented:

  • REGR_AVGX(numeric1, numeric2)

  • REGR_AVGY(numeric1, numeric2)

  • REGR_INTERCEPT(numeric1, numeric2)

  • REGR_R2(numeric1, numeric2)

  • REGR_SLOPE(numeric1, numeric2)

  • REGR_SXY(numeric1, numeric2)

5.15. Window functions

Syntax:

windowedAggregateCall:
        agg( [ ALL | DISTINCT ] value [, value ]*)
        [ RESPECT NULLS | IGNORE NULLS ]
        [ WITHIN GROUP (ORDER BY orderItem [, orderItem ]*) ]
        [ FILTER (WHERE condition) ]
        OVER window
    |   agg(*)
        [ FILTER (WHERE condition) ]
        OVER window

where agg is one of the operators in the following table, or a user-defined aggregate function.

DISTINCT, FILTER and WITHIN GROUP are as described for aggregate functions.
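For example, a running total and a per-partition rank over a hypothetical SALES table:

SELECT region, amount,
       SUM(amount) OVER (PARTITION BY region ORDER BY sale_date
                         ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS running_total,
       RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
FROM sales;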

Operator syntax Description

COUNT(value [, value ]*) OVER window

Returns the number of rows in window for which value is not null (wholly not null if value is composite)

COUNT(*) OVER window

Returns the number of rows in window

AVG(numeric) OVER window

Returns the average (arithmetic mean) of numeric across all values in window

SUM(numeric) OVER window

Returns the sum of numeric across all values in window

MAX(value) OVER window

Returns the maximum value of value across all values in window

MIN(value) OVER window

Returns the minimum value of value across all values in window

RANK() OVER window

Returns the rank of the current row with gaps; same as ROW_NUMBER of its first peer

DENSE_RANK() OVER window

Returns the rank of the current row without gaps; this function counts peer groups

ROW_NUMBER() OVER window

Returns the number of the current row within its partition, counting from 1

FIRST_VALUE(value) OVER window

Returns value evaluated at the row that is the first row of the window frame

LAST_VALUE(value) OVER window

Returns value evaluated at the row that is the last row of the window frame

LEAD(value, offset, default) OVER window

Returns value evaluated at the row that is offset rows after the current row within the partition; if there is no such row, instead returns default. Both offset and default are evaluated with respect to the current row. If omitted, offset defaults to 1 and default to NULL

LAG(value, offset, default) OVER window

Returns value evaluated at the row that is offset rows before the current row within the partition; if there is no such row, instead returns default. Both offset and default are evaluated with respect to the current row. If omitted, offset defaults to 1 and default to NULL

NTH_VALUE(value, nth) OVER window

Returns value evaluated at the row that is the nth row of the window frame

NTILE(value) OVER window

Returns an integer ranging from 1 to value, dividing the partition as equally as possible

Note:

  • You may specify null treatment (IGNORE NULLS, RESPECT NULLS) for the FIRST_VALUE, LAST_VALUE, NTH_VALUE, LEAD and LAG functions. The syntax is handled by the parser, but only RESPECT NULLS is implemented at runtime.

Not implemented:

  • COUNT(DISTINCT value [, value ]*) OVER window

  • APPROX_COUNT_DISTINCT(value [, value ]*) OVER window

  • PERCENT_RANK(value) OVER window

  • CUME_DIST(value) OVER window

5.16. Grouping functions

Operator syntax Description

GROUPING(expression [, expression ]*)

Returns a bit vector of the given grouping expressions

GROUP_ID()

Returns an integer that uniquely identifies the combination of grouping keys

GROUPING_ID(expression [, expression ]*)

Synonym for GROUPING
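These functions are useful together with ROLLUP or CUBE to tell aggregated rows apart; the SALES table here is hypothetical:

SELECT region, product,
       SUM(amount) AS total,
       GROUPING(region, product) AS g   -- bit vector: which grouping keys are rolled up in each row
FROM sales
GROUP BY ROLLUP(region, product);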

5.17. Grouped window functions

Grouped window functions occur in the GROUP BY clause and define a key value that represents a window containing several rows.

5.17.1. CTUMBLE

In streaming queries, TUMBLE assigns a window to each row of a relation based on a timestamp column. An assigned window is specified by its beginning and ending. All assigned windows have the same length, which is why tumbling is sometimes called “fixed windowing”. The CTUMBLE function works in a similar way to TUMBLE, but over normal tables.

Operator syntax Description

CTUMBLE(datetime COLUMN, interval [, time ])

Indicates a tumbling window of interval for datetime, optionally aligned at time

Here is an example:

SELECT CAST(CTUMBLE(TIMEST, INTERVAL '60' MINUTE, TIMESTAMP '1970-01-01 01:01:00.000') AS TIMESTAMP) AS GTIMEST,
       keyid,
       SUM(metric1) AS SUMM1,
       SUM(metric2) AS SUMM2
FROM TIMEMETRICS
WHERE keyid <= '1'
GROUP BY CTUMBLE(TIMEST, INTERVAL '60' MINUTE, TIMESTAMP '1970-01-01 01:01:00.000'), keyid
ORDER BY 1,2

The query above returns the sum of the metrics for every key and every hour (60-minute) interval. The alignment value causes the intervals to start at minute 01.

The result obtained would look similar to:

GTIMEST              KEYID  SUMM1  SUMM2
2020-09-27 17:01:00  0      10     2.5
2020-09-27 17:01:00  1      10     2.5
2020-09-27 18:01:00  0      20     5.0
2020-09-27 18:01:00  1      20     5.0
2020-09-27 19:01:00  0      30     7.5
2020-09-27 19:01:00  1      30     7.5

5.18. JSON Functions

In the following:

  • jsonValue is a character string containing a JSON value;

  • path is a character string containing a JSON path expression; the mode flag strict or lax should be specified at the beginning of path.

5.18.1. Query Functions

Operator syntax Description

JSON_EXISTS(jsonValue, path [ { TRUE | FALSE | UNKNOWN | ERROR } ON ERROR ] )

Whether a jsonValue satisfies a search criterion described using JSON path expression path

JSON_VALUE(jsonValue, path [ RETURNING type ] [ { ERROR | NULL | DEFAULT expr } ON EMPTY ] [ { ERROR | NULL | DEFAULT expr } ON ERROR ] )

Extract an SQL scalar from a jsonValue using JSON path expression path

JSON_QUERY(jsonValue, path [ { WITHOUT [ ARRAY ] | WITH [ CONDITIONAL | UNCONDITIONAL ] [ ARRAY ] } WRAPPER ] [ { ERROR | NULL | EMPTY ARRAY | EMPTY OBJECT } ON EMPTY ] [ { ERROR | NULL | EMPTY ARRAY | EMPTY OBJECT } ON ERROR ] )

Extract a JSON object or JSON array from jsonValue using the path JSON path expression

Note:

  • The ON ERROR and ON EMPTY clauses define the fallback behavior of the function when an error is thrown or a null value is about to be returned.

  • The ARRAY WRAPPER clause defines how to represent a JSON array result in JSON_QUERY function. The following examples compare the wrapper behaviors.

Example Data:

{"a": "[1,2]", "b": [1,2], "c": "hi"}

Comparison:

Operator                                      $.a          $.b        $.c
JSON_VALUE                                    [1, 2]       error      hi
JSON_QUERY WITHOUT ARRAY WRAPPER              error        [1, 2]     error
JSON_QUERY WITH UNCONDITIONAL ARRAY WRAPPER   [ "[1,2]" ]  [ [1,2] ]  [ "hi" ]
JSON_QUERY WITH CONDITIONAL ARRAY WRAPPER     [ "[1,2]" ]  [1,2]      [ "hi" ]

Not implemented:

  • JSON_TABLE

5.18.2. Constructor Functions

Operator syntax Description

JSON_OBJECT( { [ KEY ] name VALUE value [ FORMAT JSON ] | name : value [ FORMAT JSON ] } * [ { NULL | ABSENT } ON NULL ] )

Construct JSON object using a series of key (name) value (value) pairs

JSON_OBJECTAGG( { [ KEY ] name VALUE value [ FORMAT JSON ] | name : value [ FORMAT JSON ] } [ { NULL | ABSENT } ON NULL ] )

Aggregate function to construct a JSON object using a key (name) value (value) pair

JSON_ARRAY( { value [ FORMAT JSON ] } * [ { NULL | ABSENT } ON NULL ] )

Construct a JSON array using a series of values (value)

JSON_ARRAYAGG( value [ FORMAT JSON ] [ ORDER BY orderItem [, orderItem ]* ] [ { NULL | ABSENT } ON NULL ] )

Aggregate function to construct a JSON array using a value (value)

Note:

  • The flag FORMAT JSON indicates that the value is formatted as a JSON character string. When FORMAT JSON is used, the value is de-parsed from a JSON character string to a SQL structured value.

  • ON NULL clause defines how the JSON output represents null values. The default null behavior of JSON_OBJECT and JSON_OBJECTAGG is NULL ON NULL, and for JSON_ARRAY and JSON_ARRAYAGG it is ABSENT ON NULL.

  • If ORDER BY clause is provided, JSON_ARRAYAGG sorts the input rows into the specified order before performing aggregation.
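For example (a sketch; the NULL in the array is dropped because ABSENT ON NULL is the default for JSON_ARRAY):

SELECT JSON_OBJECT('name' VALUE 'Alice', 'age' VALUE 30) AS obj,  -- {"name":"Alice","age":30}
       JSON_ARRAY(1, 2, NULL) AS arr                              -- [1,2]
FROM (VALUES (0)) AS t(x);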

5.18.3. Comparison Operators

Operator syntax Description

jsonValue IS JSON [ VALUE ]

Whether jsonValue is a JSON value

jsonValue IS NOT JSON [ VALUE ]

Whether jsonValue is not a JSON value

jsonValue IS JSON SCALAR

Whether jsonValue is a JSON scalar value

jsonValue IS NOT JSON SCALAR

Whether jsonValue is not a JSON scalar value

jsonValue IS JSON OBJECT

Whether jsonValue is a JSON object

jsonValue IS NOT JSON OBJECT

Whether jsonValue is not a JSON object

jsonValue IS JSON ARRAY

Whether jsonValue is a JSON array

jsonValue IS NOT JSON ARRAY

Whether jsonValue is not a JSON array

The following operators are not in the SQL standard.

Operator syntax Description

JSON_TYPE(jsonValue)

Returns a string value indicating the type of a jsonValue

JSON_DEPTH(jsonValue)

Returns an integer value indicating the depth of a jsonValue

JSON_PRETTY(jsonValue)

Returns a pretty-printing of jsonValue

JSON_LENGTH(jsonValue [, path ])

Returns an integer indicating the length of jsonValue

JSON_KEYS(jsonValue [, path ])

Returns a string indicating the keys of jsonValue

JSON_REMOVE(jsonValue, path[, path])

Removes data from jsonValue using a series of path expressions and returns the result

Note:

  • JSON_TYPE / JSON_DEPTH / JSON_PRETTY return null if the argument is null

  • JSON_LENGTH / JSON_KEYS / JSON_REMOVE return null if the first argument is null

  • JSON_TYPE generally returns an upper-case string flag indicating the type of the JSON input. Currently supported type flags are:

    • INTEGER

    • STRING

    • FLOAT

    • DOUBLE

    • LONG

    • BOOLEAN

    • DATE

    • OBJECT

    • ARRAY

    • NULL

  • JSON_DEPTH defines a JSON value’s depth as follows:

    • An empty array, empty object, or scalar value has depth 1;

    • A non-empty array containing only elements of depth 1 or non-empty object containing only member values of depth 1 has depth 2;

    • Otherwise, a JSON document has depth greater than 2.

  • JSON_LENGTH defines a JSON value’s length as follows:

    • A scalar value has length 1;

    • The length of an array or object is the number of elements it contains.

Usage Examples:

5.18.4. JSON_TYPE example

SQL

SELECT JSON_TYPE(v) AS c1,
  JSON_TYPE(JSON_VALUE(v, 'lax $.b' ERROR ON ERROR)) AS c2,
  JSON_TYPE(JSON_VALUE(v, 'strict $.a[0]' ERROR ON ERROR)) AS c3,
  JSON_TYPE(JSON_VALUE(v, 'strict $.a[1]' ERROR ON ERROR)) AS c4
FROM (VALUES ('{"a": [10, true],"b": "[10, true]"}')) AS t(v)
LIMIT 10;

Result

c1      c2     c3       c4
OBJECT  ARRAY  INTEGER  BOOLEAN

5.18.5. JSON_DEPTH example

SQL

SELECT JSON_DEPTH(v) AS c1,
  JSON_DEPTH(JSON_VALUE(v, 'lax $.b' ERROR ON ERROR)) AS c2,
  JSON_DEPTH(JSON_VALUE(v, 'strict $.a[0]' ERROR ON ERROR)) AS c3,
  JSON_DEPTH(JSON_VALUE(v, 'strict $.a[1]' ERROR ON ERROR)) AS c4
FROM (VALUES ('{"a": [10, true],"b": "[10, true]"}')) AS t(v)
LIMIT 10;

Result

c1  c2  c3  c4
3   2   1   1

5.18.6. JSON_LENGTH example

SQL

SELECT JSON_LENGTH(v) AS c1,
  JSON_LENGTH(v, 'lax $.a') AS c2,
  JSON_LENGTH(v, 'strict $.a[0]') AS c3,
  JSON_LENGTH(v, 'strict $.a[1]') AS c4
FROM (VALUES ('{"a": [10, true]}')) AS t(v)
LIMIT 10;

Result

c1  c2  c3  c4
1   2   1   1

5.18.7. JSON_KEYS example

SQL

SELECT JSON_KEYS(v) AS c1,
  JSON_KEYS(v, 'lax $.a') AS c2,
  JSON_KEYS(v, 'lax $.b') AS c3,
  JSON_KEYS(v, 'strict $.a[0]') AS c4,
  JSON_KEYS(v, 'strict $.a[1]') AS c5
FROM (VALUES ('{"a": [10, true],"b": {"c": 30}}')) AS t(v)
LIMIT 10;

Result

c1          c2    c3     c4    c5
["a", "b"]  NULL  ["c"]  NULL  NULL

5.18.8. JSON_REMOVE example

SQL

SELECT JSON_REMOVE(v, '$[1]') AS c1
FROM (VALUES ('["a", ["b", "c"], "d"]')) AS t(v)
LIMIT 10;

Result

c1

["a", "d"]

5.19. Copying Data from one Table to another

Currently, the syntax CREATE TABLE ... AS SELECT ... is not allowed. If you want to copy data from one table to another, first create the new table and then run INSERT/UPSERT ... SELECT.

CREATE TABLE TDEST (f1_key BIGINT, f2_value VARBINARY, PRIMARY KEY(f1_key));

INSERT INTO TDEST SELECT * FROM TORIG;

5.20. GIS/Spatial Support

LeanXcale’s GIS support and its high performance over GIS data rely on the design and implementation of a new filtering algorithm based on Geohash. Geohash is a public domain geocode system invented in 2008 by Gustavo Niemeyer (building on similar work by G.M. Morton in 1966), which encodes a geographic location into a short string of letters and digits. Geohashes offer properties like arbitrary precision and the possibility of gradually removing characters from the end of the code to reduce its size (gradually losing precision). LeanXcale uses this gradual precision handling to perform ST geometric and geographic operations over GIS data, following the Geohash principle that nearby places usually share similar prefixes. In particular, LeanXcale’s geometry predicates and operators have been built on this premise.

In this section, we will follow the conventions as below:

In the “C” (for “compatibility”) column, “o” indicates that the function implements the OpenGIS Simple Features Implementation Specification for SQL, version 1.2.1; “p” indicates that the function is a PostGIS extension to OpenGIS.

5.20.1. Geohash indexes

LeanXcale can speed up queries over geospatial data by using Geohash indexing. Geohash encodes a geographic location in Base 32, where the even bits represent longitude precision and the odd bits represent latitude precision. Geohash has some interesting properties: from an envelope you can narrow down the bins in which you have to look for geometries.


The geohash index can be the primary key of a table or a secondary index. In general, the primary key performs better and is the preferred way to define a geometry table.

To create a table with geohash you need the following syntax:

  • Create a table whose primary key is a Geohash key. This will create a hidden geohash field in your table that will be used as primary key

CREATE TABLE geohashedcountries(
  countrycode VARCHAR,
  geomLocation VARCHAR,
  name VARCHAR,
  PRIMARY GEOHASH KEY(geomLocation));

  • Create the table, this time with a secondary index for the geohash. Again a hidden geohash field will be used:

CREATE TABLE geohashedcountries(
  countrycode VARCHAR,
  geomLocation VARCHAR,
  name VARCHAR,
  PRIMARY KEY(name),
  GEOHASH(geomLocation));

Instead of a geometry field, which can hold an arbitrary geometry, you may use two fields that will be used as latitude and longitude:

CREATE TABLE geohashedcitypoints(
  citycode VARCHAR,
  latitude DOUBLE,
  longitude DOUBLE,
  name VARCHAR,
  PRIMARY GEOHASH KEY(latitude, longitude));

When running a query, LeanXcale’s query optimizer will automatically detect the Geohash index and use it to narrow down your search:

EXPLAIN PLAN FOR (
  SELECT name FROM geohashedcountries
  WHERE ST_Contains(
    ST_Buffer(ST_MakePoint(cast(-3.67 as double), cast(40.42 as double)), cast(0.5 as double)),
    ST_GeomFromText(geomLocation)
  ));

PLAN=EnumerableCalc(expr#0..2=[{inputs}], expr#3=[-3.67:DECIMAL(19, 0)], expr#4=[40.42:DECIMAL(19, 0)], expr#5=[ST_MAKEPOINT($t3, $t4)], expr#6=[0.5:DOUBLE], expr#7=[ST_BUFFER($t5, $t6)], expr#8=[ST_GEOMFROMTEXT($t0)], expr#9=[ST_CONTAINS($t7, $t8)], NAME=[$t1], $condition=[$t9])
            KiviPKTableScanRel(table=[[leanxcale, ADHOC, GEOHASHEDCOUNTRIES, filter:ST_GH_in($2, ST_GH_minMaxRegions(ST_BUFFER(ST_MAKEPOINT(-3.67:DOUBLE(19, 0), 40.42:DOUBLE(19, 0)), 0.5:DOUBLE))), project:[1, 2, 3]]], project=

Most of the time, geohash handling is done internally, so there is no need to explicitly define any geohash condition. Nevertheless, the following functions are supported to query through the Geohash indexes:

ResultType Function Description

boolean

ST_GH_in(String geohash, String[][] minMaxRegions)

Checks whether a geohash falls within any region of the list

String[][]

ST_GH_minMaxRegions(Geom geom)

Calculates the min/max geohash regions (bins) the geom belongs to

String[]

ST_GH_encodeGeom(Geom geom)

Encodes the given geometry into a list of geohashes that contain it. The first item in the list is the center

String

ST_GH_minGH(String geohash)

Generate the min value in geohash region

String

ST_GH_maxGH(String geohash)

Generate the max value in geohash region

String

ST_GH_encodeLatLon(double latitude, double longitude, int precision)

Encodes the given latitude and longitude into a geohash with the indicated precision (number of characters==number of 5bits groups)

String

ST_GH_encodeLatLon(double latitude, double longitude)

Encodes the given latitude and longitude into a geohash. Default precision is 12

String

ST_GH_encodeFromBinaryString(String binaryString)

Encodes the given binary string into a geohash.

String

ST_GH_encodeFromLong(long hashVal, int precision)

Encodes the given long into a geohash.

Double[]

ST_GH_decode(String geoHash)

Decodes the given geohash into a latitude and longitude

long

ST_GH_decodeToLong(String geoHash)

Decodes the given geoHash into bits as long value

String

ST_GH_decodeToBinaryString(String geoHash)

Decodes the given geoHash into a binary string

String

ST_GH_adjacent(String geoHash)

Returns the 8 adjacent hashes in the following order: N, NE, E, SE, S, SW, W, NW

String

ST_GH_regionsWithinDistance(double latitude, double longitude, double distance)

Returns the hashes that include the points within the specified distamce

String[]

ST_GH_regionsWithinDistance(Geom geom, Object distance)

Returns the hashes that include the points within the specified distance from the Geom.

String

ST_GH_northernNeighbour(String geoHash)

Returns the immediate neighbouring hash to the north

String

ST_GH_southernNeighbour(String geoHash)

Returns the immediate neighbouring hash to the south

String

ST_GH_westernNeighbour(String geoHash)

Returns the immediate neighbouring hash to the west

String

ST_GH_easternNeighbour(String geoHash)

Returns the immediate neighbouring hash to the east

Double[]

ST_GH_boundingBox(String geoHash)

Return the list of geohash limits for: min Latitude, min Longitude, max Latitude, max Longitude
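
As an illustrative sketch, the encode/decode pair and the bounding-box helper can be combined directly in SQL. The coordinates are arbitrary example values and the results are not shown, since they depend on the geohash precision chosen:

```sql
-- Encode a latitude/longitude into a geohash; the precision argument (6)
-- means 6 characters, i.e. 6 groups of 5 bits
SELECT ST_GH_encodeLatLon(40.42, -3.67, 6);

-- Decode a geohash back into a (latitude, longitude) pair, and get the
-- region limits: min latitude, min longitude, max latitude, max longitude
SELECT ST_GH_decode(ST_GH_encodeLatLon(40.42, -3.67, 6));
SELECT ST_GH_boundingBox(ST_GH_encodeLatLon(40.42, -3.67, 6));
```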

5.20.2. Geometry conversion functions (2D)

C | Operator syntax | Description
p | ST_AsText(geom) | Alias for ST_AsWKT
o | ST_AsWKT(geom) | Converts geom → WKT
o | ST_GeomFromText(wkt [, srid ]) | Returns a specified GEOMETRY value from WKT representation
o | ST_LineFromText(wkt [, srid ]) | Converts WKT → LINESTRING
o | ST_MLineFromText(wkt [, srid ]) | Converts WKT → MULTILINESTRING
o | ST_MPointFromText(wkt [, srid ]) | Converts WKT → MULTIPOINT
o | ST_MPolyFromText(wkt [, srid ]) | Converts WKT → MULTIPOLYGON
o | ST_PointFromText(wkt [, srid ]) | Converts WKT → POINT
o | ST_PolyFromText(wkt [, srid ]) | Converts WKT → POLYGON
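
For instance, a round trip through WKT can be sketched as follows (the point literal is an arbitrary example):

```sql
-- Parse a WKT literal into a GEOMETRY and serialize it back to WKT
SELECT ST_AsWKT(ST_PointFromText('POINT(-3.67 40.42)'));
```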

5.20.3. Geometry creation functions (2D)

C | Operator syntax | Description
o | ST_MakeLine(point1 [, point ]*) | Creates a line-string from the given POINTs (or MULTIPOINTs)
p | ST_MakePoint(x, y [, z ]) | Alias for ST_Point
o | ST_Point(x, y [, z ]) | Constructs a point from two or three coordinates
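
A line-string can be built from constructed points; a minimal illustrative sketch:

```sql
-- Build two points and join them into a LINESTRING
SELECT ST_AsWKT(ST_MakeLine(ST_Point(0.0, 0.0), ST_Point(1.0, 1.0)));
```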

5.20.4. Geometry properties (2D)

C | Operator syntax | Description
o | ST_Boundary(geom [, srid ]) | Returns the boundary of geom
o | ST_Distance(geom1, geom2) | Returns the distance between geom1 and geom2
o | ST_GeometryType(geom) | Returns the type of geom
o | ST_GeometryTypeCode(geom) | Returns the OGC SFS type code of geom
o | ST_Envelope(geom [, srid ]) | Returns the envelope of geom (which may be a GEOMETRYCOLLECTION) as a GEOMETRY
o | ST_X(geom) | Returns the x-value of the first coordinate of geom
o | ST_Y(geom) | Returns the y-value of the first coordinate of geom

5.20.5. Geometry properties (3D)

C | Operator syntax | Description
p | ST_Is3D(geom) | Returns whether geom has at least one z-coordinate
o | ST_Z(geom) | Returns the z-value of the first coordinate of geom

5.21. Geometry predicates

C | Operator syntax | Description
o | ST_Contains(geom1, geom2) | Returns whether geom1 contains geom2
p | ST_ContainsProperly(geom1, geom2) | Returns whether geom1 contains geom2 but does not intersect its boundary
o | ST_Crosses(geom1, geom2) | Returns whether geom1 crosses geom2
o | ST_Disjoint(geom1, geom2) | Returns whether geom1 and geom2 are disjoint
p | ST_DWithin(geom1, geom2, distance) | Returns whether geom1 and geom2 are within distance of one another
o | ST_EnvelopesIntersect(geom1, geom2) | Returns whether the envelope of geom1 intersects the envelope of geom2
o | ST_Equals(geom1, geom2) | Returns whether geom1 equals geom2
o | ST_Intersects(geom1, geom2) | Returns whether geom1 intersects geom2
o | ST_Overlaps(geom1, geom2) | Returns whether geom1 overlaps geom2
o | ST_Touches(geom1, geom2) | Returns whether geom1 touches geom2
o | ST_Within(geom1, geom2) | Returns whether geom1 is within geom2
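
Using the geohashedcitypoints table created earlier, a typical proximity predicate can be sketched as follows (the reference point and the 0.5 distance, expressed in coordinate-system units, are arbitrary example values):

```sql
-- Cities whose point lies within 0.5 units of a reference point
SELECT name
FROM geohashedcitypoints
WHERE ST_DWithin(ST_MakePoint(longitude, latitude),
                 ST_MakePoint(-3.67, 40.42),
                 0.5);
```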

5.21.1. Geometry operators (2D)

The following functions combine 2D geometries.

C | Operator syntax | Description
o | ST_Buffer(geom, distance [, quadSegs | style ]) | Computes a buffer around geom
o | ST_Union(geom1, geom2) | Computes the union of geom1 and geom2
o | ST_Union(geomCollection) | Computes the union of the geometries in geomCollection

See also: the ST_Union aggregate function.

5.21.2. Geometry projection functions

C | Operator syntax | Description
o | ST_SetSRID(geom, srid) | Returns a copy of geom with a new SRID
o | ST_Transform(geom, srid) | Transforms geom from one coordinate reference system (CRS) to the CRS specified by srid
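
For example, reprojecting a point can be sketched as follows (assuming SRIDs 4326, WGS84, and 3857, Web Mercator, are available in the CRS catalog):

```sql
-- Tag a WGS84 point with its SRID, then transform it to Web Mercator
SELECT ST_Transform(
         ST_SetSRID(ST_GeomFromText('POINT(-3.67 40.42)'), 4326),
         3857);
```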

6. Select for Update

Snapshot isolation can suffer from write skew (described in more detail in the Concepts document). To prevent this kind of situation you can use the SELECT FOR UPDATE syntax. When using the SELECT FOR UPDATE command, the transaction will raise an exception whenever any other concurrent transaction changes the selected rows.

The grammar is quite simple:

SELECT_FOR_UPDATE <select statement>

When should you use SELECT_FOR_UPDATE? Imagine the following scenario: we want to change PK1’s balance while preserving the invariant that the sum of the PK1 and PK2 balances cannot be less than 10. Hence, we need to prevent other transactions from updating PK2 while PK1’s balance is being changed.

The initial situation is:

select i2, i, balance, v from ACCOUNTS where i2=2 or i2=1;
+----+---+---------+---+
| I2 | I | BALANCE | V |
+----+---+---------+---+
| 1  | 1 | 8.0     | a |
| 2  | 2 | 4.0     | c |
+----+---+---------+---+

select sum(balance) from ACCOUNTS where i2=1 or i2=2;
+--------+
| EXPR$0 |
+--------+
| 12.0   |
+--------+

We have 2 concurrent sessions:

  • Session-1 is changing the balance of PK-1:

SELECT_FOR_UPDATE SELECT i2, i, balance, v from ACCOUNTS where i2=2;
+----+---+---------+---+
| I2 | I | BALANCE | V |
+----+---+---------+---+
| 2  | 2 | 4.0     | c |
+----+---+---------+---+
UPDATE ACCOUNTS SET balance = 7 WHERE i2=1;
COMMIT;

  • Session-2 is changing the balance of PK-2:

SELECT_FOR_UPDATE SELECT i2, i, balance, v from ACCOUNTS where i2=1;
+----+---+---------+---+
| I2 | I | BALANCE | V |
+----+---+---------+---+
| 1  | 1 | 6.0     | c |
+----+---+---------+---+
UPDATE ACCOUNTS SET balance = 2 WHERE i2=2;
COMMIT;

Without the SELECT_FOR_UPDATE, both operations would have succeeded and the final balances would be (7, 2), whose sum of 9 violates the invariant.

With the SELECT_FOR_UPDATE, one of the transactions raises an exception reporting a conflict with the SELECT_FOR_UPDATE of the other session, preventing this kind of anomaly:

Error: Error 0 (40001) : Error while executing SQL "update ACCOUNTS set balance = 2 where i2=2": Remote driver error: Connection 3e5a532e-4a12-4549-9ca0-d6fdc31e3168 cannot commit due to write-write conflict with a concurrent transaction: java.lang.RuntimeException: java.sql.SQLException: Error while executing SQL "update ACCOUNTS set balance = 8 where i2=2": LTM error: ConflictManagerException. Aborting txn 80695001. Transaction rollbacked (state=40001,code=0)

7. DDL Grammar

The basic DDL grammar allows the user to create tables and indexes.

ddlStatement:
      createTableStatement
  |   createOnlineAggregate
  |   dropTableStatement
  |   truncateTableStatement
  |   recreateTableStatement
  |   alterTableStatement
  |   createIndexStatement
  |   dropIndexStatement
  |   createSequenceStatement
  |   dropSequenceStatement
  |   GRANT permissionModification
  |   REVOKE permissionModification
  |   EXEC tableFunction


createTableStatement:
      CREATE [DELTA] TABLE
      [ IF NOT EXISTS ] name
      [ (KEEP | DISABLE) MVCC ]
      [ '(' tableElement [, tableElement ]* ')' ]
      [ AS query ]

tableElement:
      tableColumn
  |   columnName
  |   [ CONSTRAINT name ] tableConstraint

tableColumn:
      columnName [( SUM | MIN | MAX | COUNT )] type [ columnNullModifier ] [ columnGenerator ] [ columnKeyModifier ]

columnGenerator:
      DEFAULT expression
  |   [ GENERATED ALWAYS ] AS IDENTITY
      [ START WITH initialValue ]
      [ INCREMENT BY incrementValue ]

columnKeyModifier:
      PRIMARY KEY
  |   AUTOSPLIT splitperiod [ AUTOREMOVE AFTER persistperiod ]

columnNullModifier:
      [ NOT ] NULL

tableConstraint:
      PRIMARY KEY '(' columnName [, columnName ]* ')'
  |   GEOHASH '(' columnName [, columnName ]* ')'
  |   HASH '(' columnName [, columnName ]* ')' TO DISTRIBUTE
  |   DISTRIBUTE UNIFORM FROM values TO values
      [ MINMAXS distributeUniformMinMax ]
      [ IN numberOfPartitions PARTITIONS ]
  |   FOREIGN KEY '(' columnName [, columnName ]* ')'
      REFERENCES tableName '(' columnName [, columnName ]* ')'
  |   UNIQUE '(' columnName [, columnName ]* ')'
  |   CHECK expression

distributeUniformMinMax:
      RESTRICT
  |   [ FROM values ] [ TO values ]

createOnlineAggregate:
      CREATE [MANUAL] ONLINE AGGREGATE [INDEX] [ IF NOT EXISTS ] aggregateName
      AS { * | projectItem [, projectItem ]* }
      FROM tableExpression
      [WHERE booleanExpression]
      [ GROUP BY { * | projectItem [, projectItem ]* } ]
      [ AUTOSPLIT columnName splitperiod [ AUTOREMOVE AFTER persistperiod ] ]

dropTableStatement:
      DROP TABLE [ IF EXISTS ] name

truncateTableStatement:
      TRUNCATE TABLE name

recreateTableStatement:
      RECREATE TABLE tableName [ WITH PARTITIONS number ] USE AS SAMPLE

alterTableStatement:
      ALTER TABLE name alterTableAction

alterTableAction:
      RENAME TO newName
  |   DROP COLUMN columnName
  |   DROP constraintName
  |   ADD COLUMN tableColumn
  |   ADD [ CONSTRAINT name ] tableConstraint
  |   ADD PARTITION '(' columnName [, columnName ]* ')'
      FOR values [ MOVE [ TO address ] ]

createIndexStatement:
      CREATE [UNIQUE] INDEX [ IF NOT EXISTS ] name
      ON tableName '(' columnName [, columnName ]* ')'

dropIndexStatement:
      DROP INDEX [ IF EXISTS ] name

createSequenceStatement:
      CREATE SEQUENCE [ IF NOT EXISTS ] name sequenceModifier*

sequenceModifier:
      AS type
  |   START WITH initialValue
  |   INCREMENT BY incrementValue
  |   MAXVALUE maxValue
  |   MINVALUE minValue
  |   CYCLE
  |   CACHE cacheSize
  |   NO ( MAXVALUE | MINVALUE | CYCLE | CACHE )

dropSequenceStatement:
      DROP SEQUENCE [ IF EXISTS ] name

permissionModification:
      ( READ | WRITE | ALTER ) [,  ( READ | WRITE | ALTER )* ]
      ON ( TABLE | SEQUENCE | SCHEMA ) name
      TO user [, user* ]

createTrigger:
      CREATE TRIGGER [ IF NOT EXISTS ] triggerName
      (BEFORE | AFTER)
      [INSERT] [UPDATE] [DELETE]
      ON tableName FOR EACH ROW EXECUTE
      triggerFunction ['(' stringLiteral [, stringLiteral ]* ')']
      [PRIORITY intLiteral]

In createTableStatement, if you specify AS query, you may omit the list of tableElements, or you can omit the data type of any tableElement, in which case it just renames the underlying column.
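
For instance, the AS query form can be used as follows (an illustrative sketch reusing the ACCOUNTS table from the SELECT_FOR_UPDATE examples above; the new table name is arbitrary):

```sql
-- Create and populate a table from a query; the column list and data
-- types are derived from the query itself
CREATE TABLE rich_accounts AS
  SELECT i, balance FROM ACCOUNTS WHERE balance > 5;
```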

Besides the standard DDL syntax, there are some additional features that allow the user to take advantage of capabilities derived from the nature of LeanXcale as a distributed database.

7.1. Partitioning

You can partition a table when you create it or you can add a partition afterwards.

7.1.1. Creating a table partitioned by Hash

Hashing is a very simple partitioning scheme with the advantage of being very good at balancing data across datastores. The database engine calculates a hash value from the key (or part of it) and distributes data according to the modulus of the hash value. The disadvantage of hashing is that, for scans, the system has to scan all datastores because the data can be stored in any of them. Thus, scans require more resources than with key-range partitioning.

The syntax for creating a table partitioned and distributed by hash is:

CREATE TABLE TableName (
  {Field Definition},
  {Primary Key Definition},
  HASH({List of Primary Key Fields to include}) TO DISTRIBUTE
);

The command will create a new table including a hash field, as defined for hash partitions, and it will create as many partitions as there are datastores in the distributed database.

A new column, HASHID, will be created as part of the primary key. This will be the last field in the table. For insert operations, the HASHID field can be filled with 0 and the database will take care of calculating the right value.

The fewer primary key fields are part of the hash, the more efficient scans will be, so it is important to keep the HASH as simple as possible. On the other hand, you need enough fields for the distribution to be balanced: if you use only one field whose distribution is very skewed, it might not provide a good balance.

Example:

CREATE TABLE TABLE_DH (
  f1 INTEGER,
  f2 BIGINT,
  f3 VARCHAR,
  PRIMARY KEY (f1, f2),
  HASH(f1) TO DISTRIBUTE
);
INSERT INTO TABLE_DH VALUES (1, 1, 'abc', 0);

7.1.2. Creating a table partitioned by a uniform key range

Partitioning by key range is the most efficient way to partition data, but in some situations you cannot know your data distribution in advance.

If you know it, you can define your partitions manually using the ALTER command described later.

If your distribution is approximately uniform - or just as a simple way to start - you can create a table with a uniform partitioning schema. The DISTRIBUTE UNIFORM clause allows you to define a strategy to distribute data based on primary key partitions. The basic syntax is as follows:

CREATE TABLE UNIFORMTABLE (
  id INT,
  id2 VARCHAR,
  b1 BLOB,
  name VARCHAR,
  PRIMARY KEY (id, id2),
  DISTRIBUTE UNIFORM FROM VALUES(1,'0000') TO VALUES(2,'9999')
);

This command will create a table called UNIFORMTABLE with as many regions as datastores. The regions (or partitions) will have split points uniformly distributed from the vector (1,'0000') to (2,'9999').

However, this is quite tricky. Let’s see it with some examples:

  • Let’s say you want to calculate the mid value between (1, '0000') and (3, '0000'). This is simple: (2, '0000')

  • But what is the mid point between (1, '0000') and (2, '0000')? The answer is less obvious: it depends on the possible values the second field can take. If you set no condition and assume the string can take any byte value, the split value could be something like (1, '>>>>'). If we limit the values to digits only, then the split value would be (1, '4999').

You may have wanted to do something different, and this is where constraints can play a role, through an additional syntax term: MINMAXS. It allows you to define a vector with the minimum and maximum values for each field in the FROM/TO vectors. In our example:

CREATE TABLE UNIFORMTABLE (
  id INT,
  id2 VARCHAR,
  b1 BLOB,
  name VARCHAR,
  PRIMARY KEY (id, id2),
  DISTRIBUTE UNIFORM FROM VALUES(1,'0000') TO VALUES(2,'9999')
    MINMAXS FROM VALUES(1, '0000') TO VALUES(2, '9999')
);

MINMAXS allows some modifiers:

  • MINMAXS RESTRICT is equivalent to setting the FROM/TO values as minimum and maximum, so in the SQL above we could have replaced the vectors by RESTRICT.

  • The MINMAXS vector allows the use of NULL. Declaring a value as NULL means using the default maximum value for the range. The default ranges are: BYTE [0, 255], SHORT [0, 32767], INT [0, 2147483647], LONG [0, maximum value for an 8-byte long], VARCHAR (characters limited to ASCII values from ' ' (0x20) to '}' (0x7d)).

You can also add an optional IN {N} PARTITIONS clause to specify explicitly the number of regions you want for the table.

The following SQL statements are valid:

create table ut3 (
  id BIGINT,
  id2 TIMESTAMP,
  id3 BLOB,
  name VARCHAR,
  PRIMARY KEY (id, id2),
  DISTRIBUTE UNIFORM FROM
    VALUES(1,{ts '2020-01-01 00:00:00'}) TO VALUES(2,{ts '2020-08-01 00:00:00'})
);

create table ut3 (
  id BIGINT,
  id2 TIMESTAMP,
  id3 BLOB,
  name VARCHAR,
  PRIMARY KEY (id, id2),
  DISTRIBUTE UNIFORM FROM
    VALUES(1,{ts '2020-01-01 00:00:00'}) TO VALUES(2,{ts '2020-08-01 00:00:00'})
    MINMAXS FROM VALUES(1, {ts '2020-01-01 00:00:00'})
      TO VALUES(2, {ts '2020-08-01 00:00:00'})
);

create table ut3 (
  id BIGINT,
  id2 TIMESTAMP,
  id3 BLOB,
  name VARCHAR,
  PRIMARY KEY (id, id2),
  DISTRIBUTE UNIFORM FROM
    VALUES(1,{ts '2020-01-01 00:00:00'}) TO VALUES(2,{ts '2020-08-01 00:00:00'})
    MINMAXS DEFAULTS IN 2 PARTITIONS
);


create table ut3 (
  id BIGINT,
  id2 TIMESTAMP,
  id3 BLOB,
  name VARCHAR,
  PRIMARY KEY (id, id2),
  DISTRIBUTE UNIFORM FROM
    VALUES(1,{ts '2020-01-01 00:00:00'}) TO VALUES(2,{ts '2020-08-01 00:00:00'})
    MINMAXS RESTRICT IN 4 PARTITIONS
);

7.1.3. Adding a partition

The following grammar allows the user to split a table based on a specific criterion for the primary key:

ALTER TABLE TableName ADD PARTITION({Columns in Primary Key})
  FOR ( VALUES({Field values for the split point}) | SELECT {Field values for the split point} FROM TableName ... )
    [ MOVE | NOMOVE ]

MOVE is the default behavior. It means that the newly created upper partition will be moved to another datastore.

Example:

ALTER TABLE warehouse ADD PARTITION(w_id) FOR VALUES (100) MOVE;

If the table had no partitions, as a result of the above command you would have 2 partitions: the first covering the range (-infinity, 100) and the second covering [100, infinity).

You can check the partitions of a table by querying the system table LXSYSMETA.TABLE_PARTITIONS.

7.1.4. Partitioning based on a table sample

As previously said, sometimes it is difficult to know the distribution of your keys. However, if you have a representative sample of the data, you can create the table, load it with the sample data, and then use the RECREATE TABLE command to recreate the table according to the histogram of the keys in the sample you uploaded.

The grammar is:

RECREATE TABLE TableName WITH PARTITIONS Number USE AS SAMPLE

So, follow these steps:

  1. CREATE TABLE with a simple create table syntax.

  2. Upload a sample of data that is representative of the histogram of keys.

  3. RECREATE TABLE.

Note: after the RECREATE command, the sample data will be removed, so if you need to keep it you will have to upload it again.
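
The whole flow can be sketched as follows (the table name, columns, and sample rows are illustrative):

```sql
-- 1. Create the table with plain syntax
CREATE TABLE trips (
  id BIGINT,
  ts TIMESTAMP,
  name VARCHAR,
  PRIMARY KEY (id, ts)
);

-- 2. Upload a sample that is representative of the real key distribution
INSERT INTO trips VALUES (1, {ts '2020-01-01 00:00:00'}, 'a');
-- ... more sample rows, e.g. via a bulk loader ...

-- 3. Repartition according to the histogram of the sample keys
--    (remember the sample data is removed afterwards)
RECREATE TABLE trips WITH PARTITIONS 4 USE AS SAMPLE;
```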

7.1.5. Bidimensional partitioning

Bidimensional partitioning is an automatic partitioning schema that is normally set up on top of one of the previous partitioning schemas. You define a time-evolving parameter and a criterion that will cause the system to automatically partition your data to make the best use of resources.

For this, basically, you define the field to be used for bidimensional control with the AUTOSPLIT modifier.

The AUTOSPLIT modifier goes with a value definition that can be:

  • {N}i : do automatic partitioning when value N, treated as a dimensionless number, is reached

  • {N}s : do automatic partitioning when N seconds have passed since the last partition

  • {N}d : do automatic partitioning when N days have passed since the last partition

  • {N}% : do automatic partitioning when the region is expected to reach N% of the datastore memory

  • AUTO : the system will decide using the default partitioning criteria

An additional AUTOREMOVE term can be used to define when to remove partitions. Similarly, AUTOREMOVE allows you to define the retention period, but only in these 3 ways:

  • {N}i : remove when value N, treated as a dimensionless number, has been reached, counted from the lowest limit of the partition when it was created

  • {N}s : remove N seconds after the limit when the partition was created

  • {N}d : remove N days after the limit when the partition was created

Below you can see a few examples:

CREATE TABLE bidiTable (
  id BIGINT,
  id2 BIGINT AUTOSPLIT '0i' AUTOREMOVE AFTER '0d',
  name VARCHAR,
  PRIMARY KEY (id, id2)
);

CREATE TABLE bidiTable (
  id BIGINT,
  id2 BIGINT AUTOSPLIT '12%',
  name VARCHAR,
  PRIMARY KEY (id, id2)
);

CREATE TABLE bidiTable (
  id BIGINT,
  id2 BIGINT AUTOSPLIT 'AUTO' AUTOREMOVE AFTER '220i',
  name VARCHAR,
  PRIMARY KEY (id, id2)
);

7.2. System Tables

LeanXcale’s query engine provides a series of virtual tables that represent system information that can be useful for the end user or the administrator.

The following sections provide some information about the available system virtual tables.

7.2.1. SYSMETA.TABLES

It shows all the tables in the database instance.

> select * from sysmeta.tables;
+----------+------------+--------------+--------------+---------+---------+-----------+----------+------------------------+---------------+
| tableCat | tableSchem |  tableName   |  tableType   | remarks | typeCat | typeSchem | typeName | selfReferencingColName | refGeneration |
+----------+------------+--------------+--------------+---------+---------+-----------+----------+------------------------+---------------+
|          | APP        | CUSTOMER     | TABLE        |         |         |           |          |                        |               |
|          | APP        | DISTRICT     | TABLE        |         |         |           |          |                        |               |
|          | APP        | HISTORY      | TABLE        |         |         |           |          |                        |               |
+----------+------------+--------------+--------------+---------+---------+-----------+----------+------------------------+---------------+

7.2.2. SYSMETA.COLUMNS

It shows all the column information related to the tables.

> select * from sysmeta.columns;
+----------+------------+--------------+------------------------+----------+-----------------------------------+------------+--------------+---------------+--------------+----------+-------+
| tableCat | tableSchem |  tableName   |       columnName       | dataType |             typeName              | columnSize | bufferLength | decimalDigits | numPrecRadix | nullable | remar |
+----------+------------+--------------+------------------------+----------+-----------------------------------+------------+--------------+---------------+--------------+----------+-------+
|          | APP        | CUSTOMER     | C_ID                   | 4        | INTEGER                           | -1         | null         | null          | 10           | 1        |       |
|          | APP        | CUSTOMER     | C_D_ID                 | 4        | INTEGER                           | -1         | null         | null          | 10           | 1        |       |
|          | APP        | CUSTOMER     | C_W_ID                 | 4        | INTEGER                           | -1         | null         | null          | 10           | 1        |       |
|          | APP        | CUSTOMER     | C_FIRST                | 12       | VARCHAR                           | -1         | null         | null          | 10           | 1        |       |
|          | APP        | CUSTOMER     | C_MIDDLE               | 12       | VARCHAR                           | -1         | null         | null          | 10           | 1        |       |
...

7.2.3. LXSYSMETA.PRIMARY_KEYS

It shows all the primary keys related to the tables in the system.

7.2.4. LXSYSMETA.FOREIGN_KEYS

It shows all the foreign keys related to the tables in the system.

7.2.5. LXSYSMETA.INDEXES

It shows all the indexes created for the tables in the system.

> select * from lxsysmeta.indexes;
+-----------+----------------+-------------+------+-----------------+------------+-----------+-------------+-------+-----------------+----------+------------+-----------+
| nonUnique | indexQualifier |  indexName  | type | ordinalPosition | columnName | ascOrDesc | cardinality | pages | filterCondition | tableCat | tableSchem | tableName |
+-----------+----------------+-------------+------+-----------------+------------+-----------+-------------+-------+-----------------+----------+------------+-----------+
| false     | APP            | IX_ORDERS   | 2    | 2               | O_W_ID     | A         | 0           | 0     |                 | tpcc     | APP        | ORDERS    |
| false     | APP            | IX_ORDERS   | 2    | 1               | O_D_ID     | A         | 0           | 0     |                 | tpcc     | APP        | ORDERS    |
| false     | APP            | IX_ORDERS   | 2    | 3               | O_C_ID     | A         | 0           | 0     |                 | tpcc     | APP        | ORDERS    |
| false     | APP            | IX_CUSTOMER | 2    | 2               | C_W_ID     | A         | 0           | 0     |                 | tpcc     | APP        | CUSTOMER  |
| false     | APP            | IX_CUSTOMER | 2    | 1               | C_D_ID     | A         | 0           | 0     |                 | tpcc     | APP        | CUSTOMER  |
| false     | APP            | IX_CUSTOMER | 2    | 5               | C_LAST     | A         | 0           | 0     |                 | tpcc     | APP        | CUSTOMER  |
+-----------+----------------+-------------+------+-----------------+------------+-----------+-------------+-------+-----------------+----------+------------+-----------+

7.2.6. LXSYSMETA.TABLE_CHECKS

It shows all the table checks (NOT NULL, …).

7.2.7. LXSYSMETA.TRANSACTIONS

It shows all active transactions in the Query Engine, relating the information to the connection. The following fields are shown:

  • txnId: The internal unique identifier of the transaction

  • state: The state of the transaction

  • startTs: This is the timestamp that controls the visibility of the transaction according to snapshot isolation principles.

  • startTime: This is the time from the epoch in milliseconds when the transaction was started.

  • commitTimeStamp: It is usually -1, meaning that the transaction is not in its COMMIT phase yet. Since COMMIT is a short phase of the transaction, you will seldom see a transaction with a meaningful COMMIT timestamp.

  • sessionId: This is the internal session identifier in the Query Engine

  • connectionId: This allows relating the transaction to its connection.

  • uid: Identifier of the user who owns the session in which the transaction is being done.

  • connectionMode: Mode of the connection.

  • numPendingScans: Number of SCANs that the transaction started but that have not finished

  • numTotalScans: Total number of SCANs for the transaction

  • hasWrite: True if the transaction did any write operation (UPDATE, INSERT)

> select * from lxsysmeta.transactions;
+-------------+--------+-------------+-----------------+-----------------+-----------+--------------------------------------+-----+----------------+-----------------+---------------+-------+
|    txnId    | state  |   startTs   |    startTime    | commitTimeStamp | sessionId |             connectionId             | uid | connectionMode | numPendingScans | numTotalScans | hasWr |
+-------------+--------+-------------+-----------------+-----------------+-----------+--------------------------------------+-----+----------------+-----------------+---------------+-------+
| 44011298001 | ACTIVE | 44011298000 | 526814361087000 | -1              | 1217      | 835e1f1a-cd0a-4766-9e53-182ed1bb39d2 |     | RUN            | 0               | 0             | false |
+-------------+--------+-------------+-----------------+-----------------+-----------+--------------------------------------+-----+----------------+-----------------+---------------+-------+

7.2.8. LXSYSMETA.CONNECTIONS

It shows information about all existing connections.

> select * from lxsysmeta.connections;
+--------------------------------------+--------------------------------------+-----------+-------+----------------+-------------------------------------------------------------------------+
|             connectionId             |           kiviConnectionId           | isCurrent | dsuid | connectionMode |                                                                         |
+--------------------------------------+--------------------------------------+-----------+-------+----------------+-------------------------------------------------------------------------+
| 0b7ef86b-924f-4a1b-bf28-2e0661cd47a2 | 835e1f1a-cd0a-4766-9e53-182ed1bb39d2 | true      | app   | RUN            | {parserFactory=com.leanxcale.calcite.ddl.SqlDdlParserImpl#FACTORY, sche |
+--------------------------------------+--------------------------------------+-----------+-------+----------------+-------------------------------------------------------------------------+

7.2.9. LXSYSMETA.ONLINE_AGGREGATIONS

It shows information about the existing online aggregation relations.

> select * from LXSYSMETA.ONLINE_AGGREGATIONS where onlineRelationTable = 'AGG_TRIPS' order by ordinalPosition;
+---------------------+-----------------+------------+---------+-----------------------+----------+------------+-----------+
| onlineRelationTable | ordinalPosition |    name    | aggType |     aggExpression     | tableCat | tableSchem | tableName |
+---------------------+-----------------+------------+---------+-----------------------+----------+------------+-----------+
| AGG_TRIPS           | 1               | DURATION   | SUM     | CAST2INT[$DURATION]   | demo     | APP        | TRIPS     |
| AGG_TRIPS           | 2               | MAX_START  | MAX     | $START_TS             | demo     | APP        | TRIPS     |
| AGG_TRIPS           | 3               | MIN_START  | MIN     | $START_TS             | demo     | APP        | TRIPS     |
| AGG_TRIPS           | 4               | PASSENGERS | SUM     | CAST2INT[$PASSENGERS] | demo     | APP        | TRIPS     |
| AGG_TRIPS           | 5               | COUNT_ALL  | COUNT   |                       | demo     | APP        | TRIPS     |
+---------------------+-----------------+------------+---------+-----------------------+----------+------------+-----------+

7.2.10. LXSYSMETA.ONLINE_AGGREGATES

It shows information about the GROUP BY expressions for existing online aggregation relations.

> select * from LXSYSMETA.ONLINE_AGGREGATES where onlineRelationTable = 'AGG_TRIPS' order by ordinalPosition;
+---------------------+-----------------+-----------+---------------------------------------------------------------------------+----------+------------+-----------+
| onlineRelationTable | ordinalPosition |   name    |                                  groupBy                                  | tableCat | tableSchem | tableName |
+---------------------+-----------------+-----------+---------------------------------------------------------------------------+----------+------------+-----------+
| AGG_TRIPS           | 1               | END_YEAR  | TS_EXTRACT['YEAR', AUTOCAST[($START_TS + AUTOCAST[(1000 * $DURATION)])]]  | demo     | APP        | TRIPS     |
| AGG_TRIPS           | 2               | END_MONTH | TS_EXTRACT['MONTH', AUTOCAST[($START_TS + AUTOCAST[(1000 * $DURATION)])]] | demo     | APP        | TRIPS     |
| AGG_TRIPS           | 3               | END_DAY   | TS_EXTRACT['DAY', AUTOCAST[($START_TS + AUTOCAST[(1000 * $DURATION)])]]   | demo     | APP        | TRIPS     |
| AGG_TRIPS           | 4               | END_HOUR  | TS_EXTRACT['HOUR', AUTOCAST[($START_TS + AUTOCAST[(1000 * $DURATION)])]]  | demo     | APP        | TRIPS     |
+---------------------+-----------------+-----------+---------------------------------------------------------------------------+----------+------------+-----------+

7.2.11. LXSYSMETA.ONLINE_AGGREGATE_FILTERS

It shows information about the filter expressions for existing online aggregation relations.

> select * from LXSYSMETA.ONLINE_AGGREGATE_FILTERS where onlineRelationTable = 'AGG_TRIPS';
+---------------------+----------------------------------------+----------+------------+-----------+
| onlineRelationTable |                 filter                 | tableCat | tableSchem | tableName |
+---------------------+----------------------------------------+----------+------------+-----------+
| AGG_TRIPS           | (TS_EXTRACT['YEAR', $START_TS] = 2021) | demo     | APP        | TRIPS     |
+---------------------+----------------------------------------+----------+------------+-----------+

7.2.12. LXSYSMETA.AGGREGATIONS

It shows information about the aggregation columns for existing tables.

> select * from LXSYSMETA.AGGREGATIONS;
+------------+---------+----------+------------+------------------+
|    name    | aggType | tableCat | tableSchem |    tableName     |
+------------+---------+----------+------------+------------------+
| PASSENGERS | SUM     | demo     | APP        | MANUAL_AGG_TRIPS |
| DURATION   | SUM     | demo     | APP        | MANUAL_AGG_TRIPS |
| MIN_START  | MIN     | demo     | APP        | MANUAL_AGG_TRIPS |
| COUNT_ALL  | SUM     | demo     | APP        | MANUAL_AGG_TRIPS |
| MAX_START  | MAX     | demo     | APP        | MANUAL_AGG_TRIPS |
+------------+---------+----------+------------+------------------+

7.2.13. LXSYSMETA.TABLE_PARTITIONS

It shows information about table partitions.

> select * from LXSYSMETA.TABLE_PARTITIONS order by tableName;
+---------+--------+------------------+------------------+------------------+----------+----------+
| catalog | schema |    tableName     |     regionId     |     address      | minValue | maxValue |
+---------+--------+------------------+------------------+------------------+----------+----------+
| demo    | APP    | MANUAL_AGG_TRIPS | 281474976710658  | kvds-localhost-2 | MIN      | MAX      |
| demo    | APP    | TDTGEN           | 1688849860263938 | kvds-localhost-1 | MIN      | [5000]   |
| demo    | APP    | TDTGEN           | 562949953421314  | kvds-localhost-2 | [5000]   | MAX      |
| demo    | APP    | TRIPS            | 281474976710658  | kvds-localhost-1 | MIN      | MAX      |
+---------+--------+------------------+------------------+------------------+----------+----------+

7.2.14. LXSYSMETA.TABLE_STATS

It shows information about table statistics.

> select * from LXSYSMETA.TABLE_STATS where stat in ('nblks','ctime')  order by tableName;
+---------+--------+------------------+------------------+------------------+-------+------------+
| catalog | schema |    tableName     |     regionId     |     address      | stat  |   value    |
+---------+--------+------------------+------------------+------------------+-------+------------+
| demo    | APP    | MANUAL_AGG_TRIPS | 281474976710658  | kvds-localhost-2 | nblks | 1024       |
| demo    | APP    | MANUAL_AGG_TRIPS | 281474976710658  | kvds-localhost-2 | ctime | 1613032446 |
| demo    | APP    | TDTGEN           | 1688849860263938 | kvds-localhost-1 | ctime | 1613033390 |
| demo    | APP    | TDTGEN           | 1688849860263938 | kvds-localhost-1 | nblks | 1024       |
| demo    | APP    | TDTGEN           | 562949953421314  | kvds-localhost-2 | ctime | 1613033391 |
| demo    | APP    | TDTGEN           | 562949953421314  | kvds-localhost-2 | nblks | 1024       |
| demo    | APP    | TRIPS            | 281474976710658  | kvds-localhost-1 | ctime | 1613032417 |
| demo    | APP    | TRIPS            | 281474976710658  | kvds-localhost-1 | nblks | 1024       |
+---------+--------+------------------+------------------+------------------+-------+------------+

7.2.15. LXSYSMETA.DATASTORE_STATS

It shows information about datastore statistics.

> select * from LXSYSMETA.DATASTORE_STATS where stat like '%nblks%';
+------------------+------------+-------+
|     address      |    stat    | value |
+------------------+------------+-------+
| kvds-localhost-1 | mblk/nblks | 12720 |
| kvds-localhost-2 | mblk/nblks | 12720 |
+------------------+------------+-------+

7.2.16. LXSYSMETA.USER_GROUP

It shows information about user groups

> select COLUMNNAME from LXSYSMETA.COLUMNS where tableName = 'USER_GROUP';
+------------+
| COLUMNNAME |
+------------+
| userId     |
| groupId    |
+------------+

7.2.17. LXSYSMETA.GROUP_PERMISSION

It shows information about permissions

> select * from LXSYSMETA.GROUP_PERMISSION where groupId = 'APP';
+---------+------+---------+--------+------------+--------+---------+---------+
| groupId | path | catalog | schema | entityName | isRead | isWrite | isAlter |
+---------+------+---------+--------+------------+--------+---------+---------+
| APP     | /seq | demo    | APP    |            | true   | true    | true    |
| APP     | /tbl | demo    | APP    |            | true   | true    | true    |
+---------+------+---------+--------+------------+--------+---------+---------+

7.2.18. LXSYSMETA.PLANS_CACHE

It shows the execution plan cache.

> select * from LXSYSMETA.PLANS_CACHE;
+-----------------------------------------------------------------------------------------+-----------------------------------------------------------------------+----------+------------+-------------+
|                                       initialPlan                                       |                             executionPlan                             | tableCat | tableSchem |  tableName  |
+-----------------------------------------------------------------------------------------+-----------------------------------------------------------------------+----------+------------+-------------+
| LogicalProject(inputs=[0..4])
  LogicalTableScan(table=[[db, LXSYSMETA, PLANS_CACHE]])
 | EnumerableTableScan.ENUMERABLE.[](table=[db, LXSYSMETA, PLANS_CACHE]) | DB       | LXSYSMETA  | PLANS_CACHE |
| LogicalProject(inputs=[0..9])
  KiviTableScan(table=[[db, QE, TABLE1]])
                | KiviEnumerableTableScanRel.ENUMERABLE.[](table=[db, QE, TABLE1])      | DB       | QE         | TABLE1      |
+-----------------------------------------------------------------------------------------+-----------------------------------------------------------------------+----------+------------+-------------+
2 rows selected (0.129 seconds)

7.3. LXSYSMETA.TRIGGER_FUNCTIONS

It shows available trigger functions.

> select * from LXSYSMETA.TRIGGER_FUNCTIONS;
+----------------------+------------------------+
|     triggerName      |      description       |
+----------------------+------------------------+
| HashFieldTrigger     | Hash Field Trigger     |
| UUIDTrigger          | UUID Trigger           |
| AutoGeohashTrigger   | Auto Geohash Trigger   |
| autoIncrementTrigger | Auto Increment Trigger |
+----------------------+------------------------+

7.4. LXSYSMETA.TRIGGERS

It shows the existing triggers per table.

> select * from LXSYSMETA.TRIGGERS;
+------------------------+----------------------+--------------------------+----------+----------+----------+----------+----------+--------------------+------------+------------+
|      triggerName       |   triggerFunction    |           args           | isBefore | isInsert | isUpdate | isDelete | priority |      tableCat      | tableSchem | tableName  |
+------------------------+----------------------+--------------------------+----------+----------+----------+----------+----------+--------------------+------------+------------+
| I_autoIncrementTrigger | autoIncrementTrigger | I,NOUSR.TAUTO_IDENTITY_I | true     | true     | false    | false    | 100      | contrib_regression | NOUSR      | TAUTO      |
| _UUID__UUIDTrigger     | UUIDTrigger          | _UUID_                   | true     | true     | false    | false    | 100      | contrib_regression | NOUSR      | TESTTAB_FK |
| _UUID__UUIDTrigger     | UUIDTrigger          | _UUID_                   | true     | true     | false    | false    | 100      | contrib_regression | NOUSR      | BYTEATAB   |
| _UUID__UUIDTrigger     | UUIDTrigger          | _UUID_                   | true     | true     | false    | false    | 100      | contrib_regression | NOUSR      | BOOLTAB    |
| _UUID__UUIDTrigger     | UUIDTrigger          | _UUID_                   | true     | true     | false    | false    | 100      | contrib_regression | NOUSR      | BYTEATAB2  |
+------------------------+----------------------+--------------------------+----------+----------+----------+----------+----------+--------------------+------------+------------+

8. Table Functions

Table functions are functions that programmatically produce a collection of rows that can be queried like a real database table.

8.1. How to develop and deploy a Table Function

First you need to add the LeanXcale maven repository and the qe-tableFunctions dependency to your pom.xml file:

<repositories>
    <repository>
        <id>maven-releases</id>
        <url>https://nexus.leanxcale.com/repository/maven-releases</url>
    </repository>
</repositories>

<dependencies>
    <dependency>
        <groupId>com.leanxcale</groupId>
        <artifactId>qe-tableFunctions</artifactId>
        <version>1.7.6</version>
    </dependency>
</dependencies>

In order to create a custom table function, you need your Java class to implement the AutoloadTableFunction interface and define the following methods:

  • getFunctionName(): defines the custom table function name. Returns String.

  • getDescription(): provides the custom table function description. Returns String.

  • eval(<table function arguments>): generates the collection of rows to be queried. Returns ScannableTable.

You can define as many eval functions as you need as long as their signatures are different.
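
For instance, two overloaded eval variants can coexist because their parameter lists differ. This is a plain-Java sketch of that overload resolution; the stand-in String return type is illustrative only, since the real eval() methods return ScannableTable:

```java
public class OverloadDemo {
  // Stand-in sketch: the real eval() methods would return ScannableTable.
  // Overloads are distinguished by their parameter lists alone.
  static String eval(Integer start, Integer stop) {
    return "two-arg variant";
  }

  static String eval(Integer start, Integer stop, Integer step) {
    return "three-arg variant";
  }

  public static void main(String[] args) {
    System.out.println(eval(1, 10));     // resolves to the two-argument overload
    System.out.println(eval(1, 10, 1));  // resolves to the three-argument overload
  }
}
```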

While creating the eval functions returning ScannableTable, you’ll need to implement these methods:

  • scan(DataContext): creates an enumerable with the data to be returned when using the table function. Returns Enumerable<Object[]>.

  • getRowType(RelDataTypeFactory): returns this table’s row type. This is a struct type whose fields describe the names and types of the columns in this table. Returns RelDataType.

  • getStatistic(): returns a provider of statistics about this table. Returns Statistic.

  • getJdbcTableType(): returns the type of table. Returns Schema.TableType.

  • isRolledUp(String column): determines whether the given column has been rolled up. Returns boolean.

  • rolledUpColumnValidInsideAgg(String column, SqlCall call, SqlNode parent, CalciteConnectionConfig config): determines whether the given rolled-up column can be used inside the given aggregate function. You can assume that isRolledUp(column) is true. Returns boolean.

For the details of the parameters you need to use in each method, look at the Calcite Table interface javadoc.

All the table function classes must belong to the package com.leanxcale if you want them auto-loaded when LeanXcale’s Query Engine starts.

Find below a complete example:

package com.leanxcale;

import com.leanxcale.QE.tableFunction.AutoloadTableFunction;
import org.apache.calcite.DataContext;
import org.apache.calcite.config.CalciteConnectionConfig;
import org.apache.calcite.linq4j.Enumerable;
import org.apache.calcite.linq4j.Linq4j;
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.schema.ScannableTable;
import org.apache.calcite.schema.Schema;
import org.apache.calcite.schema.Statistic;
import org.apache.calcite.schema.Statistics;
import org.apache.calcite.sql.SqlCall;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.type.SqlTypeName;
import org.apache.calcite.util.ImmutableBitSet;
import java.util.Collections;

public class SeriesGeneratorTableFunction implements AutoloadTableFunction {

  public static String getFunctionName() {
    return "series_generator";
  }

  public ScannableTable eval(Integer start, Integer stop, Integer step) {
    int actualStep = (null == step) ? 1 : step;
    int numResults = (1 + (stop - start)/actualStep);
    return new ScannableTable() {
      @Override
      public Enumerable<Object[]> scan(DataContext dataContext) {
        Object[][] results = new Object[numResults][1];
        for (int inx = 0; inx < numResults; inx++) {
          results[inx][0] = start + inx*actualStep;
        }
        return Linq4j.asEnumerable(results);
      }

      @Override
      public RelDataType getRowType(RelDataTypeFactory typeFactory) {
        return typeFactory.builder()
          .add("SERIES", SqlTypeName.INTEGER)
          .build();
      }

      @Override
      public Statistic getStatistic() {
        // Column 0 (SERIES) holds unique values, so declare it as a key
        return Statistics.of(numResults, Collections.singletonList(ImmutableBitSet.of(0)));
      }

      @Override
      public Schema.TableType getJdbcTableType() {
        return Schema.TableType.TABLE;
      }

      @Override
      public boolean isRolledUp(String s) {
        return false;
      }

      @Override
      public boolean rolledUpColumnValidInsideAgg(String s, SqlCall sqlCall, SqlNode sqlNode, CalciteConnectionConfig calciteConnectionConfig) {
        return false;
      }
    };
  }
}

Once you run mvn package, a .jar file is created with the custom table functions you defined. This .jar should be included in the Query Engine’s classpath (lxs/LX-BIN/lib). Ask your administrator to include it there, or send an email with the .jar file attached to support@leanxcale.com if you’re using the Cloud version of LeanXcale. The Query Engine must be restarted after adding the table function .jar file.

8.2. Using table functions

You can use a table function in the FROM clause of a query by wrapping the call in the TABLE operator. In this example, series_generator is the table function:

SELECT SERIES FROM table(series_generator(1, 10, 1)) WHERE SERIES > 3;

+--------+
| SERIES |
+--------+
| 4      |
| 5      |
| 6      |
| 7      |
| 8      |
| 9      |
| 10     |
+--------+
7 rows selected (0.008 seconds)
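
The row count above follows directly from the arithmetic inside scan(). This stand-alone sketch mirrors the numResults computation and row loop of SeriesGeneratorTableFunction without any Calcite dependency, so you can check the logic in isolation:

```java
import java.util.Arrays;

public class SeriesCheck {
  // Mirrors the row-generation arithmetic of SeriesGeneratorTableFunction.scan()
  static int[] series(int start, int stop, int step) {
    int numResults = 1 + (stop - start) / step;
    int[] out = new int[numResults];
    for (int inx = 0; inx < numResults; inx++) {
      out[inx] = start + inx * step;
    }
    return out;
  }

  public static void main(String[] args) {
    int[] s = series(1, 10, 1);
    long over3 = Arrays.stream(s).filter(v -> v > 3).count();
    System.out.println(s.length + " rows, " + over3 + " with SERIES > 3");
    // prints: 10 rows, 7 with SERIES > 3
  }
}
```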

Take into account that if the parameter you’re passing to your table function is a string, you’ll need to use single quotes. Double quotes are not allowed. So this would work:

SELECT * FROM table(my_new_tablefunction('test_string'));

But this wouldn’t:

SELECT * FROM table(my_new_tablefunction("test_string"));

8.3. Doing more with the KiVi Direct API

In the previous example, we showed the basics of building a Table Function, but you’ll probably want to do more complex things, such as scanning over data, doing some processing, and returning the result so it can be queried via SQL.

In order to do so, you’ll need to use our low-level KiVi Direct API. You can take a deeper look at how to use its basic functionality, or read the Java KiVi API reference.

To use the KiVi API in your Table function project, please add the kivi-api dependency to your pom.xml file:

<dependencies>
    <dependency>
        <groupId>com.leanxcale</groupId>
        <artifactId>kivi-api</artifactId>
        <version>1.7.6</version>
    </dependency>
</dependencies>

So suppose we have a table with some cities and the country they belong to:

lx> SELECT * FROM CitiesCountries;
+-------------+---------+
|    CITY     | COUNTRY |
+-------------+---------+
| Seville     | Spain   |
| Dallas      | USA     |
| Barcelona   | Spain   |
| London      | UK      |
| Los Angeles | USA     |
| Madrid      | Spain   |
+-------------+---------+
6 rows selected (0.011 seconds)

And then we have a table with some persons' data, including the city of residence:

lx> SELECT * FROM persons;
+----------+----------+-------------+-------------------+-------------+
| PERSONID | LASTNAME |  FIRSTNAME  |      ADDRESS      |    CITY     |
+----------+----------+-------------+-------------------+-------------+
| 7        | Perez    | Miguel      | Calle Pez         | Madrid      |
| 8        | Gomez    | Antonio     | Rucu Street       | Barcelona   |
| 9        | Johnson  | Boris       | Ruperson          | London      |
| 10       | Smith    | John        | Abrevadero Street | Los Angeles |
+----------+----------+-------------+-------------------+-------------+
4 rows selected (0.009 seconds)

We can create a Table Function that takes a country as its parameter (see the eval method) and returns the people who live in cities of that country:

package com.leanxcale;

import com.leanxcale.QE.tableFunction.AutoloadTableFunction;
import com.leanxcale.kivi.database.Table;
import com.leanxcale.kivi.session.Session;
import com.leanxcale.kivi.session.SessionFactory;
import com.leanxcale.kivi.session.Settings;
import com.leanxcale.kivi.tuple.Tuple;
import java.util.*;
import org.apache.calcite.DataContext;
import org.apache.calcite.config.CalciteConnectionConfig;
import org.apache.calcite.linq4j.Enumerable;
import org.apache.calcite.linq4j.Linq4j;
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.schema.ScannableTable;
import org.apache.calcite.schema.Schema;
import org.apache.calcite.schema.Schema.TableType;
import org.apache.calcite.schema.Statistic;
import org.apache.calcite.schema.Statistics;
import org.apache.calcite.sql.SqlCall;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.type.SqlTypeName;

public class GetPersonsInCountryTableFunction implements AutoloadTableFunction {

  public static String getFunctionName(){
    return "get_persons_in_country";
  }

  public ScannableTable eval(final String countryToInclude) {

    return new ScannableTable() {

      public Enumerable<Object[]> scan(DataContext dataContext) {

        SessionFactory.setSharedKiviConnection(true);

        try (Session session = SessionFactory.newSession(Settings.parse("lx://127.0.0.1:9876/db@APP"))) {

          Table citiesCountries = session.database().getTable("CITIESCOUNTRIES");

          Iterator<Tuple> citiesIterator = citiesCountries.find().iterator();

          // Collect the cities belonging to the requested country
          // (all cities when no country is given)
          List<String> citiesOfTheCountry = new LinkedList<>();
          while (citiesIterator.hasNext()) {
            Tuple cityTuple = citiesIterator.next();

            String city = cityTuple.get("CITY", String.class).toLowerCase();
            String country = cityTuple.get("COUNTRY", String.class).toLowerCase();

            if (countryToInclude == null || countryToInclude.toLowerCase().equals(country)) {
              citiesOfTheCountry.add(city);
            }
          }

          Table persons = session.database().getTable("PERSONS");

          Iterator<Tuple> personsIterator = persons.find().sort().iterator();

          // Keep only the persons living in one of those cities
          List<Object[]> results = new LinkedList<>();
          while (personsIterator.hasNext()) {
            Tuple person = personsIterator.next();

            String firstName = person.get("FIRSTNAME", String.class);
            String lastName = person.get("LASTNAME", String.class);
            String city = person.get("CITY", String.class);

            if (citiesOfTheCountry.contains(city.toLowerCase())) {
              results.add(new Object[] {firstName + " " + lastName, city});
            }
          }

          return Linq4j.asEnumerable(results);
        } catch (Exception e) {
          throw new RuntimeException("Error scanning LeanXcale tables", e);
        }
      }

      public RelDataType getRowType (RelDataTypeFactory typeFactory) {
        return typeFactory.builder()
          .add("FULL_NAME", SqlTypeName.VARCHAR)
          .add("CITY", SqlTypeName.VARCHAR)
          .build();
      }

      public Statistic getStatistic () {
        return Statistics.UNKNOWN;
      }

      public TableType getJdbcTableType () {
        return Schema.TableType.TABLE;
      }

      public boolean isRolledUp (String column) {
        return false;
      }

      public boolean rolledUpColumnValidInsideAgg (String column, SqlCall call, SqlNode parent, CalciteConnectionConfig config) {
        return false;
      }
    };
  }
}

So you would call the Table Function this way:

lx> SELECT * FROM table(get_persons_in_country('SPAIN')) ORDER BY CITY;
+--------------------+-----------+
|     FULL_NAME      |   CITY    |
+--------------------+-----------+
| Antonio Gomez      | Barcelona |
| Miguel Perez       | Madrid    |
+--------------------+-----------+
2 rows selected (0.007 seconds)

Now, this is a very simple example, and you could have achieved the same with a plain SQL query. But table functions let you do much more complex processing, and by using our KiVi API you eliminate the overhead of the SQL planner, so for big volumes of data, and depending on the structure of the information, you can get significant speed-ups.

8.4. Using docker to develop and test Table Functions

You can use the LeanXcale Docker image to test Table Functions. You can start a LeanXcale server locally using Docker this way:

docker run -d --name lx-docker -p 1529:1529 docker.leanxcale.com/lx-docker:1.7.2

Once you’ve generated your Table Function .jar file following the instructions in the previous chapters, you need to copy it to the LeanXcale docker container:

docker cp your_ide_path/your_project/target/tableFunctionsTest-1.0.jar lx-docker:/lx/LX-BIN/lib

Once you’ve copied the .jar file to the lib directory, you need to reboot the Query Engine:

docker exec lx-docker bash -c "python3 /lx/LX-BIN/scripts/lxManageNode.py stop QE ; python3 /lx/LX-BIN/scripts/lxManageNode.py start QE"

Now you should be able to connect using any SQL client and check that your Table Function has been loaded by the Query Engine by executing this query:

lx> SELECT * FROM LXSYSMETA.TABLE_FUNCTIONS WHERE functionname='get_persons_in_country';
+------------------------+------------+---------------------------------------------+------------------------------------------------+
|      functionName      | parameters |                 description                 |              implementationClass               |
+------------------------+------------+---------------------------------------------+------------------------------------------------+
| get_persons_in_country | (String)   | Get all the persons of the country provided | com.leanxcale.GetPersonsInCountryTableFunction |
+------------------------+------------+---------------------------------------------+------------------------------------------------+
1 row selected (0.799 seconds)

If your function is there, it’s been auto-loaded, so you can test it. You can connect from the host machine or from within the docker container with the same JDBC url:

user@host:~$ lxClient --color=true --autoCommit=true --fastConnect=true -u jdbc:leanxcale://localhost:1529/db -n app -p app

sqlline version 1.8.0
lx> SELECT * FROM table(get_persons_in_country('UK'));
+---------------+---------+
| FULL_NAME     | CITY    |
+---------------+---------+
| Boris Johnson | London  |
+---------------+---------+
1 row selected (0.07 seconds)

lx> SELECT * FROM table(get_persons_in_country('USA'));
+------------+-------------+
| FULL_NAME  |    CITY     |
+------------+-------------+
| John Smith | Los Angeles |
+------------+-------------+
1 row selected (0.03 seconds)

9. DML Triggers

A trigger is procedural code that is automatically executed in response to certain events on a particular table. DML triggers run when a user tries to modify data through a data manipulation language (DML) event. They can be used to enforce business rules and data integrity, query other tables, and include complex statements. DML events are INSERT, UPDATE, or DELETE statements on a table.

CREATE TRIGGER myTrigger BEFORE INSERT UPDATE ON t
FOR EACH ROW EXECUTE genericBackupTrigger('t_backup', 'ts')
PRIORITY 10;

DROP TRIGGER myTrigger ON TABLE t;

9.1. CREATE TRIGGER statement grammar

The syntax to create a trigger is as follows:

createTrigger:
      CREATE TRIGGER [ IF NOT EXISTS ] triggerName
      (BEFORE | AFTER)
      [INSERT] [UPDATE] [DELETE]
      ON tableName FOR EACH ROW EXECUTE
      triggerFunction ['(' arg0 [, argn ]* ')']
      [PRIORITY priority]

where

  • triggerName is the name used by the user to identify the trigger.

  • tableName is the table to which the trigger applies.

  • triggerFunction is the Trigger Function name as returned by getTriggerName(). See Trigger Function interface.

  • arg0, arg1, … argn are the trigger function arguments, passed as string literals to triggerExecutor(). Check [triggerExecutor params].

  • priority is an integer literal used to order the trigger execution when the table has more than one trigger. Note that the resulting new values from one trigger are passed to the execution of the next one.

Note that you can define one to three DML events among INSERT, UPDATE and DELETE.
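
To make the priority semantics concrete, here is a plain-Java sketch (not LeanXcale code): two hypothetical row transformations are ordered by priority, and each one receives the new values produced by the previous one. The ascending-priority ordering is an assumption made for illustration:

```java
import java.util.*;
import java.util.function.UnaryOperator;

public class TriggerChainSketch {
  // Hypothetical stand-in for a registered trigger: a priority plus a row transformation.
  record Trig(int priority, UnaryOperator<Map<String, Object>> body) {}

  static Map<String, Object> fire(List<Trig> triggers, Map<String, Object> newValues) {
    // Assumption: triggers fire ordered by priority,
    // each one seeing the new values left by the previous one.
    List<Trig> ordered = new ArrayList<>(triggers);
    ordered.sort(Comparator.comparingInt(Trig::priority));
    for (Trig t : ordered) {
      newValues = t.body().apply(newValues);
    }
    return newValues;
  }

  public static void main(String[] args) {
    List<Trig> triggers = List.of(
        new Trig(100, row -> { row.put("F3", ((String) row.get("F3")).toUpperCase()); return row; }),
        new Trig(10, row -> { row.put("F3", "pfx-" + row.get("F3")); return row; }));
    Map<String, Object> row = new HashMap<>(Map.of("F3", "abc"));
    System.out.println(fire(triggers, row).get("F3")); // prints PFX-ABC
  }
}
```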

9.2. Available Trigger function system table

To find out the loaded trigger functions, you can query the TRIGGER_FUNCTIONS system table.

> select * from LXSYSMETA.TRIGGER_FUNCTIONS;
+----------------------+------------------------+
|     triggerName      |      description       |
+----------------------+------------------------+
| HashFieldTrigger     | Hash Field Trigger     |
| UUIDTrigger          | UUID Trigger           |
| AutoGeohashTrigger   | Auto Geohash Trigger   |
| autoIncrementTrigger | Auto Increment Trigger |
+----------------------+------------------------+

9.3. Triggers system table

To find out the existing triggers, you can query the TRIGGERS system table.

> select * from LXSYSMETA.TRIGGERS;
+------------------------+----------------------+--------------------------+----------+----------+----------+----------+----------+--------------------+------------+------------+
|      triggerName       |   triggerFunction    |           args           | isBefore | isInsert | isUpdate | isDelete | priority |      tableCat      | tableSchem | tableName  |
+------------------------+----------------------+--------------------------+----------+----------+----------+----------+----------+--------------------+------------+------------+
| I_autoIncrementTrigger | autoIncrementTrigger | I,NOUSR.TAUTO_IDENTITY_I | true     | true     | false    | false    | 100      | contrib_regression | NOUSR      | TAUTO      |
| _UUID__UUIDTrigger     | UUIDTrigger          | _UUID_                   | true     | true     | false    | false    | 100      | contrib_regression | NOUSR      | TESTTAB_FK |
| _UUID__UUIDTrigger     | UUIDTrigger          | _UUID_                   | true     | true     | false    | false    | 100      | contrib_regression | NOUSR      | BYTEATAB   |
| _UUID__UUIDTrigger     | UUIDTrigger          | _UUID_                   | true     | true     | false    | false    | 100      | contrib_regression | NOUSR      | BOOLTAB    |
| _UUID__UUIDTrigger     | UUIDTrigger          | _UUID_                   | true     | true     | false    | false    | 100      | contrib_regression | NOUSR      | BYTEATAB2  |
+------------------------+----------------------+--------------------------+----------+----------+----------+----------+----------+--------------------+------------+------------+

9.4. Custom trigger function

Trigger functions are Java objects that are automatically loaded into the Query Engine and provide a mechanism to build the trigger’s executors, which hold the actual trigger procedural code. In order to create a custom trigger function, you need to implement the TriggerFunctionI interface and define the following methods:

  • getTriggerName(): defines the trigger function name. Returns String.

  • description(): defines a description for the trigger function so users can figure out what it does. Returns String.

  • triggerExecutor(TriggerUtils triggerUtils, NamedValuesI newValues, NamedValuesI oldValues, String[] args): creates an instance of TriggerExecutor. See [trexetr]. Returns TriggerExecutor.

public class GenericBackupTriggerExample implements TriggerFunctionI {
  static String triggerName = "genericBackupTrigger";

  @Override
  public String getTriggerName() {
    return triggerName;
  }

  @Override
  public TriggerExecutor triggerExecutor(TriggerUtils triggerUtils, NamedValuesI newValues, NamedValuesI oldValues, String[] args) {
    return new GenericBackupTriggerExecutor(triggerUtils, newValues, oldValues, args[0], args[1]);
  }

  @Override
  public String description() {
    return "Generic backup Trigger example";
  }
}

9.4.1. Method triggerExecutor

The method triggerExecutor builds an instance of TriggerExecutor according to the given parameters.

  1. TriggerUtils: utils. See Trigger Utils.

  2. NamedValuesI: new values for the table’s row. Null on deletion.

  3. NamedValuesI: old values for the table’s row. Null on insertion.

  4. String[]: trigger arguments.
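
The nullability rules above (old values absent on insertion, new values absent on deletion) can be sketched in plain Java. NamedValuesI is stood in by a simple record here, since the real interface ships with LeanXcale’s jar:

```java
import java.util.Map;

public class TriggerValuesSketch {
  // Stand-in for LeanXcale's NamedValuesI: just a bag of named column values.
  record NamedValues(Map<String, Object> cols) {}

  // Mirrors the contract described above:
  // oldValues is null on insertion, newValues is null on deletion.
  static String eventKind(NamedValues newValues, NamedValues oldValues) {
    if (oldValues == null) return "INSERT";
    if (newValues == null) return "DELETE";
    return "UPDATE";
  }

  public static void main(String[] args) {
    NamedValues row = new NamedValues(Map.of("ID", 1));
    System.out.println(eventKind(row, null)); // prints INSERT
    System.out.println(eventKind(null, row)); // prints DELETE
    System.out.println(eventKind(row, row));  // prints UPDATE
  }
}
```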

9.5. Custom trigger executor

Trigger executors are the ones that actually hold the procedural code to be executed when a trigger is fired. You just need to extend the TriggerExecutor abstract class and define the following methods:

  • execute(): executes the procedural code. Returns void.

In order to implement the trigger execution code, you will probably need to use the following final methods:

  • hasValues(): tells whether the new values are available. Returns boolean.

  • hasOldValues(): tells whether the old values are available. Returns boolean.

  • getValue(String columnName): gets the new value for columnName. Returns Object.

  • getOldValue(String columnName): gets the old value for columnName. Returns Object.

  • setValue(String columnName, Object value): sets the new value for columnName. Returns void.

  • valuesFieldNames(): gets the field names of the values. Returns String[].

Note that TriggerExecutor abstract class constructor has the following parameters:

  1. NamedValuesI: new values for the table’s row.

  2. NamedValuesI: old values for the table’s row.

  static public class GenericBackupTriggerExecutor extends TriggerExecutor {

    public final String backupTableName;
    public final String tsColumnName;
    public final TriggerUtils triggerUtils;

    public GenericBackupTriggerExecutor(TriggerUtils triggerUtils, NamedValuesI newValues, NamedValuesI oldValues, String backupTableName, String tsColumnName) {
      super(newValues, oldValues);
      this.backupTableName = backupTableName;
      this.tsColumnName = tsColumnName;
      this.triggerUtils = triggerUtils;
    }

    @Override
    public String getExecutorName() {
      return triggerName;
    }

    @Override
    public void execute() {
      if (!hasValues()) { // delete
        return;
      }
      try (Session session = SessionFactory.newSession(triggerUtils.settings())) {
        Table table = session.database().getTable(backupTableName);
        Tuple tuple = table.createTuple();
        for (String column : valuesFieldNames()) {
          tuple.put(column, getValue(column));
        }
        tuple.put(tsColumnName, Timestamp.valueOf(LocalDateTime.now()));
        table.insert(tuple);
        // no need to commit when sharing QE's transaction
      } catch (Exception exception) {
        throw new LeanxcaleRuntimeException(exception);
      }
    }
  }

9.6. Trigger Utils

An instance of TriggerUtils provides connection information and some other functionality that can be useful when implementing a trigger, such as getting the next or current value of a sequence, generating a UUID, etc. Find below the available methods:

  • getTid(): returns the current transaction. Returns Long.

  • getDbName(): returns the connection’s database. Returns String.

  • getSchemaName(): returns the connection’s schema. Returns String.

  • uuid(): generates a UUID. Returns String.

  • settings(Long externalTransaction): builds the default settings to share the current KiVi connection through the direct API, using the given external transaction, when not null, to create the settings. Returns Settings.

  • settings(): builds the default settings to share the current KiVi connection and transaction through the direct API. Returns Settings.

  • currVal(String sequence): gets the current sequence value, or null if it is not set. Returns long.

  • nextVal(String sequence): gets the next sequence value and sets the current sequence value. Returns long.

  • geohash(List<Object> geohashSourceValues): calculates a geohash from the given values. If there is one element, it is expected to be a WKT (well-known text) geometry. If there are two, the first one is the latitude and the second is the longitude. Returns String.

As you might have realized, TableFunctionUtils is an extended implementation of TriggerUtils; see Table Function Utils. You could use TableFunctionUtils within a trigger’s code, but we strongly recommend using the given TriggerUtils instance so the trigger can be fired by non-SQL API events too.
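As a sketch of how these utilities can be combined in a custom executor, the class below fills a column from a sequence when the incoming value is null. It follows the TriggerExecutor pattern shown above; the class, column and sequence names are hypothetical, and the setValue call is assumed to update the new values the same way getValue reads them:

```java
// Sketch only: relies on the LeanXcale trigger API shown in this section.
static public class SequenceFillTriggerExecutor extends TriggerExecutor {

  public final String columnName;
  public final String sequenceName;
  public final TriggerUtils triggerUtils;

  public SequenceFillTriggerExecutor(TriggerUtils triggerUtils, NamedValuesI newValues,
      NamedValuesI oldValues, String columnName, String sequenceName) {
    super(newValues, oldValues);
    this.triggerUtils = triggerUtils;
    this.columnName = columnName;
    this.sequenceName = sequenceName;
  }

  @Override
  public String getExecutorName() {
    return "sequenceFillTrigger"; // hypothetical name
  }

  @Override
  public void execute() {
    if (!hasValues()) { // delete: nothing to fill
      return;
    }
    if (getValue(columnName) == null) {
      // Assign the sequence's next value only when the column comes in as null,
      // mirroring what the system autoIncrementTrigger does
      setValue(columnName, triggerUtils.nextVal(sequenceName));
    }
  }
}
```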

9.7. Make trigger a table function

For development environments, it can be useful to execute a table function in order to test a trigger’s behavior. To do this, we provide an abstract class TriggerTableFunction that saves you from doing it yourself. As you can see in the following example, you just need to add the definitions of the table function methods getFunctionName() and eval(..).

public class BackupTriggerExample extends TriggerTableFunction {

  static String triggerName = "backupTrigger";
  static String[] columnNames = {"PK1", "PK2", "F3", "F4"};

  public static String getFunctionName() {
    return triggerName;
  }

  @Override
  public String getTriggerName() {
    return getFunctionName();
  }

  @Override
  public TriggerExecutor triggerExecutor(TriggerUtils triggerUtils, NamedValuesI newValues, NamedValuesI oldValues, String[] args) {
    return new BackupTriggerExecutor(newValues, oldValues);
  }

  @Override
  public String description() {
    return "Trigger example";
  }

  public ScannableTable eval(Long pk1, String pk2, String f3, Integer f4) {
    Object[] values = {pk1, pk2, f3, f4};
    int[] types = {Types.BIGINT, Types.VARCHAR, Types.VARCHAR, Types.INTEGER};
    NamedValuesI newValues = new TypedNamedValuesImpl(columnNames, values, types);
    return evalTrigger(newValues, null, null);
  }
}

Don’t forget to return the evalTrigger(NamedValuesI newValues, NamedValuesI oldValues, String[] args) result so the trigger is executed and its result is returned by the table function. Note that the evalTrigger parameters will be mapped to the triggerExecutor ones.

try (ResultSet rs = s.executeQuery("exec backupTrigger(1, 'a', 'af3', null)")) {
  assertThat(rs.next(), is(true));
  assertThat(rs.getLong(1), is(1L));
  assertThat(rs.getString(2), is("a"));
  assertThat(rs.getString(3), is("af3"));
  assertThat(rs.getInt(4), is(100));
  assertThat(rs.next(), is(false));
}

9.7.1. Typed Named Values

The NamedValuesI interface provides access to a value by name. It is the following:

ResultType | Function | Description
Object | getValue(String name) | Get the value for name. Null when not found or the value is null
void | setValue(String name, Object value) | Set the value for name
String[] | getFieldNames() | Get the field names
Object[] | getValues() | Get the values, ordered as getFieldNames()
boolean | contains(String name) | Tell whether name exists
int | size() | Tell the number of values
String | sqlTableName() | Name of the table the values belong to. Null when unknown

You shouldn’t need to implement this interface unless you want to use a trigger as a table function. In that case, you would probably rather implement TypedNamedValuesI so the table function can return the row values:

public interface TypedNamedValuesI extends NamedValuesI {
  /**
   * SQL JDBC type for a field
   * @param name field name
   * @return SQL JDBC type
   */
  int sqlType(String name);
}

Anyway, we provide a basic TypedNamedValuesI implementation that might satisfy your needs.

  /**
   * Create a TypedNamedValues object
   * @param positionsMap Map of value position by name
   * @param row row values
   * @param typeMap Map of SQL type by name
   */
  public TypedNamedValuesImpl(Map<String, Integer> positionsMap, Object[] row, Map<String, Integer> typeMap) {
    ...
  }

  /**
   * Create a NamedValues object
   * @param names values names
   * @param row values ordered by names
   * @param types  SQL types ordered by names
   */
  public TypedNamedValuesImpl(String[] names, Object[] row, int[] types) {
    ...
  }
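As a self-contained illustration of the idea (independent of the LeanXcale classes; the interface and class names here are ours, not the real API), a minimal array-backed typed named-values holder could look like this:

```java
import java.sql.Types;
import java.util.Arrays;

// Minimal stand-ins for the interfaces above, for illustration only
interface NamedValues {
  Object getValue(String name);
  boolean contains(String name);
  int size();
}

interface TypedNamedValues extends NamedValues {
  int sqlType(String name); // SQL JDBC type for the field
}

// Array-backed implementation, analogous in spirit to TypedNamedValuesImpl
class SimpleTypedNamedValues implements TypedNamedValues {
  private final String[] names;
  private final Object[] row;
  private final int[] types;

  SimpleTypedNamedValues(String[] names, Object[] row, int[] types) {
    this.names = names;
    this.row = row;
    this.types = types;
  }

  private int indexOf(String name) {
    return Arrays.asList(names).indexOf(name);
  }

  public Object getValue(String name) {
    int i = indexOf(name);
    return i < 0 ? null : row[i]; // null when not found or the value is null
  }

  public boolean contains(String name) { return indexOf(name) >= 0; }

  public int size() { return names.length; }

  public int sqlType(String name) { return types[indexOf(name)]; }
}

public class TypedNamedValuesDemo {
  public static void main(String[] args) {
    SimpleTypedNamedValues v = new SimpleTypedNamedValues(
        new String[] {"PK1", "F3"},
        new Object[] {1L, "af3"},
        new int[] {Types.BIGINT, Types.VARCHAR});
    System.out.println(v.getValue("F3"));                 // af3
    System.out.println(v.sqlType("PK1") == Types.BIGINT); // true
  }
}
```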

9.8. System Triggers

Apart from custom triggers, we provide some system triggers that can be handled freely by the user.

9.8.1. Auto Increment Trigger

This trigger is created automatically for a table when using the IDENTITY clause on the table’s creation or alteration. Its execution fills the desired column with the sequence’s next value when the column is null. Find below its arguments:

Argument position | Description
1 | Name of the column to be assigned the sequence’s next value when it is null
2 | Sequence name. We suggest including the schema name so the trigger won’t fail when connected to a different schema

The following table creation is equivalent to the statements below it:

create table tauto (i int AS IDENTITY start with 10 increment by 10, v VARCHAR, PRIMARY KEY (i));
create table tauto (i int, v VARCHAR, PRIMARY KEY (i));

create sequence tauto_i_seq start with 10 increment by 10;

CREATE TRIGGER tauto_i_seq_trigger BEFORE INSERT ON tauto
FOR EACH ROW EXECUTE autoIncrementTrigger('i', 'tauto_i_seq')
PRIORITY 100;

-- Note that the autoincrement column will become an implicit column for inserts
insert into tauto values ('value');

9.8.2. Geohash Trigger

This trigger is created automatically for a table when using the PRIMARY GEOHASH KEY or GEOHASH clauses on table’s creation. Hidden geohash fields are created too when using these clauses. The trigger execution fills the desired column with the calculated geohash when it is null. Find below its arguments:

Argument position | Description
1 | Name of the column to be assigned the calculated geohash
2 | WKT field if there are only two arguments, or latitude field if there are three
3 | Longitude field if there are three arguments

The following table creation is equivalent to the statements below it:

create table t (wkt VARCHAR, latitude double, longitude double, PRIMARY GEOHASH KEY (wkt), GEOHASH (latitude,longitude));
create table t (wkt VARCHAR, latitude double, longitude double, gh_wkt VARCHAR PRIMARY KEY, gh_lat_long VARCHAR);

CREATE TRIGGER gh_wkt_trigger BEFORE INSERT ON t
FOR EACH ROW EXECUTE AutoGeohashTrigger('gh_wkt', 'wkt')
PRIORITY 100;

CREATE TRIGGER gh_lat_long_trigger BEFORE INSERT ON t
FOR EACH ROW EXECUTE AutoGeohashTrigger('gh_lat_long', 'latitude', 'longitude')
PRIORITY 100;

-- Note that the GEOHASH clause would create an index too, so we create it here manually
CREATE INDEX gh_lat_long_index ON t (gh_lat_long);

-- Note that the geohash column will become an implicit column for inserts and selects
insert into t values (ST_asText(ST_MakePoint(3.41,40.25)),cast(40.25 as double), cast(3.41 as double));

Geohash fields are fields indexed geographically for GIS applications.
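For reference, geohash encoding interleaves longitude and latitude bisection bits and maps every five bits to a base32 character. The sketch below is a minimal, self-contained encoder for illustration; it is not LeanXcale’s implementation (which, per the NOTICE, is based on geohash-java):

```java
public class GeohashSketch {
  private static final String BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz";

  // Encode a latitude/longitude pair into a geohash of the given length
  public static String encode(double lat, double lon, int precision) {
    double latMin = -90, latMax = 90, lonMin = -180, lonMax = 180;
    StringBuilder sb = new StringBuilder();
    boolean evenBit = true; // a geohash starts with a longitude bit
    int bit = 0, ch = 0;
    while (sb.length() < precision) {
      if (evenBit) { // bisect the longitude range
        double mid = (lonMin + lonMax) / 2;
        if (lon >= mid) { ch = (ch << 1) | 1; lonMin = mid; } else { ch <<= 1; lonMax = mid; }
      } else {       // bisect the latitude range
        double mid = (latMin + latMax) / 2;
        if (lat >= mid) { ch = (ch << 1) | 1; latMin = mid; } else { ch <<= 1; latMax = mid; }
      }
      evenBit = !evenBit;
      if (++bit == 5) { // every 5 bits form one base32 character
        sb.append(BASE32.charAt(ch));
        bit = 0;
        ch = 0;
      }
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    System.out.println(encode(42.605, -5.603, 5)); // prints ezs42
  }
}
```

Longer geohashes refine shorter ones for the same point, which is what makes prefix-ordered geohash columns useful for geographic indexing.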

9.8.3. UUID Trigger

This trigger is created automatically for a table when no PRIMARY KEY is defined. A hidden UUID column is created too. The trigger execution fills the desired column with a UUID value when it is null. Find below its arguments:

Argument position | Description
1 | Name of the column to be assigned the UUID

The following table creation is equivalent to the statements below it:

create table t (v VARCHAR);
create table t (id VARCHAR, v VARCHAR, PRIMARY KEY (id));

CREATE TRIGGER uuid_trigger BEFORE INSERT ON t
FOR EACH ROW EXECUTE UUIDTrigger('id')
PRIORITY 100;

-- Note that the uuid column will become an implicit column for inserts and selects
insert into t values ('value');

10. Hints

For now, the mechanism defined to work with hints in the Calcite QE is based on the use of table functions, which must be used to enable, create and remove the hints.

Given that a hint is a way to help the planner build better execution plans, the lifecycle of a hint should be the following:

  1. Enable hints for the connection.

  2. Create/list/remove hint

  3. Refresh plan cache

  4. Test query (which installs the new execution plan in the cache)

  5. Go to 2 if the performance is not ok

  6. Disable hints for the connection
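From JDBC, this lifecycle can be sketched as follows (the connection URL, table and index names are hypothetical; the exec functions are the ones described in the sections below):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HintLifecycleSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical connection URL; adjust for your deployment
    try (Connection conn = DriverManager.getConnection("jdbc:leanxcale://localhost:1529/db");
         Statement stmt = conn.createStatement()) {
      stmt.execute("exec enableHints()");                                       // 1. enable
      stmt.execute("exec forceAccess('db-APP-CUSTOMER', 'db-APP-CUSTOMER_IDX')"); // 2. create a hint
      stmt.execute("exec cleanPlanCache('db-APP-CUSTOMER')");                   // 3. refresh plan cache
      try (ResultSet rs = stmt.executeQuery("SELECT * FROM CUSTOMER WHERE NAME = 'x'")) { // 4. test query
        while (rs.next()) { /* inspect results and timing */ }
      }
      // 5. repeat from step 2 until the plan performs well
      stmt.execute("exec disableHints()");                                      // 6. disable
    }
  }
}
```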

Now, we will see the functions that must be used to follow these steps.

10.1. Enable hints

Enabling hints creates a context for the current connection where the defined hints are stored.

This can be done using the enableHints function as follows:

exec enableHints()

The result should be a row with the following message:

Hints enabled for connection ...

10.2. Disable hints

Disabling hints destroys the hints context for the current connection and removes all the created hints. However, the execution plans that were created by executing queries will remain in the plan cache.

This can be done using the disableHints function as follows:

exec disableHints()

The result should be a row with the following message:

Hints disabled for connection %s

10.3. List hints

Listing hints can be useful to know the hints defined for the current connection, and to retrieve their IDs (which can be used to remove them).

This can be done using the listHints function as follows:

exec listHints()

The result will be a table with a row for each defined hint and three columns: the ID, which identifies the hint within the connection; the hint type; and a description.

10.4. Define hints

To define hints you have to invoke the appropriate function for the hint you want to define. The available hints are:

10.4.1. Force the use of an Index

This hint forces the access to a table through a defined index. To use it invoke the function as follows:

exec forceAccess('TABLE_NAME','INDEX_NAME')

Where TABLE_NAME is the qualified name of the table (e.g. db-APP-CUSTOMER) and INDEX_NAME is the qualified name of the index. This function does not check the existence of the index or the table, so if you force the access through a nonexistent index, the queries that use this table will fail.

10.4.2. Disable pushdown

This hint disables the pushdown of operations to the KiVi datastore for a given table, meaning that all the work except the scans will be done by Calcite. In general, this will negatively impact performance.

To use it invoke the function as follows:

exec disablePushdown('TABLE_NAME')

Where TABLE_NAME is the qualified name of the table (e.g. db-APP-CUSTOMER).

10.4.3. Fixed join order

This hint sets the order of the joins as they are written in the query. It works at query level, so it will affect all the joins present in the query.

To use it, invoke the function as follows:

exec fixJoinOrder()

10.4.4. Force Join type

This hint allows you to force the kind of join used between two given tables.

To use it, invoke the function as follows:

exec forceJoinType('JOIN_TYPE', 'TABLE_1', 'TABLE_2')

Where JOIN_TYPE is one of the available types: CORRELATE, MERGE_JOIN or NESTED_LOOP.

TABLE_1 and TABLE_2 are the unqualified names of the affected tables (e.g. CUSTOMER or WAREHOUSE). This function does not check that the chosen join type is available for the given pair of tables; if it is not, the query will fail.

10.4.5. Or to Union

This hint transforms OR conditions into UNION clauses. It accepts a boolean parameter which indicates whether the UNION will be a UNION ALL or not.

To use it, invoke the function as follows:

exec orToUnion(all)

Where all is true for a UNION ALL and false otherwise.

10.5. Parallel hints

The following hints are used to manage the parallel mode. Parallel mode will force parallelism in scans at the query engine. Scans are parallelized at the Key-Value datastore level anyway.

10.5.1. Enable parallel mode

This hint is used to enable the planner rules that transform a scan into a parallel scan when possible, and propose it as an alternative way to traverse a table (which will compete with the rest of alternatives in terms of cost).

To use it, invoke the function as follows:

exec enableParallelMode()

10.5.2. Define parallel tables

This hint is used to define the set of tables over which scans can be parallelized. It accepts a Java regular expression as the table name, so you can use a regexp to cover several tables or invoke the hint once per table.

To use it, invoke the function as follows:

exec enableParallelTable(REGEXP)

Where REGEXP is a Java regular expression matched against the fully qualified table name (e.g. 'db-APP-.*', or '.*' for all the tables).
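Since the parameter is a plain Java regular expression matched against the fully qualified name, its coverage can be checked offline with java.util.regex (the table names below are illustrative):

```java
import java.util.regex.Pattern;

public class ParallelTableRegexDemo {
  public static void main(String[] args) {
    // A pattern covering every table in the APP schema of database db
    Pattern appTables = Pattern.compile("db-APP-.*");
    System.out.println(appTables.matcher("db-APP-CUSTOMER").matches());   // true
    System.out.println(appTables.matcher("db-OTHER-CUSTOMER").matches()); // false
    // '.*' alone covers all the tables
    System.out.println(Pattern.matches(".*", "db-OTHER-CUSTOMER"));       // true
  }
}
```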

10.5.3. Enable parallel pushed down aggregates

This hint enables the planner rules that transform a KiVi scan with an aggregation program into a KiVi parallel scan (with the same aggregation program). To be chosen, this parallel scan will have to beat the rest of the alternatives in terms of cost.

To use it, invoke the function as follows:

exec enableParallelAgg()

10.6. Remove hints

This function removes a hint for the current connection. However, if the hint has already been used to execute a query and an execution plan has been created and stored in the query plan cache, removing the hint won’t remove that query plan, and the following queries will use it for their execution.

To use it, invoke the function as follows:

exec removeHint(HINT_ID)

Where HINT_ID (an integer, without quotes) is the identifier of the hint to be removed; it can be obtained from the listHints function.

10.7. Clean query plan cache

This function removes all the query plans that have been cached for a given table (not only for this connection but server-wide). It can be useful to ensure that a new query plan will be calculated when a hint is added or removed.

To do this, invoke the function as follows:

exec cleanPlanCache('TABLE_NAME')

Where TABLE_NAME is the qualified name of the table (e.g. db-APP-CUSTOMER) from which the query plans will be removed.

10.8. Enable/disable planner logger

These two functions let the user enable or disable the optimizer logger. To be able to use it, you have to define a logger in your log4j configuration for the class org.apache.calcite.plan.RelOptPlanner with the level you want.

When enabled, the log level for the RelOptPlanner logger will be set to ALL, and the planner will log all the query optimization steps in the appender configured for the RelOptPlanner logger.

Use it with caution because the amount of generated data can affect the performance of the query preparation.

To invoke it:

exec enablePlannerDebug()

exec disablePlannerDebug()

11. NOTICE

As stated, the work of the Query Engine forked from Apache Calcite. For the drivers, Apache Avatica framework is also used:

Apache Calcite
Copyright 2012-2019 The Apache Software Foundation

This product includes software developed at
The Apache Software Foundation (http://www.apache.org/).

This product is based on source code originally developed
by DynamoBI Corporation, LucidEra Inc., SQLstream Inc. and others
under the auspices of the Eigenbase Foundation
and released as the LucidDB project.

The web site includes files generated by Jekyll.
Apache Calcite -- Avatica
Copyright 2012-2019 The Apache Software Foundation

This product includes software developed at
The Apache Software Foundation (http://www.apache.org/).

Geohash coding is based on https://github.com/kungfoo/geohash-java with the following license file:

Copyright 2016 Silvio Heuberger and contributors

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.