Special Features

1. Data Distribution & Partitioning

One key element of a distributed database is how data is distributed among the different datastores.

LeanXcale provides 3 ways of partitioning data:

  • Key range partitioning. This is the most efficient way of partitioning if you know how to balance your data.

  • Hashing. Hashing is a very simple partitioning scheme and has the advantage of balancing data very well across datastores. The system calculates a hash value from the key and distributes data according to the modulus of the hash value. The disadvantage of hashing is that, for scans, the system has to scan all datastores because the data can be stored in any of them. Thus, scans require more resources than with key range partitioning.

  • Bidimensional partitioning. Bidimensional partitioning is an automatic partitioning scheme that is normally set up on top of one of the previous partitioning schemes. You define a time-evolving parameter and a criterion that will cause the system to automatically partition your data to make the best use of resources.

And the first question is: why should I distribute data at all? The answer is quite simple. If all the data sits in one datastore, all the work is done by a single component and you won’t take any advantage of parallelism. The more components you distribute your data across, the better.

However, if the distribution is not even, you will again be using one component more than the others, and that component will be your bottleneck.

Therefore, we want a good data distribution, one where access to the data is spread as evenly as possible (which in the end depends on your workload).

1.1. Key range partitioning

Key range partitioning is about partitioning and distributing your data based on ranges of the primary key.

Why is this the most efficient way to distribute data? In the end it depends on the kind of queries but, in general, if you have a condition over the primary key, the query will only be routed to the partitions that hold the matching data, and the rest of the partitions can be handling other queries. In a multiquery system, if the data is well balanced, this is the most efficient use of the resources in parallel.

1.1.1. Key range partitioning example

To partition by key range you can use the CREATE TABLE … DISTRIBUTE UNIFORM clause or the RECREATE TABLE clause. However, for the best control you will create each partition with the ALTER TABLE ADD PARTITION clause.

Let’s say we have a table DISTRICT with a two-field primary key, and we know it is not distributed evenly: 25% of the keys fall in the d_w_id range [0, 20), 25% in [20, 24), 25% in [24, 42), and the remaining 25% in [42, …).

So we first create the table:

CREATE TABLE district (
    d_id integer NOT NULL,
    d_w_id integer NOT NULL,
    d_name char(10),
    d_street_1 char(20),
    d_street_2 char(20),
    d_city char(20),
    d_state char(2),
    d_zip char(9),
    d_tax decimal(4,4),
    d_ytd decimal(12,2),
    d_next_o_id integer,
    CONSTRAINT pk_district PRIMARY KEY (d_w_id, d_id)
);

Then, we add all the partitions (since d_id is not significant in this description, I am using value 0 for all):

ALTER TABLE DISTRICT ADD PARTITION(d_w_id, d_id) FOR VALUES(20,0) MOVE;
ALTER TABLE DISTRICT ADD PARTITION(d_w_id, d_id) FOR VALUES(24,0) MOVE;
ALTER TABLE DISTRICT ADD PARTITION(d_w_id, d_id) FOR VALUES(42,0) MOVE;

Note that to get the 4 partitions, we only need to define 3 split points.
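If, on the contrary, you just want the system to split the key space uniformly, you can use the DISTRIBUTE UNIFORM clause mentioned above at creation time. The following is a minimal sketch, assuming the clause simply follows the table definition (the table name district_u is illustrative; check the SQL syntax document for the exact options):

CREATE TABLE district_u (
    d_id integer NOT NULL,
    d_w_id integer NOT NULL,
    d_name char(10),
    CONSTRAINT pk_district_u PRIMARY KEY (d_w_id, d_id)
) DISTRIBUTE UNIFORM;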

1.2. Hash partitioning

Hashing is - very basically - a transformation from an input space to a number. If the mapping from the input space to numbers is very uniform, the hash value balances the input space well, and you can distribute your data according to the hash value instead of the input space.
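As a worked illustration of the idea (the numbers are purely illustrative):

partition = HashFunction(key) mod N

e.g., with N = 4 datastores and HashFunction(key) = 1234567:
1234567 mod 4 = 3, so the row is stored in datastore 3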

In our case, the input space can be a set of fields of any type, and for LeanXcale’s partitioning you can then use the resulting hash code to partition.

One key point is which fields to use for the hash partitioning. I will try to go through the pros and cons with an example.

Let’s start from the following table definition:

CREATE TABLE ip_packets (
  IP_ORIGIN VARCHAR,
  IP_DESTINATION VARCHAR,
  PORT_ORIGIN INT,
  PORT_DESTINATION INT,
  INFOTS TIMESTAMP,
  PACKETID INT,
  BYTES_TRANSMITTED BIGINT,
  PRIMARY KEY (IP_ORIGIN, IP_DESTINATION, PORT_ORIGIN, PORT_DESTINATION, INFOTS)
);

At first, it’s not easy to know whether IP_ORIGIN will be evenly distributed, but it likely won’t be: you will be dealing with one or several subnetworks, and it’s very difficult to know in advance. So maybe we need to go for hash partitioning. Now, which fields do we include to calculate the hash value?

As a first rule of thumb, the fewer the fields, the more efficient the queries. So let’s start with IP_ORIGIN alone. Then HASHID = HashFunction(IP_ORIGIN), so any query with IP_ORIGIN informed can have its HASHID calculated exactly and will go to a specific partition to retrieve the data. On the other hand, if IP_ORIGIN is not informed, scans will be sent in parallel to all datastores. If some datastore holds no matching data, those are resources spent for no outcome.

Since IP_ORIGIN is the first field in the primary key it should be one of the most commonly informed fields.

However, I may have one server that generates 50% of the network packets (because it’s the central NAS of the company, for example). Then a hash that only considers IP_ORIGIN won’t be balanced, because there will be much more information in the partition holding the NAS’s IP.

Then, we can try to balance better by hashing both (IP_ORIGIN, IP_DESTINATION). We could even hash all of (IP_ORIGIN, IP_DESTINATION, PORT_ORIGIN, PORT_DESTINATION); this would certainly balance the data better, but a 10% imbalance in data size is likely outweighed by the fact that most queries will have IP_ORIGIN and IP_DESTINATION informed but won’t have all four fields informed. So we should stick to just (IP_ORIGIN, IP_DESTINATION).

1.2.1. Hash partitioning example

So let’s create the table considering those two fields:

CREATE TABLE ip_packets (
  IP_ORIGIN VARCHAR,
  IP_DESTINATION VARCHAR,
  PORT_ORIGIN INT,
  PORT_DESTINATION INT,
  INFOTS TIMESTAMP,
  PACKETID INT,
  BYTES_TRANSMITTED BIGINT,
  PRIMARY KEY (IP_ORIGIN, IP_DESTINATION, PORT_ORIGIN, PORT_DESTINATION, INFOTS),
  HASH(IP_ORIGIN, IP_DESTINATION) TO DISTRIBUTE
);

1.2.2. Advantages

So, in summary, the main advantage of hash partitioning is that it is a simple partitioning mechanism that takes advantage of all the resources of data distribution without requiring almost any advance knowledge about your data.

1.2.3. When to use

If you know your data distribution, or you have a representative sample of the data, I would go for key range partitioning. However, if this is not clear, hash partitioning is the way to balance your data.

There is a second situation in which hash partitioning can be the right choice. We have mentioned that if the fields in the HASH are not informed in the query, your scans will be sent to all components, because there is no way to know in which partitions the data is located. For some kinds of queries this is not a drawback but an advantage. For queries that traverse a small range, you want to save resources for other operations running in parallel. However, if your query selectivity is low (the query traverses a large amount of data), then the higher the degree of parallelism, the better. In this situation hash partitioning is a good choice even if you know your data distribution.
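For illustration, here is a hypothetical low-selectivity query over the ip_packets table defined above; since it has no predicate on the hashed fields, the scan fans out to all datastores in parallel, which is exactly what you want here:

-- aggregate over (almost) the whole table: parallel scans on every
-- datastore finish sooner than a single sequential scan would
SELECT IP_DESTINATION, SUM(BYTES_TRANSMITTED) AS total_bytes
FROM ip_packets
GROUP BY IP_DESTINATION;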

1.3. Bidimensional partitioning

LeanXcale actually allows multidimensional partitioning, not just bidimensional partitioning, but let’s first focus on two dimensions and on the automation involved in bidimensional partitioning.

Bidimensional partitioning is an automatic partitioning scheme that is normally set up on top of one of the previous partitioning schemes. You define a time-evolving parameter and a criterion that will cause the system to automatically partition your data to make the best use of resources.

When data is inserted into a B-tree, the tree keeps growing, and the moment the tree cannot fit in memory, I/O starts to dominate the process and the insert rate of the table drops considerably. In LSM trees this problem is less pronounced, but B-trees are much more efficient for scans.

LeanXcale uses a hybrid LSM + B-tree structure, but the size of the final B-tree still plays an important role in ingestion performance.

Bidimensional partitioning defines a mechanism that splits the tree automatically so that the most relevant information always fits in memory. This way, the ingestion rate is kept at its maximum.

The typical use case is a source whose events carry a timestamp that evolves naturally with time; IoT devices, sensors, GPS information, and time series are common sources showing this behavior.

You may have events that reach the database with some delay, but typically data arrives with increasing timestamps. So if you split your tree according to some rule on the timestamp, you can have tree regions that always fit in memory, keeping a sustained, very fast ingestion rate.

This also has additional benefits. In these use cases queries are usually constrained to a certain time window, so you will only visit a certain number of regions. Moreover, it’s usually true that the older the region, the less frequently it is visited.

1.3.1. Advantages

The bidimensional partition has, then, the following advantages:

  1. Keeps a sustained fast data ingestion rate.

  2. Allows for efficient primary key queries. Primary key queries go directly through the first dimension to get the information quickly. (Figure: access to data by primary key.)

  3. Allows for efficient time range queries. Time range queries go through the second dimension, limiting the number of regions visited. (Figure: access to data by time range.)

Bidimensional partitioning can be applied to any field that increases monotonically; it doesn’t have to be a timestamp.

1.3.2. How to use

To use bidimensional partitioning, when you create the table you define the field on which the database should automatically partition. For this you use the keyword AUTOSPLIT. You can also define the data retention policy.

AUTOSPLIT AUTO lets the LeanXcale database apply its internal policy, which is based on the size of the region and memory availability.

Further, for column values that increase along time, it is recommended to use incremental bidimensional partitioning with the SPLIT modifier.

Below is an example that creates a table that is automatically split every day and whose partitions are retained for 6 months (180 days):

CREATE TABLE bidiTable (
  id BIGINT,
  ts TIMESTAMP SPLIT EVERY INTERVAL '1' DAYS AUTOREMOVE AFTER INTERVAL '180' DAYS(3),
  name VARCHAR,
  PRIMARY KEY (id, ts)
);

You can find all the options for bidimensional partitioning in the SQL syntax document.

1.3.3. When to use

Bidimensional partitioning is especially interesting for time-evolving data where high ingestion rates and time-range-limited queries are important.
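For example, here is a hypothetical query over the bidiTable created above; because ts drives the automatic daily splitting, a one-week predicate only touches the few regions covering that window:

-- only the ~7 daily partitions covering the requested week are visited
SELECT id, name
FROM bidiTable
WHERE ts BETWEEN TIMESTAMP'2024-03-01 00:00:00'
             AND TIMESTAMP'2024-03-07 23:59:59';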

2. Online Aggregates

In real-time analytics, a major requirement is the ability to ingest data at high rates while, at the same time, computing large aggregate queries over the real-time data.

For instance, a common use case in big data is ingesting data at high rates and computing KPIs or other metrics over the ingested data. Examples of this use case are performance monitoring, IoT, eAdvertisement, smart grids, industry 4.0, etc. This kind of workload is troublesome for SQL databases because they are not efficient at ingesting data. Also, the aggregate analytical queries are very expensive because they require traversing large amounts of data very frequently.

NoSQL key-value data stores ingest data more efficiently and can handle that part of the workload. However, they are not good at analytical queries, if they support them at all. As a result, it is very common to find complex architectures that combine different data management technologies to solve this use case.

One key differentiating feature of LeanXcale is online aggregates. LeanXcale has developed a new technology, based on a brand-new semantic multi-version concurrency control (patent pending), that enables computing aggregates incrementally and in real time, using aggregate tables. As data is ingested, the relevant aggregate tables are updated, so aggregates are always pre-computed.

This way, the formerly large, expensive analytical aggregate queries become almost costless queries that read one or a few rows of an aggregate table. Data ingestion becomes slightly more expensive, but the cost of computing the aggregates at query time disappears. Besides, if you combine this with the advantages of bidimensional partitioning described above, you get a really powerful solution.

How does LeanXcale solve the concurrency problem of keeping these values updated efficiently? With a new technology based on a patent-pending semantic multi-version concurrency control. In this new concurrency control, writes are not blind: they carry the operation to be performed, e.g., Sum(10) or Sum(20), rather than the final value as a standard update would. Since additions are commutative, they do not conflict, as long as the system keeps track that they are additions until the corresponding version of the row is persisted.

In order to attain data consistency, and in particular the consistency expected from multi-version concurrency control, called snapshot isolation, one has to be really smart about handling multiversioning. This is exactly what the technology developed by LeanXcale does. This way, aggregates are conflict-free.
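To make the contrast concrete, here is a sketch (illustrative pseudocode, not actual LeanXcale internals) of why delta writes commute while classic read-modify-write updates conflict:

-- Classic read-modify-write on the same aggregate row:
--   T1: reads passengers = 100, writes passengers = 110
--   T2: reads passengers = 100, writes passengers = 120
--   => lost update, or one transaction blocks/aborts
--
-- Semantic (delta) writes carry the operation, not the final value:
--   T1: passengers += 10   (Sum(10))
--   T2: passengers += 20   (Sum(20))
--   => additions commute: both apply without conflict, 100 + 10 + 20 = 130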

2.1. How to use Online Aggregations

To use online aggregations you just need to create an online aggregation table that is related to a parent table. Below you can see an example.

CREATE TABLE trips (
  trip_id INTEGER NOT NULL,
  countrycode INTEGER,
  passengers INTEGER,
  start_ts TIMESTAMP,
  end_ts TIMESTAMP,
  duration INTEGER,
  PRIMARY KEY (trip_id));

CREATE ONLINE AGGREGATE agg_trips1 AS
max(start_ts) max_start,
min(start_ts) min_start,
sum(CAST(passengers AS BIGINT)) passengers,
count(*) count_all
FROM trips;

The previous statement creates a relation between the parent table TRIPS and the child table AGG_TRIPS1, so that:

  • All data put into table TRIPS will automatically modify the aggregation values according to the semantics of the aggregate

  • The Query Engine optimizer will recognize query patterns on table TRIPS and will replace a compatible aggregation query on TRIPS with the equivalent query on AGG_TRIPS1

  • The aggregation functions implemented in LX-DB are the following: SUM, MAX, MIN, COUNT.

In this case, we aggregate over all rows in the parent table, so there will be only one record in the child table with the total aggregates.

Usually, we aggregate values grouping by some field or with some kind of time interval.

CREATE ONLINE AGGREGATE agg_trips2 AS
  max(start_ts) max_start,
  min(start_ts) min_start,
  sum(CAST(passengers AS BIGINT)) passengers,
  count(*) count_all
FROM trips
GROUP BY
  countrycode,
  FLOOR(end_ts to hour) endm_ts;

As you may have realized, it is very similar to a GROUP BY query: just aggregate over one parent table and name the group-by expressions when they are not column identifiers.

In the case above, the aggregate table will have two extra fields, countrycode and endm_ts, used as keys. It will keep the maximum and minimum of start_ts in columns max_start and min_start, add 1 to the column count_all, and keep the summation of passengers. This aggregate provides online statistics on trips grouped by country and by end time, per hour.
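Once populated, the child table can be queried directly. For example, a hypothetical query computing the average number of passengers per trip, per country and hour, straight from the pre-computed values:

SELECT countrycode, endm_ts,
       CAST(passengers AS DOUBLE) / count_all AS avg_passengers
FROM agg_trips2;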

Online aggregation can be combined with partitioning or bidimensional partitioning, which is especially well suited to time-evolving data.

create table big_metric (
  id bigint,
  ts bigint AUTOSPLIT '1d' AUTOREMOVE AFTER '7d',
  metric1 bigint,
  metric2 double,
  PRIMARY KEY (id, ts),
  HASH(id, ts) TO DISTRIBUTE
);

CREATE ONLINE AGGREGATE agg_big_metric AS
  SUM(metric1) metric1,
  SUM(metric2) metric2
FROM big_metric
GROUP BY id, FLOOR(ts to hour) ts_h AUTOSPLIT '30d' AUTOREMOVE '360d';

As in the previous example, the child table will have two extra key fields, id and ts_h, with bidimensional partitioning on the latter.

There are some cases in which the relation between the parent table and the aggregate is tricky. For such cases, you can also create online aggregates and populate them manually. In those scenarios, you can use the keyword MANUAL to create an aggregate table:

CREATE TABLE metrics (
  id INTEGER NOT NULL,
  ts TIMESTAMP,
  pwr_db FLOAT,
  PRIMARY KEY (id)
);

-- Feed parent table
UPSERT INTO metrics(id, ts, pwr_db) VALUES
 (1, TIMESTAMP'2022-02-11 06:23:24', 0.0),
 (2, TIMESTAMP'2022-02-11 06:25:24', 3.0),
 (3, TIMESTAMP'2022-02-11 07:25:24', 3.0),
 (4, TIMESTAMP'2022-02-11 07:55:24', 10.0);

CREATE MANUAL ONLINE AGGREGATE agg_metrics AS
  sum(pwr_db) pwr,
  count(*) count_all
FROM metrics
GROUP BY
  FLOOR(ts to hour) ts_h;

-- Manual feed child table with related data: transform powers from dB to natural units
UPSERT INTO agg_metrics (ts_h, pwr, count_all) VALUES
 (FLOOR(TIMESTAMP'2022-02-11 06:23:24' TO HOUR), POWER(10.0,0.0/10.0), 1),
 (FLOOR(TIMESTAMP'2022-02-11 06:25:24' TO HOUR), POWER(10.0,3.0/10.0), 1),
 (FLOOR(TIMESTAMP'2022-02-11 07:25:24' TO HOUR), POWER(10.0,3.0/10.0), 1),
 (FLOOR(TIMESTAMP'2022-02-11 07:55:24' TO HOUR), POWER(10.0,10.0/10.0), 1);

SELECT 10*LOG10(pwr/count_all) AS avg_pwr_db FROM agg_metrics;
+--------------------+
|     AVG_PWR_DB     |
+--------------------+
| 1.7540486677250415 |
| 7.779797539885854  |
+--------------------+

Doing the UPSERT operations above through the direct KiVi API is really simple and even much more efficient, so I would recommend using the direct KiVi API in your preferred programming language.

Be aware that in some languages (C#, Python) you have to use the method upsertDelta() instead of upsert().

2.2. Advantages: What’s the key difference?

OK, but how is this different from doing the following in a standard system?

UPSERT INTO trips(trip_id, countrycode, passengers, start_ts, end_ts, duration) VALUES (0, 44, 3, TIMESTAMP'2021-02-11 06:23:24', TIMESTAMP'2021-02-11 08:23:23', 121);
UPDATE agg_trips2 SET
  max_start = MAX(max_start, TIMESTAMP'2021-02-11 06:23:24'),
  min_start = MIN(min_start, TIMESTAMP'2021-02-11 06:23:24'),
  count_all = count_all + 1,
  passengers = passengers + 3
WHERE countrycode = 44 AND endm_ts = FLOOR(TIMESTAMP'2021-02-11 08:23:23' to hour);

The real innovation is the concurrency control. In the case above, there could be many updates to the same row (different timestamps within the same hour produce the same key), and this would cause a lot of contention. LeanXcale provides the “semantic concurrency control” described above. By managing these operations in that special way, there is no contention. The impact on ingestion is very low, so it’s worth having all those values pre-computed. You will notice the moment you try it.

Then, the second advantage is that the Query Engine optimizer will know about this structure and when someone runs a query like:

SELECT
  countrycode,
  max(start_ts),
  min(start_ts),
  sum(CAST(passengers AS BIGINT)),
  count(*)
FROM trips
GROUP BY
  countrycode,
  FLOOR(end_ts to hour)

The optimizer will automatically transform it to:

SELECT countrycode, max_start, min_start, passengers, count_all FROM agg_trips2;

This second query runs in very little time compared to the original one, because the original would have to traverse all the trips, i.e., many more rows than the aggregate table holds.

OK, but I could calculate these aggregates in memory; what’s the advantage then? Think about it for a moment: how are you handling persistence (what happens if your service stops)? How are you handling concurrency? How many aggregates are you handling?

The advantage is that you don’t have to calculate them in memory for an arbitrary number of aggregates. Computing a few aggregates in memory is simple, but in the database you can hold a complex aggregate structure. Besides, the LeanXcale database handles concurrency and persistence for you.

2.2.1. When to use

Online aggregates are a really powerful mechanism because they give you pre-computed data immediately available to serve your application. Note that even a complex KPI may be composed of pre-computed aggregates; that’s the case for the standard deviation statistic, for example.
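For instance, since Var(x) = E[x²] − E[x]², the standard deviation needs only three pre-computed values. Here is a sketch, assuming an expression like duration * duration is accepted inside the aggregate definition just as the CAST above is (the aggregate name agg_trip_stats is hypothetical):

CREATE ONLINE AGGREGATE agg_trip_stats AS
  sum(CAST(duration AS BIGINT)) sum_dur,
  sum(CAST(duration AS BIGINT) * duration) sum_dur_sq,
  count(*) count_all
FROM trips
GROUP BY countrycode;

-- standard deviation composed from the pre-computed pieces
SELECT countrycode,
       SQRT(CAST(sum_dur_sq AS DOUBLE) / count_all
            - (CAST(sum_dur AS DOUBLE) / count_all)
            * (CAST(sum_dur AS DOUBLE) / count_all)) AS stddev_duration
FROM agg_trip_stats;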

Common applications are:

  • Having immediate statistical aggregates to provide real-time results for your application

  • Working with multiresolution data. You may have a system whose source devices send information every second, while most statistics can be calculated much more easily at minute resolution. Aggregates can store data at minute resolution and at 15-minute resolution while keeping second-resolution raw data, and you can use a different retention policy for each resolution (see the sketch after this list).

  • At its maximum, together with multidimensional partitioning, online aggregates can be seen as pre-computed OLAP cubes: very powerful data available at very low latency.
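A sketch of the multiresolution idea mentioned above, using a hypothetical readings table with second-resolution raw data and two coarser aggregate resolutions (table and aggregate names are illustrative, and FLOOR(ts TO MINUTE) is assumed to be supported like FLOOR(ts to hour) above):

CREATE TABLE readings (
  device_id BIGINT,
  ts TIMESTAMP,
  val DOUBLE,
  PRIMARY KEY (device_id, ts)
);

-- minute-resolution statistics, maintained online as raw data arrives
CREATE ONLINE AGGREGATE readings_1m AS
  SUM(val) sum_val,
  COUNT(*) count_all
FROM readings
GROUP BY device_id, FLOOR(ts TO MINUTE) ts_m;

-- hour-resolution statistics on the same raw table
CREATE ONLINE AGGREGATE readings_1h AS
  SUM(val) sum_val,
  COUNT(*) count_all
FROM readings
GROUP BY device_id, FLOOR(ts TO HOUR) ts_h;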

3. Trigram indexing and searching

A trigram index allows fast similarity (fuzzy) search over a varchar column or a combination of varchar columns from one or more tables. Similarity is measured by the number of common trigrams (substrings of three consecutive characters) between the two strings being compared.
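For instance, the trigrams of the string 'potter' are:

'potter' → { pot, ott, tte, ter }

Two strings sharing many of these three-character windows are considered similar.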

To create a trigram index on one or more varchar columns from a table, run the following statement:

EXEC CreateTrigramIndex('<TRGM_IDX>, <MAIN_TBL>, <MAIN_TBL_PK>, <COL_DEF>[, <COL_DEF>...]')

Where:

  • <TRGM_IDX> is the name of the trigram index;

  • <MAIN_TBL> is the name of the table;

  • <MAIN_TBL_PK> is the name of the primary key column of <MAIN_TBL>. Note: in the current version, the primary key must be a single varchar column.

  • <COL_DEF> is the name of a varchar column being indexed;

  • If the index is defined over multiple columns, the concatenation of all values with a joining space character is considered as the entry for indexing;

  • If the value of any column is null, the entry will not be indexed.

Note: in the current version, trigram indexes, unlike regular indexes, do not get populated with existing data upon creation. To guarantee consistency, make sure to create the trigram index when tables are empty. Once created, however, trigram indexes are incrementally maintained in consistency with data being inserted, updated, or deleted.

To perform similarity search, run the following table function:

SELECT ID, SCORE FROM TABLE(TrigramSearch('<TRGM_IDX>', '<STR>'))

Trigram index/search is case-insensitive, i.e., the case of characters in <STR> does not matter.

The returned result set contains the values of <MAIN_TBL_PK> for which the value of the indexed column (or combination of columns) is found to be similar to <STR>, together with an associated score. The score is an integer that corresponds to the similarity: a greater score means higher similarity. The greatest possible score is the length of the indexed string minus 1, achieved only when <STR> is equal to the indexed string. The result can contain more than one candidate. Whether a candidate is included in the result set depends on a candidate threshold, a fraction of the highest score below which candidates are filtered out. The candidate threshold defaults to 0.8 and can be explicitly specified through the optional third argument of the TrigramSearch function.
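For instance, to widen the candidate set in the examples below, one could lower the threshold; a hypothetical call, assuming the threshold is passed as a numeric third argument:

-- keep candidates scoring at least 50% of the highest score
SELECT ID, SCORE FROM TABLE(TrigramSearch('EMP_TRGM', 'Larry Russel', 0.5));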

Indexed column(s) can be from foreign table(s) that can be linked to <MAIN_TBL> through a foreign-key-like relationship. In this case <COL_DEF> is of the form:

<COL_DEF> := <COL_NAME> FROM <OTHER_TBL> ON <MAIN_TBL_FK> = <OTHER_TBL_PK>

Where:

  • <OTHER_TBL> is the name of the foreign table referenced from <MAIN_TBL>;

  • <COL_NAME> is the name of the varchar column from <OTHER_TBL> being indexed;

  • <OTHER_TBL_PK> is the name of the primary key column of <OTHER_TBL>. Note: in the current version, the primary key of the referenced table must be a single varchar column.

  • <MAIN_TBL_FK> is the name of the referencing column from <MAIN_TBL> that is used to join <MAIN_TBL> with <OTHER_TBL>. Note: in the current version, the referencing column must be varchar.

Examples:

CREATE TABLE EMP (ID VARCHAR, ENAME VARCHAR);
CREATE TABLE PROJ (ID VARCHAR, PNAME VARCHAR);
CREATE TABLE EMP_PROJ (ID VARCHAR, EID VARCHAR, PID VARCHAR, EROLE VARCHAR);

-- create a simple trigram index on EMP.ENAME
EXEC CreateTrigramIndex('EMP_TRGM, EMP, ID, ENAME');

-- create a complex trigram index on (EMP.ENAME, PROJ.PNAME, EMP_PROJ.EROLE)
-- the main table must be the one with foreign keys to all others, i.e. EMP_PROJ
EXEC CreateTrigramIndex('EMP_PROJ_TRGM, EMP_PROJ, ID, ENAME FROM EMP ON EID=ID, PNAME FROM PROJ ON PID=ID, EROLE');

-- populate EMP
INSERT INTO EMP VALUES ('E1', 'Frank Russel');
INSERT INTO EMP VALUES ('E2', 'Larry Potter');

-- populate PROJ
INSERT INTO PROJ VALUES ('P1', 'Graph Databases');
INSERT INTO PROJ VALUES ('P2', 'Computer Science');

-- populate EMP_PROJ
INSERT INTO EMP_PROJ VALUES ('EP1', 'E1', 'P1', 'Coordinator');
INSERT INTO EMP_PROJ VALUES ('EP2', 'E1', 'P2', 'Advisor');
INSERT INTO EMP_PROJ VALUES ('EP3', 'E2', 'P1', 'Consultant');
INSERT INTO EMP_PROJ VALUES ('EP4', 'E2', 'P2', 'Manager');

-- search with ENAME as is: greatest possible score returned
SELECT E.ENAME, T.SCORE
FROM TABLE(TrigramSearch('EMP_TRGM', 'Larry Potter')) T, EMP E
WHERE T.ID = E.ID;
+--------------+-------+
|    ENAME     | SCORE |
+--------------+-------+
| Larry Potter | 11    |
+--------------+-------+

-- search with slightly confused ENAME: the score is lower
SELECT E.ENAME, T.SCORE
FROM TABLE(TrigramSearch('EMP_TRGM', 'Harry Potter')) T, EMP E
WHERE T.ID = E.ID;
+--------------+-------+
|    ENAME     | SCORE |
+--------------+-------+
| Larry Potter | 9     |
+--------------+-------+

-- search with balanced confusion across the two ENAME entries:
-- two equally-scored candidates returned
SELECT E.ENAME, T.SCORE
FROM TABLE(TrigramSearch('EMP_TRGM', 'Larry Russel')) T, EMP E
WHERE T.ID = E.ID;
+--------------+-------+
|    ENAME     | SCORE |
+--------------+-------+
| Larry Potter | 5     |
| Frank Russel | 5     |
+--------------+-------+


-- search with different levels of confusion
-- on the more complex EMP_PROJ_TRGM index

SELECT E.ENAME, P.PNAME, EP.EROLE, T.SCORE
FROM TABLE(TrigramSearch('EMP_PROJ_TRGM', 'Larry Potter graph databases consultant')) T,
EMP E, PROJ P, EMP_PROJ EP
WHERE T.ID = EP.ID AND EP.EID = E.ID AND EP.PID = P.ID;
+--------------+-----------------+------------+-------+
|    ENAME     |      PNAME      |   EROLE    | SCORE |
+--------------+-----------------+------------+-------+
| Larry Potter | Graph Databases | Consultant | 38    |
+--------------+-----------------+------------+-------+

SELECT E.ENAME, P.PNAME, EP.EROLE, T.SCORE
FROM TABLE(TrigramSearch('EMP_PROJ_TRGM', 'Larry Potter graph databases advisor')) T,
EMP E, PROJ P, EMP_PROJ EP
WHERE T.ID = EP.ID AND EP.EID = E.ID AND EP.PID = P.ID;
+--------------+-----------------+------------+-------+
|    ENAME     |      PNAME      |   EROLE    | SCORE |
+--------------+-----------------+------------+-------+
| Larry Potter | Graph Databases | Consultant | 25    |
+--------------+-----------------+------------+-------+

SELECT E.ENAME, P.PNAME, EP.EROLE, T.SCORE
FROM TABLE(TrigramSearch('EMP_PROJ_TRGM', 'Larry Potter graph science advisor')) T,
EMP E, PROJ P, EMP_PROJ EP
WHERE T.ID = EP.ID AND EP.EID = E.ID AND EP.PID = P.ID
ORDER BY T.SCORE DESC;
+--------------+------------------+------------+-------+
|    ENAME     |      PNAME       |   EROLE    | SCORE |
+--------------+------------------+------------+-------+
| Larry Potter | Computer Science | Manager    | 16    |
| Frank Russel | Computer Science | Advisor    | 13    |
| Larry Potter | Graph Databases  | Consultant | 13    |
+--------------+------------------+------------+-------+