Java
1. Two interfaces to access the same data

LeanXcale is a SQL database with NoSQL characteristics. As a relational key-value database, LeanXcale provides access to the same data through two independent APIs:
- A 2003 ANSI SQL interface that is powerful and easy to use but can be slow, so it is most suitable for complex queries (e.g., multi-join queries).
- A proprietary key-value interface, called KiVi, that is fast and provides all functionality except join operations, so it is most suitable for simpler operations where the overhead of SQL is too costly (e.g., updates and inserts).
Both interfaces interact with the same data and can be part of the same transaction.
2. SQL

2.1. Installation
To use the JDBC driver, you first have to add the LeanXcale maven repository to your maven configuration. Include the following lines under the <repositories> tag of the pom.xml file:
<repositories>
<repository>
<id>maven-releases</id>
<url>https://nexus.leanxcale.com/repository/maven-releases</url>
</repository>
</repositories>
Then add the maven dependency under the <dependencies> tag in the same file:
<dependency>
<groupId>com.leanxcale</groupId>
<artifactId>qe-driver</artifactId>
<version>2.0.0</version>
</dependency>
If you prefer to include the library in your project manually, then you can download the Java JDBC driver for LeanXcale from the Drivers page. In that case, you must execute the following command to install it into your local maven repository:
mvn install:install-file \
  -Dfile=<path-to-file> \
  -DgroupId=com.leanxcale \
  -DartifactId=qe-driver \
  -Dversion=2.0.0 \
  -Dpackaging=jar \
  -DgeneratePom=true
where <path-to-file> is the path to the downloaded jar file.
With this, you can use the maven dependency defined previously.
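If your runtime does not auto-discover JDBC drivers, you can also register the driver class explicitly. A minimal sketch, using the driver class name that also appears in the connection pool example below:
// Usually unnecessary with JDBC 4+ driver auto-discovery, but harmless
Class.forName("com.leanxcale.client.Driver");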
2.2. Quick Start
This section provides a set of code snippets to connect, read, insert, and update information through the JDBC Java API:
2.2.1. Connecting to LeanXcale
Connecting to the SQL interface requires some basic information:
- If you use LeanXcale as a Cloud Service, then you can find the IP or hostname for your instance in the LeanXcale Cloud Console.
- If you use LeanXcale on-premise, then provide the information from your installation.
In our examples, we include these parameters:
- IP / hostname: 123.45.67.89
- Port: 1529
- DB name: db
- Username: APP
- Password: APP
With JDBC, a database is represented as a URL, user, and password:
import java.sql.Connection;
import java.sql.DriverManager;
Connection conn = DriverManager.getConnection("jdbc:leanxcale://123.45.67.89:1529/db", "APP", "APP");
If you are connecting to a database with security enabled, you must add a parameter to the JDBC URL specifying that you want a secure connection:
Connection conn = DriverManager.getConnection("jdbc:leanxcale://123.45.67.89:1529/db;secure=true", "APP", "APP");
Note: You must have previously downloaded your TLS certificate and added it to your Java truststore.
By default, the connection is configured with autocommit, so each statement is committed as soon as it is executed. This behavior can be changed with:
conn.setAutoCommit(false);
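With autocommit disabled, changes only become visible to other sessions after an explicit commit. A minimal sketch using standard JDBC calls (the inserted values are illustrative):
try (Statement st = conn.createStatement()) {
    st.executeUpdate("INSERT INTO person VALUES (101, 'Jane', 'Doe', '555123456')");
    conn.commit();    // make the changes durable and visible
} catch (SQLException e) {
    conn.rollback();  // discard the whole transaction on error
    throw e;
}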
To close the database connection, call the close() method on the Connection:
conn.close();
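Because java.sql.Connection implements AutoCloseable, the connection can also be closed automatically with try-with-resources; a minimal sketch:
try (Connection c = DriverManager.getConnection("jdbc:leanxcale://123.45.67.89:1529/db", "APP", "APP")) {
    // ... use the connection ...
}   // closed here automatically, even if an exception is thrown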
Using an external pool
Another way to obtain a connection is to use a connection pool. The following example uses the HikariCP JDBC connection pool:
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
final HikariConfig configPool = new HikariConfig();
configPool.setMaximumPoolSize(20);
configPool.setDriverClassName("com.leanxcale.client.Driver");
configPool.setJdbcUrl("jdbc:leanxcale://172.17.0.2:1522/db");
configPool.setUsername("app");
configPool.setPassword("app");
configPool.setAutoCommit(false);
final HikariDataSource ds = new HikariDataSource(configPool);
Connection connection = ds.getConnection();
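With a pool in place, closing a connection returns it to the pool instead of tearing down the physical connection, and the pool itself is closed when the application shuts down:
connection.close();  // returns the connection to the HikariCP pool for reuse
ds.close();          // closes the pool (and its connections) on application shutdown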
2.2.2. SQL Queries
Any time you want to query the database with SQL statements, you need a Statement or PreparedStatement instance. Once you have one, use it to execute the query; this returns a ResultSet instance from which you obtain the complete result.
import java.sql.Statement;
import java.sql.ResultSet;
import java.sql.SQLException;
try (Statement st = conn.createStatement()) {
try (ResultSet rs = st.executeQuery("SELECT * FROM person WHERE lastName = 'Doe'")) {
while (rs.next()) {
System.out.print("Column 1 returned ");
System.out.println(rs.getString(1));
}
}
}
The same example with PreparedStatement requires binding the value in the query:
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
try (PreparedStatement pst = conn.prepareStatement("SELECT * FROM person WHERE lastName = ?")) {
pst.setString(1,"Doe");
try (ResultSet rs = pst.executeQuery()) {
while (rs.next()) {
System.out.print("Column 1 returned ");
System.out.println(rs.getString(1));
}
}
}
This example shows how we would bind an array value.
try (PreparedStatement pst = conn.prepareStatement("SELECT * FROM tarry WHERE doubles = ?")) {
// setArray() expects a java.sql.Array, so build one from the connection
// (the "DOUBLE" type name is an assumption; check the type names your driver supports)
java.sql.Array doublesArray = conn.createArrayOf("DOUBLE", new Double[]{0.023533916958532797, 5.476721734666199, 1.0});
pst.setArray(1, doublesArray);
try (ResultSet rs = pst.executeQuery()) {
while (rs.next()) {
int id = rs.getInt("id");
String longs = rs.getString(2);
String doubles = rs.getString(3);
String strings = rs.getString(4);
}
}
}
2.2.3. Creating, modifying and dropping
To create, modify, or drop a database object, such as a table or sequence, use the execute() method, which is similar to executeQuery() but does not return a ResultSet.
Tables
An example of creating a table:
String createTable= "CREATE TABLE person (" +
" id BIGINT NOT NULL" +
", name VARCHAR " +
", lastName VARCHAR " +
", birth_date DATE " +
", PRIMARY KEY (id)" +
")";
Statement st = conn.createStatement();
st.execute(createTable);
st.close();
Then, to drop the created table:
Statement st = conn.createStatement();
st.execute("DROP TABLE person");
st.close();
Sequences
Another example of a database object is a sequence. A sequence can be created together with a table and associated with one of its fields:
String createTable= "CREATE TABLE person (" +
" id BIGINT NOT NULL GENERATED ALWAYS AS IDENTITY" +
", name VARCHAR " +
", lastName VARCHAR " +
", phone VARCHAR " +
", PRIMARY KEY (id)" +
")";
Statement st = conn.createStatement();
st.execute(createTable);
st.close();
Alternatively, a standalone sequence can be created with:
Statement st = conn.createStatement();
st.execute("CREATE SEQUENCE IF NOT EXISTS personSeq");
st.close();
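The standalone sequence would typically be consumed from SQL when inserting rows. The sketch below assumes the standard SQL:2003 NEXT VALUE FOR syntax and a hypothetical invoice table; check the LeanXcale SQL reference for the exact sequence syntax supported:
Statement st = conn.createStatement();
// "invoice" is a hypothetical table; NEXT VALUE FOR is assumed SQL:2003 syntax
st.execute("CREATE TABLE invoice (id BIGINT NOT NULL, amount BIGINT, PRIMARY KEY (id))");
st.executeUpdate("INSERT INTO invoice (id, amount) VALUES (NEXT VALUE FOR personSeq, 100)");
st.close();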
Indexes
Unlike sequences, indexes can only be created after a table exists:
Statement st = conn.createStatement();
st.execute("CREATE INDEX phoneIdx ON person (phone)");
st.close();
If we want a unique index to avoid duplicate values, then we create the index with this statement:
st.execute("CREATE UNIQUE INDEX passportIdx ON person (passport)");
2.2.4. Inserting, updating and deleting
To change data (i.e., perform an INSERT, UPDATE, or DELETE), we use the executeUpdate() method. It is similar to the executeQuery() method used for SELECT statements, but instead of a ResultSet it returns the number of rows affected by the INSERT, UPDATE, or DELETE statement.
To insert one new record:
String sql = "INSERT INTO person VALUES (100, 'John', 'Doe', '555333695')";
st.executeUpdate(sql);
To update existing records:
String sql = "UPDATE person SET passport = '666668888' WHERE id in (100, 101)";
st.executeUpdate(sql);
To delete records:
int id = 100;
PreparedStatement pst = conn.prepareStatement("DELETE FROM person WHERE id = ?");
pst.setInt(1, id);
int rowsDeleted = pst.executeUpdate();
System.out.println(rowsDeleted + " rows deleted");
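The same pattern works with bound parameters; a minimal sketch of a parameterized insert (the values are illustrative):
try (PreparedStatement pst = conn.prepareStatement("INSERT INTO person (id, name, lastName, phone) VALUES (?, ?, ?, ?)")) {
    pst.setLong(1, 102L);
    pst.setString(2, "Jane");
    pst.setString(3, "Doe");
    pst.setString(4, "555123457");
    int rowsInserted = pst.executeUpdate();
    System.out.println(rowsInserted + " rows inserted");
}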
2.2.5. Queries from tables with array columns
Here are some examples of queries on tables with array columns.
- We can get the values of array fields as character strings.
try (Statement s = connection.createStatement()) {
ResultSet rs = s.executeQuery("SELECT id, TO_string(longs), TO_string(doubles), TO_string(strings) FROM tarry");
while (rs.next()) {
int id = rs.getInt("id");
String longs = rs.getString(2);
String doubles = rs.getString(3);
String strings = rs.getString(4);
}
}
In each of these String variables, we would receive the entire array converted into a string. For the variable longs, it would be: [-4367937811447089352, 7, 5575876413993699465]
For the variable doubles, it would be: [0.023533916958532797, 5.476721734666199, 1.0]
For the variable strings, it would be: [MADRID, MURCIA, JEREZ]
- We can perform queries that filter using the entire contents of an array field.
try (Statement s = connection.createStatement()) {
ResultSet rs = s.executeQuery("SELECT * FROM tarry WHERE strings = array['CUENCA', 'BILBAO']");
while (rs.next()) {
int id = rs.getInt(1);
Array longs = rs.getArray(2);
Array doubles = rs.getArray(3);
Array strings = rs.getArray(4);
}
}
We would get the records whose strings column contains an array with the same number of positions and the same values in those positions.
+----+------------------------------------------------+------------------------------------------------+-------------------------+
| ID | LONGS | DOUBLES | STRINGS |
+----+------------------------------------------------+------------------------------------------------+-------------------------+
| 2 | [2334589765239847563, 0, 44] | | [CUENCA, BILBAO] |
+----+------------------------------------------------+------------------------------------------------+-------------------------+
- We could also filter by the contents of a particular position in an array.
try (Statement s = connection.createStatement()) {
ResultSet rs = s.executeQuery("SELECT * FROM tarry WHERE longs[3] = 5575876413993699465");
while (rs.next()) {
int id = rs.getInt(1);
Array longs = rs.getArray(2);
Array doubles = rs.getArray(3);
Array strings = rs.getArray(4);
}
}
The result would be something like this:
+----+------------------------------------------------+------------------------------------------------+-------------------------+
| ID | LONGS | DOUBLES | STRINGS |
+----+------------------------------------------------+------------------------------------------------+-------------------------+
| 1 | [-4367937811447089352, 7, 5575876413993699465] | [0.023533916958532797, 5.476721734666199, 1.0] | [MADRID, MURCIA, JEREZ] |
+----+------------------------------------------------+------------------------------------------------+-------------------------+
2.2.6. Queries from multiple tables
In this example, we retrieve information from multiple tables with a single query:
Statement stmt = conn.createStatement();
String query ="SELECT name, lastname, isbn FROM person INNER JOIN books ON person.id = books.authorid";
ResultSet rs = stmt.executeQuery(query);
System.out.println("Name LastName ISBN");
while (rs.next()) {
String name = rs.getString("name");
String lastname = rs.getString("lastname");
int isbn = rs.getInt("isbn");
System.out.println(name + " " + lastname+" "+isbn);
}
3. KiVi

3.1. Installation
To use the Direct KiVi API connector, you simply define the maven dependency on the kivi-api library with the desired version:
<dependencies>
<dependency>
<groupId>com.leanxcale</groupId>
<artifactId>kivi-api</artifactId>
<version>2.0.0</version>
</dependency>
</dependencies>
Be sure to include the LeanXcale public maven repository in your maven configuration:
<repositories>
<repository>
<id>maven-releases</id>
<url>https://nexus.leanxcale.com/repository/maven-releases</url>
</repository>
</repositories>
If you prefer to include the library into your project manually, then you can download the Java KiVi Direct API driver from the Drivers page.
3.2. Quick Start
The following sections include a set of code snippets to connect, read, insert, and update information through the KiVi Java direct API. You can read the API reference here.
3.2.1. Connecting and starting a session
When you start a new session, it automatically connects. If this is the first session, then the API library manages the connections for you.
In the credentials, you set the user, password, and name of the database (e.g., "db" in the example):
import com.leanxcale.kivi.database.Database;
import com.leanxcale.kivi.database.Index;
import com.leanxcale.kivi.database.Table;
import com.leanxcale.kivi.session.Session;
import com.leanxcale.kivi.session.SessionFactory;
import com.leanxcale.kivi.session.Settings;
Settings settings = Settings.parse("lx://lxserver:9876/db@APP");
// you can overwrite any connection property
settings.add("URL", "lx://lxserver:9876");
settings.add("DATABASE", "db");
settings.add("USER", user);
Session session = SessionFactory.newSession(settings);
Database database = session.database();
In the following examples, session creation is omitted, although it is still required. Import statements are usually kept so that you know which classes you need to import.
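For completeness, a minimal sketch of a full session lifecycle; Session is used with try-with-resources here, exactly as in the later examples, so it is closed automatically:
try (Session session = SessionFactory.newSession(settings)) {
    Database database = session.database();
    // ... read and write through tables obtained from database ...
    session.commit();
}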
3.2.2. Enabling security
If your LeanXcale instance is configured with security, you will have to generate an application certificate to allow your application to connect to LeanXcale.
Once this is done, configure the settings to enable security and provide the path to the certificate file:
Settings settings = Settings.parse("lx://lxserver:9876/db@" + user + ";CERT=/path/to/certificate/file.kcf");
Session session = SessionFactory.newSession(settings);
Database database = session.database();
3.2.3. Inserting, updating and deleting
Insert one Record(Tuple):
Table people = database.getTable("person");
Tuple person = people.createTuple();
//Fill in tuple fields
person.put("id", 1L).
.put("name", "John")
.put("lastName", "Doe")
.put("phone", "555333695")
.put("email", "johndoe@nowhere.no")
.put("birthday",Date.valueOf("1970-01-01"))
.put("numChildren", 4);
//Insert tuple
people.insert(person);
//Tuples are sent to the datastore when COMMIT is done
session.commit();
Insert using a Sequence:
Table people = database.getTable("person");
Tuple person = people.createTuple();
long personId = database.getSequence("personId").nextVal();
person.put("id", personId, Long.class )
.put("name", "John", String.class )
.put("lastName", "Doe", String.class )
.put("phone", "555333695", String.class )
.put("email", "johndoe@nowhere.no", String.class )
.put("birthday",Date.valueOf("1970-01-01"), String.class )
.put("numChildren", 4, String.class );
people.insert(person);
Update a Record:
// The tuple to be updated must first be retrieved through a get()
Table people = database.getTable("person");
TupleKey key = people.createTupleKey();
key.put("id", 0L, Long.class );
Tuple person = people.get(key);
person.put("phone", "anotherPhone", String.class )
.put("email", "johndoe@somewhere.some", String.class );
people.update(person);
Delete a Record by its Key:
Table people = database.getTable("person");
TupleKey johnKey = people.createTupleKey();
johnKey.put("id", key);
people.delete(johnKey);
Delete with a Filter:
import static com.leanxcale.kivi.query.aggregation.Aggregations.*;
import static com.leanxcale.kivi.query.expression.Constants.*;
import static com.leanxcale.kivi.query.expression.Expressions.*;
import static com.leanxcale.kivi.query.filter.Filters.*;
import static com.leanxcale.kivi.query.projection.Projections.*;
// Simple filter
people.delete(Filters.eq("name", string("John")));
// Expression usage
people.deleteTuples(Filters.gt(
div( field("weight"),
pow(field("height"),2)),
flt(26)
));
3.2.4. Reading and Scanning data
Retrieving a data row directly by its Key:
import com.leanxcale.kivi.tuple.Tuple;
import com.leanxcale.kivi.tuple.TupleKey;
TupleKey key = table.createTupleKey();
key.put("id", 0L, Long.class );
Tuple tuple = table.get(key);
Perform a simple SCAN with the basic options:
import static com.leanxcale.kivi.query.aggregation.Aggregations.*;
import static com.leanxcale.kivi.query.expression.Constants.*;
import static com.leanxcale.kivi.query.expression.Expressions.*;
import static com.leanxcale.kivi.query.filter.Filters.*;
import static com.leanxcale.kivi.query.projection.Projections.*;
Table people = database.getTable("person");
TupleKey min = people.createTupleKey();
min.put("id", 20);
TupleKey max = people.createTupleKey();
max.put("id", 30L);
people.find()
.min(min)
.max(max)
.forEach(tuple->processTuple(tuple));
// Max 20 results
people.find()
.first(20)
.forEach(tuple->processTuple(tuple));
Perform an incremental SCAN with the basic options:
try(SessionImpl session = (SessionImpl) SessionFactory.newSession(settings)){
Table people = session.database().getTable("person");
TupleKey key = people.createTupleKey();
key.put("id", 20L, Long.class );
Tuple tupleFrom=people.get(key);
people.find()
.fromTupleTs(tupleFrom) // only tuples committed after the given tuple was committed are taken into account by the scan
.forEach(tuple->processTuple(tuple));
}
Scan a table using a Secondary Index:
Table people = database.getTable("person");
TupleKey minKey = people.createTupleKey();
minKey.put("dni", "111111111Q");
people.find()
.index("dniIdx")
.indexMin(minKey)
.forEach(tuple->processTuple(tuple));
3.2.5. Streaming data
Perform a scan in the form of a Stream:
try(SessionImpl session = (SessionImpl) SessionFactory.newSession(settings)){
Table people = session.database().getTable("person");
List<Long> ids = people.find().asStream().stream().map(t -> t.get("id", Long.class)).collect(Collectors.toList());
}
Run an infinite stream that returns all newly committed tuples:
try(SessionImpl session = (SessionImpl) SessionFactory.newSession(settings)){
Table people = session.database().getTable("person");
Table table2 = session.database().getTable("otherTable");
// waits for all new tuples committed to the DB from this moment on and returns them in the stream
people.find().fromNow(session).asStream().allowInternalCommits().infinite().stream().forEach(t ->{
Tuple processedTuple = processPerson(t);
table2.insert(processedTuple);
session.commit();
});
}
3.2.6. Arrays
We can add arrays to kivi just like we add any other kind of objects, such as Integer, Boolean, String, etc.
Adding arrays to a tuple
The following example shows how to add arrays to a kivi tuple object.
try(Session session = SessionFactory.newSession(settings)){
Table table=session.database().getTable("tarry");
Tuple tuple=table.createTuple();
tuple.put("id",1);
tuple.put("longs", new long[]{-436l, 17l, 24l});
tuple.put("doubles", new double[]{0.023533, 5.476721, 3.1415, 4d});
tuple.put("strings", new String[]{"ALBACETE","CACERES","PALENCIA","ZARAGOZA"});
tuple.put("bytes", new byte[]{1, 2, 3, 4});
table.insert(tuple);
session.commit();
}
Note: A column containing an array of bytes would be defined in LeanXcale as a BINARY field.
Reading arrays from a tuple
The following example shows how to get arrays from a kivi tuple object.
TupleKey key = table.createTupleKey();
key.put("id", 1, Integer.class );
Tuple tuple = table.get(key);
long[] longs = (long[]) tuple.get("longs");
double[] doubles = (double[]) tuple.get("doubles");
String[] strings = (String[]) tuple.get("strings");
byte[] bytes = (byte[]) tuple.get("bytes");
3.2.7. BLOBS
Blobs can be added to kivi in two ways: as an InputStream added directly to the tuple like any other field, or through an OutputStream. Blob fields must have been created with the BLOB type.
Adding a blob to a tuple
The following example shows how to send a binary file directly attached to the tuple being inserted.
try(Session session = SessionFactory.newSession(settings)){
Table table=session.database().getTable("MyTable");
java.net.URL fileUrl=getClass().getResource("/blobTest.png");
File testFile=new File(fileUrl.getFile());
InputStream inputStream=new FileInputStream(testFile);
Tuple tuple=table.createTuple();
tuple.put("id",1L);
tuple.put("blobField",inputStream);
table.insert(tuple);
session.commit();
}
Adding a blob as an OutputStream
The following example shows how to open an OutputStream and write a blob to it. Bear in mind that, when using this approach, insert/update must be called just before creating the blob:
try(Session session = SessionFactory.newSession(settings)) {
Table table = session.database().getTable("MyTable");
TupleKey key = table.createTupleKey();
key.put("id", 1L);
Tuple tuple = table.get(key); //tuple already added to the table
table.update(tuple); //update (or insert) should be called just before creating the blob
OutputStream blobOut = table.createBlobStream(tuple, "blobField");
blobOut.write(myBlobObject.getBytes()); // myBlobObject represents your application data to store
blobOut.close();
session.commit();
}
Reading a blob
The following example shows how to get a blob value. A blob field is always returned in the form of an InputStream:
try(Session session = SessionFactory.newSession(settings)) {
Table table = session.database().getTable("MyTable");
TupleKey key = table.createTupleKey();
key.put("id", 1L);
Tuple tuple = table.get(key);
File file = new File("blobTest.png");
InputStream blobIn = tuple.get("blobField", InputStream.class);
FileOutputStream fos = new FileOutputStream(file);
byte[] buffer = new byte[8 * 1024];
int bytesRead;
while ((bytesRead = blobIn.read(buffer)) != -1) {
fos.write(buffer, 0, bytesRead);
}
fos.close();
blobIn.close();
}
3.2.8. Advanced Finding and Filtering
The following examples demonstrate how to define a set of filters to be pushed to the KiVi datastore to obtain a result that matches specified conditions.
import static com.leanxcale.kivi.query.aggregation.Aggregations.*;
import static com.leanxcale.kivi.query.expression.Constants.*;
import static com.leanxcale.kivi.query.expression.Expressions.*;
import static com.leanxcale.kivi.query.filter.Filters.*;
import static com.leanxcale.kivi.query.projection.Projections.*;
Table people = database.getTable("person");
// Basic comparisons
people.find()
.filter(gt("numChildren", 4).and(eq("name", string("John"))))
.forEach(tuple->processTuple(tuple));
// Between
Date minDate = Date.valueOf("1900-01-01");
Date maxDate = Date.valueOf("2000-01-01");
people.find()
.filter(between("birthday", date(minDate), date(maxDate)))
.forEach(tuple->processTuple(tuple));
// Using an Expression with operators
people.find()
.filter(gt("numChildren",sub(field("numRooms"),int(1))))
.forEach(tuple->processTuple(tuple));
Project:
A projection is the set of fields you want to obtain from the table. It may also include operations over the fields to be retrieved.
import static com.leanxcale.kivi.query.aggregation.Aggregations.*;
import static com.leanxcale.kivi.query.expression.Constants.*;
import static com.leanxcale.kivi.query.expression.Expressions.*;
import static com.leanxcale.kivi.query.filter.Filters.*;
import static com.leanxcale.kivi.query.projection.Projections.*;
import com.leanxcale.kivi.query.projection.*;
Table people = database.getTable("person");
// Basic inclusion
String[] fields = {"name", "lastName", "birthDay"};
people.find()
.project(include(fields))
.forEach(tuple->{
String name = tuple.get("name", String.class );
String lastname = tuple.get("lastName", String.class );
Date date = tuple.get("birthDay", Date.class );
});
// SELECT name, lastName FROM person
// Alias usage
ProjectionExpression[] projExp =
{
alias("name"),
alias("lastName")
};
people.find()
.project(compose(projExp))
.forEach(tuple->{
String name = tuple.get("name", String.class );
String lastName = tuple.get("lastName", String.class );
});
Aggregations:
The most common simple aggregations can also be performed through the KiVi direct API. You define a list with the fields to be used as the group-by key (if any), and then a list of aggregation expressions over some fields.
import static com.leanxcale.kivi.query.aggregation.Aggregations.*;
import com.leanxcale.kivi.query.aggregation.Aggregations;
import com.leanxcale.kivi.query.Aggregation;
import static com.leanxcale.kivi.query.expression.Constants.*;
import static com.leanxcale.kivi.query.expression.Expressions.*;
import static com.leanxcale.kivi.query.filter.Filters.*;
import static com.leanxcale.kivi.query.projection.Projections.*;
import java.util.Iterator;
// Simple aggregation
long numChildren =
people.find()
.aggregate(Collections.emptyList(), count("numChildren"))
.iterator().next().getLong("numChildren");
// Group By aggregation
people.find()
.aggregate(Arrays.asList("name"), count("numChildren"))
.forEach(tuple->{
String name = (String) tuple.get("name");
Long numChildren = (Long) tuple.get("numChildren");
});
// Filtering before aggregate
Date date = Date.valueOf("1970-01-01");
people.find()
.filter(gt("birthDay", date(date)))
.aggregate(Arrays.asList("name"), new Aggregation[]{
count("numChildren"),
avg("averagenumChildren", field("numChildren")),
avg("averagenumChildrenBy2",div(field("numChildren"),pow(field("id"),lng(2))))
})
.forEach(tuple->{
String name = tuple.get("name", String.class );
long numPeople = tuple.get("numChildren", Long.class );
float averagenumChildren = tuple.get("averagenumChildren", Float.class );
float averagenumChildrenBy2 = tuple.get("averagenumChildrenBy2", Float.class );
});
//Filtering after aggregate
people.find()
.aggregate(Arrays.asList("name"), new Aggregation[]{
count("numChildren"),
avg("averagenumChildren", field("numChildren")),
avg("averagenumChildrenBy2",div(field("numChildren"),pow(field("id"),lng(2))))
})
.filter(gt("averagenumChildren",flt(2)))//.or(ge("numChildren",integer(10))))
.forEach(tuple->{
String name = tuple.get("name", String.class );
long numPeople = tuple.get("numChildren", Long.class );
Double averagenumChildren = tuple.get("averagenumChildren", Double.class );
Double averagenumChildrenBy2 = tuple.get("averagenumChildrenBy2", Double.class );
});
3.2.9. JSON support
The kivi-api provides JSON support in the form of a converter that can be used to convert JSON to tuples and vice versa. Each table has its own converter, which you can obtain with the getJsonConverter() method of the Table object.
How to insert a Json
Just convert the json to a tuple with the converter and insert/update/upsert it as usual.
try(Session session = SessionFactory.newSession(settings)){
Table table=session.database().getTable("MyTable");
String json = "{\"f1\":\"v1\",\"f2\":\"v2\"}";
KiviJsonConverter jsonConverter = table.getJsonConverter();
Tuple tuple = jsonConverter.jsonToTuple(json);
table.insert(tuple);
session.commit();
}
How to read as a Json
Just get the tuple as usual and then convert it to a json using the converter:
try(Session session = SessionFactory.newSession(settings)){
Table table=session.database().getTable("MyTable");
KiviJsonConverter jsonConverter = table.getJsonConverter();
TupleKey tupleKey = table.createTupleKey();
tupleKey.put("f1", "v1");
Tuple tuple = table.get(tupleKey);
String json = jsonConverter.tupleToJson(tuple);
}
Converter configuration
The JSON converter exposes some configuration properties through setter methods. For example, by default the converter ignores JSON fields that do not correspond to LeanXcale columns. If you want a strict conversion that throws an error when a field in the JSON is not defined in the LX table, just do:
try(Session session = SessionFactory.newSession(settings)){
Table table=session.database().getTable("MyTable");
String json = "{\"f1\":\"v1\",\"f2\":\"v2\"}";
KiviJsonConverter jsonConverter = table.getJsonConverter();
jsonConverter.setIgnoreUnknown(false);
Tuple tuple = jsonConverter.jsonToTuple(json);
table.insert(tuple);
session.commit();
}
Another available configuration option is the ability to set the date, time, and timestamp formats using a java.time.format.DateTimeFormatter:
try(Session session = SessionFactory.newSession(settings)){
Table table=session.database().getTable("MyTable");
String json = "{\"f1\":\"v1\",\"date\":\"2021-01-01\", \"time\":\"14:40:55\", \"timestamp\":\"2021-01-01 14:40:55.213\"}";
KiviJsonConverter jsonConverter = table.getJsonConverter();
jsonConverter.setDateFormatter(DateTimeFormatter.ofPattern("yyyy-MM-dd"));
jsonConverter.setTimeFormatter(DateTimeFormatter.ofPattern("HH:mm:ss"));
jsonConverter.setTimestampFormatter(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS"));
Tuple tuple = jsonConverter.jsonToTuple(json);
table.insert(tuple);
session.commit();
}
3.2.10. Window functions
You can use the direct API to perform aggregations over defined windows, similar to what can be done in SQL with the OVER and PARTITION BY clauses.
To do this, define a Window object over a projection, indicating the table field used to partition, and add the desired aggregation functions over the window:
import static com.leanxcale.kivi.query.projection.Projections.*;
// SELECT name, department, salary, sum(salary) OVER (PARTITION BY department) AS totalSalary
// FROM employees
employees.find()
.project(include("employee", "dept", "salary").addWindow(window("dept").sum("totalSalary","salary")))
.forEach(tuple->{
Long employee = tuple.get("employee", Long.class );
String department = tuple.get("dept", String.class );
Long salary = tuple.get("salary", Long.class );
Long departmentSalary = tuple.get("totalSalary", Long.class );
});
// SELECT name, department, salary, sum(salary) OVER (PARTITION BY department ORDER BY salary) AS totalSalary
// FROM employees
employees.find()
.project(include("employee", "dept", "salary").addWindow(window("dept").sum("totalSalary","salary").orderBy("salary")))
.forEach(tuple->{
Long employee = tuple.get("employee", Long.class );
String department = tuple.get("dept", String.class );
Long salary = tuple.get("salary", Long.class );
Long departmentSalary = tuple.get("totalSalary", Long.class );
});
// SELECT name, department, salary, max(salary) OVER (PARTITION BY department)
// AS maxSalary FROM employees
employees.find()
.project(include("employee", "dept", "salary").addWindow(window("dept").max("maxSalary", "salary")))
.forEach(tuple->{
Long employee = tuple.get("employee", Long.class );
String department = tuple.get("dept", String.class );
Long salary = tuple.get("salary", Long.class );
Long maxSalary = tuple.get("maxSalary", Long.class );
});
// SELECT name, department, salary, min(salary) OVER (PARTITION BY department)
// AS minSalary FROM employees
employees.find()
.project(include("employee", "dept", "salary").addWindow(window("dept").min("minSalary", "salary")))
.forEach(tuple->{
Long employee = tuple.get("employee", Long.class );
String department = tuple.get("dept", String.class );
Long salary = tuple.get("salary", Long.class );
Long minSalary = tuple.get("minSalary", Long.class );
});
// SELECT name, department, salary, count(salary) OVER (PARTITION BY department)
// AS countSalary FROM employees
employees.find()
.project(include("employee", "dept", "salary").addWindow(window("dept").count("countSalary", "salary")))
.forEach(tuple->{
Long employee = tuple.get("employee", Long.class );
String department = tuple.get("dept", String.class );
Long salary = tuple.get("salary", Long.class );
Long departmentSalary = tuple.get("countSalary", Long.class );
});
// SELECT name, department, salary, avg(salary) OVER (PARTITION BY department)
// AS avgSalary FROM employees
employees.find()
.project(include("employee", "dept", "salary").addWindow(window("dept").avg("avgSalary", "salary")))
.forEach(tuple->{
Long employee = tuple.get("employee", Long.class );
String department = tuple.get("dept", String.class );
Long salary = tuple.get("salary", Long.class );
double departmentSalary = tuple.get("avgSalary", Double.class );
});
Because the direct API does not have a query optimizer like the SQL engine, and to avoid loading complete tables into memory, a windowed scan always tries to use an index over the PARTITION BY field to traverse the table.
This adds two requirements to the use of windows:
- You must have an index whose prefix is the PARTITION BY field.
- You can only run a scan with a single window (which can have more than one aggregation function).
ORDER BY, ROWNUM and RANK
You can also use position-based window functions such as ROWNUM and RANK. These functions only make sense when the partition is sorted by some field, so you must first order the partition and then use the appropriate function:
employees.find()
.project(include("employee", "dept", "salary").addWindow(window("dept").sum("totalSalary","salary").orderBy("salary").rank("salaryRank")))
.forEach(tuple->{
Long employee = tuple.get("employee", Long.class );
String department = tuple.get("dept", String.class );
Long salary = tuple.get("salary", Long.class );
Long departmentSalary = tuple.get("salaryRank", Long.class );
});
All the position-based functions return a Long value.
3.2.11. Transaction Management
The KiVi Java API can be configured to be transactional or not. If the session is not configured as transactional, then the commit operation simply forces data to be flushed to the datastores. If the session is configured as transactional, then the internal transaction management capabilities ensure conflict checks, durability, persistence, and consistency.
Settings settings = Settings.parse("lx://lxserver:2181/tpch", user, pass);
// transactional mode can be specified in the connection URL
settings = Settings.parse("lx://lxserver:2181/tpch;MODE=TRANSACTIONAL", user, pass);
// or enabled on an existing settings object
settings.transactional();
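A minimal sketch of transactional usage under these settings, using only the session calls shown in the earlier examples (the table and values are illustrative):
try (Session session = SessionFactory.newSession(settings)) {
    Table people = session.database().getTable("person");
    Tuple person = people.createTuple();
    person.put("id", 200L)
          .put("name", "Jane")
          .put("lastName", "Doe");
    people.insert(person);
    try {
        session.commit();     // conflict checks and durability are handled here
    } catch (Exception e) {
        session.rollback();   // discard the transaction on conflict or error
    }
}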
4. Other Examples
We feature additional detailed examples and real-world scenarios on our blog. To learn more about how you can connect your Java application to LeanXcale using JDBC and the direct KiVi API, see:
- Streaming Twitter data into a LeanXcale database: Ingest huge amounts of data with KiVi and query that data using standard SQL.