Administration Guide

1. Background

1.1. LeanXcale Components

Before installing, it is important to know that LeanXcale has a distributed architecture and it consists of several components:

  • lxqe: Query engine in charge of processing SQL queries.

  • kvds: Data server of the storage subsystem. There might be multiple instances.

  • kvms: Metadata server of the storage subsystem.

  • lxmeta: Metadata process for LeanXcale. It keeps metadata and services needed for other components.

  • stats: Optional monitoring subsystem to see resource usage and performance KPIs of LeanXcale database.

  • odata: Optional OpenDATA server to support a SQL REST API.

There are other components used by the system, that are not relevant for the user and are not described here. For example, spread is a communication bus used by LeanXcale components.

2. Licenses

To check for license status or to install a new license you can use the lx license command.

For a local installation, use just

unix$ lx license
	license expires: Mon Dec 30 00:00:00 2024

For docker installs, each container must include its own license. The container does not start the DB unless a valid license is found. However, the container must be running to check the license status and to install new licenses. Refer to the section on starting docker containers for help on that.

For example, to list the license status for the container lx1 we can run

unix$ docker exec -it lx1 lx license
lx1 [
	kvcon[1380]: license: no license file
  failed: failed: status 1
]
failed: status 1

To install a new license, just copy the license file to the container as shown here

unix$ docker cp ~/.lxlicense lx1:/usr/local/leanxcale/.lxlicense
unix$ docker exec -it lx1 sudo chown lx /usr/local/leanxcale/.lxlicense

The license status should be ok now:

unix$ docker exec -it lx1 lx license
	license expires: Mon Dec 30 00:00:00 2024

3. Bare Metal Installs

3.1. Prerequisites

Never install or run leanXcale as root.

To install, you need:

  • 2 cores & 2 GB of memory

  • A valid LeanXcale license file

  • The LeanXcale zip distribution

  • Ubuntu 20.04 LTS or 22.04 LTS with a standard installation, including

    • openjdk-11-jdk

    • python3

    • python3-dev

    • python3-pip

    • gdb

    • ssh tools as included in most UNIX distributions.

    • zfsutils-linux

    • libpam

  • Availability for ports used by leanXcale (refer to the [LeanXcale Components and Ports] section for details).

The install program itself requires a unix system with python 3.7 (or later).
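
The exact ports depend on the configuration (the lx config command, described later in this guide, lists them). As a quick sketch, assuming the default SQL port 14420, you can check that nothing else is already listening on it:

unix$ ss -ltn | grep 14420 || echo "port 14420 is free"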

Your sales representative will provide you with a valid license file. In case you have any issues with it, please send an email to sales@leanxcale.com.

The LeanXcale zip distribution can be found at:

https://artifactory.leanxcale.com/artifactory/lxpublic

There is a zip file per version. Download the latest version.

If needed, install the required software dependencies on your system:

unix$ sudo apt update
unix$ sudo apt -y install python3 python3-dev python3-pip gdb openjdk-11-jdk curl

Here, and in what follows, unix$ is the system prompt used in commands and examples.

3.2. Install

Working on the directory where the distribution has been downloaded, unzip the LeanXcale zipped distribution:

unix$ unzip lx.2.3.232129.zip
Archive:  lx.2.3.232129.zip
   creating: lxdist/
  inflating: lxdist/lxinst.v2.3.port.tgz
  inflating: lxdist/lxavatica.v2.3.libs.tgz
  ...

Here, you type the command after the unix$ prompt, and the following lines are an example of the expected output.

Verify that the process completed successfully and there is a directory lxdist (with the distribution packages) and the lxinst program:

unix$ ls
lx.2.3.232129.zip  lxdist  lxinst

To install, run the lxinst program and, as an option, supply the name of the installation directory (/usr/local/leanxcale by default):

unix$ ./lxinst /usr/local/leanxcale
sysinfo localhost...
install #cfgfile: argv /usr/local/leanxcale...
cfgcomps…
...
config: lxinst.conf
install done.

To run lx commands:
    localhost:/usr/local/leanxcale/bin/lx
Or adjust your path:
    # At localhost:~/.profile:
    export PATH=/usr/local/leanxcale/bin:$PATH
To start (with lx in your path):
    lx start

The command takes the zipped LeanXcale distribution from ./lxdist and installs it in the target directory.

The detailed configuration file built by the install process is saved at lxinst.conf, as a reference.

The database has its own users. The user lxadmin is the administrator for the whole system, and during the install you are asked to provide its password.
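
For unattended installs, the lxadmin password can also be supplied through the LXPASS environment variable, as described in the lxinst reference section later in this guide. For example:

unix$ export LXPASS=fds92f3c
unix$ ./lxinst /usr/local/leanxcale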

To complete the install, a license must be added, as explained in the next section.

As another example, this installs at the given partition /dev/sdc2, with both encryption and compression, mounting the installed system at /usr/local/leanxcale:

unix$ ./lxinst compress crypt /dev/sdc2:/usr/local/leanxcale
sysinfo localhost...
install #cfgfile: argv /usr/local/leanxcale...
cfgcomps…
...
config: lxinst.conf
install done.

To list installed file systems (on each host):
	zfs list -r lxpool -o name,encryption,compression
To remove installed file systems (on each host):
	sudo zpool destroy lxpool
To run lx commands:
    localhost:/usr/local/leanxcale/bin/lx
Or adjust your path:
    # At localhost:~/.profile:
    export PATH=/usr/local/leanxcale/bin:$PATH
To start (with lx in your path):
    lx start

The final messages printed by the install program will remind you of how to list or remove the installed file systems.

3.3. Using leanXcale Commands

The installation creates the lx command. Use it to execute commands to operate the installed system.

At the end of the installation, the install program prints suggestions for adjusting the PATH variable so that lx will be available as any other command. For example, as done with:

export PATH=/usr/local/leanxcale/bin:$PATH

when the install directory is /usr/local/leanxcale.

The lx command can be found at the bin directory in the install directory. For example, at /usr/local/leanxcale/bin/lx when installing at /usr/local/leanxcale.

In what follows, we assume that lx can be found using the system PATH.

The first command used is usually the one to install a license file:

unix$ lx license -f lxlicense

It is also possible to put the license file at ~/.lxlicense, and the install process will find it and install it on the target system(s).
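
For example, a sketch that copies a downloaded license file into place before running the installer (the source path is illustrative):

unix$ cp /path/to/lxlicense ~/.lxlicense
unix$ ./lxinst /usr/local/leanxcale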

4. Docker Installs

4.1. Docker Installs with no Preinstalled Image

When a preinstalled leanXcale docker image is available, disregard this section and proceed as described in the next one.

4.1.1. Prerequisites

To install, you need:

  • A valid LeanXcale license file

  • The LeanXcale zip distribution

  • A Linux system with

    • docker version 20.10.6 or later.

    • python 3.7 or later.

  • Access to the internet to permit docker to download standard docker images and software packages for them.

Your sales representative will provide you with a valid license file. In case you have any issues with it, please send an email to sales@leanxcale.com.

Download the latest LeanXcale zip distribution from:

https://artifactory.leanxcale.com/artifactory/lxpublic

There is a zip file per version.

4.1.2. Install

Working on the directory where the distribution has been downloaded, unzip the LeanXcale zipped distribution:

unix$ unzip lx.2.3.232129.zip
Archive:  lx.2.3.232129.zip
   creating: lxdist/
  inflating: lxdist/lxinst.v2.3.port.tgz
  inflating: lxdist/lxavatica.v2.3.libs.tgz
  ...

Verify that the process completed successfully and there is a directory lxdist (with the distribution packages) and the lxinst program:

unix$ ls
lx.2.3.232129.zip  lxdist  lxinst

To create a docker image for leanXcale, use the following command:

unix$ lxinst docker
sysinfo lx1...
install #cfgfile: argv docker...
...
install done
docker images:
REPOSITORY   TAG     IMAGE ID           CREATED         SIZE
uxbase       2         434dfeaedf0c     3 weeks ago     1.06GB
lx           2         6875de4f2531     4 seconds ago  1.33GB

docker network:
NETWORK ID     NAME          DRIVER    SCOPE
471c52155823   lxnet         bridge       local
to start:
    docker run -dit --name lx1 --network lxnet lx:2 lx1

The image created is named lx:2. It is configured to run a container with hostname lx1 on a docker network named lxnet.

To list the image we can execute

unix$ docker images lx
REPOSITORY   TAG       IMAGE ID       CREATED              SIZE
lx           2         6875de4f2531   About a minute ago   1.33GB

And, to list the networks we can execute

unix$ docker network ls
NETWORK ID     NAME          DRIVER    SCOPE
471c52155823   lxnet         bridge       local

The created image is a single one for all containers. The name given when creating the container determines the host name used (in this example, lx1).

Before using the system, a license file must be installed on each container created. This is explained later.

To remove the image when so desired, use this command:

unix$ docker rmi lx:2

4.2. Docker Installs with Preinstalled Image

4.2.1. Prerequisites

To install, you need:

  • A valid LeanXcale license file

  • The LeanXcale docker image

  • A Linux system with

    • docker version 20.10.6 or later.

    • python 3.7 or later.

Your sales representative will provide you with a valid license file. In case you have any issues with it, please send an email to sales@leanxcale.com.

Download the latest LeanXcale docker image from:

https://artifactory.leanxcale.com/artifactory/lxpublic/

There is a docker image file per version.

4.2.2. Install

Working on the directory where the image file has been downloaded, add the image to your docker system:

unix$ docker load --input lx.2.3.docker.tgz
Loaded image: lx:2

Double check that the image has been loaded:

unix$ docker images lx:2
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
lx           2         bd350f734448   28 hours ago   1.33GB

Create the docker network lxnet:

unix$ docker network create --driver bridge lxnet

The image is a single one for all containers. The name given when creating the container determines the host name used (in this example, lx1).

Before using the system, a license file must be installed on each container created. This is explained in the next section.

To remove the image when so desired, use this command:

unix$ docker rmi lx:2

4.3. Running leanXcale Docker Containers

This section explains how to create containers from leanXcale images and how to install licenses and change the lxadmin password for them.

Provided the leanXcale image named lx:2, and the docker network lxnet, this command runs a leanXcale docker container:

unix$ docker run -dit --name lx1 --network lxnet -p0.0.0.0:14420:14420 lx:2 lx1
b28d30702b80028f8280ed6c55297b23203540387d3b4cfbd52bc78229593e27

In this command, the container name is lx1, the network used is lxnet, and the image used is lx:2. The port redirection -p… exports the SQL port to the underlying host.

It is important to know that:

  • starting the container will start leanXcale if a valid license was installed;

  • stopping the container should be done after stopping leanxcale in it.

The container name (lx1) can be used to issue commands. For example, this removes the container after stopping it:

unix$ docker rm -f lx1

The installed container includes the lx command. Use it to execute commands to operate the installed DB system.

It is possible to attach to the container and use the lx command, as can be done on a bare metal host install:

unix$ docker attach lx1
lx1$ lx version
...

Here, we type docker attach lx1 on the host, and lx version on the docker container prompt.

Note that if you terminate the shell reached when attaching to the docker container, the container will stop. Usually, this is not desired.

It is possible to execute commands directly on the running container. For example:

unix$ docker exec -it lx1 lx version

executes lx version on the container.

4.3.1. Setting up a License and Admin Password

Starting the container also starts LeanXcale, unless no license is installed. In that case, a license file must be installed in the container.

To install a license file, it must be copied to the container as shown here:

unix$ docker cp ./lxlicense lx1:/usr/local/leanxcale/.lxlicense
unix$ docker exec -it lx1 sudo chown lx /usr/local/leanxcale/.lxlicense

To change the password for the lxadmin user, do this after starting the database:

unix$ docker exec -it lx1 lx kvcon addusr lxadmin
pass? *****

4.3.2. Stopping the Container

The docker command to stop a container may not give enough time for leanXcale to stop. First, stop leanXcale:

unix$ docker exec -it lx1 lx stop

And now the container may be stopped:

unix$ docker stop lx1

5. AWS Installs

AWS Installs require installing the AWS command line client, used by the installer. For example, on Linux:

unix$ sudo apt-get install awscli

You need a PEM file to be used for accessing the instance, once created. The PEM file and the public key for it can be created with this command:

unix$ ssh-keygen -t rsa -m PEM -C "tkey" -f tkey

Here, we create a file tkey with the private PEM and a file tkey.pub with the public key. Before proceeding, rename the PEM file to use .pem:

unix$ mv tkey tkey.pem
unix$ chmod 400 tkey.pem

When installing, supply the path to the PEM file (without the .pem extension) to flag -K, so that lxinst can find it and use its base file name as the AWS key pair name.

Now, define your access and secret keys for the AWS account:

unix$ export AWS_ACCESS_KEY_ID=___your_key_here___
unix$ export AWS_SECRET_ACCESS_KEY=___your_secret_here___
...

To install the public distribution, go to

https://artifactory.leanxcale.com/artifactory/lxpublic

and download the zip for the latest version (for example, lx.2.3.232129.zip).

Before extracting the zip file contents, make sure that there is no lxdist directory from previous versions, or it will include packages that are not for this one.

unix$ rm -f lxdist/*

Extract the zip file:

unix$ unzip lx.2.3.232129.zip
Archive:  lx.2.3.232129.zip
   creating: lxdist/
  inflating: lxdist/lxinst.v2.3.port.tgz
  inflating: lxdist/lxavatica.v2.3.libs.tgz
  ...

Verify that the process completed successfully and there is a directory lxdist (with the distribution packages) and the lxinst program:

unix$ ls
lx.2.3.232129.zip  lxdist  lxinst

To install in AWS, use lxinst using the aws property and the -K option to supply the key file/name to use. For example:

unix$ lxinst -K tkey aws
...
aws instance i-03885ca519e8037a1 IP 44.202.230.8
...
aws lx1 instance id i-0d2287deeb3d45a82

Or, specify one or more properties to set the region, instance type, disk size, and AWS tag. The disk size is in GiB. The tag is a name and should not include blanks or dots. It will be used for .aws.leanxcale.com domain names.

For example:

unix$ lxinst -K tkey aws awsregion us-west-1 awstype t3.large \
	awsdisk 30 awstag 'client-tag'
...
config: lxinst.conf
install done.

	# To remove resources:
		./lxinst.uninstall
	#To dial the instances:
		ssh -o StrictHostKeyChecking=no -i xkey.pem lx@18.232.95.2.2

Arguments follow the semantics of a configuration file. Therefore, if a host name is specified, it must come after global properties.

Try to follow the conventions and call the first host installed lx1, the second one lx2, etc. Also, do not specify directories and similar attributes; leave them to the AWS install process.

If something fails, or once the install completes, check for the lxinst.uninstall command, created by the install process, to remove the allocated resources when so desired.

The detailed configuration file built by the install process is saved at lxinst.conf as a reference. This is not a configuration file written by the user, but a configuration file including all the install details. This is an example:

#cfgfile lxinst.conf
awstag client-tag
awsvpc vpc-0147a6d8e9d6e6910
awssubnet subnet-0b738237bbce66037
awsigw igw-0a160524cf4edab30
awsrtb rtb-040894a1050b58a3f
awsg sg-05ee9e0232599026d
host lx1
	awsinst i-0e0167a4233f76bb1
	awsvol vol-01629a4b3694bb5fd
	lxdir /usr/local/leanxcale
	JAVA_HOME /usr/lib/jvm/java-1.11.0-openjdk-amd64
	addr 10.0.120.100
	kvms 100
		addr lx1!14400
	lxmeta 100
		addr lx1!14410
	lxqe 100
		addr lx1!14420
		mem 1024m
	kvds 100
		addr lx1!14500
		mem 1024m

In the installed system, the user running the DB is lx, and LeanXcale is added to the UNIX system as a standard service (disabled by default). The instance is left running. You can stop it on your own if that is not desired.
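
If leaving the instance running is not desired, stop LeanXcale first and then stop the instance. A sketch using the AWS CLI, with the address and instance id from the example output above (yours will differ):

unix$ ssh -o StrictHostKeyChecking=no -i tkey.pem lx@44.202.230.8 lx stop
unix$ aws ec2 stop-instances --instance-ids i-03885ca519e8037a1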

The command lxinst.uninstall, created by the installation, can be used to remove the created AWS resources:

unix$ lxinst.uninstall
...

To use an instance (and host) name other than lx1, supply your desired host name. Use a plain name, without dots or special characters. For example:

unix$ lxinst aws -K tkey lxhost1 lxhost2
...

creates a network and two instances named lxhost1 and lxhost2, and leaves them running.

Once installed, use lx as in any other system:

unix$ ssh -o StrictHostKeyChecking=no -i tkey.pem lx@44.202.230.8
lx1$ lx stop -v

By default, AWS installs use compression but not encryption. To change this, use the compress and/or the crypt global property with values yes or no as desired. For example:

unix$ lxinst aws -K tkey crypt lxhost1 lxhost2
...

installs with encryption enabled, and this disables compression:

unix$ lxinst aws -K tkey compress no lxhost1 lxhost2
...

5.1. Listing and removing AWS resources (lxaws)

The program lxaws is used to list or remove AWS installs. This program is not distributed as such; to create it, copy lxinst to a file named lxaws and give it execution permissions.

usage: lxaws [-h] [-e] [-v] [-d] [-D] [-r region] [-n] [-askpeer] [-yespeer]
             [-netpeer] [-delpeer] [-p] [-o] [-c]
             [tag [tag ...]]

lx AWS cmds

positional arguments:
  tag         aws tag|peer command args

optional arguments:
  -h, --help  show this help message and exit
  -e          define env vars
  -v          verbose
  -d          remove resources
  -D          enable debug diags
  -r region   AWS region
  -n          dry run
  -askpeer    ask peer: tag owner reg vpc
  -yespeer    accept peer:tag pcxid
  -netpeer    set peer net: tag pcxid cidr sec
  -delpeer    del peer: pcxid
  -p          print open ports
  -o          open ports: tag proto port0 portn cidr name
  -c          close ports: tag proto port cidr

Given a region, without any tags, it lists the tags installed:

unix$ lxaws -r us-east-1
xtest.aws.leanxcale.com

Given a tag, it lists the tag resources as found on AWS:

unix$ lxaws -r us-east-1 xtest.aws.leanxcale.com
#xtest.aws.leanxcale.com:
	vpc vpc-0bb89fa4f83fc69c6
	subnet subnet-0b5fb20a5372f89da
	igw igw-08c3cdec1dc865b84
	rtb rtb-0e40ace79169b2e08
	assoc rtbassoc-0248017196d4be19c
	sec sg-028614274a930d0ef
	inst i-041b70633666af01b	xtest1.aws.leanxcale.com	18.209.59.230
	vol vol-04310af65774fc5e7

It is also possible to supply just the base tag without the domain, as in

unix$ lxaws -r us-east-1 xtest

With flag -e, lxaws prints commands to set environment variables for the resources found, as an aid to run other scripts.

unix$ lxaws -e -r us-east-1 xtest.aws.leanxcale.com
#xtest.aws.leanxcale.com:
	export vpc='vpc-0bb89fa4f83fc69c6'
	export subnet='subnet-0b5fb20a5372f89da'
	export igw='igw-08c3cdec1dc865b84'
	export rtb='rtb-0e40ace79169b2e08'
	export assoc='rtbassoc-0248017196d4be19c'
	export sec='sg-028614274a930d0ef'
	export inst='i-041b70633666af01b'
		export addr='peer11.aws.leanxcale.com'
	export vol='vol-04310af65774fc5e7'

When more than one tag is asked for, or more than one instance/volume is found, variable names are made unique by adding a number to the name. For example:

unix$ lxaws -e peer1 peer2
#peer1.aws.leanxcale.com:
	export vpc0='vpc-0a50a6e989aa9da9a'
	export subnet0='subnet-0d9fd3a7d03eca61b'
	export igw0='igw-0af4279169fd8cab6'
	export rtb0='rtb-0f5d93a83239c3ada'
	export assoc0='rtbassoc-0e7d0f74cd780e121'
	export sec0='sg-01afb3d3c985f7881'
	export inst0='i-072326e86bcc77e9f'
		export addr0='peer11.aws.leanxcale.com'
	export vol0='vol-08ed1c4acdc0eae61'
#peer2.aws.leanxcale.com:
	export vpc1='vpc-023ce3e3c47bbb48f'
	export subnet1='subnet-0f9af7190d758d6d6'
	export igw1='igw-0ae3c860a69969a83'
	export rtb1='rtb-0d45f2059b4696cf4'
	export assoc1='rtbassoc-0a365cb4472f0b89e'
	export sec1='sg-04b122e86debcd735'
	export inst1='i-0b9cf8cff4b46d657'
		export addr1='peer21.aws.leanxcale.com'
	export vol1='vol-0f9f344a3a2b9bf38'

With flag -d, it removes the resources for the tags given. In this case, tags must be given explicitly in the command line.

unix$ lxaws -d -r us-east-1 xtest.aws.leanxcale.com

5.2. AWS Ports

To list, open, and close ports exported by the AWS install to the rest of the world, use lxaws flags -p (print ports), -o (open ports), and -c (close ports).

In all cases, the first argument is the tag for the install. The tag can be just the AWS install tag name, without .aws.leanxcale.com.

For example, this command lists the open ports:

unix$ lxaws -p xample.aws.leanxcale.com
port: web:	tcp 80	0.0.0.0/0
port: ssh:	tcp 22	0.0.0.0/0
port: comp:	tcp 14420	0.0.0.0/0

Here, the protocol and port (or port range) are printed for each set of open ports. The CIDR printed shows the IP address range that can access the ports, and is 0.0.0.0/0 when anyone can access them.

Before each port range, its name is printed. This name is set by the default install, and can be set when opening ports as shown next.

To open a port range, use -o and give as arguments the tag for the install, the protocol, first and last port in range, the CIDR (or any if open to everyone), and a name to identify why this port range is open (no spaces). For example:

unix$ lxaws -o xample.aws.leanxcale.com tcp 6666 6666 any opentest

The new port will be shown as open if we ask for ports:

unix$ lxaws -p xample.aws.leanxcale.com
port: web:	tcp 80	0.0.0.0/0
port: opentest:	tcp 6666	0.0.0.0/0
port: ssh:	tcp 22	0.0.0.0/0
port: comp:	tcp 14420	0.0.0.0/0

As another example:

unix$ lxaws -o xample tcp 8888 10000 212.230.1.0/24 another
unix$ lxaws -p xample
port: web:	tcp 80	0.0.0.0/0
port: ssh:	tcp 22	0.0.0.0/0
port: another:	tcp 8888-10000	212.230.1.0/24
port: comp:	tcp 14420	0.0.0.0/0

To close a port, use -c and give as arguments the tag for the install, the protocol, a port within the range of interest, and the CIDR used to export the port. Note that any can be used here too instead of the CIDR 0.0.0.0/0. For example:

unix$ lxaws -vc xample tcp 6666 any
searching aws...
close ports tcp 6666-6666 to 0.0.0.0/0

Here, we used the -v flag (verbose) to see what is going on.

As another example, this can be used to close the open port range 8888-10000 from the example above:

unix$ lxaws -c xample tcp 9000 212.230.1.0/24

5.3. AWS VPC Peering Connections

Peering connections can be used to bridge two VPCs at AWS.

One peer asks the other peer to accept a peering connection, the peer accepts the connection, and network routes and security group rules for access are configured.

Peering connections are handled using lxaws with the peering connection flags. If you do not have lxaws, copy lxinst to a file named lxaws and give it execution permissions.

These are the flags for peering connections:

  -askpeer    ask peer: tag owner reg vpc
  -yespeer    accept peer:tag pcxid
  -netpeer    set peer net: tag pcxid cidr sec
  -delpeer    del peer: pcxid

  • With flag -askpeer, lxaws requests a VPC peering connection.

  • With flag -yespeer, lxaws accepts a VPC peering request.

  • With flag -netpeer, lxaws sets up the routes and port access rules.

  • With flag -delpeer, lxaws removes a peering connection.

To request a peering connection, supply as arguments

  • the tag for the installed system where to setup a peer VPC.

  • the peer AWS owner id (user id).

  • the peer region

  • the peer VPC id

For example:

unix$ lxaws -askpeer  xample 232967442225 us-east-1 vpc-0cf1a3b5c1232d172
peervpc pcx-06548783d83ddaba9

Here, we could have used xample.aws.leanxcale.com instead. The command prints the peering connection identifier, to be used for setting up networking and asking the peer administrator to accept the peering request.

To accept a peer request, supply as an argument the peering connection id, as in:

unix$ lxaws -yespeer pcx-06548783d83ddaba9

In either case, once the dialed peer accepts the request, networking must be set supplying as arguments

  • the tag for the installed system where to setup a peer VPC.

  • the peering connection identifier

  • the peer CIDR block

  • the peer security group id

For example:

unix$ lxaws -netpeer xample pcx-06548783d83ddaba9 10.0.130.0/24 sg-0f277658c2328a955

Our local CIDR block is 10.0.120.0/24. This must be given to the peer, so the peer system can setup routing for this block to our network, along with the VPC id and our security group id.

This information can be retrieved using lxaws as described in the previous section. For example:

unix$ lxaws xample.aws.leanxcale.com
#xample.aws.leanxcale.com:
	vpc vpc-0f69c4a92b0a78523
	peervpc pcx-0e88c49635ed2e59e
	subnet subnet-0ca70fab7476c2a04
	igw igw-0cd6a7bfa99981659
	rtb rtb-06e4994a57d37a054
	assoc rtbassoc-022b54b9a0216e9f5
	sec sg-0df3a6ec01a4ee5ee
	inst i-0b8144548f0e0f1d8	peer11.aws.leanxcale.com	44.200.78.14
	vol vol-0c40de22d908e40bd

Should it be necessary, the CIDR block used by the install can be set when installing the system (but not later), using the property awsnet, as in

unix$ lxinst -K mykey aws awsnet 10.0.120 awstag xample

Note that only the network address bytes used are given, instead of using 10.0.120.0/24.

Once done with a peering connection, it can be dismantled by supplying both the tag and the peering connection identifier. The identifier must always be given because, when accepting a peering request, the peering connection does not belong to us. Using the command above you can retrieve the identifier easily.

When peering is no longer desired, the peering connection can be removed.

unix$ lxaws -delpeer xample pcx-005c2b84b89377737

Removing the peering connection also removes the routing entries for it and the security group access rules added when it was setup.

6. HP Greenlake Installs

We currently support VM installs at HP Greenlake. To install, first create the VM and, inside it, install as on bare metal, following the steps in Bare Metal Installs. The administration of the database instance is the same as on bare metal; just follow the bare metal sections in this administration guide.

7. LX Command

7.1. LeanXcale Commands

The command lx is a Shell for running LeanXcale control programs. This simply fixes the environment for the installed host and runs the command given as an argument:

usage: lx [-d] cmd...

The command operates on the whole LeanXcale system, even when multiple hosts are used.

It is convenient to have lx in the PATH environment variable, as suggested in the install program output.

Command output usually includes information on a per-host basis reporting the progress of the used command.

Most commands follow the same conventions regarding options and arguments. We describe them here as a convenience.

Arguments specify what to operate on (e.g., what to start, stop, etc.). They may be empty to rely on the defaults (the whole DB), or they may specify a particular host and/or component name:

  • when only component names are given, only those components will be involved (e.g., lxqe101).

  • when a component type name is given, all components of that type are selected (e.g., lxqe).

  • when a host name is given, any component following is narrowed to that host. If no components follow the host name, all components from the host are selected.

This may be repeated to specify different hosts and/or components.

The special host names db, repl, and repl2 may be used and stand for hosts without the nodb attribute, hosts for the first replica, and hosts for the second replica (hosts that are a mirror of other ones).

7.1.1. LeanXcale Commands on Bare Metal Installs

For bare metal installs, it suffices to have the lx command in the PATH. It can run on any of the installed hosts.

For example, on an installed host, lx version prints the installed version:

unix$ lx version
leanXcale v2.2
    kv         v2.2.2023-09-29.115f5fba70e3af8dc203953399088902c4534389
    QE         v2.2.2023-09-30.1e5933900582.26a7a5c3420cd3d5d589d1fa6cc
    libs       v2.2.2023-09-29.67535752acf19e092a6eaf17b11ad17597897956
    avatica    v2.2.2023-09-27.0b0a786b36e8bc7381fb2bb01bc8b3ed56f49172
    TM         v2.2.2023-09-29.9a9b22cfdc9b924dbc3430e613cddab4ed667a57

7.1.2. LeanXcale Commands on Docker Installs

To use the lx command on a docker install, an installed container must be running, and the command must be called on it.

For example, assume that the container named lx1 is running on a Docker install. The container could be started using the following command, assuming the leanXcale image is named lx:2, and the docker network used is lxnet:

unix$ docker run -dit --name lx1 --network lxnet -p0.0.0.0:14420:14420 lx:2 lx1
b28d30702b80028f8280ed6c55297b2e203540387d3b4cfbd52bc78229593e27

It is possible to attach to the container and use the lx command, as can be done on a bare metal host install:

unix$ docker attach lx1
lx1$ lx version
...

Here, we type docker attach lx1 on the host, and lx version on the docker container prompt.

Note that if you terminate the shell reached when attaching to the docker container, the container will stop. Usually, this is not desired.

It is possible to execute commands directly on the running container. For example:

unix$ docker exec -it lx1 lx version

executes lx version on the lx1 container.

In what follows, lx1 is used as the container name in the examples for docker installs.

7.1.3. LeanXcale Commands on AWS Installs

Using lx on AWS hosts is similar to using it on a bare-metal install. The difference is that you must connect to the AWS instance to run the command there.

For example, after installing xample.aws.leanxcale.com, and provided the PEM file can be found at xample.pem, we can run this:

unix$ ssh -i xample.pem xample.aws.leanxcale.com lx version

to see the installed version.

In what follows, xample.pem is used as the PEM file name and xample.aws.leanxcale.com is used as the installed instance name, for all AWS install examples.

8. Starting the System

8.1. Bare Metal System Start

The start command starts LeanXcale:

unix$ lx start
start...
atlantis [
    cfgile: /ssd/leandata/xamplelx/lib/lxinst.conf...
    bin/spread -c lib/spread.conf ...
    forked bin/spread...
    bin/spread: started pid 1056053
    bin/kvms -D 192.268.1.224!9999 /ssd/leandata/xamplelx/disk/kvms100/kvmeta ...
    forked bin/kvms...
    ...
]
atlantis [
    kvds103 pid 1056084 alive
    kvms100 pid 1056057 alive
    spread pid 1056053 alive
    kvds102 pid 1056075 alive
    kvds100 pid 1056062 alive
    kvds101 pid 1056066 alive

]
unix$

Here, atlantis started a few processes and, once done, the start command checked that the processes are indeed alive.

In case not all components can be started successfully, the whole LeanXcale system is halted by the start command.

To start a single host or component, use its name as an argument, like in:

# start the given host
unix$ lx start atlantis
# start the named components
unix$ lx start kvds
# start the named components at the given host
unix$ lx start atlantis kvds

Start does not wait for the system to be operational. To wait until the system is ready to handle SQL commands, the status command can be used with the -w (wait for status) flag, as in:

unix$ lx status -w running
status: running

Without the -w flag, the command prints the current status, which can be stopped, failed, running, or waiting.

8.2. Docker System Start

To start LeanXcale installed on Docker containers, you must start the containers holding the installed system components.

For example, consider the default docker install

unix$ lxinst docker
...
install done
docker images:
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
uxbase       2         7c8262008dac   3 months ago   1.07GB
lx           2         cafd60d35886   3 seconds ago   2.62GB

docker network:
NETWORK ID     NAME      DRIVER    SCOPE
a8628b163a21   lxnet     bridge    local
to start:
	docker run -dit --name lx1 --network lxnet lx:2 lx1

The install process created a docker image named lx:2, installed for the docker host lx1, and the docker network lxnet.

To list the image we can

unix$ docker images lx
REPOSITORY   TAG       IMAGE ID       CREATED              SIZE
lx           2         75b8c9ffa245   About a minute ago   2.62GB

And, to list the networks

unix$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
a8628b163a21   lxnet     bridge    local

The created image is a single one for all containers. The name given when creating the container determines the host name used. The install process specified the host names, and containers must be started using the corresponding host name(s), so they know which leanXcale host they are for.

For example, to start the container for lx1:

unix$ docker run -dit --name lx1 --network lxnet -p0.0.0.0:14420:14420 lx:2 lx1
b28d30702b80028f8280ed6c55297b2e203540387d3b4cfbd52bc78229593e27

In this command, the container name is lx1, the network used is lxnet, and the image used is lx:2. The port redirection -p… exports the SQL port to the underlying host.

Listing docker processes shows now the running container

unix$ docker ps
CONTAINER ID   IMAGE     COMMAND             STATUS          PORTS  NAMES
e81d9d01f40a   lx:2      "/bin/lxinit lx1"   Up 56 seconds   14410  lx1

It is important to know that:

  • starting the container will start leanXcale if a valid license was installed;

  • stopping the container should be done after stopping leanxcale in it.

The container name (lx1) can be used to issue commands. For example,

unix$ docker exec -it lx1 lx version
leanXcale v2.1 unstable
	kv         v2.1.2.14-02-15.c26f496706918e610831c02e99da3676a1cffa47
	lxhibernate v2.1.2.14-02-07.f65c5a628afede27c15c77df6fbbccd6d781d3ee
	TM         v2.1.2.14-02-06.bfc9f92216481dd05f51900ac522e5ccfb6d2555
	QE         v2.1.2.14-02-15.4a8ff4200dc3d3656c8469b6f74c05a296fbdfb3
	avatica    v2.1.2.14-02-14.1c442ac9e630957ace3fdb5c4faf92bb85510099
	...

executes lx version on the lx1 container.

The status for the system can be seen in a similar way:

unix$ docker exec -it lx1 lx status
status: running

Note that the container will not start the DB if no valid license is found.

8.3. AWS System Start

To start LeanXcale installed on AWS, you must start the AWS instances holding the installed system components.

Once started, the lx command is available at any of the installed system instances.

For example, after starting xample.aws.leanxcale.com, and provided the PEM file can be found at xample.pem, we can run this:

unix$ ssh -i xample.pem xample.aws.leanxcale.com lx version

to see the installed version.

9. Checking System Status

9.1. Bare Metal System Status

The command lx status reports the status for the system or waits for a given status. For example,

unix$ lx status
status: waiting
	kvds100: recovering files
	kvds101: recovering files

Or, to wait until the status is running:

unix$ lx status -v -w running
status: waiting
	kvds100: recovering files
	kvds101: recovering files
status: running
unix$

To see the status for each one of the processes in the system, use lx procs. For example:

unix$ lx procs
procs...
atlantis [
    kvds103 pid 1057699 alive running
    kvms100 pid 1057672 alive running
    spread pid 1057668 alive
    kvds102 pid 1057690 alive running
    kvds100 pid 1057677 alive running
    kvds101 pid 1057681 alive running

]

9.2. Docker System Status

Before looking at the LeanXcale system status, it is important to look at the status of the docker containers running LeanXcale components.

unix$ docker ps
CONTAINER ID   IMAGE     COMMAND             STATUS          PORTS  NAMES
e81d9d01f40a   lx:2      "/bin/lxinit lx1"   Up 56 seconds   14410  lx1

When containers are running, the command lx status reports the status for the system or waits for a given status. For example,

unix$ docker exec -it lx1 lx status

executes lx status on the lx1 container. The status is reported for the whole system, and not just for that container.

To wait until the status is running:

unix$ docker exec -it lx1 lx status -v -w running
status: waiting
	kvds100: recovering files
	kvds101: recovering files
status: running

To see the status for each one of the processes in the system, use lx procs. For example:

unix$ docker exec -it lx1 lx procs
procs...
atlantis [
    kvds103 pid 1057699 alive running
    kvms100 pid 1057672 alive running
    spread pid 1057668 alive
    kvds102 pid 1057690 alive running
    kvds100 pid 1057677 alive running
    kvds101 pid 1057681 alive running

]

9.3. AWS System Status

Before looking at the LeanXcale system status, it is important to look at the status of the AWS instances running LeanXcale components.

When instances are running, the command lx status reports the status for the system or waits for a given status. For example,

unix$ ssh -i xample.pem xample.aws.leanxcale.com lx status

to see the system status.

To wait until the status is running:

unix$ ssh -i xample.pem xample.aws.leanxcale.com lx status -v -w running
status: waiting
	kvds100: recovering files
	kvds101: recovering files
status: running

To see the status for each one of the processes in the system, use lx procs. For example:

unix$ ssh -i xample.pem xample.aws.leanxcale.com lx procs
procs...
atlantis [
    kvds103 pid 1057699 alive running
    kvms100 pid 1057672 alive running
    spread pid 1057668 alive
    kvds102 pid 1057690 alive running
    kvds100 pid 1057677 alive running
    kvds101 pid 1057681 alive running

]

10. Stopping the System

10.1. Bare Metal System Stop

The stop command halts LeanXcale:

unix$ lx stop
stop...
atlantis [
    kvcon[1056801]: halt
]
atlantis [
    term 1056062 2056066 1056075 1056084 1056057 1056053...
    kill 1056062 2056066 1056075 1056084 1056057 1056053...

]
unix$

10.2. Docker System Stop

Stopping the LeanXcale containers should be done after stopping LeanXcale. The reason is that docker might time out the stop operation if the system is still busy updating the disk during the stop procedure.

To stop the database,

unix$ docker exec -it lx1 lx stop

stops the components for the whole system (lx1 being an installed container).

We can double-check this:

unix$  docker exec -it lx1 lx status
status: stopped

Once this is done, we can stop the docker container.

unix$ docker ps
CONTAINER ID   IMAGE     COMMAND             STATUS          PORTS  NAMES
e81d9d01f40a   lx:2      "/bin/lxinit lx1"   Up 56 seconds   14410  lx1
unix$ docker stop lx1
lx1
unix$ docker ps
unix$

We can also remove the container, but note that doing so removes all data in the container as well.

unix$ docker rm lx1
lx1
unix$

10.3. AWS System Stop

To stop leanXcale on AWS, you must stop leanXcale before stopping the AWS instances running it. For example

unix$ ssh -i xample.pem xample.aws.leanxcale.com lx stop

This stops the system on all the instances it uses.

11. Start & Stop Particularities on Different Installs

System start depends on how the system has been installed. For bare-metal installations, the administrator installing the system is responsible for adding a system service that brings LeanXcale into operation when the machine starts, and stops LeanXcale before halting the system.

For AWS installations, starting the instance starts the LeanXcale service, and stopping the instance stops LeanXcale before the instance stops.

For Docker installations, starting a container starts the LeanXcale service on it, and, for safety, LeanXcale should be halted before halting the container (otherwise Docker might decide to time out and stop the container before LeanXcale has fully stopped).

12. Configuring the System

The lx config command prints or updates the configuration used for the LeanXcale system:

unix$ lx config
cfgile: /usr/local/leanxcale/lib/lxinst.conf...
#cfgfile /usr/local/leanxcale/lib/lxinst.conf
host localhost
    lxdir /usr/local/leanxcale
    JAVA_HOME /usr/lib/jvm/java-1.11.0-openjdk-amd64
    addr 127.0.0.1
    odata 100
        addr localhost!14004
    kvms 100
        addr 127.0.0.1!14400
    lxmeta 100
        addr 127.0.0.1!16500
    lxqe 100
        addr 127.0.0.1!16000
    kvds 100
        addr 127.0.0.1!15000
    kvds 101
        addr 127.0.0.1!15002
    kvds 102
        addr 127.0.0.1!15004
    kvds 103
        addr 127.0.0.1!15006

The configuration printed provides more details than the configuration file (or command line arguments) used to install.

NB: when installing into AWS or docker, the configuration printed might lack the initial aws or docker property used to install it.

It is possible to ask for particular config entries, like done here:

unix$ lx config kvms addr

It is also possible to adjust configured values, like in

unix$ lx config -s lxqe mem=500m

used to adjust the lxqe mem property to be 500m in all lxqe components configured.
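
The adjusted value can then be checked by asking for that entry again, in the same component/property form shown above:

unix$ lx config lxqe mem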

13. System Recovery

The lxmeta component watches the status for other components and will stop the system when there is a failure that cannot be recovered online.

Should the system crash or fail-stop, upon a system restart, lxmeta will guide the system recovery.

At start time, each system component checks its on-disk information and decides to start either as a ready component or as a component needing recovery.

The lxmeta process guides the whole system start process, following these steps:

  • Wait for all required components to be executing.

  • Look up the component status (ready/recovering).

  • If there are components that need recovery, their recovery process is executed.

  • After all components are ready, the system is made available by accepting queries.

The command lx status can be used to inspect the system status and the recovery process, or to wait until recovery finishes and the system becomes available for queries.
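
For example, after restarting a crashed system, start it and wait for recovery to complete:

unix$ lx start
unix$ lx status -w running
status: running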

14. System Logs

Logs are kept in a directory named log, under the install directory of each installed system.

Log files have names similar to

	kvms100.240214.1127.log

Here, the component name comes first, and then, the date and time when the log file was created. When a log file becomes too big, a new one is started for the given component.

It is convenient to use the lx logs command to list and inspect logs. It takes care of reaching the involved log files on the involved hosts.

For example, to list all log files:

	unix$ lx logs
	logs...
	atlantis: [
		log/kvds100.240214.1127.log	250.00K
		log/kvms100.240214.1127.log	35.39K
		log/lxmeta100.240214.1127.log	24.10K
		log/lxqe100.240214.1127.log	445.80K
		log/spread.240214.1127.log	963
		log/start.log	426
	]

Here, atlantis was the only system installed.

We can give host and/or component names as in many other commands to focus on those systems and/or components.

For example, to list just the logs for kvms processes:

	unix$ lx logs kvms
	logs...
	atlantis: [
		log/kvms100.240214.1127.log	35.39K
	]

Or, to list only those for the kvms100:

	unix$ lx logs kvms100

To list logs for the atlantis host:

	unix$ lx logs atlantis

To list logs for kvms components within atlantis:

	unix$ lx logs atlantis kvms

To list logs for kvms components at atlantis and kvds at orion:

	unix$ lx logs atlantis kvms orion kvds

With flag -p, logs are printed in the output instead of being listed.

	unix$ lx logs -p lxmeta
	atlantis: [
		log/lxmeta100.240214.1127.log	24.10K [
			# pid 3351755 cmd bin/javaw com.leanxcale.lxmeta.LXMeta -a atlantis!14410
	...

Flag -g greps the logs for lines with the given expression. For example:

	unix$ lx logs -g fatal

And, flag -c copies the logs to the given directory

	unix$ lx logs -c /tmp

When printing and copying the logs, only the last log file for each component is used. To operate on all the logs and not just on the last one, use flag -a too:

	unix$ lx logs -a -c /tmp

15. Backups

Backups can be made to an external location (recommended to tolerate disk failures) or within an installed host. External backups (i.e., to an external location) are made using the lxbackup tool, installed at the bin directory, which works with a given configuration file. Internal backups (i.e., to a directory on installed hosts) are made using the lx backup command.

Using lxbackup is exactly like using lx backup with a few differences:

  • Flag -f is mandatory and must be used to provide the installed configuration file.

  • The default backup directory is not $LXDIR/dump, but ./lxdump.

By default, for internal backups (lx backup), the backup directory is $LXDIR/dump at the host used to issue backup commands. The convention is to use the first host to keep backups.

The lxbackup command can be found at the installed $LXDIR/bin directory on any installed host. If you are not sure regarding the $LXDIR value, use this command to find it:

unix$ lx -d pwd
/usr/local/leanxcale
unix$

Flag -d for lx makes it change to the $LXDIR directory before running the given command.

The detailed configuration file to be used with lxbackup is created at lxinst.conf when installing. The configuration can be retrieved also by the lx config command. For example:

unix$ lx config -o lxinst.conf
saved lxinst.conf

For external backups, the backup directory is ./lxdump unless otherwise specified to the backup command.

This directory can be a mount point or a link to a remote directory to keep backups at a different machine.

Directories under dump/ are named using the date, with .1, .2, etc. appended if more than one backup is created on the same date. If the backup is incremental, a final + is added to the directory name.

For example, create a full, cold, backup on the installed system (e.g., atlantis) when leanXcale is not running:

unix$ lx backup
backup...
host atlantis...
backup: /usr/local/leanxcale/dump/230720
unix$

The printed path is the path for the directory keeping the backup, as used when restoring it.

This backup is not encrypted and is not compressed. Flag -z gzips backed up files, and flag -C also encrypts them using the secret kept in the file given as argument. These flags are also used when restoring a backup.

For example, this behaves like the previous example, but encrypts and compresses the backup

unix$ lx backup -C  ./secretkey
backup...
host atlantis...
backup: /usr/local/leanxcale/dump/230720
unix$

To setup the external host orion for backups, copy the backup program and save and copy the installed configuration:

unix$ lx config -o lxinst.conf
saved lxinst.conf
unix$ lx -d pwd
/usr/local/leanxcale
unix$ scp /usr/local/leanxcale/bin/lxbackup lxinst.conf orion:~

And then, to create a full, encrypted, cold backup on orion:

orion$ lxbackup -C  ./secretkey -f lxinst.conf
backup...
host atlantis...
backup: lxdump/230720
...
orion$

In the examples that follow, orion is used as the external backup host.

An important note is that file modification times are preserved in backup files. They are used to decide whether a file must be copied in an incremental backup. If a backup is relocated to a different place, and later copied back into the standard location, preserve file modification times in the process.
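
For example, a sketch that relocates a backup to another machine while preserving modification times (rsync -a keeps the timestamps; the destination path is illustrative):

unix$ rsync -a /usr/local/leanxcale/dump/230720 orion:/backups/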

Using flag -z makes the tool gzip backup files, preserving modification times despite compression.

15.1. Cold Incremental Backups

Flag -i performs an incremental backup, made with respect to the last total backup made on the same backup location.

This is done after the system is stopped. The system should not be running while doing a cold backup.

For example, from the external backup host in the running example:

orion$ lxbackup -i -f lxinst.conf
backup...
host atlantis...
...
orion$

Or, from the installed system backup in our example:

unix$ lx backup -i
backup...
host atlantis...
backup: dump/230720.3+
...
unix$

The system should not be running while doing a cold backup.

Remember to use flag -z or flag -C if the backups made were compressed or encrypted.

15.2. Hot Incremental Backups

To perform a hot backup, back up the redo logs as an incremental dump by specifying lxqe:

orion$ lxbackup -i -f lxinst.conf lxqe
backup...
host atlantis...
backup: dump/230720.4+
...
orion$

Remember to use flag -z or flag -C if the backups made were compressed or encrypted. For example,

orion$ lxbackup -C ./secretkey -i -f lxinst.conf lxqe

is the command if backups at this place are encrypted.

15.3. Listing Backups

With flag -p, both lxbackup and lx backup list the known backups:

unix$ lx backup -p
backup...
/usr/local/leanxcale/dump/230720
/usr/local/leanxcale/dump/230720.1+
/usr/local/leanxcale/dump/230720.2 encrypted
...

Those with a + in their names are incremental backups. Those encrypted are reported as such.

15.4. Removing Old Backups

To remove a backup, it suffices to remove its directory. For example:

unix$ lx -d rm -rf dump/230720.2

Here we used the flag -d for lx to change to $LXDIR before executing the remove command, which makes it easy to name the directory used for the dump.

Or, from our external backup example host:

orion$ rm -rf dump/230720.2

Beware that if you remove a backup, you should also remove the incremental backups that follow it, up to the next total backup.

15.5. Restore

To restore a backup, use the -r flag for lxbackup or lx backup and name the backup directory to restore.

For example, this restores the given full backup path:

orion$ lxbackup -f lxinst.conf -r lxdump/230720
...

Do this while the system is stopped.

To restore an incremental backup (and therefore, all previous incremental backups and the total backup made before them), supply an incremental backup path.

This restores the previous total backup made and the incremental backups that follow up to the given one.
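
For example, a sketch restoring up to one of the incremental backups created earlier in this section (use the path printed by your own backup runs):

orion$ lxbackup -f lxinst.conf -r lxdump/230720.4+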

15.6. Backup Automation

To automate system backups, use crontab(8) to run lxbackup on an external backup host, or lx backup on an installed host, at the desired times.
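
For example, a sketch of a crontab entry on the external backup host (the times and paths are illustrative; adapt them to your install):

# daily hot incremental backup of the redo logs at 02:00
0 2 * * * /home/lx/lxbackup -i -f /home/lx/lxinst.conf lxqe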

16. Reporting Issues

To report an issue, use lx report to gather system information. This program collects information from the system and builds an archive to be sent to support.

unix$ lx report
report: lxreport.231009...
version...
procs...
logs...
stacks...
stack lxmeta100...
stack kvds103...
stack kvms100...
stack spread...
stack kvds102...
stack kvds100...
stack kvds101...
stack lxqe100...

# send this file to support.
-rw-rw-r-- 1 leandata leandata 54861 Oct  9 14:58 lxreport.231009.tgz

As printed by the command output, the resulting tar file should be sent to support.

The archive includes:

  • installed version numbers

  • underlying OS names and versions

  • complete disk usage for the installed systems

  • complete process list for the installed systems

  • memory usage for the installed systems

  • lx process list

  • logs for components (last log file only, for each one)

  • stacks for each component

  • stacks for each core file found

When kvms is still running, the archive includes also:

  • statistics for the system

  • long list of kv resources

  • process list for each kvds

  • file list for each kvds

17. Command Reference Manual

This section details all the commands available and how to use them, starting with the install command.

17.1. lxinst

This is the install program:

usage: lxinst [-h] [-v] [-D] [-s] [-m] [-n] [-f cfgfile] [-d dist] [-k key]
              [-K awspem] [-u location] [-c] [-i]
              [where ...]

leanXcale installer

positional arguments:
  where        [aws|docker|stats] dir, host, or host:dir

options:
  -h, --help   show this help message and exit
  -v           verbose
  -D           enable debug diags
  -s           small install
  -m           medium install
  -n           dry run
  -f cfgfile   use this configuration file
  -d dist      distrib file, dir, or url
  -k key       key to download the distribution.
  -K awspem    AWS pem file name w/o .pem
  -u location  update inst at location
  -c           clean. download everything
  -i           ignore system limits

The given arguments specify where to install the DB. They may be paths to directories on the local host, host names, or host and directory names using the syntax host:dir.

Instead of arguments, a configuration file may be given using the -f flag. The configuration file has been described before in this document.

Flag -k may be used to specify the key to download the distribution.

Flag '-K' must be used when installing on AWS to supply the path (without file name extension) to the pem/pub key files.

By default, the distribution is retrieved from the leanXcale artifactory. However, it is possible to supply one or more times the -d flag giving it the name for a .tgz file, a directory, or a URL. The special argument leanxcale stands for the official repository for the distribution.

Distribution packages will be retrieved by looking at those distribution sources. By convention, the first source is usually the ./lxdist directory, used to keep the distribution when downloaded.

For example, this installs just what is downloaded at ./lxdist, without downloading anything else:

unix$ lxinst -d ./lxdist -f lxinst.conf

Once downloaded, files are not downloaded again. Flag -c cleans the ./lxdist directory to force a download of everything. If a different directory is specified by the user, it is not cleaned, although packages not found will still be downloaded there. To force a download of particular packages, remove the desired packages from the ./lxdist directory (or the directory specified in the arguments).

For example, this downloads a fresh copy of the distribution and installs it:

unix$ lxinst -c -f lxinst.conf

And this tries first the ../lxdist directory and then the standard leanxcale repository:

unix$ lxinst -d ./lxdist -d leanxcale -f lxinst.conf

This is actually the default when no distribution source is specified. Also, if no directory source is specified as a first option, lxdist is used as a directory to download components (other than files found on the local host).

The database has users (different from UNIX users). The user lxadmin is the administrator for the whole system. During the install you will be asked to type a password for lxadmin. To specify the password without being asked, you can set the password in the LXPASS environment variable:

unix$ export LXKEY=nemo:APA...uQy
unix$ export LXPASS=fds92f3c
unix$ lxinst /usr/local/leanxcale
...

When the users file ./lxowner exists, that file is used as the file describing the lxadmin user and its secret, instead of creating one using the user-supplied password for lxadmin.

Do not use an existing file unless you know what you are doing. The reason is that this file must define lxadmin as the first user, and that user must have access to all resources, as configured.

Flag -i makes lxinst ignore system limits, to install in a hurry on systems with limited disk or memory.

By default, the install performs a large install, trying to use all the machine resources for the service. Flag -s selects a small install instead, and flag -m selects a medium install.

The install size affects components and sizes not specified by the user in command line arguments or in the configuration file. In a small install, at most 4 kvds components are added per host, and component memory is limited to 1GiB. In a medium install, at most 8 kvds components are added, and component memory is left with default values.
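
For example, a small install into the default directory:

unix$ lxinst -s /usr/local/leanxcale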

Flag -u is used to update a previous install. Still WIP, do not use it for now.

17.2. lx

Shell for running LeanXcale control programs. This simply fixes the environment for the installed host and runs the command given as an argument:

usage: lx [-d] cmd...

When flag -d is used, the current working directory is set to the install directory at the host before running the given command.

Most commands follow the same conventions regarding options and arguments. We describe them here for convenience:

positional arguments:
  what        host|comp...

options:
  -h, --help  show this help message and exit
  -l          local run only
  -D          enable debug diags

Flag -l is used when executing the command locally. It is used by the lx command framework and, in general, should not be used by the end user.

Flag -D enables verbose diagnostics.

Arguments specify what to operate on (e.g., what to start, stop, etc.). They may be empty to rely on the defaults (the whole DB), or they may specify particular host and/or component names:

  • when only component names are given (e.g., lxqe101), only those components are involved.

  • when a component type name is given (e.g., lxqe), all components of that type are selected.

  • when a host name is given, any component following it is narrowed to that host. If no components follow the host name, all components from the host are selected.

This may be repeated to specify different hosts and/or components.

The special host names db, repl, and repl2 may be used and stand for hosts without the nodb attribute, hosts for the first replica, and hosts for the second replica (hosts that are a mirror of other ones).
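
For example, in the running example where atlantis hosts both the kvds components and lxqe100, this stops the kvds components and lxqe100 at that host:

unix$ lx stop atlantis kvds lxqe100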

17.3. lx addlib

Add the given file(s) to the lib directory of the installed hosts, to add or update a library or jar in the installation:

usage: addlib [-h] [-l] [-v] [-D] [-a host] [-x] [files [files ...]]

add libs to installed lx

positional arguments:
  files       files to add

optional arguments:
  -h, --help  show this help message and exit
  -l          local run only
  -v          verbose
  -D          enable debug diags
  -a host     copy to this installed host(s)
  -x          internal use only
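
For example, to add a hypothetical jar named myfuncs.jar to the lib directory of every installed host (the file name is just an illustration):

unix$ lx addlib myfuncs.jar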

17.4. lx backup

Create, list, and restore backups from an installed host:

usage: backup [-h] [-v] [-D] [-n] [-l] [-i] [-d dir] [-r dir] [-p] [-z]
              [-C keyfile]
              [what [what ...]]

backup

positional arguments:
  what        host|comp

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -n          dry run
  -l          local run only
  -i          incremental
  -d dir      root backup dir
  -r dir      backup dir to restore
  -p          print backup dirs
  -z          gzip backup files
  -C keyfile  encrypt and gzip backup files

This command copies disk contents to a backup directory or restores them. See lxbackup for backups/restores on external hosts.

By default, the backup directory is $LXDIR/dump at the host used to issue backup commands. The convention is to use the first host to keep backups.

This directory can be a mount point or a link to a remote directory to keep backups at a different machine.
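
For example, a sketch only, assuming /mnt/backupnfs is a remotely mounted backup area and the dump directory has not been created yet, the backup directory can be made a link to it:

unix$ lx -d ln -s /mnt/backupnfs dump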

Directories under dump/ are named using the date, with .1, .2, etc. appended if more than one backup is created on the same date.

For incremental backups, a final + is added to the directory name.

For example, create a full, cold, backup when leanXcale is not running:

unix$ lx backup
backup...
host atlantis...
backup: /usr/local/leanxcale/dump/230720
unix$

The printed path is the path for the directory keeping the backup, as used when restoring it.

Using flag -z makes the tool gzip backup files, preserving modification times despite compression. For example:

unix$ lx backup -z
backup...

Using flag -C both encrypts and compresses the backup files. The flag argument is the path to the file keeping the secret used to encrypt/decrypt.

unix$ lx backup -C  ./secretkey
backup...

To create an incremental (cold) backup:

unix$ lx backup -i

It is possible to back up just one or a few components, supplying arguments that address them as in other commands, as described before. In this case, it is advisable to always perform backups for the same set of components. The command does not know whether the last full backup covers the whole database or just a few components; the last full backup is simply considered the full backup for everything.

In general, the system should not be running while doing a backup. To perform a hot backup, back up just the query engine files as an incremental dump:

unix$ lx backup -i lxqe
backup...
host atlantis...

Incremental backups can be mixed for different components, as they overwrite some of the files saved in the full backup.

Create a full backup for just kvds100:

unix$ lx backup kvds100
backup...
host atlantis...
backup: /usr/local/leanxcale/dump/230720.3
unix$

Create an incremental backup adding just kvds200:

unix$ lx backup -i kvds200
backup...
host atlantis...
backup: /usr/local/leanxcale/dump/230720.3+
unix$

List the backups known:

unix$ lx backup -p
backup...
/usr/local/leanxcale/dump/230720
/usr/local/leanxcale/dump/230720.1+
/usr/local/leanxcale/dump/230720.2 encrypted
...

Those with a + in their names are incremental backups. Those encrypted are reported as such.

List the backups known for any kvds:

unix$ lx backup -p kvds

Restore the backup with the given name:

unix$ lx backup -r /usr/local/leanxcale/dump/230720.2

Here, if the name refers to an incremental backup, the restore also recovers the files found in previous incremental backups and in the full backup made before them.

Also, target directories to restore are identified by the current configuration of the system. That is, if kvds100 changed its location to a different host, the restore process will restore its disk at the new location, not at the location used to create the backup.

Restore just the files for kvds components from the given backup:

unix$ lx backup -r /usr/local/leanxcale/dump/230720.2  kvds

To restore a backup, it is usually desirable to format the disk for the involved components before using backup to restore their disks. The restore process copies restored files back, but does not remove anything else found on the disk for the component. For example:

unix$ lx fmt kvds
...
unix$ lx backup -r /usr/local/leanxcale/dump/230720.2  kvds

To remove a backup, it suffices to remove its directory. For example:

unix$ lx -d rm -rf dump/230720.2

Here we used the flag -d for lx to change to $LXDIR before executing the remove command, which makes it easy to name the directory used for the dump.

An important note is that file modification times are preserved in backup files. They are used to learn if a file must be copied or not in an incremental backup. If a backup is relocated to a different place, and later copied back into the standard location, preserve file modification times in the process.
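
For example, a sketch assuming a backup was previously moved to the external host orion (the remote path is just an illustration); rsync -a preserves modification times when copying it back:

unix$ rsync -a orion:backups/230720 /usr/local/leanxcale/dump/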

When restoring encrypted backups, flag -C must be used to supply the path to the file keeping the secret used to decrypt.
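
For example, to restore the encrypted backup listed above:

unix$ lx backup -C ./secretkey -r /usr/local/leanxcale/dump/230720.2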

17.5. lxbackup

Create, list, and restore backups from an external host:

usage: lxbackup [-h] [-v] [-D] [-n] [-i] [-d dir] [-r dir] [-p] [-f cfgfile]
                [-z] [-C keyfile]
                [what [what ...]]

lxbackup

positional arguments:
  what        host|comp

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -n          dry run
  -i          incremental
  -d dir      root backup dir
  -r dir      backup dir to restore
  -p          print backup dirs
  -f cfgfile  config for the install
  -z          gzip backup files
  -C keyfile  encrypt and gzip backup files

This command can be copied along with the installed configuration to an external host, to perform external backups/restores. The external host must have ssh access to the installed hosts.

See lx backup for backups/restores on installed hosts.

Using lxbackup is exactly like using lx backup with a few differences:

  • Flag -f is mandatory and must be used to provide the installed configuration file.

  • The default backup directory is not $LXDIR/dump, but ./lxdump.

For example, create a full backup for just kvds100:

unix$ lxbackup -f lxinst.conf kvds100
backup...
host atlantis...
backup: lxdump/230720.3
unix$

The lxbackup command can be found at the installed $LXDIR/bin directory on any installed host. If you are not sure regarding the $LXDIR value, use this command to find it:

unix$ lx -d pwd
/usr/local/leanxcale
unix$

Flag -d for lx makes it change to the $LXDIR directory before running the given command.

The detailed configuration file to be used is created at lxinst.conf when installing. The configuration can be retrieved also by the lx config command. For example:

unix$ lx config -o lxinst.conf
saved lxinst.conf

Therefore, in the running example, we can set up an external host named orion to perform backups in this way:

unix$ lx -d pwd
/usr/local/leanxcale
unix$ lx config -o lxinst.conf
saved lxinst.conf
unix$ scp /usr/local/leanxcale/bin/lxbackup lxinst.conf orion:~

And then just:

orion$ lxbackup -f lxinst.conf
backup...
host atlantis...
backup: lxdump/230720
...
orion$

This was for a full (cold) system backup. To perform a hot backup create an incremental backup with the query engine files:

orion$ lxbackup -f lxinst.conf -i lxqe

Using flag -z in calls to lxbackup makes the tool gzip backup files, preserving modification times despite compression.

orion$ lxbackup -z -f lxinst.conf
backup...

Using flag -C both encrypts and compresses the backup files. The flag argument is the path to the file keeping the secret used to encrypt/decrypt.

orion$ lxbackup -C ./secretkey -f lxinst.conf
backup...

If a backup is encrypted, -C must also be used to restore it, as in

orion$ lxbackup -C ./secretkey -f lxinst.conf -r dump/240422
backup...

17.6. lx config

Inspect or update the configuration:

usage: config [-h] [-v] [-D] [-s value] [-d] [-n] [-o fname]
              [what [what ...]]

Inspect the configuration

positional arguments:
  what        kvaddr|grafana|cons|[host] [comp|prop...]

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -s          set property values
  -d          delete
  -n          dry run
  -o fname    write the output to fname

By default, lx config prints the whole configuration, or that for elements given as arguments.

The arguments follow the conventional syntax used by most commands, but also accept property names:

  • Giving a host name narrows the rest of the arguments to that host, until another host name is given

  • Giving a component kind (e.g., lxqe) selects those components.

  • Giving a component name with included id (e.g., lxqe101) selects just that component

  • Giving a property name (not a host and not a component) selects just that property.

For example,

unix$ lx config

prints all configuration.

unix$ lx config atlantis mariner

prints the configuration for hosts atlantis and mariner (assuming those are configured host names).

unix$ lx config atlantis kvms mariner lxqe

prints the configuration for kvms components found at atlantis and lxqe components found at mariner.

unix$ lx config kvms kvds101

prints the configuration for any kvms component and the kvds101 one.

unix$ lx config kvms addr

prints the addr attribute for any kvms component.

unix$ lx config awsdisk

prints the awsdisk property from the (global) configuration.

Use flag -o to save the configuration (or the parts selected) to the given file.

Use flag -d to remove the selected configuration entries. Do not remove hosts or components, and use this with caution.
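
For example, a hypothetical use that deletes the global awsdisk property (shown for illustration only):

unix$ lx config -d awsdisk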

Use flag -s to update the selected properties with new values. In this case, the argument for a property includes both its name and the new value (e.g., mem=50m)

For example:

unix$ lx config -s lxqe mem=500m kvds mem=500m

17.7. lx disable

Disable transactions.

usage: disable [qe|qename]

For example,

	unix$ lx disable

disables transactions for the entire system, until they are enabled again.

While transactions are disabled, lx status shows the system status as waiting (for transactions to become available).

Before using this command, stop the watcher if the system was started with flag -w or if lx watch is running.

Otherwise, because new JDBC connections are not accepted while transactions are disabled, the watcher might consider that the system is not in good health, and then restart the system.

17.8. lx enable

Enable transactions.

usage: enable [qe|qename]

For example,

	unix$ lx enable qe100

enables transactions for just the qe100 query engine.

17.9. lx fmt

Format the whole DB or the indicated hosts or components:

usage: fmt [-h] [-D] [what ...]

fmt the store

positional arguments:
  what        host|comp...

options:
  -h, --help  show this help message and exit
  -D          enable debug diags

For example, format the kvds101 disk:

unix$ lx fmt kvds101

17.10. lx help

Ask for help:

usage: help [cmd]

Print usage information

Prints the list of known commands with quick usage information, or detailed usage information about the given command.
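
For example, to get detailed usage information for the backup command:

unix$ lx help backup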

17.11. lx license

Checks the license status or installs a new license:

usage: license [-h] [-v] [-D] [-f file]

inspect or update license files.

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -f file     license file to install

For example, ask for the current status:

unix$ lx license
    license expires: Mon Dec 30 00:00:00 2024

or install a new file lxlicense with the desired license:

unix$ lx license -f lxlicense

17.12. lx logs

List and inspect logs for installed components:

usage: logs [-h] [-D] [-g rexp] [-a] [-p] [-c dst] [-s start]
            [-e end] [what ...]

list logs

positional arguments:
  what        host|comp...

options:
  -h, --help  show this help message and exit
  -D          enable debug diags
  -g rexp     grep rexp
  -a          all logs, not the last one
  -p          print the last log (all if -a)
  -c dst      copy the last logs to this dir (all if -a)
  -s start    start fname time (yymmdd.hhmm or prefix)
  -e end      end fname time (yymmdd.hhmm or prefix)

For example, list the logs for kvds at atlantis, but only those after the time 230518.0551 (yymmdd.hhmm, or a prefix of this, can be used):

unix$ lx logs  -s 230518.0551 atlantis kvds

Print the last log for kvds101:

unix$ lx logs  -p kvds101

Grep all the logs for lines with fatal:

unix$ lx logs  -g fatal

Copy all (not just the last one) kvds101 logs to /tmp:

unix$ lx logs -a -c /tmp kvds101

17.13. lx printlog

Print the transaction log contents:

usage: printlog [-vh] [-s ts] [-e endts] dir|addr [qe]

The command prints the log contents for the given log directory (or file). With flag -v, it prints the logged kv message, otherwise it prints just the first line of each logged message.

With flag -h (or -v) it prints the header information too.

It is suggested not to use this on a running log, just in case the program modifies the logger (although that should not happen).
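
For example, a hypothetical invocation on a copy of a log directory (the path is just an illustration):

unix$ lx printlog -v /tmp/logcopy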

17.14. lx procs

List the processes for the DB:

usage: procs [-h] [-v] [-D] [-p] [what ...]

list processes

positional arguments:
  what        host|comp...

options:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -p          report ports in use

For example, list the processes at host blade123:

unix$ lx procs blade123
procs...
blade123 [
	lxmeta100	pid 813750	alive running
	kvds100	pid 813734	alive running
	kvms100	pid 813729	alive
	kvds103	pid 813746	alive running
	kvds101	pid 813738	alive running
	spread	pid 813725	alive
	kvds102	pid 813742	alive running
	lxqe100	pid 813773	alive running
]

Or, to see the port usage status:

unix$ lx procs -p blade123
blade123 [
		spread	14444	busy
		kvms100	14400	busy
		lxmeta100	14410	idle
		lxqe100	14420	busy
		kvds100	14500	busy
		kvds101	14504	busy
		kvds102	14508	busy
		kvds103	14512	busy
]

17.15. lx report

Report system status and debug information.

usage: report [-h]

report system information for debugging

positional arguments:
  what        host|comp...

options:
  -h, --help  show this help message and exit
  -D          enable debug diags

This program collects information from the system and builds an archive to be sent to support.

unix$ lx report
report: lxreport.231009...
version...
procs...
logs...
stacks...
stack lxmeta100...
stack kvds103...
stack kvms100...
stack spread...
stack kvds102...
stack kvds100...
stack kvds101...
stack lxqe100...

# send this file to support.
-rw-rw-r-- 1 leandata leandata 54861 Oct  9 14:58 lxreport.231009.tgz

As printed by the command output, the resulting tar file should be sent to support.

The archive includes:

  • installed version numbers

  • underlying OS names and versions

  • complete disk usage for the installed systems

  • complete process list for the installed systems

  • memory usage for the installed systems

  • lx process list

  • logs for components (last log file only, for each one)

  • stacks for each component

  • stacks for each core file found

When kvms is still running, the archive includes also:

  • statistics for the system

  • long list of kv resources

  • process list for each kvds

  • file list for each kvds

17.16. lx run

Run a command on the selected installed hosts, using the lx environment to run it:

usage: run [-h] [-D] ...

run a command on the installed host(s).

positional arguments:
  what       host|comp... cmd...

options:
  -h, --help  show this help message and exit
  -D          enable debug diags

Arguments specify the hosts where to run (as usual in the rest of the commands), perhaps none (to imply all DB hosts). Then the keyword cmd must be given, followed by the command and arguments to run on each host.

For example, discover where this thing is installed:

unix$ lx run cmd pwd

Or, as an exceptional measure, kill all processes running:

unix$ lx run cmd killprocs

Or, kill just those at blade123:

unix$ lx run blade123 cmd killprocs

Or kill -9 any kvds at blade123:

unix$ lx run blade123 cmd killprocs -9 kvds

17.17. lx stack

Dump process stacks

usage: stack [-h] [-v] [-D] [-l] [what [what ...]]

print stacks

positional arguments:
  what        host|comp...

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -l          local run only

The command understands the conventional syntax to select hosts and/or components. For example, to dump the stack of kvms components:

unix$ lx stack kvms
localhost [
	stack kvms100: [
		Thread 12 (Thread 0x7f8fc57fa700 (LWP 9272)):
		#0  __libc_read (nbytes=40, buf=0x7f8fa8000bd8, fd=13) at linux/read.c:26
		#1  __libc_read (fd=13, buf=0x7f8fa8000bd8, nbytes=40) at linux/read.c:24
		...
	]
]

With flag -v, local variables are printed too.

For java processes, both the native stack and the java stacks are printed.

17.18. lx start

Starts the DB for operation:

usage: start [-h] [-v] [-D] [-l] [-w] [-k] [what [what ...]]

start the service

positional arguments:
  what        host|comp...

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -l          local run only
  -w          start watch
  -k          keep core files

For example, to start the kvds components at host atlantis and leave the rest of the installation alone:

unix$ lx start atlantis kvds

With flag -w, start will run the watch service. This pings the DB to make sure it can answer queries and, when that is not the case, tries to stop and restart the system. See Section 17.22 for details.
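
For example, to start the whole DB and leave the watcher running:

unix$ lx start -w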

By default, start will remove files on the installed directories with names suggesting they are core dumps. Flag -k prevents this from happening and can be used to keep core dump files for debugging.

17.19. lx status

Prints the status for the system or waits for a given status:

usage: status [-h] [-v] [-D] [-w status] [-t tout]

show or wait for a DB status

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -w status   wait for the given status
  -t tout     timeout for -w (secs)

For example, to learn the status:

unix$ lx status
status: waiting
	kvds100: recovering files
	kvds101: recovering files

Or, to wait until the status is running:

unix$ lx status -v -w running
status: waiting
	kvds100: recovering files
	kvds101: recovering files
status: running
unix$
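
Flag -t limits how long -w waits. For example, to wait at most 300 seconds for the running status:

unix$ lx status -w running -t 300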

The status can be any of:

  • stopped: no process is running.

  • failed: some processes did fail.

  • waiting: processes are running but there is no SQL service.

  • running: processes are running and SQL connections are available.

17.20. lx stop

Halts the DB or stops individual hosts or components:

usage: stop [-h] [-v] [-D] [-l] [-w] [-k] [what [what ...]]

stop the service

positional arguments:
  what        host|comp...

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -l          local run only
  -w          do not stop watch
  -k          kill -9

For example, halt the DB:

unix$ lx stop

Stop just the kvds servers:

unix$ lx stop kvds

If the watch service is running, make sure to stop it before stopping individual components. Otherwise it might decide to wait for lxmeta to stop and then restart the system.

Without arguments, stop will first stop the watch service, so no extra caution is needed.

Under flag -w, stop will not stop the watch process.

Flag -k may be used to kill components. Do not use this unless you know what you are doing. For example, to kill the lxqe100

unix$ lx stop -k lxqe100
stop...
atlantis: [
	stop: kill -9 lxqe100 pid 791944...
]

17.21. lx version

Print the installed version:

usage: version [-h]

print installed version

optional arguments:
  -h, --help  show this help message and exit

The program reports the leanXcale version name, along with detailed version information for distribution packages installed:

unix$ lx version
leanXcale v2.2
    kv         v2.2.2023-09-29.115f5fba70e3af8dc203953399088902c4534389
    QE         v2.2.2023-09-30.1e5933900582.26a7a5c3420cd3d5d589d1fa6cc
    libs       v2.2.2023-09-29.67535752acf19e092a6eaf17b11ad17597897956
    avatica    v2.2.2023-09-27.0b0a786b36e8bc7381fb2bb01bc8b3ed56f49172
    TM         v2.2.2023-09-29.9a9b22cfdc9b924dbc3430e613cddab4ed667a57
    lxlbins    v2.2.2023-09-29.79e7e04fb16b38d08c2d5df1fe08e103d49cb22a
    lxinst     v2.2.2023-10-02.b341e6545913aee8e0b0daf255362.273b33ea6d
    calcite    v2.2.2023-09-27.d3dfcf24285d38add3f4e29a9c2e9eacbcd0b913
    lxodata    v2.2.2023-09-23.b84fa4c7d2ca3e778edd9de29389b2aa6e1a9fb8

17.22. lx watch

Watch the system and restart it if needed:

usage: watch [-h] [-v] [-D]

watch out the db and stop/start it if needed

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags

This program is started by lx start when the flag -w is given to it. It is usually a bad idea to execute this command explicitly.

It waits until the DB is running, doing nothing until that point. From then on, if the DB ceases to be running, the program waits for lxmeta to stop, then tries to stop and start the whole system, and finally exits.

When restarting the system, the flag -w is used, to start a new watcher for the new system.

The implications are that a failed start will not keep restarting the system, and that stopping individual components requires stopping this program first, or it might take actions on its own.

17.23. killprocs

This is a command for local use only:

usage: killprocs [-h] [-9] [-a] [procs]

kill processes

Use lx run to run it at any/all of the installed hosts.

Without any flags, it locates the DB process pids by looking at the files reporting them, and then kills them.

With flag -a, it locates any process in the system by process name (all the ones started by the DB use bin/… as a name), as well as all java processes starting with LX, and kills them.

A TERM signal is sent, and then a KILL signal after a few seconds.

Flag -9 can be used to send only a KILL signal.

If process/component names are given (e.g., kvds or lxqe100), only those processes are killed.

For example, send a kill signal to any kvds at blade123:

unix$ lx run blade123 cmd killprocs -9 kvds

18. Notice

The LeanXcale system uses spread as a communication bus, under the following license:

Version 1.0
June 26, 2001

Copyright (c) 1993-2016 Spread Concepts LLC. All rights reserved.

This product uses software developed by Spread Concepts LLC for use in the Spread toolkit.
For more information about Spread, see http://www.spread.org