Administration Guide

1. Background

1.1. LeanXcale Components

Before installing, it is important to know that LeanXcale has a distributed architecture and it consists of several components:

  • lxqe: Query engine in charge of processing SQL queries.

  • kvds: Data server of the storage subsystem. There might be multiple instances.

  • kvms: Metadata server of the storage subsystem.

  • lxmeta: Metadata process for LeanXcale. It keeps metadata and services needed for other components.

  • stats: Optional monitoring subsystem to see resource usage and performance KPIs of LeanXcale database.

  • odata: Optional OpenDATA server to support a SQL REST API.

There are other components used by the system that are not relevant to the user and are not described here. For example, spread is a communication bus used by LeanXcale components.

2. Licenses

To check the license status or to install a new license, use the lx license command.

For a local installation, just use:

unix$ lx license
	license expires: Mon Dec 30 00:00:00 2024

For docker installs, each container must include its own license. The DB in the container does not start unless a valid license is found. However, the container must be running to check the license status and to install new licenses. Refer to the section on starting docker containers for help with that.

For example, to list the license status for the container lx1 we can run

unix$ docker exec -it lx1 lx license
lx1 [
	kvcon[1380]: license: no license file
  failed: failed: status 1
]
failed: status 1

To install a new license, just copy the license file to the container as shown here

unix$ docker cp ~/.lxlicense lx1:/usr/local/leanxcale/.lxlicense
unix$ docker exec -it lx1 sudo chown lx /usr/local/leanxcale/.lxlicense

The license status should be ok now:

unix$ docker exec -it lx1 lx license
	license expires: Mon Dec 30 00:00:00 2024
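
The expiry line printed by lx license can also be checked from a script, for example to warn before a license lapses. The following sketch (the function names are illustrative, and GNU date is assumed, as on Ubuntu) parses that line and fails when the license is missing or expired:

```shell
# parse_expiry: extract the date from an "lx license" output line
# of the form "license expires: Mon Dec 30 00:00:00 2024".
parse_expiry() {
    sed -n 's/^[[:space:]]*license expires: //p'
}

# license_ok: read "lx license" output on stdin and succeed only
# if an expiry date is present and still in the future.
license_ok() {
    exp=$(parse_expiry)
    [ -n "$exp" ] || return 1
    exp_epoch=$(date -d "$exp" +%s) || return 1
    [ "$exp_epoch" -gt "$(date +%s)" ]
}
```

For example, lx license | license_ok || echo "license missing or expired" could be run from a cron job.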

3. Bare Metal and IaaS Installs

This section describes how to install on bare metal and IaaS instances. Note that an IaaS (or virtualized) instance is equivalent to a bare metal one. IaaS platforms that have been tried include AWS, Google, Azure, and OCI.

3.1. Prerequisites

Never install or run leanXcale as root.

To install, you need:

  • 2 cores & 2 GB of memory

  • A valid LeanXcale license file. Your sales representative will provide you with a valid license file. In case you have any issues with it, please send an email to sales@leanxcale.com.

  • The LeanXcale zip distribution, which can be found at https://artifactory.leanxcale.com/artifactory/lxpublic

  • Ubuntu 20.04 LTS or 22.04 LTS with a standard installation, including

    • openjdk-11-jdk

    • python3

    • python3-dev

    • python3-pip

    • gdb

    • ssh tools as included in most UNIX distributions.

    • zfsutils-linux

    • libpam

  • Availability for ports used by leanXcale (refer to the [LeanXcale Components and Ports] section for details).

The install program itself requires a unix system with python 3.7 (or later).

There is a zip file per version; download the latest one.

If needed, install the required software dependencies on your system:

unix$ sudo apt update
unix$ sudo apt -y install python3 python3-dev python3-pip gdb openjdk-11-jdk curl

Here, and in what follows, unix$ is the system prompt used in commands and examples.
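
Before running the installer, the dependencies above can be verified with a small script. This is only a sketch: it checks that the expected commands are on PATH, and the set of commands checked is an assumption based on the package list above.

```shell
# check_prereqs: report which of the given commands are missing
# from PATH; prints "prerequisites ok" when none are missing.
check_prereqs() {
    missing=""
    for cmd in "$@"; do
        command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
    done
    if [ -z "$missing" ]; then
        echo "prerequisites ok"
    else
        echo "missing:$missing"
    fi
}

# Commands provided by the packages listed above (illustrative set).
check_prereqs python3 pip3 gdb java ssh
```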

3.2. Install

Working on the directory where the distribution has been downloaded, unzip the LeanXcale zipped distribution:

unix$ unzip lx.2.3.232129.zip
Archive:  lx.2.3.232129.zip
   creating: lxdist/
  inflating: lxdist/lxinst.v2.3.port.tgz
  inflating: lxdist/lxavatica.v2.3.libs.tgz
  ...

Here, you type the command after the unix$ prompt, and the following lines are an example of the expected output.

Verify that the process completed successfully and there is a directory lxdist (with the distribution packages) and the lxinst program:

unix$ ls
lx.2.3.232129.zip  lxdist  lxinst

To install, run the lxinst program and, optionally, supply the name of the installation directory (/usr/local/leanxcale by default):

unix$ ./lxinst /usr/local/leanxcale
sysinfo localhost...
install #cfgfile: argv /usr/local/leanxcale...
cfgcomps…
...
config: lxinst.conf
install done.

To run lx commands:
    localhost:/usr/local/leanxcale/bin/lx
Or adjust your path:
    # At localhost:~/.profile:
    export PATH=/usr/local/leanxcale/bin:$PATH
To start (with lx in your path):
    lx start

The command takes the zipped LeanXcale distribution packages from ./lxdist and installs them in the target directory.

The detailed configuration file built by the install process is saved at lxinst.conf, as a reference.

The database has its own users. The user lxadmin is the administrator for the whole system, and during the install you are asked to provide its password.

To complete the install, a license must be added, as explained in the next section.

As another example, this installs at the given partition /dev/sdc2, with both encryption and compression, mounting the installed system at /usr/local/leanxcale:

unix$ ./lxinst compress crypt /dev/sdc2:/usr/local/leanxcale
sysinfo localhost...
install #cfgfile: argv /usr/local/leanxcale...
cfgcomps…
...
config: lxinst.conf
install done.

To list installed file systems (on each host):
	zfs list -r lxpool -o name,encryption,compression
To remove installed file systems (on each host):
	sudo zpool destroy lxpool
To run lx commands:
    localhost:/usr/local/leanxcale/bin/lx
Or adjust your path:
    # At localhost:~/.profile:
    export PATH=/usr/local/leanxcale/bin:$PATH
To start (with lx in your path):
    lx start

The final messages printed by the install program will remind you of how to list or remove the installed file systems.

3.3. Using leanXcale Commands

The installation creates the lx command. Use it to execute commands to operate the installed system.

At the end of the installation, the install program prints suggestions for adjusting the PATH variable so that lx will be available like any other command. For example:

export PATH=/usr/local/leanxcale/bin:$PATH

when the install directory is /usr/local/leanxcale.

The lx command can be found at the bin directory in the install directory. For example, at /usr/local/leanxcale/bin/lx when installing at /usr/local/leanxcale.

In what follows, we assume that lx can be found using the system PATH.

The first command used is usually the one to install a license file:

unix$ lx license -f lxlicense

It is also possible to put the license file at ~/.lxlicense, and the install process will find it and install it on the target system(s).

3.4. Multiple Hosts and Replicated Installs

To install at multiple hosts, you may indicate target host names and install paths in the command line, like in the following example:

unix$ lxinst mariner:/opt/leanxcale atlantis:/usr/local/leanxcale

or simply provide the target hostnames:

unix$ lxinst mariner atlantis

When using a configuration file, simply define multiple hosts:

# xample.conf
host mariner
host atlantis

and use such file to install:

unix$ lxinst -f xample.conf

To specify a number of components we can still use the command line, like in:

unix$ lxinst atlantis kvds x 2 mariner kvds x 2

or

# xample.conf
host atlantis
	kvds x 2
host mariner
	kvds x 2

For high availability (HA), it is possible to install using replicated components. As a convenience, replication is configured using mirror hosts: a host can be installed as a mirror of another to replicate its components. Either all hosts must have mirrors, or none of them.

To install using mirrors, for high availability, combine host names in the command line using “+”, like in:

unix$ lxinst atlantis+mariner:/usr/local/leanxcale

or in

unix$ lxinst atlantis+mariner

This configures mariner as a mirror for atlantis.

When using a configuration file, use the mirror property to specify that a host is a mirror for another. In our example:

# xample.conf
host atlantis
	kvds x 2
host mariner
	mirror atlantis

The mirror host should not specify anything other than its mirror property. It takes its configuration from the mirrored host.

4. Docker Installs

4.1. Docker Installs with no Preinstalled Image

When a preinstalled leanXcale docker image is available, disregard this section and proceed as described in the next one.

4.1.1. Prerequisites

To install, you need:

  • A valid LeanXcale license file

  • The LeanXcale zip distribution

  • A Linux system with

    • docker version 20.10.6 or later.

    • python 3.7 or later.

  • Access to the internet to permit docker to download standard docker images and software packages for them.

Your sales representative will provide you with a valid license file. In case you have any issues with it, please send an email to sales@leanxcale.com.

Download the latest LeanXcale zip distribution from:

https://artifactory.leanxcale.com/artifactory/lxpublic

There is a zip file per version.

4.1.2. Install

Working on the directory where the distribution has been downloaded, unzip the LeanXcale zipped distribution:

unix$ unzip lx.2.3.232129.zip
Archive:  lx.2.3.232129.zip
   creating: lxdist/
  inflating: lxdist/lxinst.v2.3.port.tgz
  inflating: lxdist/lxavatica.v2.3.libs.tgz
  ...

Verify that the process completed successfully and there is a directory lxdist (with the distribution packages) and the lxinst program:

unix$ ls
lx.2.3.232129.zip  lxdist  lxinst

To create a docker image for leanXcale, use the following command:

unix$ lxinst docker
sysinfo lx1...
install #cfgfile: argv docker...
...
install done
docker images:
REPOSITORY   TAG     IMAGE ID           CREATED         SIZE
uxbase       2         434dfeaedf0c     3 weeks ago     1.06GB
lx           2         6875de4f2531     4 seconds ago  1.33GB

docker network:
NETWORK ID     NAME          DRIVER    SCOPE
471c52155823   lxnet         bridge       local
to start:
    docker run -dit --name lx1 --network lxnet lx:2 lx1

The image created is named lx:2. It is configured to run a container with hostname lx1 on a docker network named lxnet.

To list the image we can execute

unix$ docker images lx
REPOSITORY   TAG       IMAGE ID       CREATED              SIZE
lx           2         6875de4f2531   About a minute ago   1.33GB

And, to list the networks we can execute

unix$ docker network ls
NETWORK ID     NAME          DRIVER    SCOPE
471c52155823   lxnet         bridge       local

The created image is a single one for all containers. The name given when creating the container determines the host name used (in this example, lx1).

Before using the system, a license file must be installed on each container created. This is explained later.

To remove the image when so desired, use this command:

unix$ docker rmi lx:2

4.2. Docker Installs with Preinstalled Image

4.2.1. Prerequisites

To install, you need:

  • A valid LeanXcale license file

  • The LeanXcale docker image

  • A Linux system with

    • docker version 20.10.6 or later.

    • python 3.7 or later.

Your sales representative will provide you with a valid license file. In case you have any issues with it, please send an email to sales@leanxcale.com.

Download the latest LeanXcale docker image from:

https://artifactory.leanxcale.com/artifactory/lxpublic/

There is a docker image file per version.

4.2.2. Install

Working on the directory where the image file has been downloaded, add the image to your docker system:

unix$ docker load --input lx.2.3.docker.tgz
Loaded image: lx:2

Double check that the image has been loaded:

unix$ docker images lx:2
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
lx           2         bd350f734448   28 hours ago   1.33GB

Create the docker network lxnet:

unix$ docker network create --driver bridge lxnet

The image is a single one for all containers. The name given when creating the container determines the host name used (in this example, lx1).

Before using the system, a license file must be installed on each container created. This is explained in the next section.

To remove the image when so desired, use this command:

unix$ docker rmi lx:2

4.3. Running leanXcale Docker Containers

This section explains how to create containers from leanXcale images and how to install licenses and change the lxadmin password for them.

Provided the leanXcale image named lx:2, and the docker network lxnet, this command runs a leanXcale docker container:

unix$ docker run -dit --name lx1 --network lxnet -p0.0.0.0:14420:14420 lx:2 lx1
b28d30702b80028f8280ed6c55297b23203540387d3b4cfbd52bc78229593e27

In this command, the container name is lx1, the network used is lxnet, and the image used is lx:2. The port redirection -p…​ exports the SQL port to the underlying host.
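
When several containers are wanted, the run command can be repeated per container. A sketch, assuming the lx:2 image and lxnet network from the install step; each container exports the SQL port 14420 on a different host port:

```shell
# run_lx: create one LeanXcale container; $1 is the container (and
# host) name, $2 the host port mapped to the SQL port 14420.
run_lx() {
    name=$1; hostport=${2:-14420}
    docker run -dit --name "$name" --network lxnet \
        -p"0.0.0.0:$hostport:14420" "lx:2" "$name"
}
```

For example, run_lx lx1 reproduces the command above, and run_lx lx2 14421 would create a second container exporting its SQL port on host port 14421.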

It is important to know that:

  • starting the container will start leanXcale if a valid license was installed;

  • stopping the container should be done after stopping leanxcale in it.

The container name (lx1) can be used to issue commands. For example, this removes the container after stopping it:

unix$ docker rm -f lx1

The installed container includes the lx command. Use it to execute commands to operate the installed DB system.

It is possible to attach to the container and use the lx command as on a bare metal host install:

unix$ docker attach lx1
lx1$ lx version
...

Here, we type docker attach lx1 on the host, and lx version on the docker container prompt.

Note that if you terminate the shell reached when attaching to the docker container, the container will stop. Usually, this is not desired.

It is possible to execute commands directly on the running container. For example:

unix$ docker exec -it lx1 lx version

executes lx version on the container.

4.3.1. Setting up a License and Admin Password

Starting the container also starts leanXcale, unless no valid license is installed. In that case, a license file must be installed in the container.

To install a license file, it must be copied to the container as shown here:

unix$ docker cp ./lxlicense lx1:/usr/local/leanxcale/.lxlicense
unix$ docker exec -it lx1 sudo chown lx /usr/local/leanxcale/.lxlicense
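
With several containers, the two steps above have to be repeated for each one. A sketch (container names are illustrative, and docker exec is used without -it since no terminal is needed in a script):

```shell
# install_license: copy the license file into each named container
# and fix its ownership, repeating the two commands above.
install_license() {
    lic=$1; shift
    for c in "$@"; do
        docker cp "$lic" "$c:/usr/local/leanxcale/.lxlicense" &&
        docker exec "$c" sudo chown lx /usr/local/leanxcale/.lxlicense ||
        return 1
    done
}
```

For example: install_license ./lxlicense lx1 lx2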

To change the password for the lxadmin user, do this after starting the database:

unix$ docker exec -it lx1 lx kvcon addusr lxadmin
pass? *****

4.3.2. Stopping the Container

The docker command to stop a container may not give enough time for leanXcale to stop. First, stop leanXcale:

unix$ docker exec -it lx1 lx stop

And now the container may be stopped:

unix$ docker stop lx1
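
The two steps can be combined in a small helper so the ordering is never forgotten. A sketch, with the container name as argument (docker exec without -it, since no terminal is needed):

```shell
# lx_shutdown: stop the DB inside the container first, then stop
# the container itself; fails early if the DB does not stop.
lx_shutdown() {
    c=$1
    docker exec "$c" lx stop && docker stop "$c"
}
```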

5. AWS Installs

AWS Installs require installing the AWS command line client, used by the installer. For example, on Linux:

unix$ sudo apt-get install awscli

You need a PEM file to access the instance once it is created. The PEM file and the public key for it can be created with this command:

unix$ ssh-keygen -t rsa -m PEM -C "tkey" -f tkey

Here, we create a file tkey with the private PEM and a file tkey.pub with the public key. Before proceeding, rename the PEM file to use .pem:

unix$ mv tkey tkey.pem
unix$ chmod 400 tkey.pem
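
The key preparation steps above can be wrapped into one function. A sketch, assuming OpenSSH's ssh-keygen; tkey is just an example name, and -N "" gives an empty passphrase:

```shell
# make_key: create <name>.pem (private key in PEM format, mode 400)
# and <name>.pub (public key), following the steps above.
make_key() {
    k=$1
    ssh-keygen -t rsa -m PEM -C "$k" -f "$k" -N "" -q &&
    mv "$k" "$k.pem" &&
    chmod 400 "$k.pem"
}
```

For example, make_key tkey prepares the files so that -K tkey can be passed to lxinst.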

When installing, supply the path to the PEM file (without the .pem extension) with the flag -K, so that lxinst can find it and use its base file name as the AWS key pair name.

Now, define your access and secret keys for the AWS account:

unix$ export AWS_ACCESS_KEY_ID=___your_key_here___
unix$ export AWS_SECRET_ACCESS_KEY=___your_secret_here___
...

To install the public distribution, go to

https://artifactory.leanxcale.com/artifactory/lxpublic

and download the zip for the latest version (for example, lx.2.3.232129.zip).

Before extracting the zip file contents, make sure that there is no lxdist directory left over from a previous version, or it will include packages that are not for this one:

unix$ rm -f lxdist/*

Extract the zip file:

unix$ unzip lx.2.3.232129.zip
Archive:  lx.2.3.232129.zip
   creating: lxdist/
  inflating: lxdist/lxinst.v2.3.port.tgz
  inflating: lxdist/lxavatica.v2.3.libs.tgz
  ...

Verify that the process completed successfully and there is a directory lxdist (with the distribution packages) and the lxinst program:

unix$ ls
lx.2.3.232129.zip  lxdist  lxinst

To install in AWS, use lxinst using the aws property and the -K option to supply the key file/name to use. For example:

unix$ lxinst -K tkey aws
...
aws instance i-03885ca519e8037a1 IP 44.202.230.8
...
aws lx1 instance id i-0d2287deeb3d45a82

Or, specify one or more properties to set the region, instance type, disk size, and AWS tag. The disk size is in GiB.

The tag is a name and should not include blanks or dots. It will be used for .aws.leanxcale.com domain names.

For example:

unix$ lxinst -K tkey aws awsregion us-west-1 awstype t3.large \
	awsdisk 30 awstag 'client-tag'
...
config: lxinst.conf
install done.

	# To remove resources:
		./lxinst.uninstall
	# To dial the instances:
		ssh -o StrictHostKeyChecking=no -i xkey.pem lx@18.232.95.2

It suffices to specify the key and give a tag, so this command is ok:

unix$ lxinst -K tkey awstag xample

When installing using an awstag, domain names are set for installed instances as a convenience. Each instance is named with the tag and a number. For example, with the tag xample, the first instance is xample1.aws.leanxcale.com, the second instance in the same install (if any) is xample2, etc.
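
This naming convention can be captured in a tiny helper that lists the expected DNS names for a tag. This is only a sketch of the convention described above:

```shell
# tag_hosts: print the instance domain names for a tag, following
# the <tag><n>.aws.leanxcale.com convention.
tag_hosts() {
    tag=$1; n=$2
    i=1
    while [ "$i" -le "$n" ]; do
        echo "$tag$i.aws.leanxcale.com"
        i=$((i + 1))
    done
}
```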

Arguments follow the semantics of a configuration file. Therefore, if a host name is specified, it must come after global properties.

Try to follow the conventions and call the first host installed lx1, the second one lx2, etc. Also, do not specify directories and similar attributes; leave them to the AWS install process.

If something fails, or once the install completes, check for a lxinst.uninstall command, created by the install process, to remove allocated resources when so desired.

The detailed configuration file built by the install process is saved at lxinst.conf as a reference. This is not a configuration file written by the user, but a configuration file including all the install details. This is an example:

#cfgfile lxinst.conf
awstag client-tag
awsvpc vpc-0147a6d8e9d6e6910
awssubnet subnet-0b738237bbce66037
awsigw igw-0a160524cf4edab30
awsrtb rtb-040894a1050b58a3f
awsg sg-05ee9e0232599026d
host lx1
	awsinst i-0e0167a4233f76bb1
	awsvol vol-01629a4b3694bb5fd
	lxdir /usr/local/leanxcale
	JAVA_HOME /usr/lib/jvm/java-1.11.0-openjdk-amd64
	addr 10.0.120.100
	kvms 100
		addr lx1!14400
	lxmeta 100
		addr lx1!14410
	lxqe 100
		addr lx1!14420
		mem 1024m
	kvds 100
		addr lx1!14500
		mem 1024m

In the installed system, the user running the DB is lx, and LeanXcale is added to the UNIX system as a standard service (disabled by default). The instance is left running. You can stop it on your own if that is not desired.

The command lxinst.uninstall, created by the installation, can be used to remove the created AWS resources:

unix$ ./lxinst.uninstall
...

To use an instance (and host) name other than lx1, supply your desired host name. Use a plain name, without dots or special characters. For example:

unix$ lxinst -K tkey aws lxhost1 lxhost2
...

creates a network and two instances named lxhost1 and lxhost2, and leaves them running.

The tradition is to use names lx1, lx2, etc. for the installed hosts.

Once installed, use lx as in any other system:

unix$ ssh -o StrictHostKeyChecking=no -i tkey.pem lx@44.202.230.8
lx1$ lx stop -v

When installing using a tag, DNS names can be used. The installer output reminds you of this at the end of the install process:

unix$ lxinst -K tkey awstag xample lxhost1 lxhost2
...
open ports: 14420
install done.

	# To remove resources:
		./lxinst.uninstall
	# To dial the instances:
		ssh -o StrictHostKeyChecking=no -i xxkey.pem lx@98.81.32.197
		ssh -o StrictHostKeyChecking=no -i xxkey.pem lx@18.204.227.103
	# or
		ssh -o StrictHostKeyChecking=no -i xxkey.pem lx@xample1.aws.leanxcale.com
		ssh -o StrictHostKeyChecking=no -i xxkey.pem lx@xample2.aws.leanxcale.com

By default, AWS installs use compression but not encryption. To change this, use the compress and/or the crypt global property with values yes or no as desired. For example:

unix$ lxinst -K tkey aws crypt lxhost1 lxhost2
...

installs with encryption enabled, and this disables compression:

unix$ lxinst aws -K tkey compress no lxhost1 lxhost2
...

5.1. The lx System Service

AWS installs create a systemd system service for leanXcale. The service is installed as a disabled service.

This is meant to be used only for single host installs.

When multiple instances are used, instances must be started before using lx start to start the system, and lx stop must be used to stop the system before stopping the instances.

This section is here as a reference, and also because it might be useful on single-host installs.

For example:

unix$ ssh -o StrictHostKeyChecking=no -i tkey.pem lx@44.202.230.8
lx$ systemctl status lx
○ lx.service - LeanXcale
     Loaded: loaded (/etc/systemd/system/lx.service; disabled;
     Active: inactive (dead)

In this case, and the rest of this section, we use commands running on the installed AWS instance.

To enable the service, we can:

lx$ sudo su
root@lx1# systemctl enable lx
Created symlink /etc/systemd/system/multi-user.target.wants/lx.service → /etc/systemd/system/lx.service.

To disable it again:

lx$ sudo su
root@lx1# systemctl disable lx
Removed /etc/systemd/system/multi-user.target.wants/lx.service.

To start the service by hand:

lx$ sudo su
root@lx1# systemctl start lx

And now we can see its status:

lx$ systemctl status lx
● lx.service - LeanXcale
     Loaded: loaded (/etc/systemd/system/lx.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2024-08-12 07:19:48 UTC; 1min 14s ago
    Process: 6822 ExecStart=sh -c /usr/local/leanxcale/bin/lx start -l -v |tee /usr/local/leanxcale/log/start.log (code=exited, status=0/SUCCESS)
      Tasks: 116 (limit: 9355)
     Memory: 3.6G
        CPU: 13.370s
     CGroup: /system.slice/lx.service
             ├─6828 bin/spread -c lib/spread.conf
             ├─6829 bin/kvlog -D log/spread
     ...
Aug 12 07:19:48 lx1 sh[6824]: bin/kvds: started pid 6836
Aug 12 07:19:48 lx1 sh[6824]: bin/kvds -D -c 1536m ds101@lx1!14504 /usr/local/leanxcale/disk/kvds101 ...
...

The DB status should now be as follows:

lx$ lx status
status: running

When more than a single instance is installed, the service must be enabled, disabled, started, and/or stopped on all instances involved.

Note that enabling the service makes the DB start when the instance starts, and stop before the instance stops.

This is done by each instance on its own. At start time, the instance runs

lx$ lx start -l

and at stop time, the instance runs

lx$ lx stop -l

The last start and stop output is saved at /usr/local/leanxcale/log/start.log, for inspection.
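
For reference, the unit file behind this service looks roughly as follows. This is a reconstruction from the status output above, not the actual installed file; consult /etc/systemd/system/lx.service on the instance for the real one:

```ini
[Unit]
Description=LeanXcale

[Service]
# Reconstructed: a oneshot unit that stays "active" after lx start
# returns, matching the status output shown above.
Type=oneshot
RemainAfterExit=yes
ExecStart=sh -c '/usr/local/leanxcale/bin/lx start -l -v | tee /usr/local/leanxcale/log/start.log'
ExecStop=/usr/local/leanxcale/bin/lx stop -l

[Install]
WantedBy=multi-user.target
```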

It might be more convenient to leave the service disabled and, after starting the instances for the install, use lx start command to start the system.

5.2. Listing and handling AWS resources (lxaws)

The program lxaws is used to list or remove AWS installs, and to start and stop instances and find their status.

usage: lxaws [-h] [-e] [-v] [-d] [-D] [-r region] [-n] [-askpeer] [-yespeer]
             [-netpeer] [-delpeer] [-p] [-o] [-c] [-status] [-start] [-stop]
             [tag [tag ...]]

lx AWS cmds

positional arguments:
  tag         aws tag and/or command args

optional arguments:
  -h, --help  show this help message and exit
  -e          define env vars
  -v          verbose
  -d          remove resources
  -D          enable debug diags
  -r region   AWS region
  -n          dry run
  -askpeer    ask peer: tag owner reg vpc
  -yespeer    accept peer:tag pcxid
  -netpeer    set peer net: tag pcxid cidr sec
  -delpeer    del peer: pcxid
  -p          print open ports
  -o          open ports: tag proto port0 portn cidr name
  -c          close ports: tag proto port cidr
  -status     show instance status: tag [host]
  -start      start instances: tag [host]
  -stop       stop instances: tag [host]

Given a region, without any tags, it lists the tags installed:

unix$ lxaws -r us-east-1
xtest.aws.leanxcale.com

Given a tag, it lists the tag resources as found on AWS:

unix$ lxaws -r us-east-1 xtest.aws.leanxcale.com
#xtest.aws.leanxcale.com:
	vpc vpc-0bb89fa4f83fc69c6
	subnet subnet-0b5fb20a5372f89da
	igw igw-08c3cdec1dc865b84
	rtb rtb-0e40ace79169b2e08
	assoc rtbassoc-0248017196d4be19c
	sec sg-028614274a930d0ef
	inst i-041b70633666af01b	xtest1.aws.leanxcale.com	18.209.59.230
	vol vol-04310af65774fc5e7

It is also possible to supply just the base tag without the domain, as in

unix$ lxaws -r us-east-1 xtest

With flag -e, it prints commands to set environment variables with the resources found, as an aid to run other scripts.

unix$ lxaws -e -r us-east-1 xtest.aws.leanxcale.com
#xtest.aws.leanxcale.com:
	export vpc='vpc-0bb89fa4f83fc69c6'
	export subnet='subnet-0b5fb20a5372f89da'
	export igw='igw-08c3cdec1dc865b84'
	export rtb='rtb-0e40ace79169b2e08'
	export assoc='rtbassoc-0248017196d4be19c'
	export sec='sg-028614274a930d0ef'
	export inst='i-041b70633666af01b'
		export addr='peer11.aws.leanxcale.com'
	export vol='vol-04310af65774fc5e7'
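
Since the output consists of export commands, it can be evaluated directly to define the variables in the current shell. A sketch (load_aws_env is a hypothetical helper; only the export lines are evaluated, skipping the comment header):

```shell
# load_aws_env: run lxaws -e with the given arguments and evaluate
# the export lines it prints, defining vpc, subnet, inst, etc.
load_aws_env() {
    eval "$(lxaws -e "$@" | grep 'export ')"
}
```

For example: load_aws_env -r us-east-1 xtest.aws.leanxcale.com; echo "$vpc"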

When more than one tag is asked for, or more than one instance/volume is found, variable names are made unique adding a number to the name, for example:

unix$ lxaws -e peer1 peer2
#peer1.aws.leanxcale.com:
	export vpc0='vpc-0a50a6e989aa9da9a'
	export subnet0='subnet-0d9fd3a7d03eca61b'
	export igw0='igw-0af4279169fd8cab6'
	export rtb0='rtb-0f5d93a83239c3ada'
	export assoc0='rtbassoc-0e7d0f74cd780e121'
	export sec0='sg-01afb3d3c985f7881'
	export inst0='i-072326e86bcc77e9f'
		export addr0='peer11.aws.leanxcale.com'
	export vol0='vol-08ed1c4acdc0eae61'
#peer2.aws.leanxcale.com:
	export vpc1='vpc-023ce3e3c47bbb48f'
	export subnet1='subnet-0f9af7190d758d6d6'
	export igw1='igw-0ae3c860a69969a83'
	export rtb1='rtb-0d45f2059b4696cf4'
	export assoc1='rtbassoc-0a365cb4472f0b89e'
	export sec1='sg-04b122e86debcd735'
	export inst1='i-0b9cf8cff4b46d657'
		export addr1='peer21.aws.leanxcale.com'
	export vol1='vol-0f9f344a3a2b9bf38'

With flag -d, it removes the resources for the tags given. In this case, tags must be given explicitly in the command line.

unix$ lxaws -d -r us-east-1 xtest.aws.leanxcale.com

5.3. AWS Ports

To list, open, and close ports exported by the AWS install to the rest of the world, use lxaws flags -p (print ports), -o (open ports), and -c (close ports).

In all cases, the first argument is the tag for the install. The tag can be just the AWS install tag name, without .aws.leanxcale.com.

For example, this command lists the open ports:

unix$ lxaws -p xample.aws.leanxcale.com
port: web:	tcp 80	0.0.0.0/0
port: ssh:	tcp 22	0.0.0.0/0
port: comp:	tcp 14420	0.0.0.0/0

Here, the protocol and port (or port range) are printed for each set of open ports. The CIDR printed shows the IP address range that can access the ports; it is 0.0.0.0/0 when anyone can access them.

Before each port range, the name for the open port range is printed. This name is set by the default install, and can be set when opening ports as shown next.

To open a port range, use -o and give as arguments the tag for the install, the protocol, the first and last ports in the range, the CIDR (or any if open to everyone), and a name to identify why this port range is open (no spaces). For example:

unix$ lxaws -o xample.aws.leanxcale.com tcp 6666 6666 any opentest

The new port will be shown as open if we ask for ports:

unix$ lxaws -p xample.aws.leanxcale.com
port: web:	tcp 80	0.0.0.0/0
port: opentest:	tcp 6666	0.0.0.0/0
port: ssh:	tcp 22	0.0.0.0/0
port: comp:	tcp 14420	0.0.0.0/0

As another example:

unix$ lxaws -o xample tcp 8888 10000 212.230.1.0/24 another
unix$ lxaws -p xample
port: web:	tcp 80	0.0.0.0/0
port: ssh:	tcp 22	0.0.0.0/0
port: another:	tcp 8888-10000	212.230.1.0/24
port: comp:	tcp 14420	0.0.0.0/0

To close a port, use -c and give as arguments the tag for the install, the protocol, a port within the range of interest, and the CIDR used to export the port. Note that any can be used here too instead of the CIDR 0.0.0.0/0. For example:

unix$ lxaws -vc xample tcp 6666 any
searching aws...
close ports tcp 6666-6666 to 0.0.0.0/0

Here, we used the -v flag (verbose) to see what is going on.

As another example, this can be used to close the open port range 8888-10000 from the example above:

unix$ lxaws -c xample tcp 9000 212.230.1.0/24

5.4. AWS Instance status

To find out the instance statuses for a given AWS tag, use the -status flag for lxaws:

unix$ lxaws -status lxi24
#lxi24.aws.leanxcale.com:
	inst i-05e0708c0e4965ef0	lxi241.aws.leanxcale.com	stopped
	inst i-02bbf1473c01ea6ae	lxi242.aws.leanxcale.com	44.203.139.222	running

With just a tag name, it lists the instances for the given tag name.

Supplying extra arguments with a host name, IP, or instance ID, lists just the status for the given instances.

unix$ lxaws -status lxi24 lxi241
#lxi24.aws.leanxcale.com:
	inst i-05e0708c0e4965ef0	lxi241.aws.leanxcale.com	stopped

5.5. AWS Instance start/stop

Flags -start and -stop in lxaws can be used to start/stop instances. Their use is similar to that of the -status flag. Given a tag, all instances are started/stopped. Given a tag and a host (or more hosts), matching instances are started/stopped.

For example:

unix$ lxaws -stop lxi24
#lxi24.aws.leanxcale.com:
	inst i-05e0708c0e4965ef0	lxi241.aws.leanxcale.com	stopped

Note that this does not start/stop the DB in the instances. The command just starts/stops the instances.

Once instances are started, connect to one of them and use lx start to start the DB.

Before stopping the instances, connect to one of them and use lx stop to stop the DB.
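
The ordering when stopping can be scripted. A sketch, with illustrative tag, key, and host names: it stops the DB via ssh to one instance of the tag, and only then stops all instances for the tag:

```shell
# aws_shutdown: stop the DB first (via ssh to one instance of the
# tag), then stop the AWS instances themselves with lxaws.
aws_shutdown() {
    tag=$1; key=$2; host=$3
    ssh -o StrictHostKeyChecking=no -i "$key" "lx@$host" lx stop &&
    lxaws -stop "$tag"
}
```

For example: aws_shutdown xample tkey.pem xample1.aws.leanxcale.com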

5.6. AWS VPC Peering Connections

Peering connections can be used to bridge two VPCs at AWS.

One peer asks the other peer to accept a peering connection, the peer accepts the connection, and network routes and security group rules for access are configured.

Peering connections are handled using lxaws with the peering connection flags. If you do not have lxaws, copy lxinst to a file named lxaws and give it execution permissions.

These are the flags for peering connections:

  -askpeer    ask peer: tag owner reg vpc
  -yespeer    accept peer:tag pcxid
  -netpeer    set peer net: tag pcxid cidr sec
  -delpeer    del peer: pcxid

  • With flag -askpeer, lxaws requests a VPC peering connection.

  • With flag -yespeer, lxaws accepts a VPC peering request.

  • With flag -netpeer, lxaws sets up the routes and port access rules.

  • With flag -delpeer, lxaws removes a peering connection.

To request a peering connection, supply as arguments

  • the tag for the installed system where to setup a peer VPC.

  • the peer AWS owner id (user id).

  • the peer region

  • the peer VPC id

For example:

unix$ lxaws -askpeer  xample 232967442225 us-east-1 vpc-0cf1a3b5c1232d172
peervpc pcx-06548783d83ddaba9

Here, we could have used xample.aws.leanxcale.com instead. The command prints the peering connection identifier, to be used for setting up networking and asking the peer administrator to accept the peering request.

To accept a peer request, supply as an argument the peering connection id, as in:

unix$ lxaws -yespeer pcx-06548783d83ddaba9

In either case, once the dialed peer accepts the request, networking must be set supplying as arguments

  • the tag for the installed system where to setup a peer VPC.

  • the peering connection identifier

  • the peer CIDR block

  • the peer security group id

For example:

unix$ lxaws -netpeer xample pcx-06548783d83ddaba9 10.0.130.0/24 sg-0f277658c2328a955

Our local CIDR block is 10.0.120.0/24. This must be given to the peer, along with our VPC id and security group id, so the peer system can set up routing for this block to our network.

This information can be retrieved using lxaws as described in the previous section. For example:

unix$ lxaws xample.aws.leanxcale.com
#xample.aws.leanxcale.com:
	vpc vpc-0f69c4a92b0a78523
	peervpc pcx-0e88c49635ed2e59e
	subnet subnet-0ca70fab7476c2a04
	igw igw-0cd6a7bfa99981659
	rtb rtb-06e4994a57d37a054
	assoc rtbassoc-022b54b9a0216e9f5
	sec sg-0df3a6ec01a4ee5ee
	inst i-0b8144548f0e0f1d8	peer11.aws.leanxcale.com	44.200.78.14
	vol vol-0c40de22d908e40bd

Should it be necessary, the CIDR block used by the install can be set when installing the system (but not later), using the property awsnet, as in

unix$ lxinst -K mykey aws awsnet 10.0.120 awstag xample

Note that only the network address bytes (10.0.120) are given, instead of the full CIDR notation 10.0.120.0/24.

Once done with a peering connection, it can be dismantled by supplying both the tag and the peering connection identifier. The identifier must always be given because, when we are the side accepting a peering request, the peering connection does not belong to us; it can be retrieved easily using the command shown above.

When peering is no longer desired, the peering connection can be removed:

unix$ lxaws -delpeer xample pcx-005c2b84b89377737

Removing the peering connection also removes the routing entries for it and the security group access rules added when it was setup.

6. HP Greenlake Installs

We currently support VM installs at HP Greenlake. To install, first create the VM and then install on it as on bare metal, following the steps in Bare Metal Installs. Administering the database instance is the same as on bare metal: just follow the bare metal sections in this administration guide.

7. Distributed and Replicated Installs

In general, the system is used the same way as when installed on a single host. Refer to the section for the kind of install of interest to learn how to start, stop, and operate the system before reading this section.

As described in the reference manual, most commands take arguments to select particular replicas, hosts, or components; this includes the start and stop commands. On replicated installs it is important to start and stop the whole system.

Starting the whole system checks that replicas are synchronized and takes care of updating outdated metadata of a previously failing or stopped replica.

If a replica is not reachable and start cannot ensure that the system would start with the most recent metadata, the system will not start.

On distributed and replicated installs it is possible to ask start to proceed with just a single replica or a single host or set of components. This is done by calling start with arguments that name just a replica (or perhaps a host or a set of components).

On replicated installs, two useful names are repl1 and repl2, to ask a command to operate on the first or the second replica.

By convention, the first replica is the set of hosts configured that are not mirrors, and the second replica is the set of hosts that are mirrors of former hosts.

As an example, we use this configuration file

# lxinst.conf
host blade110
	kvds
host blade161
	mirror blade110

In this case, the first replica is just blade110 and the second replica is just blade161.

Installing the system on bare metal is done using

unix$ lxinst -f lxinst.conf

To start the system we execute

unix$ lx start
start...
blade110 [
	bin/spread -c lib/spread.conf ...
	forked bin/spread...
	...
]
blade161 [
	bin/spread -c lib/spread.conf ...
	forked bin/spread...
	...
]
unix$

We can ask for the system status or wait for a status as usual:

unix$ lx status
status: running

To stop the system:

unix$ lx stop
stop...
blade110: [
	stop: term lxqe100.r1 pid 460953
	stop: term lxmeta100.r1 pid 460932
	stop: term kvds100.r1 pid 460927
	stop: term kvms100.r1 pid 460923
	stop: term spread pid 460919
]
blade161: [
	stop: term lxqe100.r2 pid 250984
	stop: term lxmeta100.r2 pid 250959
	stop: term kvds100.r2 pid 250955
	stop: term kvms100.r2 pid 250950
	stop: term spread pid 250946
]

7.1. Partial Starts and Stops

When using multiple hosts and replication, it is possible to start and stop individual hosts or replicas and force the system to run using just those.

For example,

unix$ lx stop repl1
stop...
blade110: [
	stop: term lxqe100.r1 pid 443056
	stop: term lxmeta100.r1 pid 443035
	stop: term kvds100.r1 pid 443030
	stop: term kvms100.r1 pid 443026
	stop: term spread pid 443022
]

stops the processes in replica-1.

To start it again, we can proceed in a similar way:

unix$ lx start repl1
blade110 [
	bin/spread -c lib/spread.conf ...
	forked bin/spread...
	bin/spread: started pid 446756
	...
]
blade110 [
	kvds100.r1	pid 446764	alive disk open
	spread	pid 446756	alive
	lxmeta100.r1	pid 446769	alive starting
	lxqe100.r1	pid 446790	alive
	kvms100.r1	pid 446760	alive
]

Stopping a replica while the system is running is strongly discouraged. Using it again requires restoring the replica state to make it work with the rest of the system.

In this example, if the whole system was running when lx stop repl1 was used, starting repl1 again will reintegrate it into the system if possible.

However, if we have a fully stopped system, and run

unix$ lx start repl1

the system will run just the first replica. This will happen even if the second replica is not reachable and there is no way to ensure that the metadata in the first replica is up-to-date.

To ensure that the metadata is up-to-date in partial starts, use flag -c. This performs the same checks made when starting the whole system, and ensures that metadata is up to date, before attempting a start of the named replica or components.
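As a sketch (note that placing the -c flag right after start, before the replica name, is an assumption here):

```shell
#!/bin/sh
# Sketch: start only repl1, but run the whole-system metadata checks first.
# The flag placement (right after "start") is an assumption.
checked_start_repl1() {
	lx start -c repl1
}
```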

7.2. System Status and Replication

The command lx status reports the status for the system or waits for a given status, as described for other installs in this document. For example,

unix$ lx status
status: running

To see the replication mirror status for the system, use flag -r

unix$ lx status -r
status: running
replica: ok

And, to see detailed information about components, use flag -p, perhaps in addition to -r:

unix$ lx status -rp
status: running
replica: ok
kvds100.r1 mirror alive snap 1173999 running
kvds100.r2 mirror alive snap 1173999 running
kvms100.r1 mirror alive
kvms100.r2 mirror alive
lxmeta100.r1 mirror alive snap 1173999 running
lxmeta100.r2 mirror alive running
lxqe100.r1 mirror alive snap 929999 running
lxqe100.r2 mirror alive snap 916999 running

In this example, all components are running with their mirror set as ok.

When some components have failed, or part of the system has been stopped, the output differs.

For example, after

unix$ lx stop repl1

We can see

unix$ lx status -rp
status: running
replica: single
kvds100.r2 single alive snap 228678999 single running
kvds100.r1 outdated stopped snap 228161999
kvms100.r1 mirror stopped
kvms100.r2 mirror alive
lxmeta100.r2 mirror alive snap 362692999 running
lxmeta100.r1 mirror stopped snap 108718999
lxqe100.r2 mirror alive snap 362664999 running
lxqe100.r1 mirror stopped snap 227825999

The first thing to note here is that the replica is not ok, but single. This means we have single processes (without their mirrors) and the system is running in degraded mode.

Also, the replication status of kvds100.r2 is single. This means that it was running while its peer (kvds100.r1, in the first replica) was stopped.

Note how the replication status of kvds100.r1 is outdated. This means it passed away (failed or halted) while its peer was still in use. This server will not be used again until it has been brought up to date with respect to the rest of the system.

The same happens to lxqe100.r2, but in this case both query engines could synchronize their mirrors after restarting lxqe100.r1, and nothing else was needed for the restarted process to work with the rest of the system.
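The replication field reported by lx status -rp can also be checked from scripts to spot components that need attention. A minimal sketch, assuming the output format shown in the examples above:

```shell
#!/bin/sh
# Print the components whose replication state is "single" or "outdated",
# reading `lx status -rp` output (format as in the examples above) on stdin.
# NF > 2 skips the leading "status:" and "replica:" summary lines.
needs_recovery() {
	awk 'NF > 2 && ($2 == "single" || $2 == "outdated") { print $1 }'
}
```

For example, `lx status -rp | needs_recovery` would list kvds100.r2 and kvds100.r1 for the degraded output shown above.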

7.3. System Metadata and Replication

To inspect the status for a replicated system it is useful to look at DB metadata as stored on disk. On replicated systems the whole system is using a master metadata server (kvms), which synchronizes metadata with its mirror server.

Looking at the disk information may aid in diagnosing the state for the system when some replica is not running, or has been retired from service.

The dbmeta command can be used to do this. For example, after running

unix$ lx start repl1

on a replicated system previously halted, we can see

unix$ lx status -rp
status: running
replica: ok
kvds100.r1 mirror alive snap 163191999 running
kvds100.r2 mirror stopped snap 110005999
kvms100.r1 mirror alive
kvms100.r2 mirror stopped
lxmeta100.r2 mirror stopped snap 497103000
lxmeta100.r1 mirror alive snap 163670999 running
lxqe100.r1 mirror alive snap 163422999 running
lxqe100.r2 mirror stopped snap 109917999

The system has not been used, so the mirror status is still ok. The output for dbmeta returns what is known by the running kvms server:

unix$ lx dbmeta /srv
dbmeta...
# kvms100.r1
kvms blade110!14400
kvms blade161!14400
kvds ds100.r1 at blade110!14500  snap 172793999 rts 112774999
lxmeta mm100.r2 at blade161!14410  snap 497103000
kvds ds100.r2 at blade161!14500  snap 110005999 rts 109069999
lxmeta mm100.r1 at blade110!14410  snap 172793999
lxqe qe100.r1 at blade110!14420  snap 173016999
lxqe qe100.r2 at blade161!14420  snap 109917999

We asked just for metadata of servers using the /srv resource path name.

The interesting part is that we can ask for metadata as known by both replicas:

unix$ lx dbmeta -a /srv
dbmeta...
blade110:
	# kvms100.r1
	kvms blade110!14400
	kvms blade161!14400
	kvds ds100.r1 at blade110!14500  snap 174236999 rts 112774999
	lxmeta mm100.r2 at blade161!14410  snap 497103000
	kvds ds100.r2 at blade161!14500  snap 110005999 rts 109069999
	lxmeta mm100.r1 at blade110!14410  snap 174236999
	lxqe qe100.r1 at blade110!14420  snap 174457999
	lxqe qe100.r2 at blade161!14420  snap 109917999
blade161:
	# kvms100.r2
	kvms blade161!14400
	kvms blade110!14400
	kvds ds100.r1 at blade110!14500  snap 110005999 rts 109069999
	kvds ds100.r2 at blade161!14500  snap 110005999 rts 109069999
	lxmeta mm100.r2 at blade161!14410  snap 497103000
	lxmeta mm100.r1 at blade110!14410  snap 110005999
	lxqe qe100.r1 at blade110!14420  snap 109920999
	lxqe qe100.r2 at blade161!14420  snap 109917999

It can be seen how replica-2 (that for kvms100.r2) is way out of date, at least with respect to snapshots. This is not surprising, since it is stopped.

We can ask for the full metadata using

unix$ lx dbmeta -a

or for the metadata of a particular table or index.

7.4. Failures

When there is a failure, the system continues to operate using the mirror processes that remain alive.

Here we describe example failures, and provide details about repairing specific failed components. Then we describe how to use lx recover to try to restore things in a more convenient way.

For example, if qe100.r2 fails (we killed it to make it so), this can be seen:

unix$ lx status
status: running with failures

Further details are reported by flags -r (replication) and -p (process):

unix$ lx status -rp
status: running with failures
replica: ok
kvds100.r1 mirror alive snap 232084999 running
kvds100.r2 mirror alive snap 232084999 running
kvms100.r1 mirror alive
kvms100.r2 mirror alive
lxmeta100.r1 mirror alive snap 232564999 running
lxmeta100.r2 mirror alive running
lxqe100.r1 mirror alive snap 232416999 running
lxqe100.r2 mirror dead snap 228005999

Component lxqe100.r1 is alive and running, and lxqe100.r2 is dead.

Using now the database produces a change in status:

lxqe100.r1 single alive snap 267003999 single running
lxqe100.r2 outdated dead snap 228005999

This means that lxqe100.r1 is known to be single, i.e., it has been used while its mirror was dead or halted.

Also, lxqe100.r2 is known to be outdated, i.e., its mirror has been used while it was dead or halted.

7.5. Using the recover command

The command lx recover inspects metadata and tries to restore the disk for failed components.

It can be used before starting the system, to let lx start start an already restored replicated system.

As an example, with the example system running, we killed both kvds100.r1 and lxqe100.r2 and then stopped the system. This is the resulting server metadata:

unix$ lx dbmeta /srv
dbmeta...
# kvms100.r1
meta ts 15264000
kvms blade110!14400
kvms blade161!14400
lxmeta mm100.r2 at blade161!14410
kvds ds100.r1 at blade110!14500 fail snap 2101999
kvds ds100.r2 at blade161!14500 single snap 14804999
lxmeta mm100.r1 at blade110!14410  snap 15264000
lxqe qe100.r1 at blade110!14420 single snap 14736999
lxqe qe100.r2 at blade161!14420  snap 925999

A dry run (flag -n) for lx recover shows this:

unix$ lx recover -n
#disk...
copy blade110 kvds100.r2 blade161 kvds100.r1
clear kvds100.r2 flag single
clear kvds100.r1 flag fail
copy blade110 lxqe100.r1 blade161 lxqe100.r2
clear lxqe100.r1 flag single
clear lxqe100.r2 flag fail

We can use lx copy and lx dbmeta -w to copy disks and clear flags, but it is more convenient to run this command without the dry run flag once we decide to do so.

The recover command can take arguments to select what should be recovered, following the style of most other commands. For example:

unix$ lx recover kvds100
#recover...
copy blade110 kvds100.r2 blade161 kvds100.r1
clear kvds100.r2 flag single
clear kvds100.r1 flag fail

After a recover, the system can be started normally.

8. High Availability (Active-Active Replication)

High availability is attained by means of active-active replication (also called synchronous replication). This means that each write is made to each pair of replicas as part of the same transaction.

Thus, if one of the replicas fails, the other remains operational without any loss of data, unlike what can happen in a database with master-slave (also called asynchronous) replication. When one replica fails, the system keeps running and we say it is working in degraded mode: there is no longer high availability, and a failure of the replica still alive would mean that the database is down.

When the failed replica is recovered, it will be in sync with the live replica (same data) and the system will become fully operational, that is, highly available again.

Backups are performed with the same command used without high availability. On a fully operational highly available system, the backup takes one of the replicas (since both are identical). On a degraded system (one replica up and one down), the backup takes the data of the live replica. Note that if the system is shut down in degraded mode, only the live replica contains the up-to-date data; we call it single (as opposed to married, when both replicas are alive), and a backup taken in this state will back up the data of the single replica.

If the system was stopped in degraded mode, it can only be restarted with the single replica.

9. Monitoring Tool

Monitoring in LeanXcale is performed with Prometheus to collect metrics and Grafana as the dashboard. These tools allow administrators to gain complete visibility into the system’s health, quickly identify issues, and optimize performance in real time.

Prometheus takes care of metrics collection and storage. Prometheus is an open-source monitoring system designed to collect and store real-time numerical metrics. Thanks to this integration, integrating with other monitoring tools such as Datadog, Instana, or Site24x7 is straightforward. Relying on Prometheus, LeanXcale exposes detailed metrics across various aspects of its operation, including:

  • Database Performance: metrics related to the throughput of transactions and of read and write operations.

  • Resource Consumption: CPU, memory, and disk usage, at component, node, and cluster levels.

Prometheus collects these metrics through endpoints exposed by LeanXcale and stores them in its time-series store. This enables querying and analyzing performance data over historical periods, facilitating the identification of patterns and trends.

Grafana is devoted to the visualization and analysis of metrics. Grafana is a powerful data visualization tool integrated with Prometheus, providing an interactive and graphical environment for metric analysis. With Grafana, LeanXcale administrators can:

  • Create Custom Dashboards: users can build personalized dashboards that display the most relevant metrics according to their operational needs. These dashboards enable real-time monitoring of the system’s most critical metrics.

  • Configurable Alerts: Grafana allows setting up alerts based on specific thresholds for any metric collected by Prometheus. When a metric exceeds a predefined value, Grafana can send notifications through multiple channels (email, Slack, etc.), enabling teams to respond quickly to potential issues.

  • Historical Analysis: users can leverage Grafana to perform in-depth analysis of historical metrics, allowing comparisons of performance over different periods, identifying trends, and conducting detailed diagnostics of any incidents.

The combination of Prometheus and Grafana provides:

  • Real-Time Visibility: administrators gain a clear and up-to-date view of the database’s status, helping maintain high levels of performance and availability.

  • Proactive Diagnostics: with the ability to configure alerts and analyze historical data, administrators can identify and resolve issues before they severely impact end users.

  • Scalability: both Prometheus and Grafana are designed to handle large volumes of data, making them ideal for distributed and large-scale environments like LeanXcale.

10. LX Command

10.1. LeanXcale Commands

The command lx is a shell for running LeanXcale control programs. It simply sets up the environment for the installed host and runs the command given as an argument:

usage: lx [-d] cmd...

The command operates on the whole LeanXcale system, even when multiple hosts are used.

It is convenient to have lx in the PATH environment variable, as suggested in the install program output.

Command output usually includes information on a per-host basis reporting the progress of the used command.

Most commands follow the same conventions regarding options and arguments. We describe them here as a convenience.

Arguments specify what to operate on (e.g., what to start or stop). They may be empty to rely on the defaults (the whole DB), or may name a particular host and/or component:

  • when component names are given, only those components are involved (e.g., lxqe101).

  • when a component type name is given, all components of that type are selected (e.g., lxqe).

  • when a host name is given, any component following is narrowed to that host. If no components follow the host name, all components from the host are selected.

This may be repeated to specify different hosts and/or components.

The special host names db, repl1, and repl2 may be used; they stand for hosts without the nodb attribute, hosts in the first replica, and hosts in the second replica (hosts that are mirrors of other ones), respectively.
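As a sketch of these conventions, using host and component names from examples elsewhere in this guide:

```shell
#!/bin/sh
# Sketch: argument selection for lx commands, with example names
# (blade110, kvds, lxqe101, repl2 are taken from examples in this guide).
selection_examples() {
	lx procs lxqe             # all components of type lxqe
	lx procs lxqe101          # just that component
	lx procs blade110         # every component on host blade110
	lx procs blade110 kvds    # only the kvds components at blade110
	lx procs repl2            # every component in the second replica
}
```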

10.1.1. LeanXcale Commands on Bare Metal Installs

For bare metal installs, it suffices to have the lx command in the PATH. It can run on any of the installed hosts.

For example, on an installed host, lx version prints the installed version:

unix$ lx version
leanXcale v2.2
    kv         v2.2.2023-09-29.115f5fba70e3af8dc203953399088902c4534389
    QE         v2.2.2023-09-30.1e5933900582.26a7a5c3420cd3d5d589d1fa6cc
    libs       v2.2.2023-09-29.67535752acf19e092a6eaf17b11ad17597897956
    avatica    v2.2.2023-09-27.0b0a786b36e8bc7381fb2bb01bc8b3ed56f49172
    TM         v2.2.2023-09-29.9a9b22cfdc9b924dbc3430e613cddab4ed667a57

10.1.2. LeanXcale Commands on Docker Installs

To use the lx command on a docker install, an installed container must be running, and the command must be called on it.

For example, assume that the container named lx1 is running on a Docker install. The container could be started using the following command, assuming the leanXcale image is named lx:2, and the docker network used is lxnet:

unix$ docker run -dit --name lx1 --network lxnet -p0.0.0.0:14420:14420 lx:2 lx1
b28d30702b80028f8280ed6c55297b2e203540387d3b4cfbd52bc78229593e27

It is possible to attach to the container and use the lx command as it can be done on a bare metal host install:

unix$ docker attach lx1
lx1$ lx version
...

Here, we type docker attach lx1 on the host, and lx version on the docker container prompt.

Note that if you terminate the shell reached when attaching to the docker container, the container will stop. Usually, this is not desired.

It is possible to execute commands directly on the running container. For example:

unix$ docker exec -it lx1 lx version

executes lx version on the lx1 container.

In what follows, lx1 is used as the container name in the examples for docker installs.

10.1.3. LeanXcale Commands on AWS Installs

Using lx on AWS hosts is similar to using it on a bare-metal install. The difference is that you must connect to the AWS instance to run the command there.

For example, after installing xample1.aws.leanxcale.com, and provided the PEM file can be found at xample.pem, we can run this:

unix$ ssh -i xample.pem xample1.aws.leanxcale.com lx version

to see the installed version.

In what follows, xample.pem is used as the PEM file name and xample1.aws.leanxcale.com is used as the installed instance name, for all AWS install examples.

11. Starting the System

11.1. Bare Metal System Start

The start command starts LeanXcale:

unix$ lx start
start...
atlantis [
    cfgile: /ssd/leandata/xamplelx/lib/lxinst.conf...
    bin/spread -c lib/spread.conf ...
    forked bin/spread...
    bin/spread: started pid 1056053
    bin/kvms -D 192.268.1.224!9999 /ssd/leandata/xamplelx/disk/kvms100/kvmeta ...
    forked bin/kvms...
    ...
]
atlantis [
    kvds103 pid 1056084 alive
    kvms100 pid 1056057 alive
    spread pid 1056053 alive
    kvds102 pid 1056075 alive
    kvds100 pid 1056062 alive
    kvds101 pid 1056066 alive

]
unix$

Here, atlantis started a few processes and, once done, the start command checked that the processes are indeed alive.

In case not all components can be started successfully, the whole LeanXcale system is halted by the start command.

By default, no watcher or automatic restart is set up. Using flag -r asks start to restart any QE that was running, failed, and was not restarted less than one minute ago.

Using flag -w asks start to run lx watch. The watch tool will wait until the system becomes operational and, upon failures, try to restart the whole system.

To start a single host or component, use its name as an argument, like in:

# start the given host
unix$ lx start atlantis
# start the named components
unix$ lx start kvds
# start the named components at the given host
unix$ lx start atlantis kvds

Start does not wait for the system to be operational. To wait until the system is ready to handle SQL commands, the status command can be used with the -w (wait for status) flag, as in:

unix$ lx status -w running
status: running

Without the -w flag, the command prints the current status, which can be stopped, failed, running, or waiting.
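From scripts, the status word can be extracted from the command output. A minimal sketch, assuming the one-line status format shown above:

```shell
#!/bin/sh
# Extract the status word (stopped, failed, running, or waiting)
# from `lx status` output read on stdin; extra per-component detail
# lines (as in the "waiting" example) are ignored.
db_status() {
	sed -n 's/^status: //p' | head -1
}
```

For example, `lx status | db_status` would print just `running` on a running system.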

11.2. Docker System Start

To start LeanXcale installed on Docker containers, you must start the containers holding the installed system components.

For example, consider the default docker install

unix$ lxinst docker
...
install done
docker images:
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
uxbase       2         7c8262008dac   3 months ago   1.07GB
lx           2         cafd60d35886   3 seconds ago   2.62GB

docker network:
NETWORK ID     NAME      DRIVER    SCOPE
a8628b163a21   lxnet     bridge    local
to start:
	docker run -dit --name lx1 --network lxnet lx:2 lx1

The install process created a docker image named lx:2, installed for the docker host lx1, and the docker network lxnet.

To list the image we can

unix$ docker images lx
REPOSITORY   TAG       IMAGE ID       CREATED              SIZE
lx           2         75b8c9ffa245   About a minute ago   2.62GB

And, to list the networks

unix$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
a8628b163a21   lxnet     bridge    local

A single image is created for all containers. The name given when creating a container determines the host name used. The install process specified the host names, and containers must be started using the corresponding host name(s), so they know which leanXcale host they are for.

For example, to start the container for lx1:

unix$ docker run -dit --name lx1 --network lxnet -p0.0.0.0:14420:14420 lx:2 lx1
b28d30702b80028f8280ed6c55297b2e203540387d3b4cfbd52bc78229593e27

In this command, the container name is lx1, the network used is lxnet, and the image used is lx:2. The port redirection -p…​ exports the SQL port to the underlying host.

Listing docker processes now shows the running container:

unix$ docker ps
CONTAINER ID   IMAGE     COMMAND             STATUS          PORTS  NAMES
e81d9d01f40a   lx:2      "/bin/lxinit lx1"   Up 56 seconds   14410  lx1

It is important to know that:

  • starting the container will start leanXcale if a valid license was installed;

  • stopping the container should be done after stopping leanxcale in it.
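The second point can be captured in a small helper. This is a sketch only, using the container name lx1 from these examples:

```shell
#!/bin/sh
# Stop leanXcale inside the container before stopping the container itself,
# so docker does not time out the stop while the DB is still writing to disk.
stop_lx1() {
	docker exec -it lx1 lx stop &&
	docker stop lx1
}
```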

The container name (lx1) can be used to issue commands. For example,

unix$ docker exec -it lx1 lx version
leanXcale v2.1 unstable
	kv         v2.1.2.14-02-15.c26f496706918e610831c02e99da3676a1cffa47
	lxhibernate v2.1.2.14-02-07.f65c5a628afede27c15c77df6fbbccd6d781d3ee
	TM         v2.1.2.14-02-06.bfc9f92216481dd05f51900ac522e5ccfb6d2555
	QE         v2.1.2.14-02-15.4a8ff4200dc3d3656c8469b6f74c05a296fbdfb3
	avatica    v2.1.2.14-02-14.1c442ac9e630957ace3fdb5c4faf92bb85510099
	...

executes lx version on the lx1 container.

The status for the system can be seen in a similar way:

unix$ docker exec -it lx1 lx status
status: running

Note that the container will not start the DB if no valid license is found.

11.3. AWS System Start

To start LeanXcale installed on AWS, you must start the AWS instances holding the installed system components.

This can be done by hand using the AWS console, or using the lxaws command.

For example, after installing using xample as an AWS tag, this command starts the instances:

unix$ lxaws -start xample

Once instances are started, the lx command is available at any of them.

For example, provided the PEM file can be found at xample.pem, we can run this:

unix$ ssh -i xample.pem xample1.aws.leanxcale.com lx version

to see the installed version. Here xample1 is the DNS host name registered as the first host for the install AWS tag xample. In the same way, xample2 would be the name for the second host, and so on.

12. Checking System Status

12.1. Bare Metal System Status

The command lx status reports the status for the system or waits for a given status. For example,

unix$ lx status
status: waiting
	kvds100: recovering files
	kvds101: recovering files

Or, to wait until the status is running:

unix$ lx status -v -w running
status: waiting
	kvds100: recovering files
	kvds101: recovering files
status: running
unix$

To see the status for each one of the processes in the system, use lx procs. For example:

unix$ lx procs
procs...
atlantis [
    kvds103 pid 1057699 alive running
    kvms100 pid 1057672 alive running
    spread pid 1057668 alive
    kvds102 pid 1057690 alive running
    kvds100 pid 1057677 alive running
    kvds101 pid 1057681 alive running

]

12.2. Docker System Status

Before looking at the LeanXcale system status, it is important to look at the status of the docker containers running LeanXcale components.

unix$ docker ps
CONTAINER ID   IMAGE     COMMAND             STATUS          PORTS  NAMES
e81d9d01f40a   lx:2      "/bin/lxinit lx1"   Up 56 seconds   14410  lx1

When containers are running, the command lx status reports the status for the system or waits for a given status. For example,

unix$ docker exec -it lx1 lx status

executes lx status on the lx1 container. The status is reported for the whole system, and not just for that container.

To wait until the status is running:

unix$ docker exec -it lx1 lx status -v -w running
status: waiting
	kvds100: recovering files
	kvds101: recovering files
status: running

To see the status for each one of the processes in the system, use lx procs. For example:

unix$ docker exec -it lx1 lx procs
procs...
atlantis [
    kvds103 pid 1057699 alive running
    kvms100 pid 1057672 alive running
    spread pid 1057668 alive
    kvds102 pid 1057690 alive running
    kvds100 pid 1057677 alive running
    kvds101 pid 1057681 alive running

]

12.3. AWS System Status

Before looking at the LeanXcale system status, it is important to look at the status of the AWS instances running LeanXcale components.

This can be done using the lxaws -status flag with the installed AWS tag name:

unix$ lxaws -status xample
#xample.aws.leanxcale.com:
	inst i-02bbf1473c01ea6ae	xample2.aws.leanxcale.com	stopped
	inst i-05e0708c0e4965ef0	xample1.aws.leanxcale.com	54.84.39.77	running

When instances are running, the command lx status reports the status for the system or waits for a given status. For example,

unix$ ssh -i xample.pem xample1.aws.leanxcale.com lx status

to see the system status.

To wait until the status is running:

unix$ ssh -i xample.pem xample1.aws.leanxcale.com lx status -v -w running
status: waiting
	kvds100: recovering files
	kvds101: recovering files
status: running

To see the status for each one of the processes in the system, use lx procs. For example:

unix$ ssh -i xample.pem xample1.aws.leanxcale.com lx procs
procs...
atlantis [
    kvds103 pid 1057699 alive running
    kvms100 pid 1057672 alive running
    spread pid 1057668 alive
    kvds102 pid 1057690 alive running
    kvds100 pid 1057677 alive running
    kvds101 pid 1057681 alive running

]

13. Stopping the System

13.1. Bare Metal System Stop

The stop command halts LeanXcale:

unix$ lx stop
stop...
atlantis [
    kvcon[1056801]: halt
]
atlantis [
    term 1056062 2056066 1056075 1056084 1056057 1056053...
    kill 1056062 2056066 1056075 1056084 1056057 1056053...

]
unix$

13.2. Docker System Stop

Stopping the LeanXcale containers should be done after stopping leanXcale. The reason is that docker might time out the stop operation if the system is too busy updating the disk during the stop procedure.

To stop the database,

unix$ docker exec -it lx1 lx stop

stops the components for the whole system (lx1 being an installed container).

We can double-check this:

unix$  docker exec -it lx1 lx status
status: stopped

Once this is done, we can stop the docker container.

unix$ docker ps
CONTAINER ID   IMAGE     COMMAND             STATUS          PORTS  NAMES
e81d9d01f40a   lx:2      "/bin/lxinit lx1"   Up 56 seconds   14410  lx1
unix$ docker stop lx1
lx1
unix$ docker ps
unix$

We can also remove the container; note that doing this removes all data in the container as well.

unix$ docker rm lx1
lx1
unix$

13.3. AWS System Stop

To stop leanXcale on AWS, you must stop leanXcale before stopping the AWS instances running it. For example:

unix$ ssh -i xample.pem xample1.aws.leanxcale.com lx stop

This stops the system on all the instances it uses.

Then, the instances can be stopped. This can be done on the AWS console, or using the lxaws -stop flag with the installed AWS tag name:

unix$ lxaws -stop xample
#xample.aws.leanxcale.com:
	inst i-02bbf1473c01ea6ae	xample2.aws.leanxcale.com	stopping
	inst i-05e0708c0e4965ef0	xample1.aws.leanxcale.com	stopping

14. Start & Stop Particularities on Different Installs

System start depends on how the system has been installed. For bare-metal installations, the administrator installing the system is responsible for adding a system service that brings LeanXcale into operation when the machine starts, and stops LeanXcale before halting the system.
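As an illustration, such a service could be a systemd unit along these lines. This is only a sketch: the unit name, the lx user, and the /usr/local/leanxcale paths are assumptions about a typical install, not something shipped with the distribution.

```ini
# /etc/systemd/system/leanxcale.service  (hypothetical unit)
[Unit]
Description=LeanXcale database
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
User=lx
# lx start brings the DB up; lx stop halts it before system shutdown
ExecStart=/usr/local/leanxcale/bin/lx start
ExecStop=/usr/local/leanxcale/bin/lx stop
# give LeanXcale time to flush to disk on stop
TimeoutStopSec=600

[Install]
WantedBy=multi-user.target
```

After installing a unit like this, it would be enabled with systemctl enable leanxcale, following the usual systemd conventions.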

For AWS installations, LeanXcale is added as a service, disabled by default. Do not use the service on multi-host installs: the service starts/stops the DB, and that requires the DB processes to be accessible, which might not be the case with multiple instances.

When the service is enabled, starting the instance starts the LeanXcale service, and stopping the instance stops LeanXcale before the instance stops.

Otherwise, starting LeanXcale requires logging into one of the installed instances and issuing the lx start command.

For Docker installations, starting a container starts the LeanXcale service on it, and, for safety, LeanXcale should be halted before halting the container (otherwise Docker might time out and stop the container before LeanXcale has fully stopped).

15. Configuring the System

The lx config command prints or updates the configuration used for the LeanXcale system:

unix$ lx config
cfgile: /usr/local/leanxcale/lib/lxinst.conf...
#cfgfile /usr/local/leanxcale/lib/lxinst.conf
host localhost
    lxdir /usr/local/leanxcale
    JAVA_HOME /usr/lib/jvm/java-1.11.0-openjdk-amd64
    addr 127.0.0.1
    odata 100
        addr localhost!14004
    kvms 100
        addr 127.0.0.1!14400
    lxmeta 100
        addr 127.0.0.1!16500
    lxqe 100
        addr 127.0.0.1!16000
    kvds 100
        addr 127.0.0.1!15000
    kvds 101
        addr 127.0.0.1!15002
    kvds 102
        addr 127.0.0.1!15004
    kvds 103
        addr 127.0.0.1!15006

The configuration printed provides more details than the configuration file (or command line arguments) used to install.

NB: when installing into AWS or docker, the configuration printed might lack the initial aws or docker property used to install it.

It is possible to ask for particular config entries, like done here:

unix$ lx config kvms addr

It is also possible to adjust configured values, like in

unix$ lx config -s lxqe mem=500m

used to adjust the mem property to 500m in all configured lxqe components.

16. System Recovery

The lxmeta component watches the status for other components and will stop the system when there is a failure that cannot be recovered online.

Should the system crash or fail-stop, upon a system restart, lxmeta will guide the system recovery.

At start time, each system component checks out its on-disk information and decides to start either as a ready component or as a component needing recovery.

The lxmeta process guides the whole system start following these steps:

  • Wait for all required components to be executing.

  • Look up the component status (ready/recovering).

  • If there are components that need recovery, their recovery process is executed.

  • After all components are ready, the system is made available by accepting queries.

The command lx status can be used both to inspect the system status and the recovery process, and to wait until recovery finishes and the system becomes available for queries.
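For scripted waits, the same effect can be approximated by polling. The helper below is a generic sketch: the command polled and the string matched are assumptions, to be adjusted to the actual lx status output of your install.

```shell
# Poll a command until its output contains a wanted string, or give up.
# WAIT_TRIES / WAIT_SLEEP override the defaults (30 tries, 1s apart).
wait_until() {
    want=$1; shift
    tries=${WAIT_TRIES:-30}
    while [ "$tries" -gt 0 ]; do
        case "$("$@" 2>/dev/null)" in
        *"$want"*) return 0 ;;
        esac
        tries=$((tries - 1))
        sleep "${WAIT_SLEEP:-1}"
    done
    return 1
}

# e.g., on an installed host (hypothetical status string):
# wait_until running lx status
```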

17. System Logs

Logs are kept in a per-system directory named log, located at the install directory for each system.

Log files have names similar to

	kvms100.240214.1127.log

Here, the component name comes first, and then, the date and time when the log file was created. When a log file becomes too big, a new one is started for the given component.
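Since the name encodes the component and the creation time, the parts can be recovered with plain shell parameter expansion. This is a small illustration, not part of the lx tooling:

```shell
# Split <component>.<yymmdd>.<hhmm>.log into its parts.
name=kvms100.240214.1127.log
base=${name%.log}        # kvms100.240214.1127
comp=${base%%.*}         # kvms100
stamp=${base#"$comp".}   # 240214.1127
yymmdd=${stamp%.*}       # 240214
hhmm=${stamp#*.}         # 1127
echo "$comp $yymmdd $hhmm"   # prints: kvms100 240214 1127
```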

It is convenient to use the lx logs command to list and inspect logs. It takes care of reaching the involved log files on the involved hosts.

For example, to list all log files:

	unix$ lx logs
	logs...
	atlantis: [
		log/kvds100.240214.1127.log	250.00K
		log/kvms100.240214.1127.log	35.39K
		log/lxmeta100.240214.1127.log	24.10K
		log/lxqe100.240214.1127.log	445.80K
		log/spread.240214.1127.log	963
		log/start.log	426
	]

Here, atlantis was the only system installed.

We can give host and/or component names as in many other commands to focus on those systems and/or components.

For example, to list just the logs for kvms processes:

	unix$ lx logs kvms
	logs...
	atlantis: [
		log/kvms100.240214.1127.log	35.39K
	]

Or, to list only those for the kvms100:

	unix$ lx logs kvms100

To list logs for the atlantis host:

	unix$ lx logs atlantis

To list logs for kvms components within atlantis:

	unix$ lx logs atlantis kvms

To list logs for kvms components at atlantis and kvds at orion:

	unix$ lx logs atlantis kvms orion kvds

With flag -p, logs are printed in the output instead of being listed.

	unix$ lx logs -p lxmeta
	atlantis: [
		log/lxmeta100.240214.1127.log	24.10K [
			# pid 3351755 cmd bin/javaw com.leanxcale.lxmeta.LXMeta -a atlantis!14410
	...

Flag -g greps the logs for lines with the given expression. For example:

	unix$ lx logs -g fatal

And, flag -c copies the logs to the given directory

	unix$ lx logs -c /tmp

When printing or copying logs, only the last log file for each component is used. To operate on all the logs, and not just the last one, add flag -a:

	unix$ lx logs -a -c /tmp

18. Backups

Backups can be made to an external location (recommended, to tolerate disk failures) or within an installed host. External backups (i.e., to an external location) are made using the lxdisk tool, installed in the bin directory, which works with a given configuration file. Internal backups (i.e., to a directory on an installed host) are made using the lx disk tool instead.

usage: lxdisk [-h] [-v] [-D] [-n] [-d dir] [-F] [-f cfgfile] [-i] [-e] [-r]
              [-o]
              cmd [what [what ...]]

leanXcale disk tool

positional arguments:
  cmd         cmd: disk command
  what        cmd args

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -n          dry run
  -d dir      root backup dir
  -F          force command
  -f cfgfile  config for the install
  -i          incremental backups
  -e          encrypt
  -r          restore
  -o          online backup

	disk commands (short names):
		backup (bck): full backup
			args: none, or host/comps under -F
		list (lst, l): list backups
			args: [backupname [comp...]]
		restore (res): restore a backup
			args: [last|backupname [repl|host|comp...]]
		recover (rec): recover replica disks
			args: [repl|host|comp...]
		copy (cp): copy disk files
			args: src... dst
			src/dst: repl|host|comp|path

Using lxdisk is exactly like using lx disk with a few differences:

  • Flag -f is mandatory on the first invocation and provides the installed configuration file.

  • The default backup directory is not $LXDIR/dump, but ./lxdump.

  • The command kvtar must be available at the host running lxdisk.

The configuration file used must be the one retrieved using lx config, and not the one written by the user to perform the install, because lx config reports addresses and details needed by the lxdisk command.

Encryption/decryption happens at the source/destination of data when creating/extracting backups. Therefore, there is no key kept at the host running lxdisk.

As a convenience, the command can be copied to files lxbackup, lxbackups (for list), lxrestore, lxrecover, and lxcopy. When using those names, the command usage adapts to the respective command. For example:

usage: lxbackup [-h] [-v] [-D] [-n] [-d dir] [-F] [-f cfgfile] [-i] [-e] [-r]
                [-o]
                [what [what ...]]

make a backup

positional arguments:
  what        host/comps

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -n          dry run
  -d dir      root backup dir
  -F          force command
  -f cfgfile  config for the install
  -i          incremental backups
  -e          encrypt
  -r          restore
  -o          online backup
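The copies can be created in one go. This sketch assumes it runs in the directory holding lxdisk (the guard keeps it harmless elsewhere):

```shell
# Make the convenience copies of lxdisk named after each disk command.
if [ -f lxdisk ]; then
    for cmd in lxbackup lxbackups lxrestore lxrecover lxcopy; do
        cp lxdisk "$cmd"
    done
fi
```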

To prepare a host to use lxdisk, get the lxdisk command, the configuration, and the kvtar command and copy them to the host.

The lxdisk and kvtar commands can be found at the installed $LXDIR/bin directory on any installed host. If you are not sure regarding the $LXDIR value, use this command to find it:

unix$ lx -d pwd
/usr/local/leanxcale
unix$

Flag -d for lx makes it change to the $LXDIR directory before running the given command.

The detailed configuration file to be used is kept at $LXDIR/lib/lxinst.conf. The configuration can be retrieved using the lx config command. For example, this creates ./lxinst.conf with the detailed configuration:

unix$ lx config -o lxinst.conf
saved lxinst.conf

As an example, we can set up an external host named orion to perform backups in this way:

unix$ lx -d pwd
/usr/local/leanxcale
unix$ lx config -o lxinst.conf
saved lxinst.conf
unix$ dir=/usr/local/leanxcale
unix$ scp $dir/bin/lxdisk $dir/bin/kvtar lxinst.conf orion:~

And then just:

orion$ lxdisk -f lxinst.conf backup

Or perhaps:

orion$ cp lxdisk lxbackup
orion$ lxbackup -f lxinst.conf

To create a full, cold backup when LeanXcale is not running:

orion$ lxbackup -f lxinst.conf
#disk...
new 240809
make atlantis disk/kvms100 /usr/local/leanxcale/dump/240809/kvms100
make atlantis disk/kvds100 /usr/local/leanxcale/dump/240809/kvds100
make atlantis disk/kvds101 /usr/local/leanxcale/dump/240809/kvds101
make atlantis disk/lxqe100/log /usr/local/leanxcale/dump/240809/lxqe100
orion$

Flag -f must be used at least the first time. Once a backup has been made, the configuration is saved and there is no need to supply it again.

The printed name is the name for the directory keeping the backup, as used when restoring it. In this case, 240809.

The files kept in the backup are compressed, and must be uncompressed if copied by hand. The restore command takes care of this.

Using flag -e both encrypts and compresses the backup files. Note that the flags belong to the disk tool itself, while backup is the argument given to it (hence lxdisk -e backup).

The key used to encrypt the backed files is kept at $LXDIR/lib/lxkey.pem on the installed sites as set up by the installer.

orion$ lxdisk -e backup
#disk...
new 24080901
make atlantis disk/kvms100 /usr/local/leanxcale/dump/24080901/kvms100 crypt
make atlantis disk/kvds100 /usr/local/leanxcale/dump/24080901/kvds100 crypt
make atlantis disk/kvds101 /usr/local/leanxcale/dump/24080901/kvds101 crypt
make atlantis disk/lxqe100/log /usr/local/leanxcale/dump/24080901/lxqe100 crypt

To perform a backup while the system is running, supply flag -o (for online).

18.1. Incremental Backups

To create an incremental backup use the flag -i of lxdisk backup or lxbackup.

orion$ lxbackup -i
#disk...
new 24080902+
incr: lxqe100 += lxqe100

The output reports which components got files backed up.

The incremental backup is encrypted if the total backup it refers to is encrypted too. No flag -e should be given.

The incremental backup can be performed while the system is running.

18.2. Listing Backups

To list the backups known, use the lxdisk list command or lxbackups:

orion$ lxdisk list
#disk...
240810 ts 1419000
24081002+ ts 1419000
...

Those with a + in their names are incremental backups.

Verbosity can be increased by adding one or more -v flags:

orion$ lxbackups -v
#disk...
240810 ts 1419000
240810/kvms100 ts 2551000
240810/kvds100 ts 1292000
...
orion$ lxbackups -vv
#disk...
240810 ts 1419000
240810/kvms100 ts 2551000
240810/kvms100/kvmeta sz 239 ts 2551000 mt 1723285636
240810/kvds100 ts 1292000
240810/kvds100/dbf00001.kv sz 393216 ts 1419000 mt 1723285636
...
orion$ lxbackups -vvv
#disk...
240810 ts 1419000
	240810/kvms100 ts 2551000
		240810/kvms100/kvmeta sz 239 mt 1723285636
			localhost
			ts 1 2551000 bck 0 0 na 0
	240810/kvds100 ts 1292000
		240810/kvds100/dbf00001.kv sz 393216 mt 1723285636
			localhost
			ts 1292000 1419000 bck 0 0 na 3
			tname db-APP-PERSONS
			rmin: tpmin
			rmax: tpmax
...

It is possible to list a single backup by supplying its name and/or specific components, using arguments as done with most lx commands:

orion$ lxdisk list 24080901

or

orion$ lxbackups 24080901 lxqe

18.3. Removing Old Backups

To remove a backup, it suffices to remove its directory. For example:

unix$ lx -d rm -rf dump/230720

Here we used flag -d for lx to change to $LXDIR before executing the remove command, which makes it easy to name the directory used for the dump.

Or, from our external backup example host:

orion$ rm -rf lxdump/230720.2

Beware that if you remove a backup, you should also remove the incremental backups that follow it, up to the next total backup.

18.4. Restore

To restore a backup, use the restore command.

orion$ lxdisk -v restore
#disk...
check...
restore 24080904...
restore atlantis disk/kvds100 from /usr/local/leanxcale/dump/24080904/kvds100 crypt
dbf00001.kv
dbf00003.kv
...

Or perhaps

orion$ cp lxdisk lxrestore
orion$ lxrestore -v

Do this while the system is stopped. By default, it selects the last backup made.

To restore a particular backup, supply its name:

orion$ lxrestore 24080904

This can be done also for incremental backups. When restoring an incremental backup, the restore takes data also from previous incremental backups and the corresponding total backup.

To restore only specific hosts or components, supply their names or types as done for other commands:

orion$ lxrestore lxqe

Or perhaps:

orion$ lxrestore 24080904 lxqe

To restore a backup, it is usually desirable to format the disks for the involved components before restoring their contents from the backup.

18.5. Backup Automation

To automate system backups, use crontab(1) to run lx disk (when backing up within an installed host) or lxdisk (at an external host) at the desired times.
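As a sketch, a nightly cron job at the external backup host could make the backup and then prune old ones, honoring the earlier rule about removing incrementals together with the full backup they follow. The paths, retention count, and crontab entry here are assumptions:

```shell
#!/bin/sh
# Hypothetical rotation helper; crontab entry to run it nightly at 03:00:
#   0 3 * * * $HOME/bin/lx-nightly-backup

# Remove backups so that only the $2 newest full backups (and their
# incrementals) remain under the backup root $1. Backup names sort
# oldest-to-newest, and incrementals sort after the full backup they follow.
prune_backups() {
    dir=$1 keep=$2
    set -- $(ls "$dir" | grep -v '+$' | LC_ALL=C sort)   # full backups only
    if [ $# -le "$keep" ]; then return 0; fi
    shift $(($# - keep))
    cutoff=$1                     # oldest full backup to keep
    for b in $(ls "$dir" | LC_ALL=C sort); do
        if [ "$b" = "$cutoff" ]; then break; fi
        rm -rf "$dir/$b"
    done
}

# Uncomment at a real backup host:
# $HOME/lxdisk backup && prune_backups "$HOME/lxdump" 7
```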

19. Reporting Issues

To report an issue, use lx report to gather system information. This program collects information from the system and builds an archive to be sent to support.

unix$ lx report
report: lxreport.231009...
version...
procs...
logs...
stacks...
stack lxmeta100...
stack kvds103...
stack kvms100...
stack spread...
stack kvds102...
stack kvds100...
stack kvds101...
stack lxqe100...

# send this file to support.
-rw-rw-r-- 1 leandata leandata 54861 Oct  9 14:58 lxreport.231009.tgz

As printed by the command output, the resulting tar file should be sent to support.

The archive includes:

  • installed version numbers

  • underlying OS names and versions

  • complete disk usage for the installed systems

  • complete process list for the installed systems

  • memory usage for the installed systems

  • lx process list

  • logs for components (last log file only, for each one)

  • stacks for each component

  • stacks for each core file found

When kvms is still running, the archive also includes:

  • statistics for the system

  • long list of kv resources

  • process list for each kvds

  • file list for each kvds

20. Command Reference Manual

This section details all the commands available and how to use them, starting with the install command.

20.1. lxinst

This is the install program:

usage: lxinst [-h] [-v] [-D] [-s] [-m] [-n] [-f cfgfile] [-d dist] [-k key]
              [-K awspem] [-u location] [-c] [-i]
              [where ...]

leanXcale installer

positional arguments:
  where        [aws|docker|stats] dir, host, or host:dir

options:
  -h, --help   show this help message and exit
  -v           verbose
  -D           enable debug diags
  -s           small install
  -m           medium install
  -n           dry run
  -f cfgfile   use this configuration file
  -d dist      distrib file, dir, or url
  -k key       key to download the distribution.
  -K awspem    AWS pem file name w/o .pem
  -u location  update inst at location
  -c           clean. download everything
  -i           ignore system limits

The given arguments specify where to install the DB. They may be paths to directories on the local host, host names, or host and directory names using the syntax host:dir.

Instead of arguments, a configuration file may be given using the -f flag. The configuration file has been described before in this document.

Flag -k may be used to specify the key to download the distribution.

Flag -K must be used when installing on AWS, to supply the path (without the file name extension) to the pem/pub key files.

By default, the distribution is retrieved from the leanXcale artifactory. However, it is possible to supply one or more times the -d flag giving it the name for a .tgz file, a directory, or a URL. The special argument leanxcale stands for the official repository for the distribution.

Distribution packages will be retrieved by looking at those distribution sources. By convention, the first source is usually the ./lxdist directory, used to keep the distribution once downloaded.

For example, this installs just what is downloaded at ./lxdist, without downloading anything else:

unix$ lxinst -d ./lxdist -f lxinst.conf

Once downloaded, files are not downloaded again. Flag -c cleans the ./lxdist directory to force a download of everything. If a different directory is specified by the user, it is not cleaned, although packages not found there will still be downloaded into it. To force a download of particular packages, remove the desired packages from the ./lxdist directory (or the directory specified in the arguments).

For example, this downloads a fresh copy of the distribution and installs it:

unix$ lxinst -c -f lxinst.conf

And this tries first the ../lxdist directory and then the standard leanxcale repository:

unix$ lxinst -d ./lxdist -d leanxcale -f lxinst.conf

This is actually the default when no distribution source is specified. Also, if no directory source is specified as the first option, ./lxdist is used as the directory to download components to (besides files found on the local host).

The database has its own users (different from UNIX users). The user lxadmin is the administrator for the whole system. During the install you will be asked to type a password for lxadmin. To supply the password without being asked, set it in the LXPASS environment variable:

unix$ export LXKEY=nemo:APA...uQy
unix$ export LXPASS=fds92f3c
unix$ lxinst /usr/local/leanxcale
...

When the users file ./lxowner exists, that file is used as the file describing the lxadmin user and its secret, instead of creating one using the user supplied password for lxadmin.

Do not use an existing file unless you know what you are doing: the file must define lxadmin as the first user, and that user must have access to all resources, as configured.

Flag -i makes lxinst ignore system limits, which is useful for quick installs on hosts with reduced disk or memory.

By default, the installer performs a large install, trying to use all the machine resources for the service. Flag -s selects a small install instead, and flag -m a medium one.

The install size affects components and sizes not specified by the user in command line arguments or in the configuration file. In a small install, at most 4 kvds are added per host, and component memory is limited to 1GiB. In a medium install, at most 8 kvds components are added, and component memory is left at its default values.

Flag -u is used to update a previous install. Still WIP, do not use it for now.

20.2. lx

Shell for running LeanXcale control programs. This simply fixes the environment for the installed host and runs the command given as an argument:

usage: lx [-d] cmd...

When flag -d is used, the current working directory is set to the install directory at the host before running the given command.

Most commands follow the same conventions regarding options and arguments. We describe them here for convenience:

positional arguments:
  what        host|comp...

options:
  -h, --help  show this help message and exit
  -l          local run only
  -D          enable debug diags

Flag -l is used when executing the command locally. This is used by the lx command framework, and should not be used in general by the end user.

Flag -D enables verbose diagnostics

Arguments specify what to operate on (e.g., what to start, stop, etc.). They may be empty to rely on the defaults (the whole DB), or may specify particular host and/or component names:

  • when only component names are given, only those components will be involved (e.g., lxqe101).

  • when a component type name is given, components of that type are selected (e.g., lxqe).

  • when a host name is given, any component following is narrowed to that host. If no components follow the host name, all components from the host are selected.

This may be repeated to specify different hosts and/or components.

The special host names db, repl, and repl2 may be used and stand for hosts without the nodb attribute, hosts for the first replica, and hosts for the second replica (hosts that are a mirror of other ones).

20.3. lx addlib

Add the given file(s) to the lib directory of the installed hosts, to add or update a library or jar in the installation:

usage: addlib [-h] [-l] [-v] [-D] [-a host] [-x] [files [files ...]]

add libs to installed lx

positional arguments:
  files       files to add

optional arguments:
  -h, --help  show this help message and exit
  -l          local run only
  -v          verbose
  -D          enable debug diags
  -a host     copy to this installed host(s)
  -x          internal use only

20.4. lx backup

usage: backup [-h] [-v] [-D] [-n] [-d dir] [-F] [-i] [-e] [-r] [-o]
              [what [what ...]]

make a backup

positional arguments:
  what        host/comps

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -n          dry run
  -d dir      root backup dir
  -F          force command
  -i          incremental backups
  -e          encrypt
  -r          restore
  -o          online backup

By default, backups are kept at $LXDIR/dump at the host used to issue backup commands. The convention is to use the first host to keep backups.

That is for internal backups (backups at an installed host).

For external backups, lxbackup (or lxdisk backup) is used at the host keeping the backup, and the backup top-level directory is given with flag -d or defaults to ./lxdump otherwise.

Directories under dump/ are named using the date. When more than one backup is made on the same date, further digits are added to number successive backups. A sorted listing of the directory reports backups from oldest to newest.

For incremental backups, a final + is added to the directory name.
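Given that naming scheme, picking the newest full backup out of a listing needs only a sort. This is illustrative, using made-up names:

```shell
# Trailing "+" marks incrementals; names sort oldest-to-newest.
names='240809 24080901 24080902+ 240810 24081001+'
latest_full=$(printf '%s\n' $names | grep -v '+$' | LC_ALL=C sort | tail -n 1)
echo "$latest_full"    # prints: 240810
```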

To create a full, cold backup when LeanXcale is not running:

unix$ lx backup
#backup...
new 240809
make atlantis disk/kvms100 /usr/local/leanxcale/dump/240809/kvms100
make atlantis disk/kvds100 /usr/local/leanxcale/dump/240809/kvds100
make atlantis disk/kvds101 /usr/local/leanxcale/dump/240809/kvds101
make atlantis disk/lxqe100/log /usr/local/leanxcale/dump/240809/lxqe100
unix$

The printed name is the name for the directory keeping the backup, as used when restoring it. In this case, 240809.

The files kept in the backup are compressed, and must be uncompressed if copied by hand. The restore command takes care of this.

Using flag -e both encrypts and compresses the backup files.

The key used to encrypt the backed files is kept at $LXDIR/lib/lxkey.pem on the installed sites as set up by the installer.

unix$ lx backup -e
#backup...
new 24080901
make atlantis disk/kvms100 /usr/local/leanxcale/dump/24080901/kvms100 crypt
make atlantis disk/kvds100 /usr/local/leanxcale/dump/24080901/kvds100 crypt
make atlantis disk/kvds101 /usr/local/leanxcale/dump/24080901/kvds101 crypt
make atlantis disk/lxqe100/log /usr/local/leanxcale/dump/24080901/lxqe100 crypt

To create an incremental backup use the flag -i.

unix$ lx backup -i
#backup...
new 24080902+
incr: lxqe100 += lxqe100

The output reports which components got files backed up.

The incremental backup is encrypted if the total backup it refers to is encrypted too. No flag -e should be given.

The incremental backup can be performed while the system is running.

To create a backup while the system is running (known as an online backup), use flag -o for lx backup.

unix$ lx backup -o
#backup...
new 24080902
...

This is more expensive than a backup made with the system down. The flag is required to prevent mistakes: if the system is down when an online backup is requested, the backup is still made as a cold backup. That is, the flag says that it is OK to back up while the system is online.

20.5. lx backups

To list the backups known use the backups command:

usage: backups [-h] [-v] [-D] [-n] [-d dir] [-F] [what [what ...]]

list backups

positional arguments:
  what        [last|backupname] host/comps

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -n          dry run
  -d dir      root backup dir
  -F          force command

For example:

unix$ lx backups
240810 ts 1419000
24081002+ ts 1419000
...

Those with a + in their names are incremental backups.

Verbosity can be increased by adding one or more -v flags:

unix$ lx backups -v
240810 ts 1419000
240810/kvms100 ts 2551000
240810/kvds100 ts 1292000
240810/lxqe100 ts 1419000
240810/lxqe101 ts 1419000
24081001+ ts 0
24081002+ ts 1419000
24081002+/lxqe100 ts 1419000
...
unix$ lx backups -vv
240810 ts 1419000
240810/kvms100 ts 2551000
240810/kvms100/kvmeta sz 239 ts 2551000 mt 1723285636
240810/kvds100 ts 1292000
240810/kvds100/dbf00001.kv sz 393216 ts 1419000 mt 1723285636
240810/kvds100/seqs sz 4 ts 0 mt 1723285599
240810/lxqe100 ts 1419000
...
unix$ lx backups -vvv
240810 ts 1419000
	240810/kvms100 ts 2551000
		240810/kvms100/kvmeta sz 239 mt 1723285636
			localhost
			ts 1 2551000 bck 0 0 na 0
	240810/kvds100 ts 1292000
		240810/kvds100/dbf00001.kv sz 393216 mt 1723285636
			localhost
			ts 1292000 1419000 bck 0 0 na 3
			tname db-APP-PERSONS
			rmin: tpmin
			rmax: tpmax
...

It is possible to list a single backup by supplying its name and/or specific components, using arguments as done with most lx commands. The name last refers to the last backup made. For example:

unix$ lx backups -v 24080901

or

unix$ lx backups -v 24080901 lxqe

This command is actually lx disk list. It is named backups when used outside lx disk for clarity.

For external backups, lxdisk list can be used, or the lxdisk command copied to lxbackups and then used as an external version of lx backups.

20.6. lx copy

The lx copy command can be used to copy disks from hosts, directories or components to other hosts, directories or components.

usage: copy [-h] [-v] [-D] [-n] [-d dir] [-F] [what [what ...]]

copy component disks

positional arguments:
  what        src... dst

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -n          dry run
  -d dir      root backup dir
  -F          force command

		src/dst: repl|host|comp|path

The special source or target repl can be used to copy to or from the replica of a component to restore a mirror from another.

For example

unix$ lx copy kvds100 /tmp

copies the files for kvds100 to /tmp/kvds100.

The same is achieved using

unix$ lx copy kvds100 /tmp/kvds100

As another example:

unix$ lx copy repl1 repl

copies the whole set of disks at replica-1 hosts into their mirrors.

The same thing can be achieved by copying one component at a time using

unix$ lx copy lxqe100.r1 repl

This command is like lx disk copy.

20.7. lxdisk

Create backups, restore them, recover replicas, and copy disk data.

usage: lxdisk [-h] [-v] [-D] [-n] [-d dir] [-F] [-f cfgfile] [-i] [-e] [-r] [-o]
              cmd [what [what ...]]

leanXcale disk tool

positional arguments:
  cmd         cmd: disk command
  what        cmd args

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -n          dry run
  -d dir      root backup dir
  -F          force command
  -f cfgfile  config for the install
  -i          incremental backups
  -e          encrypt
  -r          restore
  -o          online backup

	disk commands (short names):
		backup (bck): full backup
			args: none, or host/comps under -F
		list (lst, l): list backups
			args: [backupname [comp...]]
		restore (res): restore a backup
			args: [last|backupname [repl|host|comp...]]
		recover (rec): recover replica disks
			args: [repl|host|comp...]
		copy (cp): copy disk files
			args: src... dst
			src/dst: repl|host|comp|path

This command is exactly the lx disk command, but packaged for use from external hosts. As with the internal command, it can also be copied to lxbackup, lxrestore, etc. as a convenience, and it adjusts its usage to the corresponding command.

The external host must have ssh access to the installed hosts.

See lx disk for more details and examples.

Using lxdisk is exactly like using lx disk with a few differences:

  • Flag -f is mandatory on the first invocation and provides the installed configuration file.

  • The default backup directory is not $LXDIR/dump, but ./lxdump.

  • The command kvtar must be available at the host running lxdisk.

The configuration file used must be the one retrieved using lx config, and not the one written by the user to perform the install, because lx config reports addresses and details needed by the lxdisk command.

Once a backup has been created, the configuration used is saved along with the backup data and there is no need to use -f to supply it.

Encryption/decryption happens at the source/destination of data when creating/extracting backups. Therefore, there is no key kept at the host running lxdisk.

To prepare a host to use lxdisk, get the lxdisk command, the configuration, and the kvtar command and copy them to the host.

The lxdisk and kvtar commands can be found at the installed $LXDIR/bin directory on any installed host. If you are not sure regarding the $LXDIR value, use this command to find it:

unix$ lx -d pwd
/usr/local/leanxcale
unix$

Flag -d for lx makes it change to the $LXDIR directory before running the given command.

The detailed configuration file to be used is kept at $LXDIR/lib/lxinst.conf. The configuration can be retrieved using the lx config command. For example, this creates ./lxinst.conf with the detailed configuration:

unix$ lx config -o lxinst.conf
saved lxinst.conf

As an example, we can setup an external host named orion to perform backups in this way:

unix$ lx -d pwd
/usr/local/leanxcale
unix$ lx config -o lxinst.conf
saved lxinst.conf
unix$ dir=/usr/local/leanxcale
unix$ scp $dir/bin/lxdisk $dir/bin/kvtar lxinst.conf orion:~

And then just:

orion$ lxdisk -f lxinst.conf backup

Or perhaps:

orion$ cp lxdisk lxbackup
orion$ lxbackup -f lxinst.conf

20.8. lx disk

Create backups, restore them, recover replicas, and copy disk data.

usage: disk [-h] [-v] [-D] [-n] [-d dir] [-F] [-i] [-e] [-r] [-o]
            cmd [what [what ...]]

leanXcale disk tool

positional arguments:
  cmd         cmd: disk command
  what        cmd args

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -n          dry run
  -d dir      root backup dir
  -F          force command
  -i          incremental backups
  -e          encrypt
  -r          restore
  -o          online backup

	disk commands (short names):
		backup (bck): full backup
			args: none, or host/comps under -F
		list (lst, l): list backups
			args: [backupname [comp...]]
		restore (res): restore a backup
			args: [last|backupname [repl|host|comp...]]
		recover (rec): recover replica disks
			args: [repl|host|comp...]
		copy (cp): copy disk files
			args: src... dst
			src/dst: repl|host|comp|path
		move (mv): move tables or regions
			args: tbl [rid] dstds

This command handles disk data for components: it can back them up, restore them from a backup, recover replica disks, and copy disk files for hosts or components.

See lxdisk for a similar tool that can be used from external, non-installed hosts to maintain and use backups.

Instead of using lx disk directly, it may be more convenient to use one of its variants: lx backup, lx restore, lx backups, lx recover, and lx copy.

They behave like lx disk given the command of the same name (except lx backups, which corresponds to lx disk list).

Refer to the sections on these commands for a description of the respective lx disk commands.

For example,

unix$ lx disk backup

is like

unix$ lx backup

20.9. lx config

Inspect or update the configuration:

usage: config [-h] [-v] [-D] [-s value] [-d] [-n] [-o fname]
              [what [what ...]]

Inspect the configuration

positional arguments:
  what        kvaddr|grafana|cons|[host] [comp|prop...]

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -s          set property values
  -d          delete
  -n          dry run
  -o fname    write the output to fname

By default, lx config prints the whole configuration, or that for elements given as arguments.

The arguments follow the conventional syntax used by most commands, but property names are also understood:

  • Giving a host name narrows the rest of the arguments to that host, until another host name is given.

  • Giving a component kind (e.g., lxqe) selects those components.

  • Giving a component name with included id (e.g., lxqe101) selects just that component.

  • Giving a property name (not a host and not a component) selects just that property.

For example,

unix$ lx config

prints all configuration.

unix$ lx config atlantis mariner

prints the configuration for hosts atlantis and mariner (assuming those are configured host names).

unix$ lx config atlantis kvms mariner lxqe

prints the configuration for kvms components found at atlantis and lxqe components found at mariner.

unix$ lx config kvms kvds101

prints the configuration for any kvms component and the kvds101 one.

unix$ lx config kvms addr

prints the addr attribute for any kvms component.

unix$ lx config awsdisk

prints the awsdisk property from the (global) configuration.

Use flag -o to save the configuration (or the parts selected) to the given file.

Use flag -d to remove the selected configuration entries. Do not remove hosts or components, and use this with caution.

Use flag -s to update the selected properties with new values. In this case, the argument for a property includes both its name and the new value (e.g., mem=50m).

For example:

unix$ lx config -s lxqe mem=500m kvds mem=500m
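
Because flag -s changes the configuration in place, a script may want to save a copy first using flag -o. A sketch, printed as a dry run (the 500m value is just an example):

```shell
#!/bin/sh
# Sketch: save the current configuration for reference, then update the
# memory limits for lxqe and kvds. Dry run: commands are printed, not run.
set_mem() {
	echo "lx config -o lxinst.conf.bak"
	echo "lx config -s lxqe mem=$1 kvds mem=$1"
}

set_mem 500m
```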

20.10. lx ctl

Issue control requests to lx components

usage: ctl [-h] [-v] [-D] comp [ctl [ctl ...]]

issue control requests

positional arguments:
  comp        component or command
  ctl         ctl arg

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags

	comps:
	ds, ms (master), qe, mm, meta (all ms), all, compname
commands:
	sync: sync the disk
	debug: add/remove/set debug flags
	stats: print statistics
useful ctls:
	all: debug +flags|-flags|flags
	qe: pull start|stop|addrs

This command issues control requests to running processes. Use with care and do not issue a control request unless you know what you are doing.

The comp argument selects the target for the control request or is a known command. It can be any of:

  • a component name, like lxqe100

  • a process name, like qe100

  • a process address, like orion!14440

  • all, to address all processes taking control requests

  • ms or kvms, to address the master kvms process

  • meta, to address all kvms processes

  • ds or kvds, to address all kvds processes

  • cli, to ask for a control request to be forwarded to all running kvms clients.

  • a specific command, like sync, in which case addressed components are implicit.

Different processes have different control requests. In general, ? can be used to learn the control requests supported.

For example,

unix$ lx ctl qe ?
help [cmd]
sync
debug flags
mdebug on/off
dump conflicts|txns|cfg|ni
print conflicts|txns|cfg|ni
txdebug on|off
up [txn|conn]
down [txn|conn]
run
halt: disable txns
abort qe
pull [stop | start | qeaddr...]
test crash|hup now|begin|commit|msgs|xfer|publish|end|rescue

This command calls kvcon to actually issue the control requests, after adjusting the target names a bit as a convenience.

For example, adding no extra debug flags can be used to see the debug flags currently set:

unix$ lx ctl debug +
local:
ms100: DMMOO
ds100: DMMOO
mm100: dDMMOO
qe100: dDMMOO

For lxmeta (mm) and lxqe (qe) processes, lowercase debug flags correspond to log4j loggers and uppercase flags correspond to standard flags used everywhere in the system.

Repeating a flag increases the debug level. Popular debug flags are:

  • D: general debug

  • O: operations

  • M: metadata

Popular debug flags for java are also:

  • j: lxjdbc debug. JDBC requests

  • m: lxmeta debug. Lxmeta processing.

  • q: lxqe debug. Query engine processing.

  • n: lxnet debug. Network operations.

  • l: lxlogger debug. DB Logger and records.

  • s: lxss debug. Snapshot server.

  • c: lxcm debug. Conflict manager.

  • t: lxtxns debug. Transactions.

  • e: lxenum debug. Enumerations.

Other usage examples follow:

unix$ lx ctl lxqe100 stats
...
unix$ lx ctl stats
...
unix$ lx ctl sync
unix$ lx ctl debug +

20.11. lx dbdata

List database data files:

usage: dbdata [-h] [-v] [-D] [-l] [-a] [-t tbl] [-i idx] [what [what ...]]

list database data files

positional arguments:
  what        path|cfg

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -l          local run only
  -a          print all content
  -t tbl      table name to list data files for
  -i idx      idx name to list data files for

The command lists DB data files as found on disk.

For example:

unix$ lx dbdata
blade110 kvds100 dbf00001.kv tree db-APP-PAYMENTS5
blade110 kvds100 dbf00003.kv deriv db-APP-PAYMENTS5_OA
blade110 kvds100 dbf00005.kv tree db-APP-TBL2

Each line in the output lists the machine and component where the file is kept, followed by the file information.

Flag -v makes the program more verbose:

unix$ lx dbdata -v
blade110 kvds100 dbf00001.kv tree db-APP-PAYMENTS5 1:2 0x8000000002
	ts 596000 2692000 snapts 0 snapno 31 bckts 0 bckdts 0 0 syncs
	rmin: tpmin
	rmax: tpmax
blade110 kvds100 dbf00003.kv deriv db-APP-PAYMENTS5_OA 3:2 0x18000000002
	ts 596000 2692000 snapts 0 snapno 6 bckts 0 bckdts 0 0 syncs
	rmin: tpmin
	rmax: tpmax
	deriv from db-APP-PAYMENTS5
blade110 kvds100 dbf00005.kv tree db-APP-TBL2 5:2 0x28000000002
	ts 2796000 2796000 snapts 0 snapno 4 bckts 0 bckdts 0 0 syncs
	rmin: tpmin
	rmax: tpmax

The information listed, including the table and index names, is taken from the on-disk files, as known by the data servers.

For example, db-APP-PAYMENTS5 refers to database db, schema APP, and table PAYMENTS5.

To ask for a particular table, use flag -t and give the table name as an argument in the format just described:

unix$ lx dbdata -t db-APP-PAYMENTS5

The region minimum and maximum values are listed also as known by the data servers (i.e., by kvds) and not by the user or the query engines.

It is possible to list only for particular hosts or components by supplying the usual names as arguments.

For example:

unix$ lx dbdata blade110

or

unix$ lx dbdata kvds100
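
Since the output has one file per line, it is easy to post-process. As a sketch, this maps each table file to the data server holding it; the sample lines are the ones from the first example above, and in practice the output of lx dbdata would be piped in:

```shell
#!/bin/sh
# Sketch: map each table file to its data server from `lx dbdata` output.
# In practice, pipe `lx dbdata` in instead of the embedded sample.
tables_by_ds() {
	awk '{ print $5, "->", $2 }'
}

sample='blade110 kvds100 dbf00001.kv tree db-APP-PAYMENTS5
blade110 kvds100 dbf00003.kv deriv db-APP-PAYMENTS5_OA
blade110 kvds100 dbf00005.kv tree db-APP-TBL2'

printf '%s\n' "$sample" | tables_by_ds
```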

20.12. lx dblog

Print the transaction log contents:

usage: dblog [-h] [-v] [-D] [-l] [-i] [-r] [-a] [-n] [-m] [-s ts] [-e ts]
                [what [what ...]]

print database log files

positional arguments:
  what        host|comp|dir|file|addr...

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -l          local run only
  -i          log information only
  -r          log records only
  -a          print all record content
  -n          options -s and -e refer to record nbs.
  -m          print the mirror log
  -s ts       start from this ts
  -e ts       end at this ts

The command prints the log contents for the given names. Names can be host and/or component names to select the addressed components, a log directory, or a logger address.

By default the command prints the log header and the records in it.

Under flag -i, only the information in the log header is printed:

unix$ lx dblog  -i
logs...
blade105: [
	hdr vers1 sealed
		18 records 0:18
		2831 bytes
		1113002:1358002 ts
		1 meta records
		12 commit records
		12 xfer records
]

Under flag -r, only the records are printed.

For data records, only the first line of the recorded message is printed. Use flag -a to print the full record contents (e.g., all tuples recorded).

Flags -s and/or -e may be used to focus the search on a timestamp interval, or, if flag -n is given, on a record number interval.

For example, print records with numbers from 10 to 14 for lxqe100:

unix$ lx dblog -r -n -s 10 -e 15 lxqe100
logs...
blade105: [
	commit[10] t1311002 txn.0x14011a r1 boff 1370
	xfer[11] t1311002 txn.0x14011a r1 boff 1759
	data[12] t1348002 db-APP-PERSONS txn.0x1491a2 r1 boff 1808
		Tadd tag 26 sz 286 flg 0x40 0x1491a2 0x0 0x1491a2 <db-APP-PERSONS> 0x0 0x1485e9
			[0]<> [0]<> tpzero [2]<0509> tpzero tpzero
	commit[13] t1348002 txn.0x1491a2 r1 boff 1857
	xfer[14] t1348002 txn.0x1491a2 r1 boff 2246
]
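
Because flag -r prints one record per line, record types are easy to tally with standard tools. A sketch that counts commit records, using lines from the example above as sample input; in practice the lx dblog -r output would be piped in:

```shell
#!/bin/sh
# Sketch: count commit records in `lx dblog -r` output.
# In practice, pipe `lx dblog -r ...` in instead of the embedded sample.
count_commits() {
	awk '$1 ~ /^commit\[/ { n++ } END { print n + 0 }'
}

sample='	commit[10] t1311002 txn.0x14011a r1 boff 1370
	xfer[11] t1311002 txn.0x14011a r1 boff 1759
	commit[13] t1348002 txn.0x1491a2 r1 boff 1857
	xfer[14] t1348002 txn.0x1491a2 r1 boff 2246'

printf '%s\n' "$sample" | count_commits
```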

20.13. lx dbmeta

Print DB metadata as found on disk.

usage: dbmeta [-h] [-v] [-D] [-l] [-a] [-n] [-w prop] [-f [file]]
              [what [what ...]]

print database meta files

positional arguments:
  what        path|cfg

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -l          local run only
  -a          print all content
  -n          dry run
  -w prop     write flag
  -f [file]   meta file

This command prints the DB metadata as kept on disk.

For example, on a replicated install:

unix$ lx dbmeta
dbmeta...
blade110:
	# kvms100.r1
	meta ts 0
	kvms blade110!14400
	usr:	none
	usr:	lxadmin
blade161:
	# kvms100.r2
	meta ts 0
	kvms blade161!14400
	usr:	none
	usr:	lxadmin

The command accepts a kv metadata path to list, similar to kvcon list arguments. For example, to list server metadata as kept on disk:

unix$ lx dbmeta /srv
dbmeta...
blade110:
	# kvms100.r1
	kvms blade110!14400
	kvms blade161!14400
	kvds ds100.r1 at blade110!14500
	lxmeta mm100.r2 at blade161!14410
	kvds ds100.r2 at blade161!14500
	lxmeta mm100.r1 at blade110!14410
	lxqe qe100.r1 at blade110!14420
	lxqe qe100.r2 at blade161!14420

Flag -w can be used to set or clear metadata server flags. For example:

unix$ lx dbmeta -w single /srv/ds100.r1

sets the single flag for the named server, and

unix$ lx dbmeta -w nosingle /srv/ds100.r1

clears the flag.

After restoring a replica, the fail flag for the restored component is usually cleared, and the single flag for the component used as the restore source is cleared as well.

As another example, this lists the metadata information for a table:

unix$ lx dbmeta /db/APP/tbl/INDEX2T0001

Using flag -a reports all information:

unix$ lx dbmeta -a /db/APP/tbl/INDEX2T0001
		owner 'lxadmin'
		fmt#0x0: 16 flds 1 key 16 usr 15 delta 0 sflds 0 old
		... more output removed ...
		2 regs 2 repls
		reg [ds101:69:2]
			min: tpmin
			max: tpmax
		reg [ds100:69:2]
			min: tpmin
			max: tpmax

20.14. lx disable

Disable transactions.

usage: disable [qe|qename]

For example,

	unix$ lx disable

disables transactions for the entire system, until they are enabled again.

While transactions are disabled, lx status shows the system status as waiting (for transactions to become available).

Before using this command, stop the watcher if the system was started with flag -w or lx watch is running.

Otherwise, because new JDBC connections are not accepted while transactions are disabled, the watcher might consider that the system is not in good health, and then restart the system.

20.15. lx enable

Enable transactions.

usage: enable [qe|qename]

For example,

	unix$ lx enable qe100

enables transactions for just the qe100 query engine.

20.16. lx fmt

Format the whole DB or the indicated hosts or components:

usage: fmt [-h] [-D] [what ...]

fmt the store

positional arguments:
  what        host|comp...

options:
  -h, --help  show this help message and exit
  -D          enable debug diags

For example, format the kvds101 disk:

unix$ lx fmt kvds101

20.17. lx heap

Dump java process heaps and heap histograms.

usage: heap [-h] [-v] [-D] [-l] [-d] [-s] [-c] [-r] [-o] [-n num]
            [what [what ...]]

get heap dumps and histograms

positional arguments:
  what        host|comp...

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -l          local run only
  -d          save a heap dump
  -s          save a heap histogram
  -c          cmp with baseline heap histogram
  -r          remove saved dump and histogram
  -o          include non-live objects in histogram
  -n num      number of lines

The command understands the conventional syntax to select hosts and/or components. For example, to save a baseline histogram to compare against later when checking for leaks, we can:

unix$ lx heap -s lxqe
heaps...
blade105 [
	saved log/lxqe100.r1.hist
]
blade110 [
	saved log/lxqe100.r2.hist
]

Later, we can compare with another histogram using

unix$ lx heap -c lxqe
heaps...
blade105 [
	saved log/lxqe100.r1.hist2
	lxqe100.r1:
]
blade110 [
	saved log/lxqe100.r2.hist2
	lxqe100.r2:
		[B: 9919 objs 674512 bytes
		java.lang.String: 9696 objs 232704 bytes
		java.lang.Class: 1779 objs 212480 bytes
		[I: 923 objs 157680 bytes
]
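
The per-class lines of the comparison are easy to aggregate. As a sketch, this sums the reported byte deltas to get a rough total growth figure; the sample lines are from the example above, and in practice the lx heap -c output would be piped in:

```shell
#!/bin/sh
# Sketch: total the per-class byte deltas that `lx heap -c` reports, for a
# rough growth figure when hunting leaks. In practice, pipe the
# `lx heap -c lxqe` output in instead of the embedded sample.
total_growth() {
	awk '$3 == "objs" && $5 == "bytes" { sum += $4 } END { print sum + 0 }'
}

sample='[B: 9919 objs 674512 bytes
java.lang.String: 9696 objs 232704 bytes
java.lang.Class: 1779 objs 212480 bytes
[I: 923 objs 157680 bytes'

printf '%s\n' "$sample" | total_growth
```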

To print the current histogram (top 50 lines by default), run it without flags:

unix$ lx heap lxqe
heaps...
blade105 [
	lxqe100.r1:
		 num     #instances         #bytes  class name (module)
		-------------------------------------------------------
		   1:         29587        7329264  [B (java.base@11.0.22)
		   2:          2905         814536  [I (java.base@11.0.22)
		   3:          6303         756888  java.lang.Class (java.base@11.0.22)
	...

20.18. lx help

Ask for help:

usage: help [cmd]

Print usage information

Prints the list of known commands with quick usage information, or detailed usage information about the given command.

20.19. lx license

Checks the license status or installs a new license:

usage: license [-h] [-v] [-D] [-f file]

inspect or update license files.

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -f file     license file to install

For example, ask for the current status:

unix$ lx license
    license expires: Mon Dec 30 00:00:00 2024

or install a new license from the file lxlicense using flag -f:

unix$ lx license -f lxlicense

20.20. lx logs

List and inspect logs for installed components:

usage: logs [-h] [-D] [-g rexp] [-a] [-p] [-c dst] [-s start]
            [-e end] [what ...]

list logs

positional arguments:
  what        host|comp...

options:
  -h, --help  show this help message and exit
  -D          enable debug diags
  -g rexp     grep rexp
  -a          all logs, not the last one
  -p          print the last log (all if -a)
  -c dst      copy the last logs to this dir (all if -a)
  -s start    start fname time (yymmdd.hhmm or prefix)
  -e end      end fname time (yymmdd.hhmm or prefix)

For example, list the logs for kvds at atlantis, but only those after the time 230518.0551 (yymmdd.hhmm, or a prefix of this, can be used):

unix$ lx logs  -s 230518.0551 atlantis kvds

Print the last log for kvds101:

unix$ lx logs  -p kvds101

Grep all the logs for lines with fatal:

unix$ lx logs  -g fatal

Copy all (not just the last one) kvds101 logs to /tmp:

unix$ lx logs -a -c /tmp kvds101
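
These flags combine well in scripts. As a sketch, this builds a daily sweep command that greps today's logs for fatal errors, printed as a dry run; the date format matches the yymmdd filename-time prefix accepted by flag -s:

```shell
#!/bin/sh
# Sketch: grep today's logs for "fatal", using the documented -g and -s flags.
# Dry run: the command is printed instead of executed.
log_sweep_cmd() {
	echo "lx logs -g fatal -s $1"
}

log_sweep_cmd "$(date +%y%m%d)"
```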

20.21. lx move

The lx move command can be used to move tables or regions from one data server to another while the system is down.

usage: move [-h] [-v] [-D] [-n] [-d dir] [-F] [what [what ...]]

move tables or regions

positional arguments:
  what        tbl [rid] dstds | srcds dstds

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -n          dry run
  -d dir      root backup dir
  -F          force command

For example

unix$ lx move db-APP-PERSONS kvds100

moves the files for the named table to the given server. The table name uses the data server syntax, not the SQL syntax. This command is not meant for users.

This command is like lx disk move.

20.22. lx procs

List the processes for the DB:

usage: procs [-h] [-v] [-D] [-p] [what ...]

list processes

positional arguments:
  what        host|comp...

options:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -p          report ports in use

For example, list the processes at host blade123:

unix$ lx procs blade123
procs...
blade123 [
	lxmeta100	pid 813750	alive running
	kvds100	pid 813734	alive running
	kvms100	pid 813729	alive
	kvds103	pid 813746	alive running
	kvds101	pid 813738	alive running
	spread	pid 813725	alive
	kvds102	pid 813742	alive running
	lxqe100	pid 813773	alive running
]

Or, to see the port usage status:

unix$ lx procs -p blade123
blade123 [
		spread	14444	busy
		kvms100	14400	busy
		lxmeta100	14410	idle
		lxqe100	14420	busy
		kvds100	14500	busy
		kvds101	14504	busy
		kvds102	14508	busy
		kvds103	14512	busy
]

See also flag -p of lx status.

20.23. lx recover

usage: recover [-h] [-v] [-D] [-n] [-d dir] [-F] [what [what ...]]

recover replica disks

positional arguments:
  what        repl|host|comp...

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -n          dry run
  -d dir      root backup dir
  -F          force command

The command lx recover locates disks for failed replicas or replica components and restores them from their mirrors.

Arguments given restrict the repairs to them.

For example, after replica failures, we can run

unix$ lx recover

to try to recover their disks before restarting.

Unless you are sure you want to recover the failed replicas, use flag -n for a dry run to see what would be repaired before running it.

20.24. lx report

Report system status and debug information.

usage: report [-h]

report system information for debugging

positional arguments:
  what        host|comp...

options:
  -h, --help  show this help message and exit
  -D          enable debug diags

This program collects information from the system and builds an archive to be sent to support.

unix$ lx report
report: lxreport.231009...
version...
procs...
logs...
stacks...
stack lxmeta100...
stack kvds103...
stack kvms100...
stack spread...
stack kvds102...
stack kvds100...
stack kvds101...
stack lxqe100...

# send this file to support.
-rw-rw-r-- 1 leandata leandata 54861 Oct  9 14:58 lxreport.231009.tgz

As printed by the command output, the resulting tar file should be sent to support.

The archive includes:

  • installed version numbers

  • underlying OS names and versions

  • complete disk usage for the installed systems

  • complete process list for the installed systems

  • memory usage for the installed systems

  • lx process list

  • logs for components (last log file only, for each one)

  • stacks for each component

  • stacks for each core file found

When kvms is still running, the archive includes also:

  • statistics for the system

  • long list of kv resources

  • process list for each kvds

  • file list for each kvds

20.25. lx restore

To restore a backup, use the restore command.

usage: restore [-h] [-v] [-D] [-n] [-d dir] [-F] [what [what ...]]

restore a backup

positional arguments:
  what        last|backupname [repl|host|comp...]

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -n          dry run
  -d dir      root backup dir
  -F          force command

For example:

unix$ lx restore
#disk...
check...
restore 24080904...
restore atlantis disk/kvds100 from /usr/local/leanxcale/dump/24080904/kvds100 crypt
dbf00001.kv
dbf00003.kv
...

Do this while the system is stopped. By default, it selects the last backup made.

To restore a particular backup, supply its name:

unix$ lx restore 24080904

This can be done also for incremental backups. When restoring an incremental backup, the restore takes data also from previous incremental backups and the corresponding full backup.

To restore only specific hosts or components, supply their names or types as done for other commands:

unix$ lx restore lxqe

Or perhaps:

unix$ lx restore 24080904 lxqe

Before restoring a backup, it is usually desirable to format the disks for the involved components, and then use restore to restore their disks.

To remove a backup, it suffices to remove its directory. For example:

unix$ lx -d rm -rf dump/230720

Here we used flag -d for lx to change to $LXDIR before executing the remove command, which makes it easy to name the directory used for the dump.

When restoring encrypted backups, the disk command knows it has to decrypt, and it uses the key at $LXDIR/lib/lxkey.pem, the one used when making the backup.

If you reinstalled, update the key so it matches the one used for the backup.

For external backups, lxdisk restore can be used, or the lxdisk command can be copied to lxrestore and used as an external version of lx restore. An alternative is to use lxbackup with flag -r.
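
Putting the steps above together, restoring a given backup for given components might look as follows. This is a dry-run sketch (the backup name and component selection are just examples); remove the echo to actually execute the commands:

```shell
#!/bin/sh
# Sketch of the documented restore sequence: stop, format the involved
# components, restore, start. Printed as a dry run; drop the echo to execute.
run() {
	echo "would run: $*"
}

BACKUP=24080904   # example backup name
COMPS=lxqe        # example component selection

run lx stop                       # restore must run while the system is stopped
run lx fmt "$COMPS"               # format disks for the involved components
run lx restore "$BACKUP" "$COMPS" # restore that backup for those components
run lx start                      # bring the system back up
```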

20.26. lx run

Run a command on the selected installed hosts, using the lx environment to run it:

usage: run [-h] [-v] [-D] [-d] ...

run a command on the installed host(s).

positional arguments:
  hosts       host|comp... cmd...

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -d          cd $LXDIR before running

Arguments specify the hosts where to run (as usual in the rest of the commands), perhaps none (to imply all DB hosts); then the keyword cmd must be given, followed by the command and arguments to run on each host.

For example, discover where the system is installed by running pwd, using flag -d to set the current directory to the install directory on each host:

unix$ lx run -d cmd pwd

Or, as an exceptional measure, kill all processes running:

unix$ lx run cmd killprocs

Or, kill just those at blade123:

unix$ lx run blade123 cmd killprocs

Or kill -9 any kvds at blade123:

unix$ lx run blade123 cmd killprocs -9 kvds

20.27. lx stack

Dump process stacks

usage: stack [-h] [-v] [-D] [-l] [what [what ...]]

print stacks

positional arguments:
  what        host|comp...

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -l          local run only

The command understands the conventional syntax to select hosts and/or components. For example, to dump the stack of kvms components:

unix$ lx stack kvms
localhost [
	stack kvms100: [
		Thread 12 (Thread 0x7f8fc57fa700 (LWP 9272)):
		#0  __libc_read (nbytes=40, buf=0x7f8fa8000bd8, fd=13) at linux/read.c:26
		#1  __libc_read (fd=13, buf=0x7f8fa8000bd8, nbytes=40) at linux/read.c:24
		...
	]
]

With flag -v, local variables are printed too.

For java processes, both the native stack and the java stacks are printed.

20.28. lx start

Starts the DB for operation:

usage: start [-h] [-v] [-D] [-l] [-w] [-k] [-r] [what [what ...]]

start the service

positional arguments:
  what        host|comp...

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -l          local run only
  -w          start watch
  -k          keep core files
  -r          restart failed QEs

For example, to start the kvds components at host atlantis and leave the rest of the installation alone:

unix$ lx start atlantis kvds

With flag -w, start will run the watch service. This pings the DB to make sure it can answer queries and, when that is not the case, tries to stop and restart the system. See Section 20.32 for details.

By default, start will remove files on the installed directories with names suggesting they are core dumps. Flag -k prevents this from happening and can be used to keep core dump files for debugging.

Under flag -r, start will ask any lxmeta process started to try to restart a failed QE that was running, unless it was restarted less than one minute ago.

20.29. lx status

Prints the status for the system or waits for a given status:

usage: status [-h] [-v] [-D] [-w status] [-t tout] [-r] [-p]

show or wait for a DB status

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -w status   wait for the given status
  -t tout     timeout for -w (secs)
  -r          print replica status
  -p          print process status

For example, to learn the status:

unix$ lx status
status: waiting
	kvds100: recovering files
	kvds101: recovering files

Or, to wait until the status is running:

unix$ lx status -v -w running
status: waiting
	kvds100: recovering files
	kvds101: recovering files
status: running
unix$
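
Deployment scripts can branch on the status word. As a sketch, this extracts it from lx status output; the sample is the output shown above, and in practice lx status would be piped in:

```shell
#!/bin/sh
# Sketch: extract the top-level status word from `lx status` output so a
# deployment script can branch on it. In practice, pipe `lx status` in
# instead of the embedded sample.
db_status() {
	# First line looks like "status: waiting"; print the word after the colon.
	sed -n '1s/^status: //p'
}

sample='status: waiting
	kvds100: recovering files
	kvds101: recovering files'

printf '%s\n' "$sample" | db_status
```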

The status can be any of:

  • stopped: no process is running.

  • failed: some processes did fail.

  • waiting: processes are running but there is no SQL service.

  • running: processes are running and SQL connections are available.

When using replication and suffering failures, the status can also be running with failures.

Flag -r reports the replica status:

unix$ lx status -r
status: stopped
replica: ok
unix$

The replica status can be any of:

  • no: no replication used.

  • ok: replication used and state ok.

  • single: replication used and suffering single processes (mirror failures).

For example:

unix$ lx status -r
status: stopped
replica: no
unix$

Flag -p reports the process status for each known component. For example:

unix$ lx status -p
kvds100 alone stopped
kvms100 alone stopped
lxmeta100 alone stopped
lxqe100 alone stopped
unix$

When using replicas, the preferred mirror process is listed first.

unix$ lx status -rp
status: stopped
replica: ok
kvds100.r2 mirror stopped snap 890191999
kvds100.r1 mirror stopped snap 352036999
kvms100.r1 mirror stopped
kvms100.r2 mirror stopped
lxmeta100.r1 mirror stopped snap 890191999
lxmeta100.r2 mirror stopped
lxqe100.r1 mirror stopped snap 889261999
lxqe100.r2 mirror stopped snap 889260999
unix$

The mirror status for each process can be any of:

  • mirror: replicated and its mirror is ok.

  • alone: not replicated.

  • single: did survive its mirror process.

  • outdated: did stop or fail before its mirror process.

  • conflict: did survive its mirror and its mirror thinks the same.
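
Monitoring scripts can scan this output for replicas that need attention. The following is a sketch, using a hypothetical failure scenario that follows the format shown above; in practice the output of lx status -rp would be piped in:

```shell
#!/bin/sh
# Sketch: flag replicas needing attention (single, outdated, or conflict)
# in `lx status -rp` output. The sample is a hypothetical failure scenario;
# in practice, pipe `lx status -rp` in instead.
needs_attention() {
	# Lines for processes have at least 3 fields: name, mirror status, state.
	awk 'NF >= 3 && ($2 == "single" || $2 == "outdated" || $2 == "conflict") { print $1 }'
}

sample='status: running
replica: single
kvds100.r1 single running snap 890191999
kvds100.r2 outdated stopped snap 352036999
kvms100.r1 mirror running
kvms100.r2 mirror running'

printf '%s\n' "$sample" | needs_attention
```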

20.30. lx stop

Halts the DB or stops individual hosts or components:

usage: stop [-h] [-v] [-D] [-l] [-w] [-k] [what [what ...]]

stop the service

positional arguments:
  what        host|comp...

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags
  -l          local run only
  -w          do not stop watch
  -k          kill -9

For example, halt the DB:

unix$ lx stop

Stop just the kvds servers:

unix$ lx stop kvds

If the watch service is running, make sure to stop it before stopping individual components. Otherwise it might decide to wait for lxmeta to stop and then restart the system.

Without arguments, stop will first stop the watch service and there is no extra caution needed.

Under flag -w, stop will not stop the watch process.

Flag -k may be used to kill components. Do not use this unless you know what you are doing. For example, to kill lxqe100:

unix$ lx stop -k lxqe100
stop...
atlantis: [
	stop: kill -9 lxqe100 pid 791944...
]

20.31. lx version

Print the installed version:

usage: version [-h]

print installed version

optional arguments:
  -h, --help  show this help message and exit

The program reports the leanXcale version name, along with detailed version information for distribution packages installed:

unix$ lx version
leanXcale v2.2
    kv         v2.2.2023-09-29.115f5fba70e3af8dc203953399088902c4534389
    QE         v2.2.2023-09-30.1e5933900582.26a7a5c3420cd3d5d589d1fa6cc
    libs       v2.2.2023-09-29.67535752acf19e092a6eaf17b11ad17597897956
    avatica    v2.2.2023-09-27.0b0a786b36e8bc7381fb2bb01bc8b3ed56f49172
    TM         v2.2.2023-09-29.9a9b22cfdc9b924dbc3430e613cddab4ed667a57
    lxlbins    v2.2.2023-09-29.79e7e04fb16b38d08c2d5df1fe08e103d49cb22a
    lxinst     v2.2.2023-10-02.b341e6545913aee8e0b0daf255362.273b33ea6d
    calcite    v2.2.2023-09-27.d3dfcf24285d38add3f4e29a9c2e9eacbcd0b913
    lxodata    v2.2.2023-09-23.b84fa4c7d2ca3e778edd9de29389b2aa6e1a9fb8

20.32. lx watch

Watch the system and restart it if needed:

usage: watch [-h] [-v] [-D]

watch out the db and stop/start it if needed

optional arguments:
  -h, --help  show this help message and exit
  -v          verbose
  -D          enable debug diags

This program is started by lx start when the flag -w is given to it. It is usually a bad idea to execute this command explicitly.

It waits until the DB is running, doing nothing until that point. From that point on, if the DB ceases to be running, the program will wait for lxmeta to stop, then try to stop and start the whole system, and finally exit.

When restarting the system, flag -w is used to start a new watcher for the new system.

The implications are that a failure to start will not restart the system again, and that stopping individual components requires stopping this program first, or it might take actions on its own.

20.33. killprocs

This is a command for local use only:

usage: killprocs [-h] [-9] [-a] [procs]

kill processes

Use lx run to run it at any/all of the installed hosts.

Without any flag, it locates the DB process pids by looking at the files reporting them, and then kills them.

With flag -a, it locates any process in the system by process name (all the ones started by the DB use bin/…​ as a name), as well as all java processes starting with LX, and kills them.

A TERM signal is sent, and then a KILL signal after a few seconds.

Flag -9 can be used to send only a KILL signal.

If process/component names are given (e.g., kvds or lxqe100), only those processes are killed.

For example, send a kill signal to any kvds at blade123:

unix$ lx run blade123 cmd killprocs -9 kvds

21. Notice

The LeanXcale system uses spread as its communication bus under the following license:

Version 1.0
June 26, 2001

Copyright (c) 1993-2016 Spread Concepts LLC. All rights reserved.

This product uses software developed by Spread Concepts LLC for use in the Spread toolkit.
For more information about Spread, see http://www.spread.org