Installation Guide

1. Background

Here we provide background information about the different components of LeanXcale and the ports on which they listen.

1.1. LeanXcale Components

Before installing, it is important to know that LeanXcale has a distributed architecture consisting of several components:

  • lxqe: Query engine in charge of processing SQL queries.

  • kvds: Data server of the storage subsystem. There might be multiple instances.

  • kvms: Metadata server of the storage subsystem.

  • lxmeta: Metadata process for LeanXcale. It keeps metadata and services needed for other components.

  • stats: Optional monitoring subsystem to see resource usage and performance KPIs of the LeanXcale database.

  • odata: Optional OpenDATA server to support a SQL REST API.

There are other components used by the system that are not relevant for the user and thus are not described here. For example, spread is a communication bus used by LeanXcale components.

1.2. LeanXcale Ports

The components listen on the following network ports:

  • Ports 9100, 9090 and 3003 are used by the stats subsystem, when installed.

  • The base port for all other components is 14000 (i.e., ports are in the 14000-14999 range).

  • Each component has an assigned port, and may also use the following 4 ports.

  • lxqe uses ports at base plus 420. The first instance at a host is at port 14420, the next one at 14424, another one at 14428, etc.

  • kvds uses ports at base plus 500, i.e. 14500. When more than one kvds is configured, following instances at the host will use 14504, 14508, etc.

  • kvms uses ports at base plus 400, i.e. 14400.

  • lxmeta uses ports at base plus 410, i.e. 14410.

  • spread uses ports at base plus 444, i.e., 14444.

  • odata (when installed) uses ports at base plus 4, i.e., 14004.
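The scheme above can be sketched as a small shell helper (illustrative only; it is not part of the LeanXcale tools) that computes the port of the n-th instance of a component on a host:

```shell
#!/bin/sh
# Compute the port of the n-th instance (0-based) of a component on a
# host, following the layout above: base 14000, plus a per-component
# offset, plus 4 ports per extra instance.
# Illustrative helper; not part of the LeanXcale distribution.
lxport() {
    comp=$1 inst=${2:-0}
    case $comp in
    kvms)   off=400 ;;
    lxmeta) off=410 ;;
    lxqe)   off=420 ;;
    spread) off=444 ;;
    kvds)   off=500 ;;
    odata)  off=4 ;;
    *)      echo "unknown component: $comp" >&2; return 1 ;;
    esac
    echo $((14000 + off + 4 * inst))
}

lxport lxqe 0   # first lxqe at a host: 14420
lxport lxqe 1   # second lxqe at the same host: 14424
lxport kvds 1   # second kvds: 14504
```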

2. Bare Metal and IaaS Installs

Here we describe how to install on bare metal and IaaS instances. Note that, for installation purposes, an IaaS (or virtualized) instance is equivalent to a bare metal one. IaaS providers that have been tried include AWS, Google, Azure, and OCI.

2.1. Prerequisites

Never install or run LeanXcale as root.

To install, you need:

  • 2 cores & 2 GB of memory

  • A valid LeanXcale license file. Your sales representative will provide you with a valid license file. In case you have any issues with it, please send an email to sales@leanxcale.com.

  • The LeanXcale zip distribution. It can be downloaded from:

    https://artifactory.leanxcale.com/artifactory/lxpublic

  • Ubuntu 20.04 LTS or 22.04 LTS with a standard installation, including

    • openjdk-11-jdk

    • python3

    • python3-dev

    • python3-pip

    • gdb

    • ssh tools as included in most UNIX distributions.

    • zfsutils-linux

    • libpam

  • Availability of the ports used by LeanXcale (refer to the LeanXcale Ports section for details).

The install program itself requires a UNIX system with Python 3.7 or later.
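A quick pre-flight check of this requirement can be sketched as follows (an illustrative helper; lxinst performs its own checks as well):

```shell
#!/bin/sh
# Check that a Python version string satisfies the 3.7 minimum
# required by the install program. Illustrative helper only.
pycheck() {
    ver=$1                       # e.g. "3.10.12"
    major=${ver%%.*}
    rest=${ver#*.}
    minor=${rest%%.*}
    [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 7 ]; }
}

if pycheck "$(python3 -c 'import platform; print(platform.python_version())')"; then
    echo "python3 is recent enough for lxinst"
else
    echo "python3 3.7 or later is required" >&2
fi
```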

There is a zip file per version. Download the latest version.

If needed, install the required software dependencies on your system:

unix$ sudo apt update
unix$ sudo apt -y install python3 python3-dev python3-pip gdb openjdk-11-jdk curl

Here, and in what follows, unix$ is the system prompt used in commands and examples.

2.2. Install

Working on the directory where the distribution has been downloaded, unzip the LeanXcale zipped distribution:

unix$ unzip lx.2.3.232129.zip
Archive:  lx.2.3.232129.zip
   creating: lxdist/
  inflating: lxdist/lxinst.v2.3.port.tgz
  inflating: lxdist/lxavatica.v2.3.libs.tgz
  ...

Here, you type the command after the unix$ prompt, and the following lines are an example of the expected output.

Verify that the process completed successfully and there is a directory lxdist (with the distribution packages) and the lxinst program:

unix$ ls
lx.2.3.232129.zip  lxdist  lxinst

To install, run the lxinst program and, optionally, supply the name of the installation directory (/usr/local/leanxcale by default):

unix$ ./lxinst /usr/local/leanxcale
sysinfo localhost...
install #cfgfile: argv /usr/local/leanxcale...
cfgcomps…
...
config: lxinst.conf
install done.

To run lx commands:
    localhost:/usr/local/leanxcale/bin/lx
Or adjust your path:
    # At localhost:~/.profile:
    export PATH=/usr/local/leanxcale/bin:$PATH
To start (with lx in your path):
    lx start

The command takes the zipped LeanXcale distribution from ./lxdist, and installs it at the target folder.

The detailed configuration file built by the install process is saved at lxinst.conf, as a reference.

The database has its own users. The user lxadmin is the administrator for the whole system; during the install, you are asked to provide its password.

To complete the install, a license must be added, as explained in the next section.

As another example, this installs at the given partition /dev/sdc2, with both encryption and compression, mounting the installed system at /usr/local/leanxcale:

unix$ ./lxinst compress crypt /dev/sdc2:/usr/local/leanxcale
sysinfo localhost...
install #cfgfile: argv /usr/local/leanxcale...
cfgcomps…
...
config: lxinst.conf
install done.

To list installed file systems (on each host):
	zfs list -r lxpool -o name,encryption,compression
To remove installed file systems (on each host):
	sudo zpool destroy lxpool
To run lx commands:
    localhost:/usr/local/leanxcale/bin/lx
Or adjust your path:
    # At localhost:~/.profile:
    export PATH=/usr/local/leanxcale/bin:$PATH
To start (with lx in your path):
    lx start

The final messages printed by the install program will remind you of how to list or remove the installed file systems.

2.3. Using LeanXcale Commands

The installation creates the lx command. Use it to execute commands to operate the installed system.

At the end of the installation, the install program prints suggestions for adjusting the PATH variable so that lx will be available as any other command. For example, as done with:

export PATH=/usr/local/leanxcale/bin:$PATH

when the install directory is /usr/local/leanxcale.

The lx command can be found at the bin directory in the install directory. For example, at /usr/local/leanxcale/bin/lx when installing at /usr/local/leanxcale.

In what follows, we assume that lx can be found using the system PATH.

The first command used is usually the one to install a license file:

unix$ lx license -f lxlicense

It is also possible to put the license file at ~/.lxlicense, and the install process will find it and install it on the target system(s).
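The lookup order described above (an explicitly given file first, falling back to ~/.lxlicense) can be sketched as follows; the helper name is hypothetical:

```shell
#!/bin/sh
# Pick the license file to install: an explicitly given path first,
# falling back to ~/.lxlicense. Mirrors the lookup described above;
# the helper itself is illustrative.
license_file() {
    if [ -n "$1" ]; then
        [ -f "$1" ] && echo "$1" && return 0
        echo "no such license file: $1" >&2
        return 1
    fi
    if [ -f "$HOME/.lxlicense" ]; then
        echo "$HOME/.lxlicense"
    else
        echo "no license file found" >&2
        return 1
    fi
}

# Typical use (with lx in the PATH):
#   lx license -f "$(license_file)"
```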

2.4. Multiple Hosts and Replicated Installs

To install at multiple hosts, you may indicate target host names and install paths in the command line, like in the following example:

unix$ lxinst mariner:/opt/leanxcale atlantis:/usr/local/leanxcale

or simply provide the target hostnames:

unix$ lxinst mariner atlantis

When using a configuration file, simply define multiple hosts:

# xample.conf
host mariner
host atlantis

and use such file to install:

unix$ lxinst -f xample.conf

To specify the number of components, we can still use the command line, as in:

unix$ lxinst atlantis kvds x 2 mariner kvds x 2

or

# xample.conf
host atlantis
	kvds x 2
host mariner
	kvds x 2

For high availability (HA), it is possible to install using replicated components. As a convenience, replication is configured using mirror hosts where a host can be installed as a mirror of another to replicate its components. Either all hosts must have mirrors, or none of them.

To install using mirrors, for high availability, combine host names in the command line using “+”, like in:

unix$ lxinst atlantis+mariner:/usr/local/leanxcale

or in

unix$ lxinst atlantis+mariner

This configures mariner as a mirror for atlantis.

When using a configuration file, use the mirror property to specify that a host is a mirror for another. In our example:

# xample.conf
host atlantis
	kvds x 2
host mariner
	mirror atlantis

The mirror host should not specify anything other than its mirror property. It takes its configuration from the mirrored host.

3. Docker Installs

3.1. Docker Installs with no Preinstalled Image

When a preinstalled leanXcale docker image is available, disregard this section and proceed as described in the next one.

3.1.1. Prerequisites

To install, you need:

  • A valid LeanXcale license file

  • The LeanXcale zip distribution

  • A Linux system with

    • docker version 20.10.6 or later.

    • python 3.7 or later.

  • Access to the internet to permit docker to download standard docker images and software packages for them.

Your sales representative will provide you with a valid license file. In case you have any issues with it, please send an email to sales@leanxcale.com.

Download the latest LeanXcale zip distribution from:

https://artifactory.leanxcale.com/artifactory/lxpublic

There is a zip file per version.

3.1.2. Install

Working on the directory where the distribution has been downloaded, unzip the LeanXcale zipped distribution:

unix$ unzip lx.2.3.232129.zip
Archive:  lx.2.3.232129.zip
   creating: lxdist/
  inflating: lxdist/lxinst.v2.3.port.tgz
  inflating: lxdist/lxavatica.v2.3.libs.tgz
  ...

Verify that the process completed successfully and there is a directory lxdist (with the distribution packages) and the lxinst program:

unix$ ls
lx.2.3.232129.zip  lxdist  lxinst

To create a docker image for leanXcale, use the following command:

unix$ lxinst docker
sysinfo lx1...
install #cfgfile: argv docker...
...
install done
docker images:
REPOSITORY   TAG       IMAGE ID       CREATED         SIZE
uxbase       2         434dfeaedf0c   3 weeks ago     1.06GB
lx           2         6875de4f2531   4 seconds ago   1.33GB

docker network:
NETWORK ID     NAME          DRIVER    SCOPE
471c52155823   lxnet         bridge       local
to start:
    docker run -dit --name lx1 --network lxnet lx:2 lx1

The image created is named lx:2. It is configured to run a container with hostname lx1 on a docker network named lxnet.

To list the image, we can execute:

unix$ docker images lx
REPOSITORY   TAG       IMAGE ID       CREATED              SIZE
lx           2         6875de4f2531   About a minute ago   1.33GB

And, to list the networks, we can execute:

unix$ docker network ls
NETWORK ID     NAME          DRIVER    SCOPE
471c52155823   lxnet         bridge       local

The created image is a single one for all containers. The name given when creating the container determines the host name used (in this example, lx1).
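The relation between the container name and the host name can be made explicit with a small wrapper (illustrative; the names, network, and image tag are the ones from the example above, and the wrapper only prints the command instead of running it):

```shell
#!/bin/sh
# Build the docker run command for a LeanXcale container. The container
# name doubles as the host name inside it, so it appears both after
# --name and as the trailing image argument.
# Illustrative wrapper; it prints the command instead of running it.
lxrun_cmd() {
    name=$1 image=${2:-lx:2} net=${3:-lxnet}
    echo "docker run -dit --name $name --network $net $image $name"
}

lxrun_cmd lx1
# → docker run -dit --name lx1 --network lxnet lx:2 lx1
```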

Before using the system, a license file must be installed on each container created. This is explained later.

To remove the image when so desired, use this command:

unix$ docker rmi lx:2

3.2. Docker Installs with Preinstalled Image

3.2.1. Prerequisites

To install, you need:

  • A valid LeanXcale license file

  • The LeanXcale docker image

  • A Linux system with

    • docker version 20.10.6 or later.

    • python 3.7 or later.

Your sales representative will provide you with a valid license file. In case you have any issues with it, please send an email to sales@leanxcale.com.

Download the latest LeanXcale docker image from:

https://artifactory.leanxcale.com/artifactory/lxpublic/

There is a docker image file per version.

3.2.2. Install

Working on the directory where the image file has been downloaded, add the image to your docker system:

unix$ docker load --input lx.2.3.docker.tgz
Loaded image: lx:2

Double check that the image has been loaded:

unix$ docker images lx:2
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
lx           2         bd350f734448   28 hours ago   1.33GB

Create the docker network lxnet:

unix$ docker network create --driver bridge lxnet

The image is a single one for all containers. The name given when creating the container determines the host name used (in this example, lx1).

Before using the system, a license file must be installed on each container created. This is explained in the next section.

To remove the image when so desired, use this command:

unix$ docker rmi lx:2

3.3. Running leanXcale Docker Containers

This section explains how to create containers from leanXcale images and how to install licenses and change the lxadmin password for them.

Provided the leanXcale image named lx:2, and the docker network lxnet, this command runs a leanXcale docker container:

unix$ docker run -dit --name lx1 --network lxnet -p0.0.0.0:14420:14420 lx:2 lx1
b28d30702b80028f8280ed6c55297b23203540387d3b4cfbd52bc78229593e27

In this command, the container name is lx1, the network used is lxnet, and the image used is lx:2. The port redirection -p…​ exports the SQL port to the underlying host.

It is important to know that:

  • starting the container will start leanXcale if a valid license was installed;

  • stopping the container should be done after stopping LeanXcale in it.

The container name (lx1) can be used to issue commands. For example, this removes the container after stopping it:

unix$ docker rm -f lx1

The installed container includes the lx command. Use it to execute commands to operate the installed DB system.

It is possible to attach to the container and use the lx command as it can be done on a bare metal host install:

unix$ docker attach lx1
lx1$ lx version
...

Here, we type docker attach lx1 on the host, and lx version on the docker container prompt.

Note that if you terminate the shell reached when attaching to the docker container, the container will stop. Usually, this is not desired.

It is possible to execute commands directly on the running container. For example:

unix$ docker exec -it lx1 lx version

executes lx version on the container.

3.3.1. Setting up a License and Admin Password

Starting the container starts LeanXcale as well, unless no license is installed. In that case, we must install a license file in the container.

To install a license file, it must be copied to the container as shown here:

unix$ docker cp ./lxlicense lx1:/usr/local/leanxcale/.lxlicense
unix$ docker exec -it lx1 sudo chown lx /usr/local/leanxcale/.lxlicense

To change the password for the lxadmin user, do this after starting the database:

unix$ docker exec -it lx1 lx kvcon addusr lxadmin
pass? *****

3.3.2. Stopping the Container

The docker command to stop a container may not give enough time for leanXcale to stop. First, stop leanXcale:

unix$ docker exec -it lx1 lx stop

And now the container may be stopped:

unix$ docker stop lx1
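The two steps can be combined into a small wrapper that enforces the ordering (illustrative; with DRYRUN=1 it prints the commands instead of executing them):

```shell
#!/bin/sh
# Stop LeanXcale inside the container first, then the container itself,
# enforcing the ordering described above. Illustrative wrapper: with
# DRYRUN=1 it prints the commands instead of executing them.
lxstop_container() {
    name=$1
    if [ "$DRYRUN" = 1 ]; then
        echo "docker exec -it $name lx stop"
        echo "docker stop $name"
    else
        docker exec -it "$name" lx stop
        docker stop "$name"
    fi
}

DRYRUN=1 lxstop_container lx1
# → docker exec -it lx1 lx stop
#   docker stop lx1
```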

4. AWS Installs

AWS Installs require installing the AWS command line client, used by the installer. For example, on Linux:

unix$ sudo apt-get install -y awscli

You need a PEM file to be used for accessing the instance, once created. The PEM file and the public key for it can be created with this command:

unix$ ssh-keygen -t rsa -m PEM -C "tkey" -f tkey

Here, we create a file tkey with the private PEM and a file tkey.pub with the public key. Before proceeding, rename the PEM file to use .pem:

unix$ mv tkey tkey.pem
unix$ chmod 400 tkey.pem
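These two steps can be wrapped in a small helper (hypothetical; it only performs the rename and permission change shown above on an existing key file):

```shell
#!/bin/sh
# Rename a freshly generated private key to <name>.pem and restrict its
# permissions, as needed before passing it to lxinst -K.
# Illustrative helper wrapping the steps shown above.
pem_prep() {
    key=$1
    [ -f "$key" ] || { echo "no key file: $key" >&2; return 1; }
    mv "$key" "$key.pem"
    chmod 400 "$key.pem"
    echo "$key.pem"
}
```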

When installing, supply the path to the PEM file (without the .pem extension) to flag -K, so that lxinst can find it and use its base file name as the AWS key pair name.

Now, define your access and secret keys for the AWS account:

unix$ export AWS_ACCESS_KEY_ID=___your_key_here___
unix$ export AWS_SECRET_ACCESS_KEY=___your_secret_here___
...

To install the public distribution, go to

https://artifactory.leanxcale.com/artifactory/lxpublic

and download the zip for the latest version (for example, lx.2.3.232129.zip).

Before extracting the zip file contents, make sure that there is no lxdist directory from previous versions, or it will include packages that are not for this one.

unix$ rm -f lxdist/*

Extract the zip file:

unix$ unzip lx.2.3.232129.zip
Archive:  lx.2.3.232129.zip
   creating: lxdist/
  inflating: lxdist/lxinst.v2.3.port.tgz
  inflating: lxdist/lxavatica.v2.3.libs.tgz
  ...

Verify that the process completed successfully and there is a directory lxdist (with the distribution packages) and the lxinst program:

unix$ ls
lx.2.3.232129.zip  lxdist  lxinst

To install in AWS, use lxinst using the aws property and the -K option to supply the key file/name to use. For example:

unix$ lxinst -K tkey aws
...
aws instance i-03885ca519e8037a1 IP 44.202.230.8
...
aws lx1 instance id i-0d2287deeb3d45a82

Or, specify one or more properties to set the region, instance type, disk size, and AWS tag. The disk size is in GiB.

The tag is a name and should not include blanks or dots. It will be used for .aws.leanxcale.com domain names.

For example:

unix$ lxinst -K tkey aws awsregion us-west-1 awstype t3.large \
	awsdisk 30 awstag 'client-tag'
...
config: lxinst.conf
install done.

	# To remove resources:
		./lxinst.uninstall
	# To dial the instances:
		ssh -o StrictHostKeyChecking=no -i tkey.pem lx@18.232.95.2

It suffices to specify the key and give a tag, so this command works:

unix$ lxinst -K tkey awstag xample

When installing using an awstag, domain names are set for installed instances as a convenience. Each instance is named with the tag and a number. For example, with the tag xample, the first instance is xample1.aws.leanxcale.com, the second instance in the same install (if any) is xample2, etc.
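The naming scheme can be sketched as follows (illustrative; the real DNS names are registered by the installer):

```shell
#!/bin/sh
# Derive the DNS names assigned to the instances of an install with a
# given awstag: <tag><n>.aws.leanxcale.com, numbering from 1.
# Illustrative; the real names are registered by the installer.
aws_names() {
    tag=$1 count=$2
    i=1
    while [ "$i" -le "$count" ]; do
        echo "$tag$i.aws.leanxcale.com"
        i=$((i + 1))
    done
}

aws_names xample 2
# → xample1.aws.leanxcale.com
#   xample2.aws.leanxcale.com
```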

Arguments follow the semantics of a configuration file. Therefore, if a host name is specified, it must come after global properties.

Try to follow the conventions and call lx1 the first host installed, lx2 the second one, etc. Also, do not specify directories and similar attributes; leave them to the AWS install process.

If something fails, or once the install completes, check for a lxinst.uninstall command, created by the install process, to remove the allocated resources when so desired.

The detailed configuration file built by the install process is saved at lxinst.conf as a reference. This is not a configuration file written by the user, but a configuration file including all the install details. This is an example:

#cfgfile lxinst.conf
awstag client-tag
awsvpc vpc-0147a6d8e9d6e6910
awssubnet subnet-0b738237bbce66037
awsigw igw-0a160524cf4edab30
awsrtb rtb-040894a1050b58a3f
awsg sg-05ee9e0232599026d
host lx1
	awsinst i-0e0167a4233f76bb1
	awsvol vol-01629a4b3694bb5fd
	lxdir /usr/local/leanxcale
	JAVA_HOME /usr/lib/jvm/java-1.11.0-openjdk-amd64
	addr 10.0.120.100
	kvms 100
		addr lx1!14400
	lxmeta 100
		addr lx1!14410
	lxqe 100
		addr lx1!14420
		mem 1024m
	kvds 100
		addr lx1!14500
		mem 1024m

In the installed system, the user running the DB is lx, and LeanXcale is added to the UNIX system as a standard service (disabled by default). The instance is left running. You can stop it on your own if that is not desired.

The command lxinst.uninstall, created by the installation, can be used to remove the created AWS resources:

unix$ lxinst.uninstall
...

To use an instance (and host) name other than lx1, supply your desired host name. Use a plain name, without dots or special characters. For example:

unix$ lxinst -K tkey aws lxhost1 lxhost2
...

creates a network and two instances named lxhost1 and lxhost2, and leaves them running.

The tradition is to use names lx1, lx2, etc. for the installed hosts.

Once installed, use lx as in any other system:

unix$ ssh -o StrictHostKeyChecking=no -i tkey.pem lx@44.202.230.8
lx1$ lx stop -v

When installing using a tag, DNS names can be used. The installer output reminds you of this at the end of the install process:

unix$ lxinst -K tkey awstag xample lxhost1 lxhost2
...
open ports: 14420
install done.

	# To remove resources:
		./lxinst.uninstall
	# To dial the instances:
		ssh -o StrictHostKeyChecking=no -i xxkey.pem lx@98.81.32.197
		ssh -o StrictHostKeyChecking=no -i xxkey.pem lx@18.204.227.103
	# or
		ssh -o StrictHostKeyChecking=no -i xxkey.pem lx@xample1.aws.leanxcale.com
		ssh -o StrictHostKeyChecking=no -i xxkey.pem lx@xample2.aws.leanxcale.com

By default, AWS installs use compression but not encryption. To change this, use the compress and/or the crypt global property with values yes or no as desired. For example:

unix$ lxinst -K tkey aws crypt lxhost1 lxhost2
...

installs with encryption enabled, and this disables compression:

unix$ lxinst aws -K tkey compress no lxhost1 lxhost2
...

4.1. The lx System Service

AWS installs create a systemd system service for LeanXcale. The service is installed as a disabled service.

This is meant to be used only for single host installs.

When multiple instances are used, instances must be started before using lx start to start the system, and lx stop must be used to stop the system before stopping the instances.

This section is here as a reference, and also because it might be useful on single-host installs.

For example:

unix$ ssh -o StrictHostKeyChecking=no -i tkey.pem lx@44.202.230.8
lx$ systemctl status lx
○ lx.service - LeanXcale
     Loaded: loaded (/etc/systemd/system/lx.service; disabled;
     Active: inactive (dead)

In this case, and the rest of this section, we use commands running on the installed AWS instance.

To enable the service, we can:

lx$ sudo su
root@lx1# systemctl enable lx
Created symlink /etc/systemd/system/multi-user.target.wants/lx.service → /etc/systemd/system/lx.service.

To disable it again:

lx$ sudo su
root@lx1# systemctl disable lx
Removed /etc/systemd/system/multi-user.target.wants/lx.service.

To start the service by hand:

lx$ sudo su
root@lx1# systemctl start lx

And now we can see its status:

lx$ systemctl status lx
● lx.service - LeanXcale
     Loaded: loaded (/etc/systemd/system/lx.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2024-08-12 07:19:48 UTC; 1min 14s ago
    Process: 6822 ExecStart=sh -c /usr/local/leanxcale/bin/lx start -l -v |tee /usr/local/leanxcale/log/start.log (code=exited, status=0/SUCCESS)
      Tasks: 116 (limit: 9355)
     Memory: 3.6G
        CPU: 13.370s
     CGroup: /system.slice/lx.service
             ├─6828 bin/spread -c lib/spread.conf
             ├─6829 bin/kvlog -D log/spread
     ...
Aug 12 07:19:48 lx1 sh[6824]: bin/kvds: started pid 6836
Aug 12 07:19:48 lx1 sh[6824]: bin/kvds -D -c 1536m ds101@lx1!14504 /usr/local/leanxcale/disk/kvds101 ...
...

The DB status should now be as follows:

lx$ lx status
status: running

When more than a single instance is installed, the service must be enabled, disabled, started, and/or stopped on all instances involved.

Note that enabling the service makes the DB start when the instance starts, and stop before the instance stops.

This is done by each instance on its own. At start time, the instance runs

lx$ lx start -l

and at stop time, the instance runs

lx$ lx stop -l

The last start and stop output is saved at /usr/local/leanxcale/log/start.log, for inspection.
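Putting these pieces together, the service unit behind this behavior is roughly of the following shape. This is a sketch reconstructed from the status output and the start/stop commands above; Type and RemainAfterExit are assumptions, and the installed unit may differ in details:

```
[Unit]
Description=LeanXcale

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=sh -c '/usr/local/leanxcale/bin/lx start -l -v | tee /usr/local/leanxcale/log/start.log'
ExecStop=/usr/local/leanxcale/bin/lx stop -l

[Install]
WantedBy=multi-user.target
```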

It might be more convenient to leave the service disabled and, after starting the instances for the install, use lx start command to start the system.

4.2. Listing and handling AWS resources (lxaws)

The program lxaws is used to list or remove AWS installs, and to start and stop instances and find their status.

usage: lxaws [-h] [-e] [-v] [-d] [-D] [-r region] [-n] [-askpeer] [-yespeer]
             [-netpeer] [-delpeer] [-p] [-o] [-c] [-status] [-start] [-stop]
             [tag [tag ...]]

lx AWS cmds

positional arguments:
  tag         aws tag and/or command args

optional arguments:
  -h, --help  show this help message and exit
  -e          define env vars
  -v          verbose
  -d          remove resources
  -D          enable debug diags
  -r region   AWS region
  -n          dry run
  -askpeer    ask peer: tag owner reg vpc
  -yespeer    accept peer:tag pcxid
  -netpeer    set peer net: tag pcxid cidr sec
  -delpeer    del peer: pcxid
  -p          print open ports
  -o          open ports: tag proto port0 portn cidr name
  -c          close ports: tag proto port cidr
  -status     show instance status: tag [host]
  -start      start instances: tag [host]
  -stop       stop instances: tag [host]

Given a region, without any tags, it lists the tags installed:

unix$ lxaws -r us-east-1
xtest.aws.leanxcale.com

Given a tag, it lists the tag resources as found on AWS:

unix$ lxaws -r us-east-1 xtest.aws.leanxcale.com
#xtest.aws.leanxcale.com:
	vpc vpc-0bb89fa4f83fc69c6
	subnet subnet-0b5fb20a5372f89da
	igw igw-08c3cdec1dc865b84
	rtb rtb-0e40ace79169b2e08
	assoc rtbassoc-0248017196d4be19c
	sec sg-028614274a930d0ef
	inst i-041b70633666af01b	xtest1.aws.leanxcale.com	18.209.59.230
	vol vol-04310af65774fc5e7

It is also possible to supply just the base tag without the domain, as in

unix$ lxaws -r us-east-1 xtest

With flag -e, lxaws prints commands to set environment variables for the resources found, as an aid to running other scripts.

unix$ lxaws -e -r us-east-1 xtest.aws.leanxcale.com
#xtest.aws.leanxcale.com:
	export vpc='vpc-0bb89fa4f83fc69c6'
	export subnet='subnet-0b5fb20a5372f89da'
	export igw='igw-08c3cdec1dc865b84'
	export rtb='rtb-0e40ace79169b2e08'
	export assoc='rtbassoc-0248017196d4be19c'
	export sec='sg-028614274a930d0ef'
	export inst='i-041b70633666af01b'
		export addr='peer11.aws.leanxcale.com'
	export vol='vol-04310af65774fc5e7'

When more than one tag is asked for, or more than one instance/volume is found, variable names are made unique by adding a number to the name, for example:

unix$ lxaws -e peer1 peer2
#peer1.aws.leanxcale.com:
	export vpc0='vpc-0a50a6e989aa9da9a'
	export subnet0='subnet-0d9fd3a7d03eca61b'
	export igw0='igw-0af4279169fd8cab6'
	export rtb0='rtb-0f5d93a83239c3ada'
	export assoc0='rtbassoc-0e7d0f74cd780e121'
	export sec0='sg-01afb3d3c985f7881'
	export inst0='i-072326e86bcc77e9f'
		export addr0='peer11.aws.leanxcale.com'
	export vol0='vol-08ed1c4acdc0eae61'
#peer2.aws.leanxcale.com:
	export vpc1='vpc-023ce3e3c47bbb48f'
	export subnet1='subnet-0f9af7190d758d6d6'
	export igw1='igw-0ae3c860a69969a83'
	export rtb1='rtb-0d45f2059b4696cf4'
	export assoc1='rtbassoc-0a365cb4472f0b89e'
	export sec1='sg-04b122e86debcd735'
	export inst1='i-0b9cf8cff4b46d657'
		export addr1='peer21.aws.leanxcale.com'
	export vol1='vol-0f9f344a3a2b9bf38'

With flag -d, it removes the resources for the tags given. In this case, tags must be given explicitly in the command line.

unix$ lxaws -d -r us-east-1 xtest.aws.leanxcale.com

4.3. AWS Ports

To list, open, and close ports exported by the AWS install to the rest of the world, use lxaws flags -p (print ports), -o (open ports), and -c (close ports).

In all cases, the first argument is the tag for the install. The tag can be just the AWS install tag name, without .aws.leanxcale.com.

For example, this command lists the open ports:

unix$ lxaws -p xample.aws.leanxcale.com
port: web:	tcp 80	0.0.0.0/0
port: ssh:	tcp 22	0.0.0.0/0
port: comp:	tcp 14420	0.0.0.0/0

Here, the protocol and port (or port range) are printed for each set of open ports. The CIDR printed shows the IP address range that can access the ports; it is 0.0.0.0/0 when anyone can access them.

Before each port range, the name of the open port range is printed. This name is set by the default install, and can be set when opening ports as shown next.

To open a port range, use -o and give as arguments the tag for the install, the protocol, the first and last port in the range, the CIDR (or any if open to everyone), and a name to identify why this port range is open (no spaces). For example:

unix$ lxaws -o xample.aws.leanxcale.com tcp 6666 6666 any opentest

The new port will be shown as open if we ask for ports:

unix$ lxaws -p xample.aws.leanxcale.com
port: web:	tcp 80	0.0.0.0/0
port: opentest:	tcp 6666	0.0.0.0/0
port: ssh:	tcp 22	0.0.0.0/0
port: comp:	tcp 14420	0.0.0.0/0

As another example:

unix$ lxaws -o xample tcp 8888 10000 212.230.1.0/24 another
unix$ lxaws -p xample
port: web:	tcp 80	0.0.0.0/0
port: ssh:	tcp 22	0.0.0.0/0
port: another:	tcp 8888-10000	212.230.1.0/24
port: comp:	tcp 14420	0.0.0.0/0

To close a port, use -c and give as arguments the tag for the install, the protocol, a port within the range of interest, and the CIDR used to export the port. Note that any can be used here too instead of the CIDR 0.0.0.0/0. For example:

unix$ lxaws -vc xample tcp 6666 any
searching aws...
close ports tcp 6666-6666 to 0.0.0.0/0

Here, we used the -v flag (verbose) to see what is going on.

As another example, this can be used to close the open port range 8888-10000 from the example above:

unix$ lxaws -c xample tcp 9000 212.230.1.0/24
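The any shorthand accepted by -o and -c simply stands for the open CIDR; the normalization can be sketched like this (an illustrative sketch, not the actual lxaws code):

```shell
#!/bin/sh
# Normalize the CIDR argument accepted by lxaws -o and -c: the word
# "any" stands for 0.0.0.0/0; anything else is passed through.
# Illustrative sketch, not the actual lxaws implementation.
cidr_arg() {
    case $1 in
    any) echo "0.0.0.0/0" ;;
    *)   echo "$1" ;;
    esac
}

cidr_arg any              # → 0.0.0.0/0
cidr_arg 212.230.1.0/24   # → 212.230.1.0/24
```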

4.4. AWS Instance status

To find out the instance statuses for a given AWS tag, use the -status flag for lxaws:

unix$ lxaws -status lxi24
#lxi24.aws.leanxcale.com:
	inst i-05e0708c0e4965ef0	lxi241.aws.leanxcale.com	stopped
	inst i-02bbf1473c01ea6ae	lxi242.aws.leanxcale.com	44.203.139.222	running

With just a tag name, it lists the instances for the given tag name.

Supplying extra arguments with a host name, IP, or instance ID, lists just the status for the given instances.

unix$ lxaws -status lxi24 lxi241
#lxi24.aws.leanxcale.com:
	inst i-05e0708c0e4965ef0	lxi241.aws.leanxcale.com	stopped

4.5. AWS Instance start/stop

Flags -start and -stop in lxaws can be used to start/stop instances. Their use is similar to that of the -status flag. Given a tag, all instances are started/stopped. Given a tag and a host (or more hosts), matching instances are started/stopped.

For example:

unix$ lxaws -stop lxi24
#lxi24.aws.leanxcale.com:
	inst i-05e0708c0e4965ef0	lxi241.aws.leanxcale.com	stopped

Note that this does not start/stop the DB in the instances. The command just starts/stops the instances.

Once instances are started, connect to one of them and use lx start to start the DB.

Before stopping the instances, connect to one of them and use lx stop to stop the DB first.

4.6. AWS VPC Peering Connections

Peering connections can be used to bridge two VPCs at AWS.

One peer asks the other peer to accept a peering connection, the peer accepts the connection, and network routes and security group rules for access are configured.

Peering connections are handled using lxaws with the peering connection flags. If you do not have lxaws, copy lxinst to a file named lxaws and give it execution permissions.

These are the flags for peering connections:

  -askpeer    ask peer: tag owner reg vpc
  -yespeer    accept peer:tag pcxid
  -netpeer    set peer net: tag pcxid cidr sec
  -delpeer    del peer: pcxid

  • With flag -askpeer, lxaws requests a VPC peering connection.

  • With flag -yespeer, lxaws accepts a VPC peering request.

  • With flag -netpeer, lxaws sets up the routes and port access rules.

  • With flag -delpeer, lxaws removes a peering connection.

To request a peering connection, supply as arguments

  • the tag for the installed system where to set up a peer VPC.

  • the peer AWS owner id (user id).

  • the peer region

  • the peer VPC id

For example:

unix$ lxaws -askpeer  xample 232967442225 us-east-1 vpc-0cf1a3b5c1232d172
peervpc pcx-06548783d83ddaba9

Here, we could have used xample.aws.leanxcale.com instead. The command prints the peering connection identifier, to be used for setting up networking and asking the peer administrator to accept the peering request.

To accept a peer request, supply as an argument the peering connection id, as in:

unix$ lxaws -yespeer pcx-06548783d83ddaba9

In either case, once the dialed peer accepts the request, networking must be set up, supplying as arguments

  • the tag for the installed system where to set up a peer VPC.

  • the peering connection identifier

  • the peer CIDR block

  • the peer security group id

For example:

unix$ lxaws -netpeer xample pcx-06548783d83ddaba9 10.0.130.0/24 sg-0f277658c2328a955

Our local CIDR block is 10.0.120.0/24. This must be given to the peer, along with our VPC id and our security group id, so the peer system can set up routing for this block to our network.

This information can be retrieved using lxaws as described in the previous section. For example:

unix$ lxaws xample.aws.leanxcale.com
#xample.aws.leanxcale.com:
	vpc vpc-0f69c4a92b0a78523
	peervpc pcx-0e88c49635ed2e59e
	subnet subnet-0ca70fab7476c2a04
	igw igw-0cd6a7bfa99981659
	rtb rtb-06e4994a57d37a054
	assoc rtbassoc-022b54b9a0216e9f5
	sec sg-0df3a6ec01a4ee5ee
	inst i-0b8144548f0e0f1d8	peer11.aws.leanxcale.com	44.200.78.14
	vol vol-0c40de22d908e40bd

Should it be necessary, the CIDR block used by the install can be set when installing the system (but not later), using the property awsnet, as in

unix$ lxinst -K mykey aws awsnet 10.0.120 awstag xample

Note that only the network address bytes used are given, instead of using 10.0.120.0/24.

Once done with a peering connection, it can be dismantled by supplying both the tag and the peering connection identifier. The identifier is always given explicitly because, when accepting a peering request, the peering connection does not belong to us; it can be retrieved easily using the command above.

When peering is no longer desired, the peering connection can be removed:

unix$ lxaws -delpeer xample pcx-005c2b84b89377737

Removing the peering connection also removes the routing entries for it and the security group access rules added when it was setup.

5. HP Greenlake Installs

We currently support VM installs at HP Greenlake. To install, first create the VM and then install inside it as in bare metal, following the steps in Bare Metal Installs. The administration of the database instance is the same as for bare metal; just follow the bare metal sections in the administration guide.

6. High Availability (Active-Active Replication)

7. Licenses

To check for license status or to install a new license you can use the lx license command.

For a local installation, use just

unix$ lx license
	license expires: Mon Dec 30 00:00:00 2024

For docker installs, each container must include its own license. The container does not start the DB unless a valid license is found. But the container must be running in order to check the license status and to install new licenses. Refer to the section on starting docker containers for help on that.

For example, to list the license status for the container lx1 we can run

unix$ docker exec -it lx1 lx license
lx1 [
	kvcon[1380]: license: no license file
  failed: failed: status 1
]
failed: status 1

To install a new license, just copy the license file to the container as shown here

unix$ docker cp ~/.lxlicense lx1:/usr/local/leanxcale/.lxlicense
unix$ docker exec -it lx1 sudo chown lx /usr/local/leanxcale/.lxlicense

The license status should be ok now:

unix$ docker exec -it lx1 lx license
	license expires: Mon Dec 30 00:00:00 2024

8. Customizing Installs

8.1. Predefined Installs

By default, lxinst installs components to exploit the machine(s) used for the install.

It is possible to perform small installs using flag -s like in

unix$ lxinst -s /usr/local/leanxcale

And, it is possible to perform medium size installs using flag -m, as in

unix$ lxinst -m /usr/local/leanxcale

A small install limits the number of kvds to 4, and per-component memory to 1GiB.

A medium install limits the number of kvds to 8, but does not limit per-component memory and tries to use the default allocation policy.

8.2. Custom Install Configurations

Installs are configured using a file that describes the hosts and components installed. The file may include properties for the whole installation, or for individual hosts and components.

To perform a custom installation, use a configuration file to adapt the default installation to your needs, resources, and requirements.

To install using a configuration file, pass it to lxinst with the -f flag, as in:

unix$ lxinst -f myinstall.conf

8.3. Configuration File

The configuration file has an initial section describing global properties, then a series of hosts each one including a series of properties and a series of components with properties.

The file consists of plain text lines. The character # can be used to add comments (the rest of the line is ignored). Each line declares a property, perhaps with a value, or starts a host or component declaration.

Lines declaring hosts start with host, those declaring components start with a component type name. All other lines declare properties.

A host configuration starts using the host keyword and the name for the host (which usually is the host name used to reach the host on the network). It is recommended to use the host name instead of using localhost.

A property is declared with a single line with the property name, then blanks, and then the property value. If the property has no value, it is considered as a boolean and it is set to true.
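
For example, the following lines (using properties that appear later in this guide) show both forms:

```
# property with a value: 2 GiB of memory
mem 2G
# boolean property: no value given, so it is set to true
tls
```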

Properties declared before the first host declaration are global, and may be overridden later by properties defined within a host or component.

All declarations after a host declaration are defined for that host, until another host declaration is found.

All properties defined after a component declaration, and before another one is declared, apply to that component.

Those required components and properties not specified by the user are added by the install program with default values.

Variables HOME and USER may be used when defining properties. The variable LXDIR may be used as well and refers to the install (or disk) directory for the host or component.

A component declaration may be repeated by adding the option x N to specify how many instances are wanted. When not used, each component declared stands for a single instance, but many of the same type can be added for components like kvds.

For docker installs, specify docker before the first host.

For example, when installing at the host named atlantis, this can be a configuration file:

# global properties
lxdir   $HOME/leanxcale
# one host
host atlantis
    # one component
    lxqe
        # properties for this component
        addr atlantis!3444
        mem 2G
    # another component kvds with 4 instances
    kvds x 4
        mem 1G

In this file, the property lxdir (leanXcale install directory) is defined as a global property, and applies to all hosts defined.

Then, a single host (atlantis) is defined. When no host is configured, localhost is used as the default.

For the configured host, two of the required components are configured:

  • For the lxqe component, the network address for the service is given (host name and port), and the memory size is set to 2GiB (Gibibyte, i.e. 1024 MiB).

  • For the kvds component, four instances are configured (x 4), each one with 1GiB of memory.

Note that components required (e.g. lxmeta) but not specified are added by the install program itself.

8.4. Docker configuration files

For a docker installation, specify docker and avoid specifying specific paths or hostnames for the installed system. For example:

docker
host lx1
    kvds x2

8.5. Custom Memory Allocation

The installer has a default memory allocation split among components. To specify the memory size for the lxqe and kvds components, include them in the configuration file with the mem attribute set for them, as done here:

host atlantis
    lxqe
        mem 4G
    kvds x4
        mem 1G

It is suggested not to use more memory in total than the physical memory available on the system (minus that used by the operating system, leaving a small amount unused for safety).

We suggest giving half the memory to the lxqe component and distributing the other half evenly among the kvds components.
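
As a worked example of this rule of thumb, on a machine with 16 GiB to devote to the database, half (8 GiB) would go to lxqe and the other half would be split evenly among, say, four kvds instances (2 GiB each):

```
host atlantis
    lxqe
        mem 8G
    kvds x 4
        mem 2G
```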

Note that by default the install tries to use available memory, unless a small or medium install was selected.

8.6. Custom Disk Allocation

To limit disk usage, specify the maximum disk size for the components using significant space on disk (most notably kvds for table storage, and lxqe for the transactional log). For example:

host atlantis
    lxqe
        disk 100G
    kvds x4
        disk 100G

8.7. Using TLS

LeanXcale provides TLS 1.3. By default, communications between client processes and the DB service are not encrypted, for speed. This may be reasonable when the machine(s) involved are controlled by the installer and the network used is secure (e.g., a VPN already encrypting the packets exchanged).

For installs where queries are performed from the internet or from an insecure network, LeanXcale can be installed to always use TLS in communications between the query engine(s) and the DB client processes.

Just set the global tls property when installing.

tls
host atlantis

Note that this can also be done without a configuration file. For example:

unix$ lxinst tls /usr/local/leanxcale

The SQL console for the installed system will use TLS in its connections. Other clients must use the tls=yes property in the connection to ask the client driver to use TLS.

8.8. Using LDAP

Users defined in the DB correspond to schemas and must be created before they can be used.

When not using LDAP, authentication is performed using the credentials given when users are created.

To authenticate users using LDAP, define the LXLDAP property in the configuration file. This can be done globally or as a host property. For example,

LXLDAP	ldap://ldapsrv:389|simple|ou=People,dc=leanxcale,dc=com
host atlantis
    lxqe
        disk 100G
    kvds x4
        disk 100G

arranges for the system to authenticate users (other than lxadmin) using the given LDAP server and properties.

Note that before LDAP users can use LeanXcale, a DB user must be created for them, with the LDAP uid as their name.

8.9. Using PAM

To use UNIX PAM for authentication, define the LXPAM property as yes. This authenticates the DB users using PAM.

Recent Linux systems should include unix_chkpwd(8) to allow applications to check UNIX passwords without access to the /etc/shadow file. Otherwise, the user running the system must be a member of the shadow group or be able to read /etc/shadow.

Note that running as root is not an option; in fact, most systems check the user id and, if correctly implemented, prevent this from working.
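
If unix_chkpwd is not available, one option is to add the DB user to the shadow group. This is a generic UNIX administration step, not a LeanXcale-specific command; assuming the default DB user lx:

```
unix$ sudo usermod -a -G shadow lx
```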

LXPAM	yes
host atlantis
    lxqe
        disk 100G
    kvds x4
        disk 100G

If MFA is to be used with PAM, the Google authenticator PAM module must be installed on the system running the DB:

apt-get install libpam-google-authenticator

The database uses PAM service lxmfa to verify authenticator codes. Add a new PAM configuration file /etc/pam.d/lxmfa, with the following contents

#%PAM-1.0

auth requisite pam_google_authenticator.so no_increment_hotp

@include common-auth

Finally, define the LXUSEMFA property as yes:

LXPAM	yes
LXUSEMFA	yes
host atlantis
    lxqe
        disk 100G
    kvds x4
        disk 100G

Each user must then create an authenticator key (if not already done) as described in the LeanXcale User’s Guide.

8.10. Using Disk Compression and Encryption

LeanXcale uses AES256 GCM for encrypting the storage. To compress and/or encrypt the disk data, define the compress and/or the crypt properties as yes or no. Or define them without a value, meaning yes.

To use this, bare-metal installs require using a partition for installing. It will be set up with a ZFS file system using the compression and encryption settings required.

For AWS installs, it suffices to define the compress and/or the crypt properties, and it is neither needed nor permitted to specify a disk partition for the install. Also, the default in AWS is to compress but not encrypt, unlike in bare metal installs, where the default is neither to compress nor to encrypt.
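
On AWS, for instance, assuming a key named mykey, an install with encryption enabled in addition to the default compression could be requested as:

```
unix$ lxinst -K mykey aws crypt
```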

For example, this configuration installs using /dev/sdc2 with both compression and encryption, mounting the installed partition at /usr/local/leanxcale (the default lxdir):

compress
crypt
host atlantis
    lxpart /dev/sdc2

Note that the compress and crypt properties may be defined for each host instead of being set globally for the whole installation.

It is possible to achieve the same effect from the command line without creating a configuration file:

unix$ ./lxinst compress crypt atlantis:/dev/sdc2:/usr/local/leanxcale
...
install done.

To list installed file systems (on each host):
	zfs list -r lxpool -o name,encryption,compression
To remove installed file systems (on each host):
	sudo zpool destroy lxpool
To run lx commands:
    localhost:/usr/local/leanxcale/bin/lx
Or adjust your path:
    # At localhost:~/.profile:
    export PATH=/usr/local/leanxcale/bin:$PATH
To start (with lx in your path):
    lx start

The final messages printed by the install program remind you of how to list or remove the installed file systems. For example, this can be used to list the file systems created and their settings:

unix$ zfs list -r lxpool -o name,encryption,compression
NAME          ENCRYPTION  COMPRESS
lxpool               off       off
lxpool/disk  aes-256-gcm       on
lxpool/dump  aes-256-gcm       on

Beware that destroying the installed file systems also removes all the data.

8.11. Custom Query Engine Ports

The port used by the query engine to listen for client connections is 14420 by default. To use another port, specify the network address for the lxqe component:

host atlantis
    lxqe
        addr atlantis!3000

But it may be easier to move the base port to a range other than 14000. This can be done by defining the global (or host) property baseport, as done here:

host atlantis
    baseport 10000
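
To see how moving the base port affects each component, here is a quick illustrative sketch (Python, not part of LeanXcale) of the port numbering scheme described in the LeanXcale Ports section:

```python
# Sketch: compute component listen ports from the base port, following
# the scheme described in "LeanXcale Ports".
OFFSETS = {"lxqe": 420, "kvds": 500, "kvms": 400, "lxmeta": 410,
           "spread": 444, "odata": 4}

def port(component: str, base: int = 14000, instance: int = 0) -> int:
    """Each component listens at base + offset; extra instances of the
    same component on a host take the following ports in steps of 4."""
    return base + OFFSETS[component] + 4 * instance

# With the default base, the first lxqe listens at 14420, the second at 14424.
assert port("lxqe") == 14420
assert port("lxqe", instance=1) == 14424
# Moving the base port to 10000 moves lxqe to 10420.
assert port("lxqe", base=10000) == 10420
```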

Refer to the [LeanXcale Components and Ports] section for more details regarding network port usage.

8.12. Custom Optional Components: OpenDATA and Monitoring

To install optional components, specify them in the configuration file. It is customary to add them on the first host installed. For example:

host atlantis
    stats
    odata

adds both stats and odata to the installation. As they are optional components, they are never installed by default.

The OpenDATA port is usually 14004 unless configured otherwise.
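
Since odata listens at the base port plus 4, moving the base port (with the baseport property described earlier) also moves the OpenDATA port. For example, with this configuration odata would listen at port 10004:

```
host atlantis
    baseport 10000
    odata
```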

The default user and password for the stats web page is lxadmin.

8.13. Other Configuration Options

Here we provide more details on configuration files.

This property defines the directory used by the install, and relies on the HOME environment variable on each host involved:

lxdir   $HOME/xamplelx

To use a host just for stats, without any DB component, use only as the value for the stats property:

host atlantis
    # run stats here using other hosts for DB components
    stats only

To include a web interface in your install, add web as a property for the host. The server port is 5000. For example:

host atlantis
    # this enables a web server
    web

In some cases it is desirable to install one host with just the lx command, to operate the rest of the installed hosts (perhaps just during the install process); for example, when installing from a laptop and wanting the lx command on the laptop to operate the set of hosts being installed.

This can be done by adding the nodb attribute to a host. For example:

# localhost is just to run lx cmds w/o db components
host localhost
    lxdir $HOME/xamplelx
    nodb

# installed host
host atlantis
...

Specific ssh and scp commands may be given to reach the installed hosts. Should the installed hosts use different ssh and scp commands to reach each other, the properties sshi and scpi can be used to give them.

This is an example:

host atlantis
    # from the current location:
    ssh     "ssh leandata@lsdclus04.ls.fi.upm.es -p 22124"
    scp     "scp -P 22124 {src} {user}@lsdclus04.ls.fi.upm.es:{dst}"
    # ssh/scp to be used within the installed hosts
    sshi    "ssh {user}@{host}"
    scpi    "scp {src} {user}@{host}:{dst}"

8.14. Transaction log replication

In all configurations, when installing multiple lxqe components, each lxqe keeps a mirror for the transaction log of another one.

The default configuration assigns mirrors from different hosts, and, when not possible, from the same host.

If the number of lxqe components is odd, one of them will not have a mirror, which is often not the wanted configuration.
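
When explicit pairing of transaction log mirrors is wanted, the mirror component property (described in the Configuration Reference Manual) can be used to assign them by hand, as in:

```
host atlantis
    lxqe 100
        mirror lxqe101
    lxqe 101
        mirror lxqe100
```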

This applies only when host replication, described next, is not used.

8.15. Replication

Hosts may be replicated, but either all hosts must have replicas or none of them. To define a replica for a host, add another host and define mirror for it using the replicated host name as the property value. For example:

host blade120
host blade121
host atlantis
    mirror blade120
host blade125
    mirror blade121

configures a system with two replicated hosts. That is, four hosts where two are mirrors of the other two.

In replicated configurations, the set of initial hosts is designated as the first replica (or replica-1) and the mirror hosts are known as the second replica.

Component names in this case include a final .r1 or .r2 to reflect the replica they belong to. For example, kvds100.r1 is likely to be a component in this configuration.

8.16. AWS Options

When using a configuration file, just define the property aws before anything else. You can specify the region, instance type, disk size, and AWS tag. The disk size is in GiB. The tag is a string and should not include blanks.

For example, using this file:

#cfgfile aws.conf
aws
awstype m5.large
awsregion us-east-1
awsdisk 100
host lx1

you can run:

unix$ lxinst -K key -f aws.conf

This can be done also using the command line:

unix$ lxinst  -K xkey aws awsregion us-east-1 awstype m5.large awsdisk 100

The partition mounted at /usr/local/leanxcale for the DB storage is 30GiB by default. You can change this using the awsdisk property. It is formatted using a compressed ZFS.

In the installed system, the user running the DB is lx, and LeanXcale is added to the UNIX system as a standard (disabled) systemd service. The instance is left running. You can stop it on your own if that is not desired.

To install multiple instances, install using multiple host names:

unix$ lxinst -K key aws lx1 lx2
install #cfgfile: argv aws lx1 lx2...
...
aws instance i-03c781120293253fd IP 54.159.42.233
setup instance #1...
...
aws instance i-0d232403b202223e1 IP 3.89.224.85
setup instance #2...
...
install done
created lxinst.uninstall

    #To dial the instances:
        ssh -o StrictHostKeyChecking=no -i lxinst.pem lx@54.159.42.233
        ssh -o StrictHostKeyChecking=no -i lxinst.pem lx@3.89.224.85

By default, private IP addresses for hosts are in 10.0.120.0/24. To use a different network, keep 10.0 and pick your own 3rd byte as in

unix$ lxinst -K xkey aws awsnet 10.0.130

9. Install Examples

Several configuration examples follow, for typical and more elaborate cases. Each example shown is a full configuration file, not a part of a file.

Local install at the default directory (/usr/local/leanxcale):

host localhost

Local install with a single directory:

host localhost
    lxdir /ssd/leandata/xamplelx

It is strongly suggested to use the host name instead of localhost.

Local install using ZFS on the /dev/sdc1 partition with both compression and encryption, mounting it on /usr/local/leanxcale:

host localhost
    compress yes
    crypt yes
    lxpart /dev/sdc1
    lxdir /usr/local/leanxcale

Install at hosts named blade124 and blade123 at the default directory:

host blade124
host blade123

Install at hosts named blade124 and blade123 at the default directory, but using the partitions named for each host and also encryption:

crypt yes
host blade124
    lxpart /dev/sdc1
host blade123
    lxpart /dev/sda1

Install at hosts named blade124 and blade123, default directory, but include monitoring subsystem (runs at the first host):

host blade124
    stats
host blade123

Install at hosts named blade124 and blade123, at /ssd/xamplelx:

lxdir /ssd/xamplelx
host blade124
host blade123

The same install at blade124 and blade123, setting lxdir for each host instead of globally:

host blade124
    lxdir /ssd/xamplelx
host blade123
    lxdir /ssd/xamplelx

Install at host blade124 placing there the kvms, (2 instances of) kvds, lxmeta, and lxqe components. Install a default configuration at blade123:

host blade124
    lxmeta
    lxqe
    kvms
    kvds x 2
host blade123

Install in two docker containers, lx1 and lx2:

docker
host lx1
host lx2

Install the lx command at the local host without DB components, install the host blade124 with a detailed configuration, and make the host blade123 a mirror of the former host (see the comments for details):

# use always ~/xamplelx to install
lxdir   $HOME/xamplelx

# we might just write this property, to configure ssh and scp
# in this example, we add explicit configuration for ssh and scp too.
blades

# How to execute a remote shell from here to machines listed
# Use {host} for the target machine, {user} for the user,
ssh     "ssh {user}@{host}"

# How to copy a file to a remote host from here.
# Use {src} for the source file, {host} for the target machine,
# {user} for the user, and {dst} for the target file.
scp     "scp {src} {user}@{host}:{dst}"

# install just lx on the local host
host localhost
    nodb

# install for the DB host blade124
host blade124
    # no monitoring system included, uncomment next line to include
    #stats

    # use this user on that host (defaults to $USER otherwise)
    user    leandata
    ssh     "ssh leandata@lsdclus04.ls.fi.upm.es -p 22124"
    scp     "scp -P 22124 {src} {user}@lsdclus04.ls.fi.upm.es:{dst}"
    # ssh/scp to be used within the installed hosts
    sshi    "ssh {user}@{host}"
    scpi    "scp {src} {user}@{host}:{dst}"

    # Single kvms
    kvms
        addr blade124!9999

    # 4 different kvds
    kvds x 4
        # put their disks at /ssd/xamplelx
        lxdir /ssd/xamplelx/ds
        # 30GiB for disk
        disk 30
        # 200MiB for mem
        mem 200

    # the lxmeta server
    lxmeta

    # a query engine
    lxqe

# make blade123 a mirror of blade124
host blade123
    mirror blade124

10. Configuration Reference Manual

This section provides a reference for the LeanXcale configuration file.

Any property may be used as a global property (before any host declaration), as a host property (within a host declaration and before any component), or as a component property (within a component declaration).

Here, we describe properties grouped by their usual scope, for your convenience. But note that instead of using mem or any other property within a component, it can be used globally to apply to all components, or within a host to apply to that host.

10.1. Global properties

  • aws: Used as a global property, makes the install create and start an AWS instance using the (only) configured host name (or lx1 by default).

  • awsdisk N: Used as a global property, fixes the disk size for the AWS AMI. 30 GiB by default.

  • awsregion name: Used as a global property, fixes the AWS region used to create the installed AMIs.

  • awstype type: Used as a global property, defines the instance type, by default, m5.large.

  • cache: When specified, the query engine is enabled to handle in-memory table replicas. By default, it is disabled. E.g.:

    host blade124
        lxqe
            cache
  • compress: Used as a global property, makes the install use compression. On non-AWS installs this requires setting lxpart for each host installed.

  • crypt: Used as a global property, makes the install use disk encryption. On non-AWS installs this requires setting lxpart for each host installed.

  • docker: Used as a global property, makes the install adapt to docker install. In this case, starts and stops are handled by starting/stopping docker containers.

  • fwd yes|no: Used as a global property, enables forwarding.

  • lxdir path: Directory used by the install. Components and hosts may override its value. Defaults to /usr/local/leanxcale. E.g.:

    lxdir $HOME/leanxcale
  • size [small|medium|large]: When used as global property selects heuristics to perform a small, medium, or large install. A large install is the default, using the whole machine. A small install tries to use few resources, and a medium install sits in the middle. Using flags -s and -m for lxinst adds this property with the corresponding size.

  • stats [only]: When used as a host property (in one host only), configures hosts to supply statistics and runs prometheus and grafana on this node to gather stats and report them. If the value is only, the host is used just for statistics and DB components use other configured hosts.

  • tls: When used as a global property, makes the installed system use TLS for network exchanges between clients and the installed system.

  • user name: User used by the install. E.g.:

    user nemo
  • LXLDAP uri: URI used to reach the LDAP server for authentication. System users must still be created for LDAP users to be granted access. Defining this property changes the authentication method only. User lxadmin is never authenticated using LDAP.

    LXLDAP	ldap://ldapsrv:389|simple|ou=People,dc=leanxcale,dc=com
  • LXPAM yes: The PAM sudo service is used to authenticate users. Note that using sudo as a service requires the user to be able to read the file /etc/shadow, which is not a good idea.

  • logtrim mode: Mode can be no, ds (or persist), or backup. Sets the policy for removing old DB transaction log entries: no never removes them, ds (or persist) removes them once persisted in the data stores, and backup removes them once backed up. Note that log entries may survive longer than asked for; this is a limit for removal, not a removal request.

10.2. Host properties

  • lxdir path: Directory used by the install in the host. E.g.:

    lxdir $HOME/xamplelx
  • lxpart device: Device for the partition used for the install. E.g.:

    lxpart /dev/sdc1
  • addr address: Network address used (without port).

10.3. Component properties

  • addr address: Network address used. E.g.:

    host atlantis
        kvms
            addr atlantis!14000
  • disk N: Size for the disk used by the component, using M or G as units or assuming GiB by default. E.g.:

    host atlantis
        kvds
            disk 50
        lxqe
            disk 500m
  • mem N: Size for the memory used by the component, using M or G as units or assuming MiB by default. E.g.:

    host atlantis
        kvds
            mem 500
        lxqe
            mem 1g
  • mirror nameid: Used for lxqe components. Specifies the component mirroring the transaction log. E.g.:

    host atlantis
        lxqe 100
            mirror lxqe101
        lxqe 101
            mirror lxqe100
  • LXPULL addr1;addr2;…​;addrn: Used for lxqe components. Makes this system pull transactions made on the query engines found at the given addresses. E.g.:

    host atlantis
        lxqe 100
            mirror lxqe101
        lxqe 101
            LXPULL orion!14420;orion!14424

10.4. Other properties

Properties with upper-case names are exported as environment variables for the process(es) executing the component involved. This includes both global properties and host properties as well. For example, to enable debug flags DOM for the kv datastore:

KVDEBUG DOM

These are other properties not described before.

  • mirror name: Defines a host as a mirror of the named host. Either all hosts must have mirrors or none of them can. (For command line arguments, the syntax is blade124+blade145). E.g.:

    host blade124
    host blade125
        mirror blade124
  • nodb: When specified as a host property, makes this host include just the lx command, but no DB component. E.g.:

    host atlantis
        nodb
  • scp template: Template for the scp command used to remotely copy files to a host. Use {user} and {host} where the user and host address should be included, and {src} and {dst} where the source and destination file names should be placed. E.g.:

    scp scp {src} {user}@{host}:{dst}
  • scpi template: Template for the (internal) scp command an installed host should use to reach other installed hosts. Use the same syntax described above. E.g.:

    scpi scp {src} {user}@{host}:{dst}
  • spread no|port: Port used by the spread process. Use no to disable it. E.g.:

    spread 4444
  • ssh template: Template for the ssh command used to execute remote commands on the host. Use {user} and {host} where the user and host names should be used. E.g.:

    ssh ssh -o StrictHostKeyChecking=no {user}@{host}
  • sshi template: Template for the (internal) ssh command an installed host should use to reach other installed hosts. Use the same syntax described above. E.g.:

    sshi ssh -o StrictHostKeyChecking=no {user}@{host}
  • web [addr]: Run a web server as an interface for running Lx commands.

11. Notice

The LeanXcale system uses spread as its communication bus under the following license:

Version 1.0
June 26, 2001

Copyright (c) 1993-2016 Spread Concepts LLC. All rights reserved.

This product uses software developed by Spread Concepts LLC for use in the Spread toolkit.
For more information about Spread, see http://www.spread.org