Asynchronous Replication
The operational database, due to its criticality, is normally not used for ad hoc analytical queries. Instead, it is convenient to maintain a second database to which all changes performed on the operational database are propagated asynchronously. LeanXcale enables this through its asynchronous replication feature: all changes in one LeanXcale database are exported and propagated to another LeanXcale database. That way, one can have, for instance, an operational database cluster and an analytical database cluster, the second being updated automatically with the changes happening in the operational cluster.
This is configured through an additional LeanXcale component called LXPULL (it pulls changes from another LeanXcale instance). Add the following clause to the configuration file of the remote LeanXcale instance (the one pulling the changes from the operational LeanXcale), once for each query engine on the operational LeanXcale:
lxqe 100 addr atlantis!14420 mem 1024m LXPULL blade161!14424
Asynchronous Replication from Another LeanXcale Instance
It is possible to configure an install so that it pulls changes from a remote one. In this case, the pulling system fetches transactions made on the source system and applies them locally.
Applied transactions are not subject to conflict checks and the like, because the aim is to bring the target system up to date with respect to the source one, and the source already performed those transactions.
The target keeps trying to reach the source system and, when connected, pulls changes and applies them. Should there be any error during an apply, it is considered a fatal error: the query engine where it happened stops and signals the error.
If there are more query engines, the system might continue running, depending on how it has been configured, but it is suggested to use a single query engine on a system pulling changes.
As an example, this configuration file pulls changes from a system installed at hosts orion and rigel:
#cfgfile
host atlantis
lxqe LXPULL orion!14420;rigel!14420
To learn the addresses for the query engines of a particular install, use lx config:
orion$ lx config
It is important to add all the addresses for query engines, or some transactions in the source system might be missed.
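To avoid hand-building the address list, the LXPULL value can be assembled from the addresses reported by lx config on the source system. The semicolon separator and the host!port syntax come from the configuration example above; the function itself is a hypothetical sketch of ours, not part of the lx tooling.

```python
# Hypothetical helper: join the source install's query-engine addresses
# into a single LXPULL property value (semicolon-separated, as in the
# configuration example above).

def lxpull_value(addresses):
    """Build the LXPULL value from all source query-engine addresses."""
    if not addresses:
        raise ValueError("at least one source query-engine address is needed")
    # Every query engine of the source install must be included, or some
    # transactions in the source system might be missed.
    return ";".join(addresses)

print(lxpull_value(["orion!14420", "rigel!14420"]))
# -> orion!14420;rigel!14420
```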
Once running, the lx status command can show that the system is pulling. Use the flag -p to see the status for each process.
atlantis$ lx status -p
status: running
kvds100 alone alive snap 66709999 running
kvms100 alone alive
lxmeta100 alone alive snap 66709999 running
lxqe100 alone alive snap 66736999 pulling from orion!14420 rigel!14420
Here, we can see that lxqe100 is pulling from a couple of remote query engines.
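As an illustration, the pulling state can be extracted from such a status line programmatically. This is a sketch of our own (the helper function is hypothetical, not part of the lx tool); it assumes only the line format shown above.

```python
# Hypothetical helper: extract the source addresses a query engine is
# pulling from, given one status line as printed by "lx status -p".

def pulling_sources(status_line):
    """Return the list of source addresses after "pulling from",
    or an empty list if the line does not report pulling."""
    _, sep, rest = status_line.partition("pulling from")
    if not sep:
        return []
    return rest.split()

line = "lxqe100 alone alive snap 66736999 pulling from orion!14420 rigel!14420"
print(pulling_sources(line))
# -> ['orion!14420', 'rigel!14420']
```

A line without the "pulling from" marker (e.g. one saying not pulling) yields an empty list, so the helper can double as a simple health check.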
To stop pulling for a while, you can use a control request for the pulling query engine. For example:
atlantis$ lx kvcon ctl qe100 pull stop
The status line for lxqe100 should now say not pulling.
To start pulling again, there is a similar control request:
atlantis$ lx kvcon ctl qe100 pull start
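The stop/start pair above can be wrapped in a small script, e.g. to pause pulling around a maintenance window. The command words come straight from this section; the wrapper function and its dry-run flag are our own sketch, so adapt as needed.

```python
# Sketch of a wrapper around the pull stop/start control requests.
# Only the argv words "lx kvcon ctl <qe> pull stop|start" are taken
# from the documentation; the rest is illustrative.
import subprocess

def pull_ctl(qe, action, dry_run=True):
    """Build (and, unless dry_run, execute) the control request that
    stops or starts pulling on the given query engine."""
    if action not in ("stop", "start"):
        raise ValueError("action must be 'stop' or 'start'")
    argv = ["lx", "kvcon", "ctl", qe, "pull", action]
    if not dry_run:
        subprocess.run(argv, check=True)
    return argv

print(" ".join(pull_ctl("qe100", "stop")))
# -> lx kvcon ctl qe100 pull stop
```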
To change the addresses, although another control request (not shown here) can be used, it is usually better to stop the pulling system, change the configuration, and restart it.
This can be done using lx config with the -s flag to set a property in the configuration. For example:
atlantis$ lx config -s lxqe100 'LXPULL=blade161!14424'
changes the configuration, as can be seen:
atlantis$ lx config
#cfgfile lib/lxconfig.conf
size small
host atlantis
kvds 100 addr atlantis!14500 mem 1024m
kvms 100 addr atlantis!14400
lxmeta 100 addr atlantis!14410
lxqe 100 addr atlantis!14420 mem 1024m LXPULL blade161!14424