Less known Solaris Features: Remote Mirror with AVS - Part 7: Truck based synchronisation
Andrew S. Tanenbaum said: "Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway." This sounds counterintuitive at first, but when you start to think about it, it's really obvious.
The math behind the phrase
Let's assume that you have two datacenters, a thousand kilometers apart from each other, and you have to transport 48 terabytes of data. We will calculate in hard-disk marketing units, so that's 48,000,000 megabytes. Okay … now we assume that we have a 155 Mbit/s leased ATM line between the locations, and that we can transfer 15.5 megabytes per second over this line under perfect circumstances. Even under perfect circumstances, moving the data takes 48,000,000 / 15.5 ≈ 3,096,774 seconds. Thus you would need almost 36 days to transmit the 48 terabytes. Now assume a station wagon with two Thumpers in the trunk (real admins don't use USB sticks, they use the X4500 for their data transportation needs), driving at 100 kilometers per hour. The data would reach the other datacenter within 10 hours, which leaves enough time to copy the data onto the transport Thumpers before departure and from them onto the final storage array after arrival.
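The back-of-the-envelope numbers above can be checked with a few lines of shell; awk does the floating-point work:

```shell
#!/bin/sh
# Sanity-check the transfer-time estimate.
data_mb=48000000   # 48 TB in "marketing" megabytes
rate_mb_s=15.5     # usable throughput of the 155 Mbit/s line in MB/s
awk -v d="$data_mb" -v r="$rate_mb_s" 'BEGIN {
    s = d / r                            # seconds for the whole transfer
    printf "seconds: %d\n", s            # -> 3096774
    printf "days:    %.1f\n", s / 86400  # -> 35.8
}'
```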
Truck based replication with AVS
AVS Remote Mirror supports a procedure for exactly such a method. Okay, let's assume you want to migrate a server to a new one, but this new server is 1000 km away. You have multiple terabytes of storage, and although your line to the new datacenter is good enough for the day-to-day updates, a full sync over it would take longer than the universe will exist, thanks to proton decay.
AVS Remote Mirror can be configured in a way that relies on a special condition of the primary and secondary volumes: the disks are already synchronized before the replication is started, for example by copying the data with dd to the new storage directly, or via a transport medium like tapes. When you configure AVS Remote Mirror in this way, you don't need the initial full sync.
On our old server
To play around, we first create a new filesystem:
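A minimal sketch of the filesystem creation, assuming the data lives on the hypothetical slice c1d1s1 (substitute your own device):

```shell
# newfs asks for confirmation before it constructs the filesystem,
# so we answer it non-interactively. The device name is an assumption.
echo y | newfs /dev/rdsk/c1d1s1
```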
Now mount it, play around with it and put a timestamp in a file.
Okay, now unmount it again.
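Assuming the same hypothetical slice as before, these steps could look like this:

```shell
mount /dev/dsk/c1d1s1 /mnt     # mount the block device
touch /mnt/test1 /mnt/test2    # play around: create some files
date > /mnt/timestamp          # put a timestamp in a file
umount /mnt
```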
Now we can generate a backup of this filesystem. You have to make an image of the volume; a tar or cpio file backup isn't sufficient, because the secondary volume has to be identical at the block level.
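With dd, such an image could be taken from the raw device like this (device name and image path are assumptions):

```shell
# Image the raw device itself. AVS mirrors blocks, so the secondary
# must match block for block; tar/cpio only copy files.
dd if=/dev/rdsk/c1d1s1 of=/var/tmp/c1d1s1.img bs=1024k
```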
Okay, now activate the replication on the primary volume. Don't activate it on the secondary one yet! The important difference to a normal replication is the -E switch: when you use it, the system assumes that the primary and the secondary volume are already identical.
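A sketch of the enable command, assuming the hosts are called oldserver and newserver, slice s1 holds the data and slice s3 holds the AVS bitmap (all of these names are assumptions):

```shell
# -n: don't ask for confirmation; -E: enable the set, but leave the
# bitmap clean, i.e. treat primary and secondary as already equal.
sndradm -n -E oldserver /dev/rdsk/c1d1s1 /dev/rdsk/c1d1s3 \
              newserver /dev/rdsk/c1d1s1 /dev/rdsk/c1d1s3 ip sync
```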
Okay, we've used the -E switch to circumvent the need for a full synchronisation. When you look at the status of the volume, you will see it in the "logging" state:
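The state can be inspected with sndradm itself:

```shell
# Print the configured Remote Mirror sets and their state; the new
# set should be reported as being in logging mode.
sndradm -P
```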
This means that you can still make changes on the volume.
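For example, the test6 file that shows up later in this article could be created now (mountpoint and device name are assumptions):

```shell
# While merely logging, AVS only flags changed blocks in the bitmap,
# so the primary volume stays fully usable.
mount /dev/dsk/c1d1s1 /mnt
touch /mnt/test6      # this file is not part of the dd image
umount /mnt
```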
Now we transmit our image of the primary volume to the new system. In my case it's scp, but for huge amounts of data, sending the truck with tapes would be more sensible.
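With scp this could look as follows (paths and hostname are assumptions):

```shell
scp /var/tmp/c1d1s1.img root@newserver:/var/tmp/c1d1s1.img
```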
On our new server
Okay, when the transmission is completed, we write the image to the raw device of the secondary volume:
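Again with assumed paths, the restore is the earlier dd with source and target swapped:

```shell
# Write the image onto the raw device backing the secondary volume.
dd if=/var/tmp/c1d1s1.img of=/dev/rdsk/c1d1s1 bs=1024k
```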
Okay, now we configure the replication on the secondary host:
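This is the same set definition as on the primary, just executed on the secondary host, and again with -E (host and device names are assumptions):

```shell
# Run on the secondary host; -E again suppresses the full sync.
sndradm -n -E oldserver /dev/rdsk/c1d1s1 /dev/rdsk/c1d1s3 \
              newserver /dev/rdsk/c1d1s1 /dev/rdsk/c1d1s3 ip sync
```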
A short look into the status of replication:
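Besides sndradm -P, dsstat gives a running view of the set:

```shell
# Per-volume Remote Mirror statistics; while the set is in logging
# mode, no blocks travel over the network.
dsstat -m sndr
```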
Okay, our primary and secondary volumes are still in logging mode. How do we get them out of it? In our first example we did a full synchronisation; this time we only need an update synchronisation. So log in as root on our primary host and initiate such an update sync. This is the moment where you have to stop working on the primary volume.
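The update sync is started on the primary host with the -u option (the sketch below applies it to all configured sets):

```shell
# -u: update synchronisation; only the blocks flagged as dirty in the
# bitmap since the -E enable are copied to the secondary.
sndradm -n -u
```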
After this step, all changes we made after creating the image of the primary volume will have been synced to the secondary volume.
Testing the migration
Well … let's test this. Do you remember that we created /mnt/test6 after the dd for the image? Okay, at first we put the replication into logging mode again, so log in as root on our secondary host.
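Dropping back into logging mode is done with the -l option (a sketch, applied to all configured sets):

```shell
# Run on the secondary host; in logging mode the secondary volume
# can be mounted for inspection.
sndradm -n -l
```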
Now we mount the secondary volume:
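Assuming the same device names as before:

```shell
mount /dev/dsk/c1d1s1 /mnt
```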
By the virtue of the update synchronisation, test6 appeared on the secondary volume. Let's also have a look into the timestamp file:
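Again with the assumed mountpoint:

```shell
ls /mnt               # test6 shows up thanks to the update sync
cat /mnt/timestamp    # the timestamp written on the old server
```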
Cool, isn't it?