Riverbed Granite – Divisional Servers… Centrally!
Many of you may already be aware of the Riverbed Granite solution, and some of you may already be using it. Others will not have even heard of Riverbed Granite, so the following information may prove useful.
Riverbed Steelhead appliances optimise data between two sites, which in turn reduces the amount of data transferred between them. That may be a bit of a mouthful, but the idea in the real world is that you have one Riverbed Steelhead device in your data centre and another at one of your divisional locations. When data transfers between the two sites, it is cached and optimised so that only unique data actually crosses the link. Your users see this as faster access to files (thanks to the cached data), and you see reduced traffic on your WAN links.
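To make the "only unique data is transferred" idea concrete, here is a minimal Python sketch of chunk-level deduplication. This is purely illustrative: the function names are made up, and real Steelheads use far more sophisticated, content-aware segmentation and reference encoding than fixed-size chunks and a shared hash set.

```python
import hashlib

def chunks(data: bytes, size: int = 4096):
    """Split a byte stream into fixed-size chunks (a real appliance
    uses smarter, content-aware segmentation)."""
    for i in range(0, len(data), size):
        yield data[i:i + size]

def transfer(data: bytes, remote_cache: set) -> int:
    """Return the number of bytes that actually cross the 'WAN':
    only chunks the far side has not seen before are sent."""
    sent = 0
    for chunk in chunks(data):
        digest = hashlib.sha256(chunk).digest()
        if digest not in remote_cache:
            remote_cache.add(digest)
            sent += len(chunk)  # unique chunk: ship the data itself
        # known chunk: only a tiny reference is sent (counted as free here)
    return sent

cache = set()
payload = b"A" * 8192 + b"B" * 4096
first = transfer(payload, cache)   # first pass: unique data crosses the WAN
second = transfer(payload, cache)  # second pass: everything is already cached
```

The second transfer of the same payload sends nothing, which is why users at the division perceive repeat file access as much faster.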
Riverbed Granite takes this to the next level, using Riverbed Steelhead devices with a Granite licence (and a Granite Core) to provide a virtual environment at the division with the storage presented directly from the data centre. So what does this actually mean? Well, the Riverbed Steelhead appliance is essentially split in half: one half (1 x CPU, some memory and local storage) is dedicated to optimisation, and the other half (1 x CPU, some memory and local storage) becomes a VMware ESXi environment. Storage is presented to the Granite Core in the data centre and then projected out to the Riverbed Steelhead at the division, where it appears as SAN storage (either iSCSI or FC) to the virtual environment. You can then build virtual machines on that storage, with all changes replicated back to the storage in the data centre.
Utilising the environment in this way works, but the virtual machines can feel laggy depending on the WAN speed between the data centre and the division… this is where pinning and prepopulation come into play. Pinning a LUN simply means that the disk presented to the Granite environment from the Core is copied completely to the divisional Steelhead. Virtual machine performance improves greatly because all of the data is stored locally on the divisional Riverbed appliance, while any changes made in the virtual machine are replicated back to the data centre LUN in the background. With pinning alone, data is only copied down as the files are accessed; the prepopulation function goes further and sends all of the data across in advance, regardless of whether it is being accessed or not.
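The difference between on-demand pinning and prepopulation can be sketched as a toy block cache. Everything here is illustrative (class and method names are my own invention, not a Riverbed API): blocks are pulled over the WAN on first read, writes are served locally and queued for background replication, and prepopulation pushes every block down before it is needed.

```python
class BlockCache:
    """Toy model of a pinned LUN on the divisional Steelhead."""

    def __init__(self, datacentre_lun: dict):
        self.remote = datacentre_lun    # authoritative copy in the data centre
        self.local = {}                 # divisional cache (the pinned copy)
        self.wan_reads = 0              # blocks pulled across the WAN
        self.replication_queue = []     # writes awaiting background sync

    def read(self, block: int) -> bytes:
        if block not in self.local:     # cold block: fetch over the WAN
            self.local[block] = self.remote[block]
            self.wan_reads += 1
        return self.local[block]        # warm block: served locally, fast

    def write(self, block: int, data: bytes):
        self.local[block] = data                 # write lands locally
        self.replication_queue.append(block)     # synced back in background

    def prepopulate(self):
        """Copy every remaining block down before it is ever accessed."""
        for block, data in self.remote.items():
            if block not in self.local:
                self.local[block] = data
                self.wan_reads += 1

lun = {i: b"\x00" * 512 for i in range(4)}
cache = BlockCache(lun)
cache.read(0)        # one block fetched on demand
cache.prepopulate()  # the other three copied proactively
```

After prepopulation every read is a local hit, which is why a prepopulated pinned LUN gives the division near-local performance even over a slow WAN link.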
Backups can be performed in two ways: from an agent within the virtual machine, which sends the backup data across the WAN link with the updates, or via a function within the Granite environment that snapshots the storage in the data centre and presents the virtual machine to a virtual environment there, where it can be backed up using your normal backup tools. There are various ways to use these functions, and each has its positives and negatives.
The key for me, with this design, is that we can now centralise servers whilst giving divisions local performance – and potentially keep backups in the data centre.
It is stunning technology, though there is room for improvement (mainly around how recent the ESXi environment is: it currently runs 5.0, whilst the latest is 5.5 U1)… but this is definitely showing signs of being an excellent technology. I’ll try to post up a build document for creating a virtual machine on the Riverbed Granite environment.