Riverbed Granite – Divisional Servers… Centrally!
Many of you may already be aware of the Riverbed Granite solution, and some of you may already be using it, but others out there will not have even heard of it, so the following information may prove useful.
Riverbed Steelhead appliances optimise the data transferred between two sites, which in turn reduces the amount of data crossing the link. Okay, so that may be a bit of a mouthful, but the idea in the real world is that you have one Riverbed Steelhead device in your data centre and another at one of your divisional locations. When data transfers between the two sites it is cached and optimised, so that only unique data is actually sent across. Your users see this as faster access to files (due to the cached data), and you see reduced traffic on your WAN links.
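To make the "only unique data crosses the link" idea concrete, here is a minimal Python sketch of chunk-level deduplication. The fixed-size chunking and SHA-256 references are my own simplification for illustration, not how the Steelhead actually segments traffic.

```python
import hashlib
import os

CHUNK_SIZE = 4096  # illustrative fixed-size chunks; the real appliance is smarter

def send_over_wan(data: bytes, sent_chunks: set) -> int:
    """Return roughly how many bytes actually cross the WAN link."""
    wan_bytes = 0
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in sent_chunks:
            wan_bytes += len(digest)   # already seen: only a short reference is sent
        else:
            sent_chunks.add(digest)
            wan_bytes += len(chunk)    # first sighting: the full chunk crosses the WAN
    return wan_bytes

seen = set()
payload = os.urandom(500_000)
print(send_over_wan(payload, seen))  # first transfer: every chunk is new
print(send_over_wan(payload, seen))  # repeat transfer: references only
```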
Riverbed Granite takes this to the next level, utilising Riverbed Steelhead devices with a Granite licence (and a Granite Core) to provide a virtual environment at the division with the storage presented directly from the data centre. So what does this actually mean? Well, the Riverbed Steelhead appliance is basically split in half: one half (1 x CPU, some memory and local storage) is dedicated to optimisation, and the other half (1 x CPU, some memory and local storage) is turned into a VMware ESXi environment. Storage is presented to the Granite Core, located in the data centre, and is then projected out to the Riverbed Steelhead at the division, where it appears as SAN storage (either iSCSI or FC) to the virtual environment. You can then build virtual machines on that storage, with all changes replicated back to the storage in the data centre.
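If it helps to picture the projection, here is a rough conceptual model in Python. Every class and method name here is hypothetical, purely to show the Core/Edge relationship; it is not any real Riverbed API.

```python
from dataclasses import dataclass, field

@dataclass
class DataCentreLUN:
    """The authoritative LUN on the data centre SAN."""
    name: str
    blocks: dict = field(default_factory=dict)   # block number -> data

@dataclass
class ProjectedLUN:
    """What the ESXi half of the divisional Steelhead sees as SAN storage."""
    core_lun: DataCentreLUN
    local_cache: dict = field(default_factory=dict)

    def read(self, block: int) -> bytes:
        # Serve from the edge cache when possible, otherwise fetch over the WAN.
        if block not in self.local_cache:
            self.local_cache[block] = self.core_lun.blocks.get(block, b"")
        return self.local_cache[block]

    def write(self, block: int, data: bytes) -> None:
        # Writes land locally first, then replicate back to the data centre LUN.
        self.local_cache[block] = data
        self.core_lun.blocks[block] = data

@dataclass
class GraniteCore:
    """Sits in the data centre; maps SAN LUNs out to divisional Steelheads."""
    luns: dict = field(default_factory=dict)

    def project(self, lun: DataCentreLUN) -> ProjectedLUN:
        self.luns[lun.name] = lun
        return ProjectedLUN(core_lun=lun)

core = GraniteCore()
edge_lun = core.project(DataCentreLUN("div-vmfs-01"))
edge_lun.write(0, b"vmdk header")
print(edge_lun.read(0))
```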
Utilising the environment in this way is okay, but the virtual machines can appear laggy depending on the WAN speed between the data centre and the division… this is where pinning and prepopulation come into play. Pinning a LUN simply means that the disk presented to the Granite environment from the Core is copied completely to the divisional Steelhead. This greatly improves the performance of the virtual machine, because all of the data is stored locally on the divisional Riverbed appliance, while any changes made in the virtual machine are replicated back to the data centre LUN in the background. The prepopulation function allows this data to be pinned in advance: when pinning a disk, the data is only copied down as files are accessed, whereas prepopulation sends all of the data across regardless of whether it is being accessed or not.
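Here is a toy simulation of the difference, assuming a fixed WAN delay per cache miss. The class, block counts and timings are all invented for illustration only.

```python
import time

class EdgeCache:
    """Toy model of the divisional Steelhead's local copy of a projected LUN."""
    def __init__(self, datacentre_blocks, wan_delay=0.01):
        self.remote = datacentre_blocks   # authoritative copy in the data centre
        self.local = {}                   # blocks held locally at the division
        self.wan_delay = wan_delay        # pretend WAN round trip per cache miss

    def read(self, block):
        if block not in self.local:
            time.sleep(self.wan_delay)    # cache miss: fetch the block over the WAN
            self.local[block] = self.remote[block]
        return self.local[block]          # once down, data stays local in this toy

    def prepopulate(self):
        # Send every block down in advance, accessed or not.
        for block, data in self.remote.items():
            self.local[block] = data

remote = {i: bytes(8) for i in range(100)}

pin_on_access = EdgeCache(dict(remote))
t0 = time.perf_counter()
for i in range(100):
    pin_on_access.read(i)                 # each first read pays the WAN latency once
print(f"pin-on-access first pass: {time.perf_counter() - t0:.2f}s")

prepopulated = EdgeCache(dict(remote))
prepopulated.prepopulate()                # the bulk transfer happens up front instead
t0 = time.perf_counter()
for i in range(100):
    prepopulated.read(i)                  # everything is already local
print(f"prepopulated first pass: {time.perf_counter() - t0:.2f}s")
```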
Backups can be performed in two ways: from an agent within the virtual machine, which sees the backup data transferred across the WAN link with the updates, or using a function within the Granite environment that snapshots the storage in the data centre and presents the virtual machine to a virtual environment there, so it can be backed up with your normal backup tools. There are various ways to utilise these functions, and each has its positives and negatives.
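To make the two options easier to compare side by side, here is a trivial sketch of the flows as I understand them. The function names and individual steps are mine, not anything from the Granite toolset.

```python
def backup_with_agent(vm_name: str) -> None:
    # Option 1: an agent inside the divisional VM ships backup data over the WAN.
    print(f"[{vm_name}] agent reads changed data inside the divisional VM")
    print(f"[{vm_name}] backup stream crosses the WAN to the backup server")

def backup_with_core_snapshot(vm_name: str) -> None:
    # Option 2: snapshot the data centre LUN (edge writes have already replicated
    # back), present the VM to a data centre virtual environment, back it up there.
    print(f"[{vm_name}] snapshot taken of the data centre LUN")
    print(f"[{vm_name}] VM presented to a virtual environment in the data centre")
    print(f"[{vm_name}] normal backup tools run entirely in the data centre")

backup_with_agent("div-file-01")
backup_with_core_snapshot("div-file-01")
```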
The key for me, with this design, is that we can now centralise servers whilst giving divisions local performance, and potentially keep backups in the data centre too.
It is stunning technology, and there is still room for improvement (mainly around how current the ESXi environment is: they are running 5.0, whilst the latest is 5.5 U1)… but it is definitely showing signs of being an excellent solution. I’ll try to post up a build document for building a virtual machine on the Riverbed Granite environment.
Wow. That solves a lot of the old issues! So the storage for the VM is stored as a flat file on disk on the Riverbed device? What’s the performance like? Could you run heavily transactional/IOPS-heavy systems on there?
So, the projected LUN appears just like a full LUN on the Riverbed device and is formatted with VMFS etc. as normal. Unless the disk is pinned, it just caches files as they are accessed and writes them straight back to the LUN in the data centre. With a pinned LUN, the performance is nearly the same as a SAN LUN, although at the moment I wouldn’t necessarily put highly transactional or IOPS-heavy systems on the environment. There are also discussions about introducing a Hyper-V version of the solution. And check out the new announcement from Riverbed: they have just renamed Granite to SteelFusion, with a new version due out soon. http://www.riverbed.com/about/news-articles/press-releases/riverbed-announces-steelfusion-the-first-branch-converged-infrastructure.html
Wow, very cool. So soon centralised-only data will no longer be just a dream. What will Lutz do 🙂