Riverbed Granite – Divisional Servers…. Centrally!!

Many of you may already be aware of the Riverbed Granite solution, and some of you may already be using it, but for those of you who have not come across it before, the following information may prove useful.

Riverbed Steelhead appliances optimise data transfers between two sites, which in turn reduces the amount of data that actually crosses the link.  In real-world terms, you place one Riverbed Steelhead device in your data centre and another at one of your divisional locations.  Data moving between the two sites is cached and optimised so that only unique data is transferred.  Your users see this as faster access to files (thanks to the cached data), and you see reduced traffic across your WAN links.
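The "only unique data" idea can be shown with a toy sketch.  To be clear, this is not Riverbed's actual SDR algorithm (which uses variable-length segments and hierarchical references); the fixed-size chunks and the `sync` helper below are my own illustration of the principle: hash each chunk, and only ship chunks the far side has not already seen.

```python
import hashlib

CHUNK = 4096  # fixed-size chunks for illustration; real dedup uses variable-size segments

def sync(data: bytes, remote_seen: set) -> tuple[int, int]:
    """Ship only chunks whose hash the remote side has not seen.
    Returns (chunks_sent, chunks_skipped)."""
    sent = skipped = 0
    for i in range(0, len(data), CHUNK):
        digest = hashlib.sha256(data[i:i + CHUNK]).hexdigest()
        if digest in remote_seen:
            skipped += 1      # remote already holds this chunk: send a reference only
        else:
            remote_seen.add(digest)
            sent += 1         # genuinely unique data: must cross the WAN
    return sent, skipped

remote = set()
print(sync(b"A" * 8192 + b"B" * 4096, remote))  # first pass: (2, 1) - unique chunks cross the WAN
print(sync(b"A" * 8192 + b"B" * 4096, remote))  # repeat pass: (0, 3) - everything is a cache hit
```

The second transfer of the same file ships nothing, which is exactly why users see faster access the second time around.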

Riverbed Granite takes this to the next level, using Riverbed Steelhead devices with a Granite licence (and a Granite Core) to provide a virtual environment at the division with the storage presented directly from the data centre.  So what does this actually mean?  The Riverbed Steelhead appliance is essentially split in half: one half (1 x CPU, some memory and local storage) is dedicated to optimisation, and the other half (1 x CPU, some memory and local storage) becomes a VMware ESXi environment.  Storage is presented to the Granite Core in the data centre and then projected out to the Riverbed Steelhead at the division, where it appears as SAN storage (either iSCSI or FC) to the virtual environment.  You can then build virtual machines on that storage, with all changes replicated back to the storage in the data centre.
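The projection model can be sketched as a small edge block cache.  The `EdgeBlockstore` class below is my own simplification, not Granite's actual protocol: reads fault blocks in from the core on first access, while writes are acknowledged locally and replicated back asynchronously, which is what lets the divisional VMs run against data-centre storage.

```python
from collections import deque

class EdgeBlockstore:
    """Toy model of a LUN projected from the data-centre core to an edge
    appliance: reads fault blocks in on first access, writes are acknowledged
    locally and replicated back to the core asynchronously."""

    def __init__(self, core: dict):
        self.core = core      # block_id -> bytes, standing in for the data-centre LUN
        self.local = {}       # blocks cached on the divisional appliance
        self.dirty = deque()  # write-back queue heading to the core

    def read(self, block_id: int) -> bytes:
        if block_id not in self.local:             # cache miss: pull the block over the WAN
            self.local[block_id] = self.core[block_id]
        return self.local[block_id]

    def write(self, block_id: int, data: bytes) -> None:
        self.local[block_id] = data                # the VM sees the write immediately
        self.dirty.append(block_id)                # queue it for background replication

    def replicate(self) -> None:
        while self.dirty:                          # background task: drain the queue
            block_id = self.dirty.popleft()
            self.core[block_id] = self.local[block_id]
```

The important design point is that the VM never waits on the WAN for a write to complete; the data-centre copy simply catches up in the background.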

Using the environment in this way works, but the virtual machines can feel laggy depending on the WAN speed between the data centre and the division, and this is where pinning and prepopulation come into play.  Pinning a LUN means that the disk presented to the Granite environment from the Core is copied in its entirety to the divisional Steelhead.  This greatly improves virtual machine performance, because all of the data is stored locally on the divisional Riverbed appliance; any changes made in the virtual machine are replicated back to the data centre LUN in the background.  The prepopulation function allows this data to be pinned in advance: when a disk is simply pinned, data is copied down as files are accessed, whereas prepopulation sends all of the data across regardless of whether it is being accessed or not.
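The difference between the two behaviours can be sketched in a few lines (again, the `read_block` and `prepopulate` helpers are my own illustration, not Riverbed's implementation): without prepopulation a block crosses the WAN the first time it is read, while prepopulation eagerly pushes every block down in advance.

```python
def read_block(core: dict, local: dict, block_id: int) -> bytes:
    """Pinned-LUN behaviour without prepopulation: a block crosses the WAN
    the first time it is read, then stays on the divisional appliance."""
    if block_id not in local:
        local[block_id] = core[block_id]   # first access pulls it down
    return local[block_id]

def prepopulate(core: dict, local: dict) -> int:
    """Prepopulation: push every block of the LUN down in advance, whether
    or not a VM ever reads it.  Returns the number of blocks transferred."""
    moved = 0
    for block_id, data in core.items():
        if block_id not in local:
            local[block_id] = data
            moved += 1
    return moved
```

In practice you would prepopulate overnight or before cutting a division over, so that the first working day never pays the WAN penalty on cold reads.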

Backups can be performed in two ways: from an agent within the virtual machine, which sends the backup data across the WAN link along with the updates, or using a function within the Granite environment that snapshots the storage in the data centre and presents the virtual machine to a virtual environment there, to be backed up using your normal backup tools.  There are various ways to use these functions, and each has its positives and negatives.

The key for me, with this design, is that we can now centralise servers whilst giving divisions local performance – and potentially keep backups in the data centre.

It is stunning technology, though there is room for improvement (mainly around how current the ESXi environment is: it is currently running 5.0, whilst the latest release is 5.5 U1), and it is definitely shaping up to be an excellent solution.  I'll try to post up a build document for creating a virtual machine on the Riverbed Granite environment.

About the Author


I have been in IT for the past 15 years and have been using virtualisation technologies for around the past 8 years. I started, as quite a lot of people do, working with PCs after playing with iconic systems like the ZX81 and ZX Spectrum, then progressing through 386s, 486s, Pentiums and so on. After being headhunted at sixth form to work for a small company based around Hertfordshire, UK, I began working with small businesses and gaining a lot of hardware experience. Three years later, after helping to grow the business, I needed exposure to a larger environment to progress my own career. I joined a large manufacturing company in Electronic Test and Measurement, which moved my skills through more PC and hardware work and onto server operating systems. I then progressed to a consultancy company based in Reading, UK. Initially I worked as an engineer performing hardware and software installations for larger companies contracted to the consultancy, before moving up into a consultant position and continuing my travel across the UK, assisting and providing solutions to companies. I finally moved on to my current position, back in Hertfordshire, UK, again with a large manufacturing company, this time with over 50,000 users worldwide. I am responsible for the data centre hardware, the storage environment, the VMware environment and also for implementing their new Citrix XenApp farm. My days are busy but productive, it's a friendly environment, and in my four years with the company I have seen many changes in the technology and infrastructure in use.

About the site

I started this site as I had been thinking of having more of a presence on the web for a while. On a daily basis, I perform tasks and use tools that others may not use or may not think to use, so I thought I would share some of these experiences and tips to help others with their day-to-day work.
Currently, my main focus of work is around VMware and Veeam Backup & Replication but hopefully as my tasks progress, I’ll be able to share useful bits of information about other areas of IT as well.
