Can server-side caching really alter datacenter costs?
A very interesting era of everything being branded 'software-defined' is upon us, but is server-side caching one of these, and can it really help a business avoid storage costs and increase performance? Read on for my thoughts….
Server-side caching is the process by which IO is intercepted before it is forced to reach the hidden depths of a spindle on a SAN/NAS/DAS array. It alleviates the main storage bottleneck by using the hypervisor, or a piece of software running in the hypervisor, to cache the IO and deliver it back to the virtual machine or virtual desktop, minimising latency and response times. Most of the software companies are caching reads and serving them back at near-RAM speeds, and some are doing both reads and writes.
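To make the read-caching idea above concrete, here is a minimal sketch of the hit/miss logic such software performs. All names here are hypothetical, and a dictionary stands in for the backing array; real products work at the hypervisor IO layer, not in Python.

```python
from collections import OrderedDict

class ReadCache:
    """Toy LRU read cache: serves hot blocks from memory instead of
    falling through to the slow backing array on every read."""

    def __init__(self, backend_read, capacity=4):
        self.backend_read = backend_read  # slow path: the SAN/NAS/DAS array
        self.capacity = capacity          # blocks held in RAM
        self.cache = OrderedDict()        # block_id -> data, in LRU order
        self.hits = 0
        self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # mark as recently used
            self.hits += 1
            return self.cache[block_id]
        self.misses += 1
        data = self.backend_read(block_id)    # cache miss: go to the array
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return data

# Simulated backing store standing in for the array.
disk = {i: f"block-{i}" for i in range(10)}
cache = ReadCache(disk.__getitem__, capacity=2)

cache.read(1); cache.read(2); cache.read(1)   # second read of block 1 hits RAM
print(cache.hits, cache.misses)               # prints: 1 2
```

The win comes from the hit path never touching the array at all, which is where the latency reduction the vendors advertise comes from.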
It is a great technology to have in your arsenal in the new software-defined datacenter, and the benefits include increased performance for your highly utilised and latency-sensitive applications. It is designed to avoid ever-increasing storage costs, and there are many software solutions already on the market that can help with your bottlenecks in the datacenter. Examples where it can be beneficial are virtual desktop workloads, database applications such as Oracle and SQL, and any other application that is IO intensive.
So how does it all work, as this seems too good to be true? RAM is the best medium for this cache, as traditionally there are no bottlenecks and data transfer is lightning fast. SSD is another favoured platform used by many vendors. Delivering 5x or 10x VM performance, and larger pools of virtual desktops, without any investment in hardware-based caching solutions or additional trays of storage can be a big seller of this technology. Right now there are several companies doing this:
Each company has varying degrees of maturity, ease of deployment and cost, but I would say the third parties are leading with this technology right now, as VMware looks to concentrate more on its VSAN feature and leaves its Flash Read Cache feature as a mere add-on for Enterprise Plus customers.
Global deduplication, compression and IO acceleration are key to most of the third-party vendors, and these benefits make for true software solutions that are here to stay and ready to contend with the big storage vendors, who are also trying to play in this competitive segment. If you are an enterprise company with big budgets, you may be wondering why you should look at these smaller companies rather than just investing in high-end storage solutions to overcome your performance hurdles. Stop there! It doesn't matter how much you can spend. The key statement you want to make to your end users is that they can realise faster applications and faster desktops, all without downtime or disruption to their normal working day. You can't do this via the traditional approach, which is why it is vitally important that these niche players are not discounted.
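The global deduplication mentioned above can be illustrated with a small sketch: identical blocks written by different VMs (think of hundreds of cloned desktops sharing the same OS image) are stored once and referenced by a content hash. The class and method names here are my own invention, not any vendor's API.

```python
import hashlib

class DedupStore:
    """Toy global deduplication: identical blocks from different VMs
    are kept once, keyed by their content hash."""

    def __init__(self):
        self.blocks = {}   # sha256 digest -> block payload (stored once)
        self.refs = {}     # (vm, block_id) -> digest (per-VM reference)

    def write(self, vm, block_id, data):
        digest = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(digest, data)   # payload stored only once
        self.refs[(vm, block_id)] = digest

    def read(self, vm, block_id):
        return self.blocks[self.refs[(vm, block_id)]]

store = DedupStore()
# Two cloned desktops write the same OS block: one physical copy is kept.
store.write("vdi-01", 0, b"windows-boot-block")
store.write("vdi-02", 0, b"windows-boot-block")
print(len(store.refs), len(store.blocks))  # prints: 2 1
```

This is why deduplication pairs so well with VDI: the more uniform the workload, the fewer unique blocks the cache or store has to hold.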
I urge you all to read up more on this technology and feel free to test it yourselves, as most of the software is available to trial for free. I look forward to trying out some of the solutions myself in the lab and can safely say it will be time well invested.
The key road forward for most of these companies will be catering to other hypervisors such as Hyper-V and KVM, so I expect a lot more development in these areas as they seek to make this solution enterprise-wide. Some may already be there, but I'm sure we will see some more roadmap discussions at the forthcoming VMworlds in the US and Europe.
Some of these companies gave some great presentations at both Storage Field Day and Virtualisation Field Day, so I'd encourage you to follow these links if you want more in-depth knowledge of how the solutions work.