Cloud computing, hyperconverged infrastructure, or the best of both worlds? David Trossell, CEO and CTO of Bridgeworks, examines the pros, the cons and what to consider.
With the performance improvements in Intel x86 processors over the past few years, we have seen a rapid movement to software-defined storage, software-defined networks, software-defined data centres and software-defined everything else. Add to this the cloud factor, and the way we now design, specify, buy and use computer platforms has changed dramatically.
The flip side of this is that we now have a myriad of choices, so deciding which one is most suitable for your application – or for your organisation to base its future strategy on – can be a huge and sometimes difficult question.
The established large players can provide a menu of products covering every aspect of computing and networking, but choosing between them, and implementing them, requires considerable skill. What’s great about cloud computing is that it is all there for you to spin up in an instant. Then there is the middle ground: converged systems (SD-everything) and the hyperconverged players, such as Nutanix.
Cloud: Easy choice
Cloud computing is an easy choice for start-up companies, yet one should be careful. There comes a point where the cloud starts to become more expensive than a capex model. Inevitably your computing, performance and storage requirements grow, and with them your costs grow too. Let’s also remember that the utility, operational-expenditure model of cloud computing is much like choosing between renting a car and buying one.
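As a back-of-the-envelope illustration of that crossover point (all figures below are hypothetical, not drawn from any real pricing), you can estimate the month in which cumulative cloud spend overtakes an equivalent up-front purchase plus its running costs:

```python
def breakeven_month(capex, capex_monthly_run_cost, cloud_monthly,
                    growth=0.0, horizon=120):
    """Return the first month in which cumulative cloud spend exceeds
    the up-front capex plus its cumulative running costs, or None
    if that never happens within the horizon."""
    capex_total = capex
    cloud_total = 0.0
    cloud = cloud_monthly
    for month in range(1, horizon + 1):
        capex_total += capex_monthly_run_cost
        cloud_total += cloud
        cloud *= 1.0 + growth  # workload (and the bill) grows each month
        if cloud_total > capex_total:
            return month
    return None

# Hypothetical numbers: £100k capex plus £500/month to run it,
# versus £5k/month in the cloud with a flat workload.
print(breakeven_month(100_000, 500, 5_000))  # → 23
```

With a growing workload (`growth > 0`) the crossover arrives sooner, which is exactly the dynamic the paragraph above describes.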
So, what about converged systems – or SD-everything? This is a half-way house between the bespoke system approach and the cloud. Everything is in one Lego-style box: to add more storage, network or compute capacity, you just add more boxes, and in most cases the system automatically adds those resources into the mix. The other benefit is a single point of administration for every aspect of the system: storage, networking and computing. This has the benefit of reducing the training and skill sets required to manage the system.
However, there are downsides to this approach. Any expansion requires capital expenditure, which may not always be possible – particularly where there is only a short-term requirement for extra capacity, in which case this approach can deliver poor ROI.
A hybrid system that utilises both the cloud and converged or hyperconverged systems, on the other hand, can give you that extra level of flexibility. Yet with any hybrid environment you should consider wide-area network (WAN) latency: it is very easily overlooked when using the cloud, and it can be a real application performance killer.
Anjan Srinivas, senior director of product management at Nutanix, thinks: “Companies need to look at the needs of individual workloads and their users. Many will end up with a hybrid infrastructure as a result. Moreover, although early adopters of hyperconvergence focused on virtual desktop infrastructure (VDI), use cases have since broadened considerably to include Tier I applications and highly virtualised general business workloads such as SAP, Oracle, Microsoft Exchange, SQL Server, SharePoint, Splunk, and Unified Communications and collaboration (UC&C).”
He adds, “The majority of organisations are taking a hybrid approach to IT, mixing together on-premise and cloud platforms to suit the workloads and budgets involved. Moreover, the flexibility and simplicity that hyperconvergence brings to the data centre should not be viewed as the end objective, but as a foundation for building an on-premise enterprise cloud.”
“Like public cloud services, an on-premise enterprise cloud enables the IT team to start small and scale incrementally to precisely meet application demands, while keeping the overhead of managing the infrastructure to a minimum”, he explains.
That may be the case, but you can also stick with traditional data centre architecture built on separate servers. At the end of the day it’s all about maximising the ROI from your infrastructure, so you may wish to retain some of your traditional technologies, such as Flash Cache, and mix them with hyperconverged infrastructure.
Look at the cost of servicing the technology you have and match it against your performance requirements, then pick the best of both cloud and hyperconverged infrastructure. My advice is to look at it in practical terms.
On mitigating latency, Anjan believes that “data acceleration products can shore up an existing infrastructure to cope with larger or more demanding workloads, but won’t address fundamental weaknesses in the platforms employed”. He also thinks they “tend to add to the complexity, rather than make the infrastructure easier to manage and use”.
Arguably, a hybrid approach with a data acceleration solution such as PORTrockIT can help to minimise these weaknesses. For example, with backup-as-a-service (BaaS) and disaster-recovery-as-a-service (DRaaS), careful consideration must be given to the performance of the WAN link against your organisation’s recovery point objective (RPO) and recovery time objective (RTO).
Latency over the WAN link can have a dramatic effect on the performance of these applications: just 5ms of latency on a 1Gbps WAN link can reduce throughput from a possible 100MB/s to less than 30MB/s, increasing the recovery time by a factor of three.
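The arithmetic behind this is the window limit: a sender can keep at most one window of unacknowledged data in flight per round trip, so throughput is capped at window size divided by round-trip time. A minimal sketch, assuming an effective TCP window of 128 KiB and roughly 100MB/s of usable payload on a 1Gbps link (both assumed figures, chosen only to reproduce the article’s ballpark):

```python
def wan_throughput_mb_per_s(window_bytes, rtt_seconds, link_mb_per_s):
    """Window-limited throughput: min(link rate, window / round-trip time)."""
    window_limited = window_bytes / rtt_seconds / 1e6  # bytes/s -> MB/s
    return min(link_mb_per_s, window_limited)

WINDOW = 128 * 1024  # assumed 128 KiB effective TCP window
LINK = 100.0         # assumed usable payload on a 1Gbps link, MB/s

print(wan_throughput_mb_per_s(WINDOW, 0.0001, LINK))  # LAN-like 0.1ms RTT → 100.0
print(wan_throughput_mb_per_s(WINDOW, 0.005, LINK))   # 5ms RTT → 26.2144
```

Real-world results vary with TCP window scaling, congestion control and protocol overhead, but the model shows why a seemingly small 5ms of latency collapses throughput well below 30MB/s.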
Anjan concludes, “Although a hyperconverged infrastructure can deliver the same kind of on-demand scalability as the cloud, and in many cases beat it in terms of TCO, the ability to source IT without having to invest in on-premise infrastructure and management makes it highly unlikely that the cloud will ever go away. Also, the fractional consumption models make it attractive for public clouds to be leveraged for shorter bursts.” So, while it may seem that hyperconverged systems are set to see off the cloud, the reality is that the two can work effectively in partnership.