‘The introduction of virtual storage area networks (VSANs) and software-defined storage (SDS) is significantly changing network architectures in the enterprise data centre,’ says Dr Thomas Wellinger, market manager, data centres, R&M.
As Dr Wellinger explains, the background to this development is as follows: ‘Since major providers of IT infrastructures introduced the principle of software-defined storage, previous storage systems have been marginalised. x86 servers with the PCIe 3.0 bus appeared on the market about four years ago, creating the conditions for integrating storage tasks into the server infrastructure. A standard two-rack-unit (2U) server with six card slots suddenly offered more bandwidth and performance potential than any midrange storage system.’
The economic advantages of this integrated concept are convincing. ‘A data centre has to use powerful servers in any case to be able to run the numerous virtual machines, and storage volume can easily be added for the price of further server disks. For example, on the basis of SDS, servers for virtual machines are equipped with 24 disks instead of only two or three. Sufficient storage is then already available – for the marginal cost of the additional server disks,’ explains Dr Wellinger.
However, he says, this evolutionary advance will not reduce the amount of cabling in data centres. The cabling will simply shift and may even increase. ‘This has to be taken into account right from the infrastructure planning phase,’ Dr Wellinger emphasises. Until now, cables have run from the server’s network interface card to the Ethernet switch, as well as from the server’s host bus adaptor to the Fibre Channel switch and from there to the storage. The cabling was thus split across the relevant areas, so cabling density remained relatively low.
Shifting storage into the server housing means the networks are consolidated. Cabling density increases as a result, both at the server housing and at the switch or router. Increasing virtualisation means data traffic between servers grows, while CPU and PCIe performance continues to rise. ‘These advances should also reach users. They expect acceptable latency,’ says Dr Wellinger, highlighting the market requirements. Improvements in quality, he adds, can only be achieved with more bandwidth and higher performance from the cabling. ‘As a result, data centres have to ensure their networks can cope with 40 and 100 Gigabit Ethernet (GbE),’ he summarises.