by Eduard du Plessis, CEO of EOH Network Solutions Division
With the advent of cloud computing, network virtualisation and on-demand technology services are changing the way we use IT. Cloud systems are heavily reliant on both the LAN and a stable internet connection.
In addition, cloud computing, along with the Internet itself, relies on millions of switches, servers and firewalls located across the world. To maintain high availability and fast response times, businesses will still need to rely on LANs.
A variety of systems are increasingly making decisions based on data collected from the Internet of Things, whether industrial or consumer, and some of them rely on nearly instantaneous feedback. Delays could realistically mean the difference between life and death.
That’s particularly problematic due to the seemingly natural relationship between cloud and IoT. Lots of data needs to be processed at massive scale. What better way to do that than in a cloud environment?
The problem, it turns out, is one familiar to networking people: latency.
Overcoming latency in cloud computing
Latency has always been a known issue, but one that could be generally addressed with the traditional tricks of the trade: tweak the TCP stack, use a CDN, turn on compression, and use geolocation when possible.
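Of those tricks, compression is the easiest to demonstrate. The sketch below (plain Python standard library, illustrative values only) shows how a repetitive telemetry-style payload shrinks dramatically when gzipped, trading a little CPU time for fewer bytes on the wire:

```python
import gzip

# A repetitive JSON-like payload, typical of sensor telemetry.
payload = b'{"sensor": "temp", "value": 21.5}' * 100

compressed = gzip.compress(payload)

# Fewer bytes on the wire means less transfer time on slow or
# congested links, which directly reduces perceived latency.
print(len(payload), len(compressed))
```

The saving is largest on highly repetitive data; for already-compressed content (images, video) the technique buys little.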
One of the ways organisations are addressing the latency problem is by abandoning HTTP for MQ Telemetry Transport (MQTT) and other IoT-favouring protocols. But simply changing protocols doesn’t necessarily solve everything and, as is often the case, there will be an impact on networking.
We’ll need to learn how best to transport, secure and scale MQTT. This isn’t just an “HTML to XML to JSON” over HTTP transition; these are completely different protocols, riding atop TCP, that need to be supported in the network, even if that network is in the cloud.
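Part of MQTT’s appeal for IoT is how little framing it adds compared with HTTP headers. The following minimal encoder (an illustrative sketch of the MQTT 3.1.1 wire format at QoS 0, not a complete client) shows that a full PUBLISH packet for a small sensor reading fits in a couple of dozen bytes:

```python
def encode_remaining_length(n: int) -> bytes:
    """MQTT variable-length encoding: 7 bits per byte, high bit = continuation."""
    out = bytearray()
    while True:
        byte, n = n % 128, n // 128
        if n > 0:
            byte |= 0x80
        out.append(byte)
        if n == 0:
            return bytes(out)

def mqtt_publish_packet(topic: str, payload: bytes) -> bytes:
    """Build a minimal MQTT 3.1.1 PUBLISH packet (QoS 0, no retain/dup flags)."""
    t = topic.encode("utf-8")
    # Variable header: 2-byte topic length + topic (no packet ID at QoS 0),
    # followed directly by the application payload.
    body = len(t).to_bytes(2, "big") + t + payload
    # Fixed header: packet type 3 (PUBLISH) in the high nibble, then the
    # remaining length of everything that follows.
    return bytes([0x30]) + encode_remaining_length(len(body)) + body

pkt = mqtt_publish_packet("sensors/temp", b"21.5")
print(len(pkt))  # the entire packet, framing included, is 20 bytes
```

Compare that with an HTTP POST carrying the same four-byte reading, where the request line and headers alone typically run to hundreds of bytes.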
Virtualisation can be a double-edged sword
A virtualised computing environment is in constant flux: virtual machines are brought up and shut down, and applications move between physical servers.
Until recently, data centre networks were not similarly fluid: configurations were rigid and could only be changed manually. Now, thanks to the development of Software-Defined Networking (SDN), the network can be configured to optimise the links between servers running applications, storage, and the external connections to the data centre.
Apart from the functional demands of cloud computing, there are many situations where organisations need ‘bandwidth on demand’. These include daily backups to a remote location, disaster recovery and product launches.
Digging a little deeper
At the heart of all data networks are switches and routers – hardware devices that control the flow of data packets, delivering them to the destinations specified in their address headers.
Today most switches and routers contain two major components: the ‘data plane’ that is responsible for directing the flow of data packets and the ‘control plane’ that provides the instructions to the data plane.
In commercial routing and switching hardware the control plane function is typically fulfilled by a proprietary operating system. In switches and routers that support SDN the control plane function is removed from the physical device and implemented in software running on standard servers.
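To make that split concrete, here is a toy sketch (the class and method names are illustrative, not any real SDN controller’s API) in which the data plane only performs flow-table lookups, while all the decision-making lives in a separate controller that pushes rules down to the device:

```python
class DataPlane:
    """Forwards packets by flow-table lookup; holds no routing logic itself."""

    def __init__(self) -> None:
        self.flow_table: dict[str, str] = {}  # destination address -> output port

    def forward(self, dst: str) -> str:
        # A real switch would punt packets with no matching rule up to
        # the controller; here we just report that outcome.
        return self.flow_table.get(dst, "send-to-controller")


class Controller:
    """Centralised control plane: computes routes and installs rules remotely."""

    def install_rule(self, device: DataPlane, dst: str, port: str) -> None:
        device.flow_table[dst] = port


switch = DataPlane()
controller = Controller()

print(switch.forward("10.0.0.5"))  # prints "send-to-controller": no rule yet
controller.install_rule(switch, "10.0.0.5", "port-2")
print(switch.forward("10.0.0.5"))  # prints "port-2": rule came from the controller
```

Because the controller is ordinary software on a standard server, the same logic can program every switch in the data centre consistently, which is where SDN’s operational benefits come from.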
The benefits of SDN
SDN enables applications to directly configure the network to meet their requirements. By separating the control plane from the data plane, it provides end-to-end visibility across the network, and it greatly reduces the cost of routers and switches, because they become standard items that do not need proprietary operating systems to perform control plane functions.
Enterprises and carriers benefit from reduced operational expenses, more dynamic configuration capabilities, fewer errors, and consistent configuration and policy enforcement.