Data centres in disaster zones
Is your service safe? Global data centre traffic is set to grow from 2.6 zettabytes in 2012 to 7.7 zettabytes by 2017, nearly a tripling of workload. Global cloud traffic will soon dominate data centre traffic, and it's growing fastest in the Middle East and Africa.
Tragedy could hit any data centre at any time. Hurricanes are common in the eastern US, and earthquakes in the west, while tornadoes, freezing winters, floods, and of course terrorism, can happen almost anywhere. It's less about choosing a data centre in a safe location and more about knowing what its Plan B is.
"Take care in selecting a data centre and have full knowledge of its disaster planning," advises Jim Cowie, Chief Scientist at cloud-based internet performance company Dyn. "If your data centre facility isn't properly prepared for a potential disaster, you are risking major outages and potential loss of revenue."
Cowie recommends investing in an intelligent load balancer. "If you rely on only one data centre to hold and serve all of your information, it's only a matter of time before something happens, causing your site to go down," he says. "With failover, in the event of an outage on your primary server, you can redirect traffic to a redundant, off-site server." An intelligent load balancer can distribute traffic geographically and has built-in failover mechanisms.
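The failover behaviour Cowie describes can be sketched in a few lines: prefer the primary server while it passes health checks, and fall back to a standby when it doesn't. The hostnames and health-check results below are hypothetical, and a real load balancer would probe servers continuously rather than consult a static table.

```python
def pick_server(servers, is_healthy):
    """Return the first healthy server, preferring earlier (primary) entries."""
    for server in servers:
        if is_healthy(server):
            return server
    raise RuntimeError("no healthy servers available")

# Simulated health checks: the primary is down, so traffic fails over.
health = {"primary.example.com": False, "backup.example.com": True}
chosen = pick_server(["primary.example.com", "backup.example.com"], health.get)
```

In this sketch `chosen` becomes the backup host; geographic load balancing would extend the same idea by ordering the server list per client region.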
However, Palladino thinks that the internet at large already has a built-in back-up plan. "In the rare event of a natural disaster that severs cable connections, intelligent routing capabilities can help organisations to ensure continued network availability and prevent any disruptions," he says. "These technologies have the potential to scan networks globally for traffic-impacting issues like outages, packet loss and latency, and redirect traffic over alternative and stable internet paths."
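The path selection Palladino describes, steering traffic away from outages, packet loss and latency, can be sketched as a scoring function over candidate routes. The path names and metrics here are invented for illustration; real systems derive them from continuous network measurements.

```python
def best_path(paths):
    """Pick a usable path: exclude outages, then prefer low loss, then low latency."""
    usable = [p for p in paths if not p["outage"]]
    if not usable:
        raise RuntimeError("no usable paths")
    return min(usable, key=lambda p: (p["loss_pct"], p["latency_ms"]))

candidates = [
    {"name": "transatlantic-1", "outage": True,  "loss_pct": 0.0, "latency_ms": 80},
    {"name": "transatlantic-2", "outage": False, "loss_pct": 2.5, "latency_ms": 95},
    {"name": "via-south",       "outage": False, "loss_pct": 0.1, "latency_ms": 140},
]
selected = best_path(candidates)
```

Here the severed cable ("transatlantic-1") is excluded outright, and the longer but cleaner southern route wins over the lossy alternative.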
Cowie agrees, but warns against complacency. "The internet is a dense web of domestic and international connectivity, and in response to any possible disruption the internet will route around catastrophic damage and keep the packets flowing, despite terrible chaos and uncertainty." However, Cowie thinks that companies should always have a separate backup plan to reinforce their online presence.
Security bugs, viruses and hackers
So far in 2014 we've had Heartbleed and Shellshock, and numerous targeted hacks, but despite the cybersecurity doom-merchants the internet still appears to be working.
"The number of variables involved in deliberately trying to blackout the entire internet make it next to impossible – even when there's availability of widespread vulnerabilities such as Shellshock – these are still limited in the scope of their impact," says Chappell. "While there are key points on the internet that can result in widespread outages, targeting them all comprehensively would need understanding, timing and coordination beyond the imagining of most of us."
Others aren't so sure. "Hackers are getting more sophisticated and staying a step ahead of security measures," says Marc Malizia, CTO of cloud solutions provider RKON Technologies. "This will escalate until companies start taking the threat seriously and put resources and cutting edge technologies in place to protect their devices, including mobile phones and laptops."
Malizia predicts that in 2015 more organisations will begin equipping mobile devices with security software.
In the longer term, the Internet of Things is cybersecurity's next frontier – no-one is going to want their car/fridge/toilet hacked.
A network node goes down
Companies like AT&T have Network Disaster Recovery teams that jump on a plane as soon as a 'smoking hole' appears on their global networks. However, just as likely as an earthquake or a 9/11 scenario is a complete overload of a network node during, say, a protest or some other unscheduled mass event.
So high are the stakes that an entire industry is watching internet traffic in the hope of stopping or circumventing an outage. So far, it's been very successful, and even major disasters like October 2012's Hurricane Sandy, which ripped through the east coast of the US, have failed to cause lasting disruption to the internet.
RIPE NCC – one of the five Regional Internet Registries (RIRs) that support the critical infrastructure of the internet around the world – used its RIPE Atlas to measure the effects of the storm on the internet and see how traffic was diverted around the problem. The measurements show how network operators are able to compensate for all but the most severe damage to infrastructure, and function almost normally even without one of their critical hubs.
"We do need to be concerned about our dependency on the internet, but thankfully its architects had already thought about its resilience and robustness," says Chappell, who points out that in the early days of the net the technology being used wasn't very reliable. Consequently it's designed to be resilient enough to cope with frequent dropouts.
"The Internet operates as a packet-switching network, so even in normal operation there's no certainty that two sequential data packets in a transmission will follow the same route, and it doesn't matter that they don't," says Chappell. "It means that when a disaster or attack takes out a portion of the network, the rest can carry on operating, actively routing traffic around the failure until it's resolved."
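Chappell's point about routing around failure can be illustrated with a toy packet-switched network: when a node disappears, a shortest-path search over what remains simply finds another route. The node names and topology below are invented, and hop-count search stands in for the far richer routing protocols the real internet uses.

```python
from collections import deque

def shortest_path(graph, src, dst):
    """Breadth-first search: a fewest-hops path from src to dst, or None."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def without(graph, node):
    """Simulate a node failure by removing it and all links to it."""
    return {n: [m for m in nbrs if m != node]
            for n, nbrs in graph.items() if n != node}

# Four routers in a diamond: A connects to B and C, both of which reach D.
net = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
normal_route = shortest_path(net, "A", "D")            # goes via B
rerouted = shortest_path(without(net, "B"), "A", "D")  # B fails, goes via C
```

Losing router B doesn't strand the traffic; the surviving path via C carries it, which is the resilience Chappell describes writ small.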
So resilient is the internet, Chappell believes, that a major disruption to it would be really bad news. "A total internet blackout from a natural disaster is going to leave us with more to worry about than whether we can get to our Facebook accounts," he says, "as it's likely to have also affected many more of the fundamental requirements for life."