A recipe for avoiding disaster in the cloud

A business doesn’t get to choose when disaster strikes. Whether it’s as mundane as a server outage or as serious as a fire, disasters always take us by surprise and leave us to deal with the consequences. However, that doesn’t mean that you can’t prepare for the worst. Indeed, careful planning and backup solutions can mitigate the worst effects of almost any crisis. 

Over the years, countless regulations have helped businesses to protect their workers and customers. However, as data grows in strategic and business value, it too must be made part of the equation. In the event of a disaster, securely transferring data and applications to an external environment is critical. For many companies, the cloud has emerged as the ideal location for this; it is large, secure and easy to deploy data to and from. Yet it can’t cure all ills by itself.

When automation is lacking and an organisation depends on manual processes and human supervision, data is always at risk. Valuable company data can be lost, while time is wasted by IT departments attempting manual restores. In such an environment, it’s imperative that all data and applications be transferred to a cloud-based backup infrastructure. Without such a system, each manual transfer that staff perform reduces the efficiency of the disaster recovery process, costing time and money.

Consider these essential steps to ensure all your critical data remains protected:

1. Meet your deadlines

Every business application has a unique and intrinsic value for the company. Therefore, each should have its own recovery time requirements. In practice, mission-critical services – those applications that your business could not function without – should not be down for more than 15 minutes. These requirements are typically stipulated by an application’s Recovery Time Objective (RTO) and Recovery Point Objective (RPO).

Put simply, the RPO indicates the amount of data that an application can afford to lose during an outage without causing unacceptable harm to the business. The RTO, meanwhile, defines how much time can elapse before all elements of the application need to be up and running again. Fundamentally, any recovery strategy must be planned around these set RTOs and RPOs. For sensitive applications in particular, data transfer or replication technology is advisable in order to meet strict 15-minute requirements.
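
To make these objectives concrete, here is a minimal Python sketch of checking whether a backup schedule and a recovery run meet their targets; the 15-minute figures mirror the mission-critical example above, and the timestamps are purely illustrative.

```python
# A minimal sketch of RPO/RTO compliance checks; the targets and
# timestamps are illustrative assumptions, not prescriptions.
from datetime import datetime, timedelta

RPO = timedelta(minutes=15)  # maximum tolerable data loss window
RTO = timedelta(minutes=15)  # maximum tolerable downtime

def meets_rpo(last_backup: datetime, failure: datetime) -> bool:
    """Data written after the last backup is lost; that window must fit inside the RPO."""
    return failure - last_backup <= RPO

def meets_rto(failure: datetime, restored: datetime) -> bool:
    """Every element of the application must be back up within the RTO."""
    return restored - failure <= RTO

failure = datetime(2020, 1, 1, 12, 0)
print(meets_rpo(failure - timedelta(minutes=10), failure))  # True: 10 minutes of data lost
print(meets_rto(failure, failure + timedelta(minutes=25)))  # False: a 25-minute outage breaches the RTO
```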

2. Find a place to automate

To err is human. Therefore, to ensure a rapid, faultless recovery of data to the cloud, the disaster recovery process must be automated. The ideal solution would trigger the entire process with a single click. From an IT and business continuity perspective, this is advantageous in the event of major incidents such as flooding or a fire, where employees may not be available or may react badly under stress.

Many business applications are complex, with functions that depend on one another. In an emergency, however, some may be left to fail in favour of protecting others. Ultimately, unless the entire application, with all its layers and dependencies, is protected by an automated process, the business will suffer longer and more expensive downtime.
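
As an illustration of why automation must respect those dependencies, the sketch below recovers hypothetical application layers in a safe order from a single trigger; the layer names and the restore() stub are assumptions for this example, not any vendor’s actual tooling.

```python
# A minimal sketch of single-trigger, dependency-aware recovery; the
# layer graph and restore() stub are hypothetical.
from graphlib import TopologicalSorter  # Python 3.9+

# Each layer maps to the set of layers it depends on.
dependencies = {
    "database": set(),
    "app_server": {"database"},
    "web_frontend": {"app_server"},
    "reporting": {"database"},
}

def restore(layer: str) -> None:
    print(f"Restoring {layer} from the cloud backup...")  # stand-in for the real restore step

def recover_application() -> None:
    """One click, one call: every layer is restored with its dependencies first."""
    for layer in TopologicalSorter(dependencies).static_order():
        restore(layer)

recover_application()
```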

3. Imagine different scenarios 

Organisations that make use of the cloud in their disaster recovery strategies must decide at what level of granularity they want their applications and data restored. This needs to be done on a case-by-case basis. Depending on the crisis, they may have to consider whether they need to back up a handful of virtual machines, modify a large number of complex applications, or restore a data centre in its entirety. The company’s recovery strategy will need to be adaptable enough to manage these different scenarios quickly and flexibly.
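
As a rough sketch of this case-by-case decision, the mapping below pairs hypothetical crisis scenarios with the restore granularity each one calls for; the scenario names and targets are assumptions for illustration only.

```python
# A minimal sketch of scenario-driven recovery scopes; every name here
# is a hypothetical placeholder.
RECOVERY_PLANS = {
    "single_vm_failure": {"scope": "virtual_machine", "targets": ["vm-042"]},
    "application_outage": {"scope": "application", "targets": ["billing", "crm"]},
    "site_disaster": {"scope": "datacentre", "targets": ["dc-primary"]},
}

def plan_for(scenario: str) -> dict:
    """Pick the restore granularity that matches the incident at hand."""
    return RECOVERY_PLANS[scenario]

print(plan_for("application_outage"))
```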

A disaster recovery plan should be detailed and comprehensive. This will determine its success in the multitude of crises that can befall a business.

4. Break the boundaries of the cloud

In a multi-cloud scenario, using a different tool for each cloud environment is not advised. If the IT infrastructure is fragmented across numerous tools, a single, across-the-board overview becomes impossible. In a disaster, speed of response is reduced as employees grapple with disparate systems, each one needing a specific skill set that might not be available at the time. The result, inevitably, is higher operating costs, longer outages and the increased chance of data loss.
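
To show what a single across-the-board tool might look like, here is a minimal Python sketch of one interface fronting several cloud environments; the provider classes are hypothetical stand-ins, not real vendor SDK calls.

```python
# A minimal sketch of one recovery interface over many clouds; the
# providers below are hypothetical, not real SDKs.
from abc import ABC, abstractmethod

class CloudRecoveryProvider(ABC):
    @abstractmethod
    def restore(self, application: str) -> None: ...

class AwsProvider(CloudRecoveryProvider):
    def restore(self, application: str) -> None:
        print(f"Restoring {application} in AWS")

class AzureProvider(CloudRecoveryProvider):
    def restore(self, application: str) -> None:
        print(f"Restoring {application} in Azure")

def recover_everywhere(apps_by_cloud: dict) -> None:
    """One overview, one skill set: the same call recovers every environment."""
    for provider, apps in apps_by_cloud.items():
        for app in apps:
            provider.restore(app)

recover_everywhere({AwsProvider(): ["billing"], AzureProvider(): ["crm"]})
```

Because every environment sits behind the same interface, staff need to master one tool rather than one per cloud.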

5. Testing times

The final step is one of the easiest, but it is all too often forgotten. At the end of the day, the only way to know if a disaster recovery process works – and how long it takes – is to test it. New data and potentially new environments are being added to businesses every day, so these tests should be performed regularly to ensure everything is protected.
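
As a closing sketch, the drill below times an automated recovery run against its RTO; run_recovery() is a hypothetical hook into whatever process step two automated, and the target is illustrative.

```python
# A minimal sketch of a regular disaster recovery drill; run_recovery()
# and the RTO target are hypothetical.
import time
from datetime import timedelta

RTO = timedelta(minutes=15)  # illustrative target from step one

def run_recovery() -> None:
    time.sleep(1)  # stand-in for the real automated recovery run

def drill() -> None:
    start = time.monotonic()
    run_recovery()
    elapsed = timedelta(seconds=time.monotonic() - start)
    status = "PASS" if elapsed <= RTO else "FAIL"
    print(f"Recovery took {elapsed}; RTO {RTO}: {status}")

drill()  # schedule regularly, e.g. via cron, as new data and environments appear
```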

Applications, and the data that supports them, are fast becoming the lifeblood of businesses. They are too important an asset to be left vulnerable. If you fail to prepare, prepare to fail.

Daniel de Prezzo, Head of Technology, Southern Europe at Veritas Technologies 

Daniel de Prezzo is the Head of Technology for Southern Europe at Veritas Technologies. He has more than 25 years of experience in Information Technology and is a passionate advocate for technology and innovation. Today he helps organisations to successfully accelerate their digital transformation.