Public cloud services have now firmly established themselves as efficient, effective and affordable solutions for organisations of all kinds. This is mirrored in recent adoption rates: this year, Gartner expects the public cloud market to grow by 17% to a total of $266.4 billion, with over two-fifths of SMEs favoring the public cloud and 45% of enterprises favoring a hybrid cloud strategy (making use of both public cloud and on-premise services).
However, one concern that frequently holds organisations back from embracing the public cloud - whether wholesale, or as part of a hybrid cloud strategy - is information security. A recent survey of cybersecurity professionals found that 93% were moderately to highly concerned about public cloud security, with over three-fifths of those surveyed specifically concerned about leaks and data privacy. These fears are particularly acute when it comes to hosting sensitive workloads such as financial, personnel, or payroll information, where a leak or a breach could cause major damage to your team and your organisation as a whole.
Some sensitive workloads are indeed appropriate to host on a public cloud. However, given the damage that can be done if a mistake is made, you need to approach the risks systematically, both ahead of a migration and throughout the time you're using a public cloud to host sensitive information. To do this, an organisation needs to conduct regular risk analyses of its public cloud environment.
What your risk analysis consists of is going to heavily depend on your workloads and the exact nature of your environment. However, there are several issues that regularly come up when looking at the risks of a public cloud environment: the contractual controls you have with a provider, the architectural controls you have in your software environment, and the technical controls you have over your hardware.
From the outset, you should make sure you're working with a cloud service provider (CSP) who can meet your information security needs. However, this scrutiny shouldn't end once the procurement process is over. Instead, you should regularly review your CSP's terms, conditions, and contractual commitments to ensure that the standard of security you need is being met.
One of the first things you should look at in your risk assessment is the set of ISO security standards your CSP promises to abide by. You will need to review your own risks and obligations, and make sure that your CSP complies with all the standards relevant to your organisation. Standards that regularly come up in these reviews include ISO 27001, the international information security management standard; ISO 27701, which ensures that an ISO 27001-compliant system also respects data privacy standards (such as GDPR); ISO 27017, which provides cloud-specific information security controls; and ISO 27018, which provides practices focused on protecting personal data in the cloud.
Going beyond these standards, check your contract to see what provisions your CSP makes if a leak does happen. You should look at questions regarding liability for leaks, the recommended response if information does leak, and the review process after a leak occurs. Beyond this, you should also check how your CSP handles information security, such as staff screening policies, in the data center itself.
The area where your audit has the most power to directly investigate and influence are your architectural controls, meaning how software is configured to regulate and demarcate the data that goes into the public cloud. This is particularly important in a hybrid cloud environment, where workloads are running in both the public cloud and on-site servers. In a hybrid cloud environment, good architectural controls will allow you to keep as much sensitive data and workloads on-site as possible. To assess all relevant risks in this area, you should work with your developers and operations teams to understand your environment and its architecture.
One of the first considerations, if your organisation is using a hybrid cloud setup, is whether the team ensures that the workloads identified as most sensitive always stay on-premises, by making use of the scheduling tools found in cloud platforms like OpenShift or OpenStack. To ensure that only relevant data is passed between your public cloud and on-premise servers, you should also check that the organisation is using API controls to regulate the flow of data, along with a clear monitoring strategy and set of controls. Within your environment, you should also make use of tools such as virtual LANs (VLANs) to partition and control the flow of sensitive data within your organisation.
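As a concrete illustration of that scheduling approach: OpenShift builds on Kubernetes scheduling, so a sensitive workload can be pinned to on-site machines with a node selector, assuming administrators have labelled those machines first. The label key, workload name, and image below are all hypothetical, not standard values:

```yaml
# Sketch of a Kubernetes/OpenShift pod spec that pins a sensitive
# workload to nodes carrying an illustrative "location: on-premise"
# label, so the scheduler never places it on public cloud nodes.
# Admins would first label their on-site nodes, e.g.:
#   kubectl label node <node-name> location=on-premise
apiVersion: v1
kind: Pod
metadata:
  name: payroll-batch                         # hypothetical workload name
spec:
  nodeSelector:
    location: on-premise                      # illustrative label key/value
  containers:
  - name: payroll
    image: registry.example.com/payroll:1.0   # placeholder image
```

In practice this would be combined with taints on public cloud nodes, so that sensitive pods are repelled even if a selector is forgotten.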
Architectural controls are continually evolving as technology and standards improve, which demonstrates the need for your risk analysis to be frequently revised. To do this, an ongoing dialogue with your software team is absolutely essential.
Finally, your risk analysis should also take into account the hardware configuration of machines in your organisation. These technical controls can provide vital physical security for confidential data and workloads, and provide an effective defense against malicious attacks on your organisation - whether these attacks be conducted physically or in cyberspace.
Considerations for technical controls can include the use of hardware security modules (HSMs), which are devices that can regulate access to your organisation's data. In particular, if you have some data in your public or hybrid cloud that only a very small number of people in the organisation needs to access, an HSM can provide very strong assurance that nobody outside that circle can access that information.
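To make the access-control idea concrete, here is a minimal sketch that simulates the HSM boundary in pure Python: the key lives only inside the object, and callers can request cryptographic operations but can never read the key itself. A real HSM enforces this boundary in hardware and exposes the same pattern through an interface such as PKCS#11; the class and method names here are invented for illustration.

```python
import hashlib
import hmac
import secrets

class HsmSketch:
    """Toy stand-in for an HSM: the key never leaves this object.

    Illustrative only -- a real HSM enforces this boundary in
    tamper-resistant hardware, not in a Python class.
    """

    def __init__(self) -> None:
        # Key material is generated inside the "device" and never exported.
        self.__key = secrets.token_bytes(32)

    def sign(self, data: bytes) -> bytes:
        # Callers receive a MAC over the data, never the key itself.
        return hmac.new(self.__key, data, hashlib.sha256).digest()

    def verify(self, data: bytes, tag: bytes) -> bool:
        # Constant-time comparison avoids timing side channels.
        return hmac.compare_digest(self.sign(data), tag)

hsm = HsmSketch()
tag = hsm.sign(b"payroll record 42")
print(hsm.verify(b"payroll record 42", tag))  # True
print(hsm.verify(b"tampered record", tag))    # False
```

The design point is that applications depend on the device's operations (sign, verify, encrypt) rather than on holding the key, which is what makes the small-circle access guarantee enforceable.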
In addition, chipmakers are beginning to roll out trusted execution environments (TEEs) in their processors. TEEs provide a place for a device to execute code that is ring-fenced from threats on the rest of the device, meaning that any malware elsewhere on the device shouldn't be able to access confidential information. In the age of smartphones, which provide many new vectors for attack, this is invaluable. TEEs are becoming increasingly available, though using them can be complex. It makes sense to keep an eye on product developments from AMD and Intel and updates from the Confidential Computing Consortium, a Linux Foundation project which tracks technology in this space and encourages open source initiatives and projects to make use of TEEs.
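A key part of how TEEs are used in practice is remote attestation: before a secret is released to code running inside the enclave, a verifier checks a "measurement" (a hash) of that code against an expected value. The sketch below shows only that core idea in pure Python; all names are invented, and real attestation additionally involves hardware-rooted signatures over the measurement.

```python
import hashlib

# Highly simplified sketch of the attestation idea behind TEEs:
# a verifier releases a secret only when the reported measurement
# of the enclave code matches the value it expects.

EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted_enclave_code_v1").hexdigest()

def measure(code: bytes) -> str:
    """Stand-in for the hardware measuring the loaded enclave code."""
    return hashlib.sha256(code).hexdigest()

def release_secret(reported_measurement: str, secret: str):
    """Verifier: hand the secret only to the expected, unmodified code."""
    if reported_measurement == EXPECTED_MEASUREMENT:
        return secret
    return None  # unknown or tampered code gets nothing

print(release_secret(measure(b"trusted_enclave_code_v1"), "db-password"))
print(release_secret(measure(b"malware_v1"), "db-password"))
```

Because any change to the code changes its hash, malware running in place of (or alongside) the trusted code cannot obtain the secret, which is the property the paragraph above describes.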
Your risk analysis is likely to uncover many things your organisation needs to monitor in your public cloud migration, both within and well beyond the domains of contractual, architectural, and technical controls. The migration is going to be the first and largest hurdle, but you must not view the risk analysis as a completed task; your approach to risk in your environment is going to continually evolve based on contractual, architectural, and technical developments.
- Mike Bursell is the Chief Security Architect at Red Hat.