Computational governance: The key to building safe and compliant AI

The lady of justice comes for AI

The stakes are becoming increasingly high for companies developing AI in highly regulated industries. In sectors such as healthcare and finance, compliance is not just a legal obligation, but a crucial part of building trust between organizations and their customers.

As machine learning models require ever more diverse data – often from multiple sources across different organizations – the need for a compliant solution increases. While developers rush to create the most sophisticated machine learning models, data custodians are searching for a means to make their data available to these developers – thereby realizing its value.

One emerging solution is computational governance, which describes the ability to control, supervise and track all aspects of computations on data. For companies sitting on terabytes of valuable data, computational governance is a route to making that data available for ML while ensuring governance, security and privacy. Although nascent, it could be a key component in unlocking the real potential of data for data owners.

Robin Röhm

Co-founder and CEO of Apheris.

Defining your controls

Computational governance allows data custodians – the organizations that own the data – to set the required level of privacy and define access controls on the computational level. This dictates who can run which computations on which of their data assets, and for what purpose. In essence, only authorized computations that align with the custodian’s requirements can be executed on the data, ensuring compliance with privacy and AI regulations.
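To make this concrete, here is a minimal sketch of what a computation-level access policy could look like. All names here (AssetPolicy, is_authorized, the example users and purposes) are hypothetical illustrations, not a real product's API: the idea is simply that a custodian's policy is checked against the user, the requested computation, and the stated purpose before anything runs on the data.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AssetPolicy:
    """Hypothetical policy a data custodian attaches to one data asset."""
    asset_id: str
    allowed_users: frozenset
    allowed_computations: frozenset  # e.g. {"train_model", "aggregate_stats"}
    allowed_purposes: frozenset      # e.g. {"medical_research"}

def is_authorized(policy: AssetPolicy, user: str, computation: str, purpose: str) -> bool:
    """A computation runs only if user, computation and purpose all match the policy."""
    return (
        user in policy.allowed_users
        and computation in policy.allowed_computations
        and purpose in policy.allowed_purposes
    )

policy = AssetPolicy(
    asset_id="patient_records_2024",
    allowed_users=frozenset({"alice"}),
    allowed_computations=frozenset({"train_model"}),
    allowed_purposes=frozenset({"medical_research"}),
)

print(is_authorized(policy, "alice", "train_model", "medical_research"))  # True
print(is_authorized(policy, "bob", "train_model", "medical_research"))    # False
```

In a real system the check would be enforced by the platform at execution time and every decision logged, which is what gives the custodian the audit trail described above.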

The result is that companies can monitor and track who does what with their data, whilst also maintaining the ability for users of the data to update their models so long as they comply with asset policies.

This is essential for several reasons. Firstly, it helps organizations comply with regulations such as GDPR and HIPAA, which require organizations to protect the privacy and security of personal data. Computational governance helps organizations meet these requirements by ensuring that only authorized individuals have computational access to data, that data is only used for approved purposes, and raw data is never directly shared.

Additionally, computational governance plays a vital role in the development of ethical and responsible AI models. For example, in healthcare, it means AI models can be trained solely on data for purposes that are compliant whilst ensuring privacy is being protected.

Making data available

Data is the lifeblood of modern organizations, but it's only as valuable as the insights that can be mined from it.

Each time data is moved, it is exposed to threats such as theft and tampering. If it's moved outside of its environment by being shared with another organization, the owner loses control of how their data is used. As a result, the data loses much of its value to the owner.

Federated learning trains AI models without data ever moving from its secure location, which allows data custodians to make their data available to developers without giving up control of it.
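The core idea can be sketched in a few lines. The following is a toy illustration of federated averaging (FedAvg) on a one-parameter model, with hypothetical names throughout: each "site" runs a gradient step locally on data that never leaves it, and a coordinator combines only the resulting model weights, weighted by dataset size.

```python
# Toy sketch of federated averaging: raw data stays at each site,
# only model weights travel to the coordinator.

def local_update(w, local_data, lr=0.1):
    # Local training step on a 1-D least-squares model y = w * x,
    # computed where the data resides.
    grad = sum(2 * x * (w * x - y) for x, y in local_data) / len(local_data)
    return w - lr * grad

def federated_average(updates, sizes):
    # Coordinator combines weight updates, weighted by dataset size.
    total = sum(sizes)
    return sum(u * n for u, n in zip(updates, sizes)) / total

# Two custodians hold data generated by y = 2x; it never leaves them.
site_a = [(1.0, 2.0), (2.0, 4.0)]
site_b = [(3.0, 6.0)]

w = 0.0
for _ in range(50):
    updates = [local_update(w, site_a), local_update(w, site_b)]
    w = federated_average(updates, [len(site_a), len(site_b)])

print(round(w, 2))  # converges to 2.0
```

Real federated learning adds secure aggregation, differential privacy and much larger models, but the structural point is the same: computation travels to the data, and only derived results travel back.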

Keeping proprietary data protected as a valuable asset is critically important to organizations of all sizes, enabling data custodians to derive further value from it, whether through commercialization or productization. By not moving data, the custodian remains in full control, meets data residency and sovereignty requirements, and retains business value.

The ability to leave data where it resides also supports compliance with regulations such as the GDPR, which restricts where and how personal data can be transferred, and the EU AI Act, which imposes strict requirements on AI systems.

Why don’t companies do this already?

It is likely that many companies don’t use computational governance methods simply because they are unaware of the option to retain data control while having algorithms sent to data. Consequently, their way of addressing regulatory concerns is to not make data available, thereby opting to remain in silos. A mindset shift is required if change is to happen.

Compliant methods of leveraging customer data can dilute its inherent value, hindering its potential to fuel the advancement of AI. As a result, many organizations struggle to balance data value with compliance requirements, especially in Europe.

Centralizing data and negotiating new data-sharing agreements have enabled data collaboration up to a point, but these approaches are often lengthy and costly, and are unlikely to remain workable given the pace of regulatory change and technological advancement.

Companies are left at a crossroads: Do they prioritize compliance or innovation?

Taking the next step to tackle society’s biggest problems

In a changing regulatory environment, being agile yet compliant is not just an aspiration but a critical business imperative. Computational governance can serve as a catalyst for organizations to securely leverage their data assets to enable innovative, compliant, and trustworthy AI.

If companies are able to securely make their data available for ML and AI, they can truly differentiate themselves, allowing them to remain competitive and provide the data to develop products that can benefit society. By improving the quality of data available to developers, we move beyond ChatGPT to a world where AI truly makes a difference.

After months of hype around AI, a solution such as computational governance could support data custodians by making their data available to help advance real-world solutions to today's problems, such as those in medical research.

By productizing your customers’ data in a compliant manner, you can be at the forefront of innovation and push the boundaries of AI responsibly.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc.

Robin Röhm is Co-founder and CEO of Apheris, the platform for powering federated machine learning and analytics.