AI can sometimes feel like the tech industry’s secret sauce, underpinning many of the most pervasive tech-driven changes that have impacted society in the past decade. From the smart assistants that have redefined customer service, to the tools that detect and prevent payment fraud, to new ways of predicting and tracking the spread of infectious diseases, AI is already touching most of our lives, whether we choose to engage with debates over its use or not. In fact, many of us may not even be aware we are using AI at all.
Mark Mamone is CTO at digital identity specialist, GBG.
Issues around data
When it comes to data privacy, ensuring consent and the legitimate use of data is essential. If data is the new oil in our digital age, then AI is the way we transform that data into something useful and, indeed, valuable.
As AI becomes more pervasive, transparency and explainability are critical in providing reassurance that decisions reached through the use of AI are sound and free from bias. Decision-making is currently being overhauled by an explosion in the volume of available data and the growing power of machine learning algorithms.
While it is true that data can enable us to see where bias is happening and measure whether our efforts to combat it are effective, it could also, depending on how it is deployed, make problems worse. There are, sadly, multiple examples of algorithms amplifying existing biases. We now know that algorithmic discrimination may arise when an algorithm is built in such a way that it discriminates against a certain demographic, for example. It is vital that the algorithms themselves, as well as the data on which they depend, are carefully designed, developed, and managed to avoid unwanted and negative consequences. Ensuring that algorithms are free from bias and that results are suitably validated is crucial.
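One common way to make this measurable is to compare a model's decision rates across demographic groups, a check often called demographic parity. The sketch below is illustrative only: the loan decisions, group labels, and the idea of flagging a large gap are assumptions for the example, not a standard or a real dataset.

```python
# Toy sketch: checking demographic parity on a model's decisions.
# The decisions and group labels below are hypothetical illustrative data.

def approval_rate(decisions, groups, target):
    """Share of positive decisions (1 = approved) for one demographic group."""
    members = [d for d, g in zip(decisions, groups) if g == target]
    return sum(members) / len(members)

# Hypothetical loan decisions and the group each applicant belongs to.
decisions = [1, 1, 0, 1, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rate_a = approval_rate(decisions, groups, "A")  # 4/5 = 0.8
rate_b = approval_rate(decisions, groups, "B")  # 2/5 = 0.4
gap = abs(rate_a - rate_b)                      # 0.4

# A large gap between groups is a signal worth investigating
# before the model is deployed -- it does not prove bias on its own.
print(f"Approval gap between groups: {gap:.2f}")
```

Checks like this are a starting point for the validation the article describes; in practice they would be run across many metrics and far larger samples.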
The buck stops with all of us
Fairness is of course highly subjective. Even before the advent of AI there were many different interpretations and definitions of exactly what we mean when we talk about “fairness”. Now that complex algorithms are being applied to decision-making systems, it comes as no surprise to learn that these definitions have multiplied. We need technical expertise to help us understand and work within the available definitions and choices, but fundamentally the decision about how we ensure that AI operates fairly is one for society as a whole to navigate – it is not a question for us to pose to data scientists and then forget about. Decisions around the use of AI only gain legitimacy if they are accepted by society as a whole.
The role of the tech industry
The tech industry's understanding of the implications of AI is rapidly maturing, as are the relevant regulations and policies. We already have robust regulations in place (e.g. GDPR) which govern data, but moving forwards we will see regulations governing the AI models themselves and the algorithms behind them. There are also technological advancements being made which will ensure AI technology becomes less inherently problematic, including privacy-enhancing technologies (PETs), for example. PETs ensure that encrypted data can be used without losing its value or, crucially, needing to be decrypted. This lack of decryption is key here, as the privacy and integrity of the data remain intact. From a privacy perspective it is particularly exciting to look forward to the way technology will remove opportunities for human error, ensuring the AI technology of the future is compliant by design.
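To give a flavour of the "compute without decrypting" idea behind PETs, the toy sketch below uses additive secret sharing: each private value is split into random shares that are meaningless on their own, yet the shares can be summed so that only the aggregate is ever revealed. This is a simplified stand-in for real PETs such as homomorphic encryption, not a production scheme; the salary figures are hypothetical.

```python
import random

# Toy sketch of one privacy-enhancing technique: additive secret sharing.
# A single share reveals nothing about the value it came from, but the
# shares can be combined to recover a sum without exposing the inputs.

MODULUS = 2**61 - 1  # all arithmetic is done modulo a large prime

def split(value, n_shares=3):
    """Split a value into n random additive shares."""
    shares = [random.randrange(MODULUS) for _ in range(n_shares - 1)]
    last = (value - sum(shares)) % MODULUS
    return shares + [last]

def combine(shares):
    """Reconstruct the original value (or an aggregate) from shares."""
    return sum(shares) % MODULUS

# Two parties' private values, never pooled in the clear.
salary_a, salary_b = 52_000, 61_000
shares_a, shares_b = split(salary_a), split(salary_b)

# Each shareholder adds its own shares locally; only the total is revealed.
summed_shares = [(a + b) % MODULUS for a, b in zip(shares_a, shares_b)]
total = combine(summed_shares)

print(total)  # 113000 -- the sum, computed without exposing either salary
```

Real PET deployments use hardened cryptographic schemes and libraries rather than hand-rolled arithmetic, but the principle is the same: useful computation happens while the underlying data stays protected.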