Transparency is key to ethical AI
Making AI implementation transparent and honest
Dr Iain Brown is Head of Data Science at SAS UK & Ireland
Artificial Intelligence (AI) is becoming commonplace in the running of our lives and businesses – we’re all used to the idea, if not quite the practice, of using AI to improve the way we live and work.
As a result, the time has come to stop debating what it can do and start discussing what it should do. AI has the capacity to be both good and bad – what matters most is the intention of those who use it. Yet ethics isn’t just concerned with the end goal. The means are just as important.
Data is the fuel that feeds AI, and as such it’s now also firmly a part of public ethics across the globe. Regulations like the EU’s GDPR and South Korea’s Personal Information Protection Act have gone so far as to enshrine certain data rights into law. Organisations have to comply with these regulations, doing all they can to protect customer data and secure consent for feeding that data to their algorithms.
However, ethics extends much further than simply obeying the law. Consumers are increasingly aware of the importance of their data and its power to impact and change their lives. This is particularly true when data meets AI. The law moves slowly, and it’s fair to say that many people will hold companies to more stringent expectations of behaviour than regulators stipulate.
Trust is still foundational to business. A quarter of customers will take action if they feel their data isn’t being respected or protected. If you can’t guarantee good behaviour, you’ll quickly become irrelevant.
Who watches the watchmen?
There are no easy answers when it comes to ethics. Yet, when trying to determine if your use of AI is ethical, you should ask yourself three basic questions: do you know what your AI is doing, can you explain it to your customers, and would they respond happily once you told them? If the answer is ‘no’ to any one of these, then it’s time for a rethink.
To inspire confidence and trust in AI, you need to take transparency seriously. For those organisations yet to cement their approach to AI ethics, adopting an existing framework is ideal. Indeed, the fairness, accountability, transparency and explainability (FATE) framework provides a good place to start.
FATE encourages responsibility and transparency at every stage of the AI process, from data collection to analysis. AI decisions must be fair and unbiased, and this means being sensitive to team makeup and the data set used during the build and design phase. Only a diverse range of inputs and stakeholders can prevent unconscious bias from embedding itself in the system.
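To make that concrete, here is a minimal sketch of the kind of fairness check a team might run before a model ships. It compares positive-outcome rates across groups in a protected attribute; the column names, the sample data and the 20% tolerance are illustrative assumptions, not a prescribed standard:

```python
# A minimal fairness check: compare a model's positive-outcome rates
# across groups. Column names ("gender", "approved") and the tolerance
# are illustrative assumptions for this sketch.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Return the gap between the highest and lowest positive-outcome rates."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.max() - rates.min()

predictions = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "M", "F"],
    "approved": [1,   0,   1,   1,   1,   0],
})

gap = demographic_parity_gap(predictions, "gender", "approved")
if gap > 0.2:  # illustrative tolerance; set by your own ethics review
    print(f"Warning: approval rates differ by {gap:.0%} across groups - review before release")
```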
Crucially, FATE insists that the process by which data is turned into insight must be transparent and explainable. Consumers should always retain the right to question a decision and to opt out if they wish.
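Explainability can be as simple as being able to say which factors moved a decision and by how much. The toy sketch below does this for a linear scoring model; the feature names and weights are invented for illustration:

```python
# A toy illustration of explainability: with a linear scoring model,
# each feature's contribution to a decision can be stated in plain terms.
# Feature names and weights are made up for this sketch.
weights   = {"income": 0.4, "years_at_address": 0.25, "missed_payments": -0.6}
applicant = {"income": 0.8, "years_at_address": 0.5,  "missed_payments": 0.3}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

print(f"Score: {score:.2f}")
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {'raised' if c > 0 else 'lowered'} the score by {abs(c):.2f}")
```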
Finally, there should be human oversight to ensure no automated decision is made that betrays the company’s values, ethics or regulatory obligations. AI development is still in its infancy – we’re bound to make mistakes, and those mistakes can lead to negative consequences for consumers. Manual review will help uncover and resolve these errors, ensuring customers feel (and remain) safe.
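One common way to operationalise that oversight (a sketch under assumed thresholds, not the author’s prescription) is a confidence gate that actions only high-confidence decisions and routes the rest to a human reviewer:

```python
# A sketch of human-in-the-loop oversight: automated decisions below a
# confidence threshold are queued for manual review rather than actioned.
# The threshold and the decision structure are illustrative assumptions.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.9  # illustrative; tune to your risk appetite

@dataclass
class Decision:
    customer_id: str
    outcome: str
    confidence: float

def route(decision: Decision) -> str:
    """Action high-confidence decisions; escalate the rest to a human."""
    if decision.confidence >= REVIEW_THRESHOLD:
        return "auto-actioned"
    return "queued for manual review"

for d in [Decision("c-101", "approve", 0.97), Decision("c-102", "decline", 0.62)]:
    print(d.customer_id, d.outcome, "->", route(d))
```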
AI is an integral part of our future, yet so much of society’s thinking around it is still dystopian. It’s sadly common for us to fear what we don’t quite understand. In fact, erring on the side of caution is practically programmed into our brains as a species. To grow support and acceptance for AI, therefore, developers and organisations first have to demystify it.
Dr Iain Brown is Head of Data Science at SAS UK & Ireland.