In recent times, concerns over biased AI systems have dominated the media and the minds of tech innovators. It turns out that simply developing an AI model and letting it do its work is not acceptable. Debates continue over who is responsible for the potential flaws in AI systems. Who should be held accountable when a self-driving car has an accident? Who is to blame when algorithms discriminate against a certain group of people? Such questions are important, not for pointing the finger at culpable parties, but for creating a shared responsibility among innovators, owners, and users of AI technology.
A key issue that industry and society need to confront is how to embed the philosophical concept of ethics into inanimate, unthinking algorithms, especially when there is no one-size-fits-all definition of ethics. AI is amoral, not immoral – though its unintended consequences can feel like the latter.
It is hard to discern whether tech companies are genuinely dedicated to creating responsible AI or are merely paying lip service to avoid formal regulation. Recent public disclosures point to some bad actors. Google's reputation came under scrutiny after it dismissed its lead AI ethics researcher, Timnit Gebru, following her co-authorship of a paper critical of the large language models central to Google's business; Gebru had earlier co-authored research showing that commercial facial recognition systems discriminated against women of colour. Another tech giant, nowadays calling itself Meta (which many ethics specialists lambaste for its questionable business operations), has also been criticised over its Responsible AI team, which critics have called "superficial and toothless".
Many experts in the AI field doubt that the AI ethics movement will bear any fruit in actual practice. Depressingly, a Pew Research Center survey found that more than two-thirds of the experts surveyed do not expect most AI systems to be designed with ethical principles focused primarily on the public good by 2030.
Tech giants often use a theatrical approach to reassure the public that serious issues such as historical bias, discrimination and social profiling are being addressed by their grandly named AI ethics boards. Meta's Responsible AI team has been steered towards tackling AI bias while leaving untouched questionable company practices such as algorithms that exploit subliminal techniques. Moreover, many of the concerns highlighted by the AI ethics movement, such as racial bias in the judicial system, cannot be mitigated by AI systems alone. These major challenges will require more remediation than simply tweaking an algorithm and, because they are deeply ingrained in society, we will need society as a whole to identify and implement appropriate safeguards and solutions.
While there is hope that these problems can be resolved, they almost certainly cannot be solved by a homogeneous group of experts from a single field.
The publicity AI technology receives mostly goes one of two ways: either it is the solution to all of humanity's problems or the ultimate evil that will destroy us all. A different approach is needed, one that sees AI for what it is: a powerful technology that can and will shape many aspects of our lives and the society we live in. Humans have the ability to shape how AI operates. AI ethics should not be a PR stunt to conceal unacceptable uses of such a powerful, and potentially useful, technology. All AI should be ethical AI, and to make this a reality we will need to hear many different voices, including citizens, ethicists, tech leaders, governments and end-users.
The public shouldn't fall for the idea that flawed, immature AI systems can be amended and fixed after deployment by simply adding a human into the loop, or assume that some magical antidote exists. Responsible AI systems must be designed as such from the outset, rather than built on top of already flawed algorithms.
As the academic David Edelman has said: "perfect AI isn't coming any time soon. It's up to us to get specific about where and when we're willing to tolerate it".
To curb harmful uses of AI systems, the European Commission released a proposal for the EU AI Act. The proposal is still awaiting further amendments from the European Parliament and the Council of the European Union. The proposed legislation sets out a vision of how AI technology should be regulated and who should take responsibility for it. The AI Act takes a flexible, risk-based approach, classifying AI technology according to the risk it poses to the public. Certain AI technologies, such as real-time biometric surveillance or social scoring by government bodies, face the harshest treatment: an outright ban. Other applications of AI, however, leave unanswered questions and potential loopholes, which will hopefully be addressed in further amendments to the document.
Dr Adrian Byrne, a Marie Curie Fellow Researcher at CeADAR and a Lead Researcher at the AI Ethics Centre at Idiro Analytics, noted that the “proposed legislation might struggle to be effective while coexisting alongside other legislation”.
“For example, the regulation of social media is to be handled separately within the proposed Digital Services Act despite the prominent use of AI on its platforms. This non-consolidated approach risks causing confusion and extra burden for the regulators, affected companies and individuals,” Dr Byrne said.
Experts are also concerned about how member states will implement the regulations set by the EU. The example of GDPR suggests that implementation may be slow and less effective than hoped: recent reports have criticised GDPR enforcement as insufficient, hampered by operational, financial and staffing difficulties.
“The EU needs to back up this legislation with real resources for the regulators of each member state. Otherwise, it risks becoming toothless,” said Dr Byrne.
We have seen green legislation falter and endless climate conferences yield little progress in dealing with the environmental crisis the world is facing. Now we are left to wonder whether the EU AI Act will be just another addition to the list of political lip service.
Ethics is a hard concept to define: it is more art than science, and often subjective and cultural. But that does not mean we should give up on trying to get it right.
Idiro AI Ethics Centre
Please get in touch with us if you have an interesting take on the subject or would like to share your thoughts. If you work in AI or ethics (or both), we'd love to hear your opinion.
And if you are interested in reading more about AI ethics, please subscribe to our mailing list.