The good, the bot and the ugly of ethics in AI 

14 Nov 2024 | Blog

[Image: an elderly person sits with a medical robot]
Would you feel comfortable receiving medical care from a robot?

When thinking about adopting AI in your organisation, is – or was – ethics on your radar? If you haven’t considered it, read on to get ahead. This blog post will give you a high-level overview of what you need to consider and why.  

The definition of ethics  

It’s likely that you’ve read this far without really considering what ‘ethics’ means. You just kind of know, right? But if you think about it a little deeper, what actually are ethics to you? Your personal feelings of what’s morally right or wrong? Accepted behaviour in society? Are ethics tied to religion or to the laws of the land? It’s quite hard to pin down, as being ethical can mean different things to different people. And whilst you can associate ethics with all the above, there are always exceptions. For example – and we’re keeping it relatively light here – you know it’s not right to ghost your date… but plenty of people do it anyway. 

[Image: a person running through the woods in a sheet, like a ghost]
Have you done this? Not very ethical

The word itself comes from the Ancient Greek word ‘ethos’, meaning ‘character’ or ‘personal disposition’. Nowadays, according to the Markkula Center for Applied Ethics (and they should know), ethics means two things. Firstly, it refers to well-founded standards of right and wrong that prescribe what humans should do, usually in terms of rights, obligations, benefits to society or fairness. Some really basic examples would be that we shouldn’t steal or murder, and that we should help someone if they’re injured. But of course, there are a lot more obligations and ethical norms than that! We’ll be exploring themes such as the right to privacy and freedom, and treating people as individuals and with respect.

Secondly, ethics refers to the study and development of your own ethical standards. As the ghosting example shows, feelings, laws and social norms can – and often do – deviate from what’s ethical. So, we need to constantly examine our standards to ensure they’re reasonable and well-founded.

Ethics also means the continuous effort of studying our own moral beliefs and conduct, and striving to ensure that we, and the organisations we create and represent, live up to standards that are reasonable and expected.  

Whoops

If you’re now having a light existential crisis, we can only apologise. Let’s move on to ethics in AI specifically. If you’ve never really thought about it before, the best way to grasp its importance is to look at what’s gone wrong before. Here are four ethical faux pas, where ‘whoops’ won’t always cut it…

[Image: a shopping trolley in the middle of a supermarket aisle]
One day AI might be able to nudge shoppers towards healthier choices
  1. Tesco say they might use AI and Clubcard data to nudge shoppers towards healthier choices 

This created a bit of a hoo-ha in the press and it’s not even happened… yet. We were even asked to comment for this interesting article in The Grocer. Most discussion centred on whether it’s right that a supermarket is able to reach into the lives of its shoppers and influence them. Jake Hurfurt, head of research and investigations at Big Brother Watch, criticised the idea, saying it represents a form of surveillance. “Tesco has no right to make judgments about what’s in our baskets or nudge us on what we should and should not be buying”, he said. There are obvious arguments for and against it; you can see what our CGO J Cromack thinks about it here.

  2. Air Canada’s chatbot gave incorrect information to a bereaved traveller 

In 2022, Air Canada’s chatbot assured a passenger that he could book a full-fare flight to his grandmother’s funeral and then apply for a bereavement fare afterwards. However, when the passenger applied for the discount, the airline said the chatbot had been wrong – the request needed to be submitted before the flight – and refused to honour it. The airline argued the chatbot was a “separate legal entity that is responsible for its own actions”. Clearly this is a wild excuse – if they’re not in charge of their AI, who is?! Unsurprisingly, the British Columbia Civil Resolution Tribunal rejected that argument, ruling that Air Canada had to pay £642.64 in damages and tribunal fees.

  3. ChatGPT’s training violated copyright law 

A former OpenAI researcher raised concerns about the company’s data practices, claiming that OpenAI’s data harvesting – which included copyrighted and paywalled materials – violated copyright law. He argued that AI companies undermine the economic viability of content creators by exploiting their work without compensation. These revelations shine a light on broader ethical concerns around how AI is trained. There have since been calls for stricter regulation and fairer compensation, and the debate continues as the industry faces potential legal and financial consequences over its data collection practices.

  4. Medical hallucinations 

Whisper, an AI-powered transcription tool widely used in the medical field, has been found to ‘hallucinate’ text, posing potential risks to patient safety. Worse, the tool is used within a product that deletes the underlying audio from which transcriptions are generated, leaving medical staff with no way to verify their accuracy.

A study with the excellent title Careless Whisper: Speech-to-Text Hallucination Harms found that the tool hallucinated in about 1.4% of its transcriptions, sometimes inventing entire sentences, nonsensical phrases or even dangerous content, including violent and racially charged remarks. The researchers also found that Whisper often inserted phrases during moments of silence in medical conversations, and estimated that 40% of its hallucinations could have harmful consequences.

Things you need to consider

So, now we’ve gasped and gawped at the muck-ups, what do you need to consider before you start using AI in your organisation? Here are some pointers:

Staying fair and unbiased  

Sometimes, due to the data they’re using, AI systems can discriminate or reinforce bias towards certain groups or individuals. Of course, that’s never the intention (or it shouldn’t be at least!) but if the data the AI is using is already biased, you can set off a vicious circle of self-fulfilling prophecies.  

To guard against this, regularly audit your algorithms for bias and try to use diverse, representative datasets. You can also use techniques like adversarial debiasing, where a classifier and an adversary model are trained in parallel: the classifier learns to predict the task at hand, while the adversary learns to predict a protected attribute (such as gender) from the classifier’s output. Penalising the classifier whenever the adversary succeeds nudges it towards predictions that don’t encode the bias. There’s a rough sketch of the idea below.
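Here’s a minimal, illustrative sketch of that training loop in PyTorch. It’s a toy, not a recommendation: the synthetic data, network sizes and the lam weight are all assumptions made purely for the example.

```python
# A rough sketch of adversarial debiasing (PyTorch). Everything here is
# illustrative: the synthetic data, network sizes and the lam weight
# are assumptions, not a production recipe.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: 8 features, a binary task label y, and a binary
# protected attribute z (standing in for something like gender).
X = torch.randn(1000, 8)
y = (X[:, 0] + 0.5 * torch.randn(1000) > 0).float().unsqueeze(1)
z = (X[:, 1] + 0.5 * torch.randn(1000) > 0).float().unsqueeze(1)

classifier = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

clf_opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
adv_opt = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # how hard to punish outputs that leak the protected attribute

for step in range(2000):
    # 1) Train the adversary to recover z from the classifier's output.
    adv_loss = bce(adversary(classifier(X).detach()), z)
    adv_opt.zero_grad()
    adv_loss.backward()
    adv_opt.step()

    # 2) Train the classifier to predict y while *fooling* the adversary,
    #    so its outputs stay accurate but carry no signal about z.
    logits = classifier(X)
    clf_loss = bce(logits, y) - lam * bce(adversary(logits), z)
    clf_opt.zero_grad()
    clf_loss.backward()
    clf_opt.step()
```

In practice, you’d tune lam to trade task accuracy against fairness, and you’d judge the result with proper fairness metrics on held-out data rather than by eyeballing the loss.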

Being clear 

Is your rationale for using AI clear? Not everyone is going to be interested, but employees and customers alike can be distrustful of AI. So, make sure you can clearly answer some basic questions:

  • How is using AI benefiting your organisation?  
  • How do your AI systems make decisions or take action?  

Depending on your users and scope, models with user-friendly interfaces that show how a decision was reached can be helpful for building trust.

Who is responsible? 

Don’t be like Air Canada! It’s a good idea to establish clear guidelines for AI development and to define protocols for assigning responsibility – in case of system failures, decisions causing harm or incorrect decisions being made. For example, who’d be responsible if a driverless car is involved in an accident? Who takes charge and looks into where the fault occurred – is it a software fault, a manufacturing fault or a user error? Having clear, documented guidelines should not only reduce the chances of something unethical happening, but also increase trust in the process.

Data privacy and rights  

It’s a no-brainer; you need to be able to safeguard user data and ensure your AI systems handle and process information in a way that respects privacy rights and data protection regulations – just as you do with the rest of your data. If you know anything about Salocin Group, you’ll know we’re huge champions of the privacy dividend. Ideally, you should be able to perform a robust Data Protection Impact Assessment (DPIA) on your proposed AI solution, and you’ll need to know what happens to the personal data that’s processed as part of the AI system. If you need help with all that, Edit and Wood for Trees – Salocin Group brands working in the for-profit and not-for-profit spaces respectively – offer a privacy review, which is a great place to start.

Staying safe and reliable  

You need to be as sure as you can that your AI systems are safe, reliable and free from unintended consequences or risks, including cyberattacks. That means rigorous testing, validation and simulation of AI systems across a range of scenarios, and adopting fail-safe mechanisms like kill switches. You’ll also need to continuously monitor AI systems for unexpected behaviour, like Whisper’s dodgy medical hallucinations. A sketch of what a simple fail-safe might look like follows below.
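To make the fail-safe idea concrete, here’s one possible shape for a wrapper in Python. The predict function, confidence floor and error budget are all hypothetical; the point is simply that the system stops acting on its own when it’s failing or unsure.

```python
# A minimal fail-safe wrapper sketch. The predict function, confidence
# floor and error budget are hypothetical examples, not recommendations.

class FailSafeModel:
    """Wraps a predict function with a confidence floor and a kill switch."""

    def __init__(self, predict_fn, confidence_floor=0.6, error_budget=5):
        self.predict_fn = predict_fn  # expected to return (label, confidence)
        self.confidence_floor = confidence_floor
        self.error_budget = error_budget
        self.consecutive_failures = 0

    def predict(self, features):
        # Kill switch: stop serving AI decisions after repeated failures.
        if self.consecutive_failures >= self.error_budget:
            raise RuntimeError("Kill switch tripped: route all work to humans")
        try:
            label, confidence = self.predict_fn(features)
        except Exception:
            self.consecutive_failures += 1
            return None  # fall back to the manual process
        self.consecutive_failures = 0
        if confidence < self.confidence_floor:
            return None  # too uncertain to act automatically
        return label


# Example with a stub model that always answers confidently.
model = FailSafeModel(lambda features: ("approve", 0.9))
print(model.predict({"amount": 120}))  # -> approve
```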

[Image: a white marble angel statue]
You don’t have to be angelic, but you should aim to do good with your AI

Doing good 

Have you considered the broader impact of AI on society? Ideally, you should be aiming for positive contributions to societal well-being and, at the very least, not doing any harm. For example, is your AI helping to empower vulnerable customers? Or could it be taking advantage of them?

This is a tricky area because, as we’ve mentioned, personal ethics and morals vary, and as humans, we don’t always do the right thing, even when we know what that is. We’re complicated beasts! But we must try our best. To guard against negative outcomes, we’d suggest conducting thorough impact assessments in collaboration with a diverse set of stakeholders.

Human control and autonomy  

No one wants the robots to take over, so you need to ensure humans remain in control of your AI and that it defers to human decision-making authority. One way of guaranteeing this is designing AI systems in collaboration with humans, allowing for human intervention and oversight when decisions are made; a simple version of that pattern is sketched below. At Salocin Group, we strongly believe that humans are the key to making AI work well – it won’t take your job, unless you really want it to.
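As a simple illustration of that pattern, the sketch below lets low-risk decisions execute automatically while anything riskier waits for a person to sign off. The risk_score function, the threshold and the queue are hypothetical stand-ins for whatever your own systems look like.

```python
# Human-in-the-loop sketch: low-risk decisions run automatically,
# everything else queues for human approval. The risk_score function
# and threshold are hypothetical stand-ins for your own systems.
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, decision):
        self.pending.append(decision)

    def approve_next(self):
        # Called by a human reviewer after checking the decision by hand.
        return self.pending.pop(0) if self.pending else None

def act(decision, risk_score, queue, risk_threshold=0.5):
    """Execute low-risk decisions automatically; escalate the rest."""
    if risk_score(decision) < risk_threshold:
        return decision        # safe to automate
    queue.submit(decision)     # a human must sign off first
    return None

# Example: refunds over £100 count as high risk and get escalated.
queue = ReviewQueue()
act({"refund": 250}, lambda d: 1.0 if d["refund"] > 100 else 0.0, queue)
print(len(queue.pending))  # -> 1
```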

Understanding your tools 

Following on from that last point, it’s crucial that users are adequately trained to use your AI in the way it’s intended. They need to understand what it should be doing and why, and to stay up to date with changes and updates. Without this, you won’t be able to spot when things go wrong or notice unexpected patterns.

Need a bit more help?

It’s worth keeping in mind that every situation is different – not all the above points will be a major concern for you and your organisation. If you’d like a broader overview of how to get started with AI, download our guide – it’s for people who don’t know where to start but are feeling the pressure to begin. Or you can take our AI readiness questionnaire. It’s important not to jump straight in; taking the time to consider your approach means you can truly reap the benefits. In the guide, we talk a bit more about ethical considerations, plus we go through the hype and truth about AI, the questions you need to ask (and answer!) before you start, the potential pitfalls and how to actually get going.

Get in touch today

Interested in speaking to us? We'd love to hear from you.