Why AI ethics must be more than a ‘tick box exercise’

Shaun Cheatham, Chief Relationship Officer at Hays

In the latest episode of the ‘How Did You Get That Job?’ podcast, I sat down with Antony and Claire Roberts, co-founders of Full Fathom Five, to explore the ethical dimensions of AI. With over 40 years of combined experience in digital transformation, their insights were not only timely, but deeply thought-provoking. 

As AI continues to reshape the way we work, the ethical implications are becoming harder to ignore, impacting an organisation’s relationship with its customers, partners and people. From data bias to transparency, Antony and Claire made it clear: ethics in AI isn’t just a compliance issue - it’s a strategic imperative. 

Why do the ethics surrounding AI matter? 

Ethical dilemmas are nothing new when it comes to industrial revolutions. Typically, technological advancements have led to concerns around job displacement and reskilling. We’re yet to see the true extent of that with AI, but the moral concerns go deeper this time. 

Antony, who’s led transformation at major brands including Audi and TUI, describes AI as a “general purpose technology” whose impact is on a par with electricity. However, unlike previous tech waves, AI is accessible to everyone, which, while empowering, also raises serious questions. 

“Are these tools ethical in terms of how they are compiled?” Antony asks. “Can you explain what’s under that hood? Do you understand how bias might be creeping into some of the data?” 

He likens the issue to “dolphin-friendly tuna” - a metaphor for transparency in sourcing. If organisations turn a blind eye to how their AI tools have been trained, and the output is being influenced by biased or unethically sourced data, the results can end up at odds with their values. Furthermore, if that output is feeding into business decisions or strategy, customers and employees deserve to know. 

Lack of oversight leaves progress at risk 

In the four years that I’ve been hosting ‘How Did You Get That Job?’, I’ve spoken to numerous guests about the importance of introducing different profiles into tech. Gaining new perspectives on customer needs and transferring ideas from different backgrounds are crucial to successful innovation. 

Claire, who previously held the role of Senior Director for Value Transformation and Change at Arm, is actively involved in the UK AI Trade Association’s ‘Women in AI’ working group. 

Her concern? That AI could undo years of progress in workplace diversity. 

“We’ve spent a decade delicately building this diversity soufflé. Now we risk it collapsing because large language models are trained on historical data that doesn’t reflect today’s inclusive values.” 

What does this mean? Claire explained that it’s not just about removing offensive language, but about eradicating the subtle biases that creep in. For example, assumptions based on outdated source data, such as doctors all being male, can reinforce damaging stereotypes and undermine innovation.


Ethics as a brand differentiator 

One of the most powerful takeaways from Claire was the idea that ethics can offer more to organisations than just a legal safeguard. 

“Ethics and compliance started as a tick box exercise,” Claire notes. “But now you can use AI ethics as a brand positioning tool. You can make a stance by having an ethical charter or a safe usage policy.” 

Furthermore, Claire adds, this proactive approach doesn’t require a full AI strategy. It can even be a starting point for organisations to commit to transparency, safety and keeping humans involved and front of mind. It’s an easy win that can help your business stand apart from competitors who are still finding their feet. 

What should decision makers do? 

If you’re a business leader wondering how to navigate the ethical minefield that AI has created, here are three actionable steps based on Antony and Claire’s advice: 

  • Audit your AI usage to understand what data your models are trained on. Ask vendors about sourcing, bias mitigation and transparency. If you can’t explain it, don’t deploy it.
  • Create an ethical AI charter that defines your company’s stance on AI usage, focusing on factors such as fairness and accountability, as well as the role humans have to play. This can build trust with customers, employees and partners.
  • Start with business strategy. I’m sure you’ve heard this before, but it bears repeating: don’t implement AI for the sake of it. There are lots of ways that these tools can help your organisation, but they won’t all be worth the effort. Identify measurable business challenges and explore whether AI can support them. As Claire points out: “If you weren’t measuring the problem before, it might not be worth solving with AI - it wasn’t a real, tangible problem to start with.” 

For more insights from Antony and Claire on integrating AI into your organisation, listen to our full conversation here.


Author

Shaun Cheatham
Chief Relationship Officer at Hays

Shaun is responsible for the creation and execution of sales strategies, as well as running the Major and National accounts organization, for Hays in the US. With almost 30 years of staffing industry experience, Shaun now hosts the Hays Technology podcast, ‘How Did You Get That Job?’.
