Introducing bias & the human mind  

The human mind is a landscape filled with curiosity and, at times, irrationality, motivation, confusion, and bias. The latter adds layers of complexity to how both human and, more recently, artificial slants affect artificial intelligence systems from concept to scale.

Bias often appears unintentionally, whether in human decision-making or in a dataset, but its impact on output can be sizeable. With several cases over the years highlighting the social, political, technological, and environmental impact of bias, this piece explores the topic and offers some thoughts on how such a phenomenon can be managed.

Whilst there are many variations and interpretations (some of which could themselves be biased), rather than settling on a definition, let's explore how the human mind might work in certain scenarios.

Imagine two friends at school (friend A and friend B) who've had a falling out and made up again after apologies were exchanged. With friend A's birthday coming up, they're going through their invite list and land on friend B (who they fell out with).

Do they invite friend B and risk the awkwardness of another falling out, or take the view that they should only invite those they've always got along with? The twist is that friend A may also have had minor fallings-out with the other attendees in the past, but they're interpreting those through the lens that any previous falling out is insignificant enough to be overlooked.

The follow-up question from the above example is whether friend A's decision is fair. Now, fairness adds to the difficulty, as there's no scientific definition of what fairness really is.

However, some might align fairness with making a balanced judgment after considering the facts, or with doing what is right (even if that's biased!). These are just a couple of ways in which the mind can distort and mould the completion of tasks, whether they're strategic or technical.

Before going into the underlying ways in which bias can be managed in AI systems, let’s start from the top: leadership. 

Leadership, bias, and Human In the Loop Systems  

The combination of leadership and bias introduces important discussions about how such a trait can be managed. "The fish rots from the head down" is a common phrase used to describe leadership styles and their impact across the wider company and its teams, but the phrase can also be extended to how bias weaves down the chain of command.

For example, if a leader within the C-suite doesn't get along with the CEO, or has had several tense exchanges with them in the past, they may subconsciously develop a blurred view of the company vision that then spills down, with distorted conviction, to their teams.

Leadership and bias will always remain an important conversation in the boardroom, and there have been some fascinating studies exploring this in more depth, for example, Shaan Madhavji's piece on the identification and management of leadership bias [1]. It's an incredibly eye-opening subject, and one that in my view will become increasingly topical as time moves on.


As we shift from leadership styles and bias to addressing bias in artificial intelligence systems, an area that will come under further scrutiny is the effectiveness of human-in-the-loop (HITL) systems.

Whilst their usefulness varies across industries, in summary, HITL systems fuse the art of human intuition with the efficiency of machines: an incredibly valuable partnership where complex decision-making at speed is concerned.

Additionally, when linked to bias, the human link in the chain can be key to identifying bias early on, ensuring adverse effects aren't felt later. On the other hand, HITL won't always be a spring-cleaning companion: the difficulty of assembling a sizeable batch of training data, combined with finding practitioners who can integrate effectively into a HITL environment, can blur the productivity and efficiency gains the company is aiming to achieve.
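To make the HITL idea concrete, here is a minimal sketch of one common pattern: the model acts on high-confidence predictions automatically, while low-confidence cases are deferred to a human reviewer who can catch biased or unreliable outputs before they take effect. The threshold value, the review function, and all names here are illustrative assumptions, not a reference implementation.

```python
# Minimal human-in-the-loop (HITL) routing sketch.
# Assumption: the model exposes a label and a confidence score per item;
# anything under the threshold is deferred to a human reviewer.

CONFIDENCE_THRESHOLD = 0.85  # illustrative cut-off, tuned per use case


def human_review(item):
    """Placeholder: a real system would queue the item for a trained
    reviewer and return their decision."""
    return {"item": item, "label": "needs_review", "reviewed_by": "human"}


def hitl_decide(item, model_label, confidence):
    """Route one prediction: accept it automatically if the model is
    confident enough, otherwise hand it to a person."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"item": item, "label": model_label, "reviewed_by": "model"}
    return human_review(item)


# One confident prediction passes straight through; one is deferred.
auto = hitl_decide("application #1", "approve", 0.95)
deferred = hitl_decide("application #2", "approve", 0.60)
print(auto["reviewed_by"], deferred["reviewed_by"])  # model human
```

The design choice worth noting is that the human is positioned at the point of lowest machine certainty, which is exactly where unexamined bias in the training data is most likely to surface.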

Conclusions & the future of bias  

In my view, irrespective of how much better HITL systems might (or might not) become, bias cannot be eliminated; no matter how advanced and intelligent AI becomes, we won't be able to get rid of it.

It's so deeply woven in that it's not always possible to see, or even discern, it. Furthermore, sometimes a biased trait is only revealed when someone else points it out, and even then there can be bias on top of bias!

As we look to the future of generative AI, its increasingly challenging ethical considerations, and the wide-ranging debate about how far its usefulness will stretch at scale, one thought will always remain at heart: on occasion, we won't be able to mitigate the future impacts of bias until we're right in the moment, with the impact being felt there and then.

Bibliography  

[1] Madhavji, S. (n.d.). Leadership Bias: 12 cognitive biases to become a decisive leader. [online] Available at: https://hospitalityinsights.ehl.edu/leadership-bias.

