“It’s going to be interesting to see how society deals with artificial intelligence, but it will definitely be cool.” – Colin Angle
This is a quote that will likely resonate with many of us. Artificial intelligence has already driven an array of incredible and fascinating developments in the way we operate as a society, and the speed of change has been startling.
For good or for bad, what is clear is that the days of speculating over a future filled with killer robots, self-driving cars, and virtual realities are over. Why? Because those days are very much with us already.
Thankfully, very smart people all over the world, working independently, within governments, and within global organizations are working rapidly to mitigate the inevitable risks and blunders that accompany the development of these technologies.
The work they are doing can be broadly characterized as work in artificial intelligence ethics – a field of ethics, and a field within the technology industry, that is concerned with ensuring that artificial intelligence is a positive force in the world, now and into the future.
Why we need artificial intelligence ethics
Many people assumed that the advent of artificial intelligence and its steady integration into society would quickly make the world a safer place to live. After all, artificial intelligence systems are programmed to function in very specific ways, and as a result, they aren't prone to the range of errors that human beings can make.
An excellent example: many believed that the use of artificial intelligence within the criminal justice system would dramatically improve the fairness of judicial processes. After all, artificial intelligence systems are logical thinkers, stripped of all emotion and, therefore, of all bias.
Surely, then, there could be no better candidate for making objective, rational decisions about crimes and their perpetrators. Perhaps unsurprisingly to some, this is not the reality that manifested, and what we actually saw from the use of artificial intelligence in judicial processes was far more complicated. Let's take a look:
PATTERN is an algorithm used by the U.S. Department of Justice that is programmed to predict the likelihood of an inmate reoffending upon release from prison (i.e., the inmates' recidivism risk).
PATTERN stands for Prisoner Assessment Tool Targeting Estimated Risk and Needs (BOP, 2020). The algorithm emerged from the passing of the First Step Act, a piece of legislation concerned with reforming criminal justice procedures (Cyphert, 2020).
The results of PATTERN's risk assessment were used to decide whether certain prisoners were eligible for early release. If the algorithm determined that a prisoner had a 'low risk' of reoffending, that prisoner was eligible for early release; if the algorithm determined otherwise, they were not (Urban Institute, 2021).
The relevance of this to artificial intelligence ethics is that PATTERN's assessments overpredicted the recidivism risk of non-white individuals because of the quality and context of the data it was using. Among the data points the algorithm relied on were 'criminal history' and 'neighborhood characteristics', and if biases exist within those data points, the algorithm will ultimately express them.
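To make the mechanism concrete, here is a deliberately simplified sketch of how a threshold-based risk tool can inherit bias from its inputs. The feature names, weights, and cutoff below are hypothetical illustrations, not PATTERN's actual model:

```python
# Hypothetical sketch of a risk-score tool. All feature names, weights,
# and the threshold are invented for illustration; PATTERN's real model
# is far more complex.

def risk_score(features, weights):
    """Weighted sum of feature values: a higher score means higher predicted risk."""
    return sum(weights[name] * value for name, value in features.items())

def eligible_for_early_release(features, weights, low_risk_threshold=2.0):
    """Eligible only if the score falls below the 'low risk' cutoff."""
    return risk_score(features, weights) < low_risk_threshold

# 'neighborhood_disadvantage' acts as a proxy that can encode historical
# bias unrelated to any individual's behavior.
weights = {"prior_convictions": 1.0, "neighborhood_disadvantage": 1.5}

# Two people with identical criminal histories but different neighborhoods:
person_a = {"prior_convictions": 1, "neighborhood_disadvantage": 0.2}
person_b = {"prior_convictions": 1, "neighborhood_disadvantage": 0.9}

print(eligible_for_early_release(person_a, weights))  # True  (score 1.3)
print(eligible_for_early_release(person_b, weights))  # False (score 2.35)
```

Because the two hypothetical individuals differ only on a neighborhood proxy, the model reaches different eligibility decisions despite identical criminal histories: bias in the data becomes bias in the output.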
This is why there is a need for artificial intelligence ethics in conversations surrounding the technology. By involving ethicists in the development and implementation of artificial intelligence, we can help to mitigate unintended consequences and generally make the technology a safer, and more effective tool.
As mentioned at the beginning of the blog, ethics has already had a massive impact on the sector, with many tech companies like Twitter, Microsoft, Apple, and IBM hiring teams and individuals whose role is to ensure that the company's artificial intelligence systems are having the desired effect, and none of the unintended consequences.
You might also have noticed an upsurge in content on artificial intelligence principles and values, which serve to guide the behavior of organizations in a more ethical direction. Principles and values like transparency, trust, fairness, and privacy are frequently adopted by companies as representations of their ethical conduct and provide a great deal of clarity about the right way to build and use these technologies.
This trend will hopefully only continue. In order for it to do so, attention needs to remain on uncovering the ways in which current and emerging artificial intelligence technologies can function as net-positive tools in the world, tools that aren’t only useful for those that wield them but are also useful for those that don’t.
This doesn't mean that the path toward this ideal will be a smooth one. We learn only by making mistakes, and artificial intelligence is no different. Problems will inevitably arise; what matters is that we, as individuals, communities, and companies, work to find solutions in the right way.
The horizons for artificial intelligence are vast, and to reiterate the words of Colin Angle, it's going to be interesting to see how society continues to deal with the technology. What is clear, though, is that with the right approach, the right principles, and the right ambitions, the future of artificial intelligence is going to be very, very cool.