
Artificial intelligence (AI) is one of the fastest-growing and most promising technologies, with the potential to revolutionise a wide range of industries, from medicine to the judiciary. Machine learning is a subset of AI in which the AI is not explicitly programmed but rather ‘learns’ to make decisions from training.

An especially powerful technique within machine learning that is currently very popular is deep learning. Deep learning relies on a neural network of ‘nodes’ arranged in layers joined by weighted connections. These neural networks can be trained on datasets to perform functions that are outside the reach of an ordinary algorithm relying on basic logic alone, such as recognising and distinguishing between different animals in pictures or controlling self-driving vehicles. In 2015, DeepMind’s AlphaGo AI beat the European Go champion Fan Hui in its first match, and the world champion in 2016, before going on to compete online against a variety of the world’s best Go players and winning all 60 of its matches.
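As a minimal sketch of the idea (illustrative only, nothing like a production system), each ‘node’ computes a weighted sum of its inputs and passes it through an activation function; a layer is just a collection of such nodes feeding the next layer:

```python
import math

def sigmoid(z):
    """A common activation function: squashes any number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, hidden_weights, output_weights):
    """One forward pass through a tiny two-layer network:
    inputs -> a layer of sigmoid 'nodes' -> a single output node.
    Each row of hidden_weights holds the weighted connections
    into one hidden node."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)))
              for row in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# Example: two inputs, two hidden nodes, one output (weights made up).
y = forward([1.0, 0.0], [[0.5, -0.5], [0.3, 0.8]], [1.0, -1.0])
```

Training would then adjust the weights to reduce the network’s error on a dataset; that step (backpropagation) is omitted here.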

AlphaGo used a deep learning neural network to determine which moves to play. This level of play was only possible with AI, as the game contains an estimated 10^761 possible games; far too many to address with a traditional algorithm. AlphaGo was trained by analysing thousands of games played by expert Go players and then by playing against itself to improve upon its initial knowledge. In 2017 the AlphaGo team revealed a new version of their AI called AlphaGo Zero. This AI did not train on human data at all but taught itself the game from scratch by repeatedly playing against itself. AlphaGo Zero outperformed the original AlphaGo and used less computing power to do so, because it was not influenced by the often inefficient human bias inherent in the provided data.
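The self-play idea can be illustrated at a toy scale. The sketch below is an invented example, nothing like AlphaGo Zero’s actual architecture (which combines a deep network with Monte Carlo tree search): it learns a tiny Nim-style game (take 1 or 2 sticks; whoever takes the last stick wins) purely by playing games against itself and updating a table of move values:

```python
import random

def train_self_play(episodes=30000, eps=0.2, alpha=0.1, start=5):
    """Tabular self-play learning on toy Nim: play epsilon-greedy games
    against itself, then nudge each move's value toward the game outcome
    (+1 for the winner's moves, -1 for the loser's)."""
    random.seed(0)  # deterministic for illustration
    q = {(s, a): 0.0 for s in range(1, start + 1) for a in (1, 2) if a <= s}
    for _ in range(episodes):
        sticks, moves = start, []          # moves[i] belongs to player i % 2
        while sticks > 0:
            legal = [a for a in (1, 2) if a <= sticks]
            if random.random() < eps:      # occasionally explore
                a = random.choice(legal)
            else:                          # otherwise play the best known move
                a = max(legal, key=lambda a: q[(sticks, a)])
            moves.append((sticks, a))
            sticks -= a
        winner = (len(moves) - 1) % 2      # whoever took the last stick
        for i, (s, a) in enumerate(moves):
            outcome = 1.0 if i % 2 == winner else -1.0
            q[(s, a)] += alpha * (outcome - q[(s, a)])
    return q

q = train_self_play()
# After training, the agent prefers leaving the opponent a multiple of
# 3 sticks -- the known optimal strategy -- without ever seeing human play.
```

The point of the sketch is the training signal: no human games are needed, only the rules and the final result of each self-played game.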

This self-teaching approach, however, can only work in an artificial environment like Go, where the rules are simple and easily defined. In the real world a computer cannot simulate every aspect of an environment, so an AI solving real-world problems is dependent on data to train on. As seen with AlphaGo, this introduces human bias into the algorithm’s decision making. While often benign, there are cases where AI learns negative human biases as well. An example of this is the algorithm COMPAS, used to help judges determine the risk of an offender reoffending. An analysis of cases conducted by ProPublica found the algorithm to favour white people and give a higher risk rating to those with a darker skin colour. The creators of the programme, Northpointe Inc. (now Equivant), insisted it was not racist, as race is not one of the inputs the algorithm is trained on.
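A toy, entirely invented dataset can show how this defence falls short: even when group membership is not an input, a correlated proxy feature (here, number of prior arrests) can produce different false positive rates between groups, which is the kind of disparity ProPublica measured:

```python
# Synthetic records (invented numbers, NOT real COMPAS data):
# (group, priors, reoffended)
records = (
    [("A", p, False) for p in [0, 0, 1, 1, 2]] +   # group A non-reoffenders
    [("A", p, True)  for p in [2, 3, 4]] +
    [("B", p, False) for p in [1, 2, 3, 3, 4]] +   # group B non-reoffenders
    [("B", p, True)  for p in [3, 4, 5]]           # happen to have more priors
)

def false_positive_rate(group, threshold=3):
    """Share of non-reoffenders wrongly flagged 'high risk'
    by a rule that never looks at the group: priors >= threshold."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1] >= threshold]
    return len(flagged) / len(negatives)
```

With these made-up numbers, group A’s non-reoffenders are never flagged while most of group B’s are, despite the rule being nominally ‘blind’ to the group.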

In a similar case, a computer science professor building an image recognition programme noticed that when his algorithm was trained on public datasets, some even endorsed by Facebook and Microsoft, classic cultural stereotypes, such as the association of women with cooking or shopping and of men with sports equipment, were not just reproduced but amplified. The problem of AI inheriting negative human bias is not merely a matter of causing offence: when AI-based decision making is used in a real-life context, even for small decisions, it can have serious effects. The Facebook algorithm that decides what content users see on their feed is AI-powered and relies on predictions of what users want to see.
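A crude sketch of that amplification effect (toy numbers, not the professor’s actual experiment): a model that simply predicts the majority label per activity turns a 2:1 skew in the training annotations into a 100% skew in its predictions:

```python
from collections import Counter

# Invented annotation counts: who is pictured doing each activity.
train = {"cooking": ["woman", "woman", "man"],
         "sports":  ["man", "man", "woman"]}

def majority_model(data):
    """Deliberately naive 'classifier': always predict the most
    common gender seen for each activity in the training data."""
    return {activity: Counter(labels).most_common(1)[0][0]
            for activity, labels in data.items()}

model = majority_model(train)
train_share = train["cooking"].count("woman") / len(train["cooking"])  # 2/3
pred_share = 1.0 if model["cooking"] == "woman" else 0.0  # always 'woman'
```

Real image classifiers are far subtler than a majority vote, but the mechanism is the same: optimising for accuracy on skewed data rewards leaning into the skew.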

While this allows Facebook to target advertisements at different groups of people, it also isolates users, effectively trapping them in a ‘bubble’ of like-minded content. This can be used by campaigners to sway opinions with targeted campaigns, the likes of which were thought to have influenced both the most recent US presidential election and the Brexit referendum, and it increases polarisation generally through reduced exposure to conflicting ideas. AI is such a fast-paced technology that it is always a step ahead of the lawmakers, so the onus for the ethical use of AI falls on the engineers creating and using it. This problem is becoming increasingly relevant as more and more technologies become reliant on AI. With self-driving cars slated to start driving soon and AI increasingly used even in pharmaceutical research, the tolerance for error keeps decreasing as ever more important issues are at stake. The largest barrier to solving problems around and working with AI is the opacity of the algorithm.

A neural network has so many nodes and layers that it is impossible to tell how it has reached a conclusion just by looking at it. Researchers are currently working on building AI that can explain or justify its results, for example by highlighting particularly relevant parts of the input. Even though this doesn’t fully explain results, it gives some insight into how a decision was reached.

One solution to the problems presented would be to move away from deep learning, which is opaque to engineers, towards more transparent methods of machine learning. Machine learning based on a probabilistic approach, while not yet as powerful as neural networks, is being explored. A leader in this field is Uber, which has recently open-sourced its own probabilistic programming language, ‘Pyro’. Alternatively, if neural networks are used, greater care must be taken in selecting the data that they are trained on if they cannot train by themselves. Research is being done into mitigating the effect of biases in data to reduce the amplification effect.
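The ‘highlight relevant parts of the input’ idea can be sketched with an occlusion test: remove each part of the input in turn and report the removal that changes the model’s output the most. Here a hand-weighted word scorer stands in for an opaque network (all names and weights are invented):

```python
def score(words, weights):
    """Stand-in for an opaque model: a sentiment score over words."""
    return sum(weights.get(w, 0.0) for w in words)

def most_relevant(words, weights):
    """Occlusion-style explanation: the word whose removal shifts
    the score the most is reported as the most relevant input."""
    base = score(words, weights)
    return max(words,
               key=lambda w: abs(base - score([x for x in words if x != w],
                                              weights)))

weights = {"terrible": -2.0, "movie": 0.1, "was": 0.0, "the": 0.0}
top = most_relevant(["the", "movie", "was", "terrible"], weights)
```

As the essay notes, this does not explain *why* the model weighs that input heavily; it only points at where the decision is most sensitive.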

More important, however, in deciding what data to use is determining the application of the AI. Some algorithms need to reflect the reality of the world in the decisions they make, even if that appears insensitive, in order to make accurate predictions. In many cases, however, we do not want AI to judge based on biased data from the past. Here engineers may need to train AI on data that has been checked for, and cleansed of, unwanted bias, although this may not be feasible for large datasets.
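One such data-cleansing technique is reweighing, sketched here in the spirit of Kamiran and Calders’ method (a simplified sketch, not the canonical implementation): each (group, label) combination gets a weight so that, under the weights, the label becomes statistically independent of the group:

```python
from collections import Counter

def reweigh(samples):
    """Weight for each (group, label) pair:
    P(group) * P(label) / P(group, label).
    Under these weights the joint distribution factorises, i.e.
    label and group become statistically independent."""
    n = len(samples)
    group_counts = Counter(g for g, _ in samples)
    label_counts = Counter(y for _, y in samples)
    joint_counts = Counter(samples)
    return {(g, y): (group_counts[g] / n) * (label_counts[y] / n)
                    / (joint_counts[(g, y)] / n)
            for (g, y) in joint_counts}

# Toy data: group 'A' skews toward label 1, group 'B' toward label 0.
samples = [("A", 1)] * 3 + [("A", 0)] + [("B", 1)] + [("B", 0)] * 3
w = reweigh(samples)
```

After reweighing, the weighted number of positive and negative examples is equal within each group, so a learner trained with these weights cannot profit from the group–label correlation.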

Additionally, as society’s morals change, the AI would become ethically ‘out of date’. In the end it may be best to leave AI doing what it is best at, working within well-defined environments, and not have it make automated decisions without a human checking the result and verifying that it does not go against common sense.