Lesson 9: Ethics of AI
Ethics in computing
The area of computer ethics has been widely discussed since the dawn of computing. Over the years, different organisations have worked to create codes of ethics to bring some guidelines to computing, and more recently to AI.
Some examples of codes of ethics are:
- Association for Computing Machinery (ACM). See https://www.acm.org/code-of-ethics.
- Ethics Guidelines for Trustworthy AI. See https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.
Another area drawing much attention is data protection. Consumers are often unaware of what happens to their data. This was evident in cases like the Cambridge Analytica scandal, where Facebook data was used to target voters in elections. To address some of these concerns, additional regulations, like GDPR, have been established to govern data protection.
The challenge with publishing codes of ethics is that they are only a set of guiding principles, and it is often impossible to stop criminal enterprises from violating them.
Ethical Hacking
In recent years the concept of a "Grey Hat" hacker has emerged. A Grey Hat hacker does break laws but is driven by ethical motives. One of the most famous of these was Edward Snowden. Snowden was a systems administrator for the US government who became concerned about the mass surveillance of US citizens. Because of this, he stole thousands of sensitive documents and shared them with reporters.
Hacking of AI
Much work has been done to investigate how an AI model can be hacked. The most common question is whether images can be manipulated to trick the machine into thinking they show something different. This is often done by analysing the neural network model and crafting a specific image, known as an adversarial example, to fool it.
A simple example is a case where a sign post was modified to trick an autonomous car into thinking it was subject to a different speed limit. This could have a dangerous side effect and cause the car to speed out of control.
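As an illustration, here is a minimal sketch of one well-known attack of this kind, the Fast Gradient Sign Method (FGSM), written in PyTorch. The `model`, `image`, and `true_label` names are hypothetical placeholders for any differentiable classifier and its input; this is a sketch of the technique, not a recipe for any specific system.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.01):
    """Craft an adversarial image with the Fast Gradient Sign Method.

    A tiny perturbation (epsilon) in the direction of the loss gradient
    is often enough to change the model's prediction, while the image
    still looks unchanged to a human.
    """
    image = image.clone().detach().requires_grad_(True)
    output = model(image)                       # forward pass
    loss = F.cross_entropy(output, true_label)  # loss w.r.t. the true class
    loss.backward()                             # gradients w.r.t. the input pixels
    # Step each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()     # keep a valid pixel range
```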
Explainable AI
The idea of Explainable AI is to use techniques that explain to humans how an AI model makes its decisions. This is a particular concern in the area of deep learning, where huge neural networks are used.
An example of such a network is Inception from Google. Inception is a 22-layer deep network with millions of parameters. It is often nearly impossible for a human to know how this model came to the decision that a particular image shows a cat rather than something else.
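One simple explainability technique is a saliency map: the gradient of the top class score with respect to the input pixels shows which pixels most influenced the decision. A minimal sketch, assuming PyTorch and torchvision are available and `image` is an already preprocessed input tensor of shape (1, 3, H, W):

```python
import torch
from torchvision import models

# Load a pretrained ImageNet classifier (GoogLeNet is the original
# 22-layer Inception architecture mentioned above).
model = models.googlenet(weights="IMAGENET1K_V1").eval()

def saliency_map(model, image):
    """Return per-pixel importance scores for the model's top prediction."""
    image = image.clone().detach().requires_grad_(True)
    scores = model(image)     # class scores, shape (1, 1000)
    scores.max().backward()   # gradient of the top class score w.r.t. pixels
    # A large gradient magnitude means a small pixel change moves the score
    # a lot, so those pixels mattered most to the decision.
    return image.grad.abs().max(dim=1).values.squeeze(0)
```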
Medical use of AI
It has become very common for AI to be used in medical research. One area in particular is the reading of scans. Using AI-based imaging technology, it is possible to examine a scan and detect abnormal cells, and there are already cases where models show near-human performance in detecting specific illnesses.
A big challenge in the medical field is that, due to the lack of Explainable AI, the models are not trusted to make decisions automatically. Because of this, some of the more complex deep neural networks are not certified for medical use, and simpler, more interpretable AI models are preferred.
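To illustrate why simpler models are preferred, here is a sketch of a small decision tree trained on scikit-learn's bundled breast cancer dataset (used purely as an example, not as a medical tool). Unlike a deep network, its full decision logic can be printed and audited.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# A small, fully inspectable model on a diagnostic dataset.
data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# The entire decision process prints as human-readable rules
# that a clinician could review step by step.
print(export_text(tree, feature_names=list(data.feature_names)))
```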
Use case: Husky
A University of Washington team built an AI model to differentiate between a husky and a wolf. At first glance the model produced a high level of accuracy, but on closer inspection it turned out that wolves were being classified because of snow in the background of the image: the training data consisted of wolves photographed on snow and huskies in urban settings. Discovering this would not have been possible unless the team had built Explainable AI. See the original study for full details.
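An analysis like this can be done with a perturbation-based technique such as LIME, which hides parts of an image and observes how the prediction changes. A minimal sketch using the `lime` package, where `classifier_fn` is an assumed function mapping a batch of images to class probabilities:

```python
import numpy as np
from lime import lime_image

def explain_prediction(image, classifier_fn):
    """Highlight the image regions that drove the classifier's prediction.

    classifier_fn: assumed callable taking a batch of numpy images and
    returning class probabilities (it wraps the model being audited).
    """
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image.astype(np.double),  # the image to explain
        classifier_fn,            # black-box prediction function
        top_labels=1,             # explain only the top prediction
        num_samples=1000,         # perturbed samples for the local model
    )
    # Return the image with the most influential regions marked.
    # In the husky/wolf case, this lights up the snowy background.
    return explanation.get_image_and_mask(
        explanation.top_labels[0], positive_only=True, num_features=5
    )
```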
Bias
Bias in an AI model means the model is not representative of the whole population. There is an idea that an AI-based computer algorithm cannot be biased by definition. This is not true.
A common source of bias in AI is the labelling of data. In most cases data is manually labelled. One person's description of an object may differ from another's, and can reflect the biases (conscious or unconscious) that person holds.
It is also possible that the training data is perfectly correct but the underlying environment where the data is generated has bias. Imagine a company that only hires men, and this data is used to train an AI model: the model will be biased towards men.
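A minimal sketch of that hiring scenario, with entirely made-up synthetic data: every label faithfully records a past decision, yet a model trained on those labels simply reproduces the historical bias.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic historical hiring data: feature 0 is a skill score,
# feature 1 encodes gender (1 = man, 0 = woman).
n = 1000
skill = rng.normal(size=n)
gender = rng.integers(0, 2, size=n)
# Historical decisions: the company hired men almost regardless of
# skill. The labels are "correct" records of what happened, but the
# process that generated them was biased.
hired = ((gender == 1) & (rng.random(n) < 0.9)).astype(int)

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# The model has learned that gender, not skill, predicts hiring:
# the gender coefficient dominates the skill coefficient.
print(model.coef_)
```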
Use case: COMPAS
Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is a system used in the American justice system to try to identify the risk of a person re-offending. It is still in use today.
The model includes several factors such as:
- Substance abuse
- Residence
- Employment history
- Prior crimes
- and many more
This system was found to have hugely overestimated the re-offending rate for people from ethnic minorities, while people from majority groups were often given shorter sentences based on this model.
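A disparity like this is usually quantified by comparing error rates across groups, for example the false positive rate: how often people who did not re-offend were still flagged as high risk. A minimal sketch with hypothetical arrays:

```python
import numpy as np

def false_positive_rate(predicted_high_risk, reoffended):
    """Fraction of people who did NOT re-offend but were flagged high risk."""
    did_not_reoffend = ~reoffended
    return (predicted_high_risk & did_not_reoffend).sum() / did_not_reoffend.sum()

def audit_by_group(predicted, actual, group):
    """Compare error rates across groups.

    predicted: boolean array, True = model flagged the person as high risk
    actual:    boolean array, True = person actually re-offended
    group:     array of group labels, one per person
    """
    for g in np.unique(group):
        mask = group == g
        fpr = false_positive_rate(predicted[mask], actual[mask])
        print(f"group {g}: false positive rate = {fpr:.2f}")

# A fair model would show similar rates across groups; in analyses of
# COMPAS, the rate was found to be far higher for minority defendants.
```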
This example shows the huge impact of having unchecked bias in a model. In this case, the original inventor of the model stated that it was not intended to be used for sentencing people.