Ethical Concerns

Kyle Dent talks New Electronics through some of the ethical issues that have to be considered when developing and using artificial intelligence.

Data is often biased, incomplete or not reflective of the real-world situation it’s supposed to model. As a result, AI developers necessarily make decisions and choose trade-offs.

Self-driving car designers choose to have their cars drive the speed limit rather than the safest speed. That's an ethical decision, but probably one made without much thought in those terms.

There is a general feeling, even among those developing AI solutions, that technology is inherently neutral. This misunderstanding presents a growing challenge as artificial intelligence evolves and spreads into virtually every facet of human society.

Over the past twenty years, AI has been adopted for many new applications with a big ripple effect on people’s lives. How we apply AI is starting to matter, which means the developers of smart systems have an obligation to consider very carefully any potential for harm. We need to bring the ethics of AI front and centre.

This issue is compounded by the fact that most people trust their technologies without really understanding how they work, or recognising their limitations.

Consider the driver who, in 2016, trusted her GPS system enough to steer her vehicle directly into Georgian Bay in Ontario, Canada. Earlier that same year we saw the first fatal crash of a Tesla Motors car being driven in Autopilot mode. Relying unduly on the car's semi-autonomous controls despite the manufacturer's warnings, the driver collided with a tractor-trailer and was killed.

You can’t argue with a machine

A major side-effect of the common belief that machines are unbiased is that any debate about decisions is often shut down once an intelligent agent is introduced into the process.

AI technology is already being used for decisions about judicial sentencing, job performance and hiring, among many other things. There is no denying that without technology, human beings bring their own biases to decision making, but those decisions are often questioned amid robust public debate.

Consider the California judge who faced a recall campaign and public outrage over his sentencing decision in a sexual assault case.

People seem to believe that technology is inherently neutral, so its decisions must be fair. What's lacking with AI, however, is any discussion about how developers chose data sets, selected weighting schemes, modelled outcomes or evaluated their results, or even about what those results are.

Those affected often have no recourse because computer decisions are considered infallible and usually final. One widely reported and commented-on system, now used in several US jurisdictions, predicts a defendant's likelihood of reoffending. When its risk assessments for criminal recidivism were examined in the Florida courts, the system proved quite unreliable: only 20 percent of the people it said were likely to commit a violent crime in the future actually went on to do so. It was also reported that there were significant differences in the types of errors the system made when analysing white and black defendants. The company supplying the software disputes those findings, but it has not disclosed how the risk scores are determined, claiming its techniques are a 'trade secret'.

Using AI shouldn’t eclipse existing laws and traditional protections extended to those affected by it.

Historically, societies in the West have favoured open government and have held human rights values that include human dignity, public health and safety, and personal privacy, and have extended legal protections even to criminal defendants.

Those with the authority to procure technology and those making use of it must be aware of its design, its context for use, and its limitations. At a minimum we need to maintain established values. As a society we have to consider who benefits from the use of the technology and who accepts the risks of its use.

Average consumers, business users and government agencies usually aren't qualified to assess the relevant AI data models and algorithms. This asymmetrical relationship puts the burden on AI developers to be forthright and transparent about the underlying assumptions that guide their decisions.

Adopters of technology also have a responsibility to hold vendors accountable and require disclosure of relevant information.

In the case of decisions affecting sectors of the population who have been historically disadvantaged or marginalised, it is especially important to understand the benefits and risks of using the technology, in addition to understanding the reliability and accuracy of that usage.

Intelligence is only as good as its data

Most modern AI decision-making systems gain intelligence from existing data, so it’s critical that we review that data to understand how well it aligns with the real-world goals of the system.

Training data does not always reflect the variables of the actual environment where a system is deployed; it is often data that was collected for other purposes and then repurposed.

Real life is complicated and messy. It can be difficult or even impossible to accurately define value functions that match the end goal. Humans are good at ignoring obviously irrelevant data (it often doesn't even enter our minds), whereas machines are poor at reasoning about causal factors: they are very good at finding correlations, whether those correlations matter or not.

Performance accuracy is another important consideration. A model with 99 percent accuracy would rightly be considered excellent by any AI developer. But how many developers ask themselves about the real people who fall within the 1 percent the system gets wrong?

What if hundreds or thousands of people are adversely impacted by the system? Is it still worth using?

In these cases, systems could be designed to allow for human input to compensate for the system's misses. Where the potential negative consequences are large or severe, the extra cost and effort of building such protections into the system is justified.
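
As a rough illustration of one way to build in that human input (not a prescription from any particular system), low-confidence predictions can be routed to a human reviewer rather than acted on automatically. The Decision record, decide() function and confidence threshold below are hypothetical:

```python
# Minimal sketch: route low-confidence predictions to a human reviewer.
# The model output, threshold and review routing are illustrative
# assumptions, not a description of any specific deployed system.

from dataclasses import dataclass

@dataclass
class Decision:
    label: str         # the model's predicted outcome
    confidence: float   # the model's estimated probability for that outcome
    needs_review: bool  # True if a person should confirm before acting

CONFIDENCE_THRESHOLD = 0.95  # tune to the severity of a wrong decision

def decide(predicted_label: str, confidence: float) -> Decision:
    """Accept high-confidence predictions; flag the rest for human input."""
    return Decision(
        label=predicted_label,
        confidence=confidence,
        needs_review=confidence < CONFIDENCE_THRESHOLD,
    )

if __name__ == "__main__":
    for label, conf in [("approve", 0.99), ("deny", 0.62)]:
        d = decide(label, conf)
        route = "human review queue" if d.needs_review else "automatic action"
        print(f"{label} ({conf:.2f}) -> {route}")
```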

Evaluation should continue even after a system is deployed. The world is highly dynamic and fast-changing, so AI systems should incorporate ways to assess post-release accuracy and to determine how often that accuracy should be reviewed and recalibrated.
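
A minimal sketch of what such post-release monitoring might look like, assuming a stream of newly labelled outcomes and an agreed baseline accuracy (the baseline, window size and class below are hypothetical):

```python
# Minimal sketch: recheck a deployed model's accuracy against newly
# labelled outcomes and flag when it falls below an agreed baseline.
# Baseline, window size and usage are illustrative assumptions.

from collections import deque

class AccuracyMonitor:
    def __init__(self, baseline: float, window: int = 500):
        self.baseline = baseline               # accuracy agreed at deployment
        self.outcomes = deque(maxlen=window)   # rolling window of recent cases

    def record(self, predicted, actual) -> None:
        """Call once per case whose true outcome becomes known."""
        self.outcomes.append(predicted == actual)

    def current_accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def needs_recalibration(self) -> bool:
        # Enough evidence collected and accuracy below baseline: review the model.
        full_window = len(self.outcomes) == self.outcomes.maxlen
        return full_window and self.current_accuracy() < self.baseline

monitor = AccuracyMonitor(baseline=0.95)
# monitor.record(predicted, actual) for each resolved case,
# then check monitor.needs_recalibration() on a regular schedule.
```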

All of us should be asking hard-hitting questions about AI systems which impact individuals, communities, societies and our shared environments. Are there some specific groups which may be advantaged or disadvantaged in the context of the algorithms under development? When using human data, do the benefits outweigh the risks for those involved? Will there be a calculation of the error rates for different sub-populations and the potential differential impacts? And what are the effects of false positives or false negatives on the subjects who are misclassified?
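
The question about differential error rates, at least, can be checked directly once real outcomes are known. A minimal sketch, with hypothetical group labels and records:

```python
# Minimal sketch: compare false positive and false negative rates
# across sub-populations. Group labels and records are hypothetical.

from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_positive, actually_positive)."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, predicted, actual in records:
        c = counts[group]
        if actual:
            c["pos"] += 1
            if not predicted:
                c["fn"] += 1  # missed a true positive
        else:
            c["neg"] += 1
            if predicted:
                c["fp"] += 1  # flagged someone incorrectly
    return {
        g: {
            "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else 0.0,
            "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else 0.0,
        }
        for g, c in counts.items()
    }

sample = [("A", True, False), ("A", False, True),
          ("B", True, True), ("B", False, False)]
print(error_rates_by_group(sample))
```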

The developers of AI systems can help mitigate a broad range of future problems by paying close attention to all the decisions which influence their software and data models.

If these decisions are not surfaced and thoroughly examined in advance, we risk incurring a tragic and costly backlash from the growth of artificial intelligence.

  • Located in Silicon Valley, PARC (Palo Alto Research Center), a Xerox company, is a leading scientific research and Open Innovation company. Kyle Dent is a Research Area Manager focused on the interplay between people and technology, and leads the ethics review committee at the centre.