Mass Misconceptions

Why We Should Not Worship Algorithms

The year 2019 ended with an important event: Russia approved its first national standards for artificial intelligence. Starting in September 2020, AI-based smart systems will need to comply with these new regulations. Regulating people's attitudes towards such systems, however, is far more difficult. People tend to fear and idolise AI, but should they? Henry Penikas, Assistant Professor at the HSE Faculty of Economic Sciences, discusses why they probably should not, and how algorithms can be fooled.

In May 2010, US stock indices collapsed by 9–10%; something similar happened to the British pound sterling in October 2016, when it lost around 9% against the US dollar. Events like these are described as 'flash crashes' and are often linked to algorithmic trading, a method whereby automated trading systems execute pre-programmed instructions: when such systems 'see' prices falling, they act in ways that push them down even further.

Some people tend to blame 'bad' algorithms for taking over the stock market. But before pointing the finger at AI-led systems, we should look at our own behaviour and remember, for one thing, the concept of the stop loss, which appeared back in the 1990s. A stop-loss order to sell is placed with a broker in advance and is executed once the stock falls to a certain 'stop' price. While this appears to be a reasonable way of cutting losses, it can set off a chain reaction: the price falls – the automated system executes a stop-loss order – the price falls even lower – stop-loss orders are triggered in other traders' systems, and everyone begins to sell.
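
This cascade is easy to reproduce in a toy simulation. The sketch below is purely illustrative – the number of traders, the placement of their stop prices and the price impact of each forced sale are invented parameters, not a calibrated market model – but it shows how a modest initial drop can trigger a chain of stop-loss sales that pushes the price far below the original shock.

```python
import numpy as np

# Toy stop-loss cascade (illustrative only, not a market model):
# each trader holds one unit of stock and has placed a stop-loss order
# a few percent below the current price; every forced sale pushes the
# price down a little further, which can trigger the next round of stops.

rng = np.random.default_rng(seed=1)

price = 100.0                                               # current market price
stops = price * (1 - rng.uniform(0.01, 0.10, size=1_000))   # traders' stop prices
impact_per_sale = 0.01                                      # assumed impact of each forced sale
triggered = np.zeros(len(stops), dtype=bool)

price -= 2.0   # a modest initial shock of 2%

while True:
    newly_hit = ~triggered & (stops >= price)
    if not newly_hit.any():
        break
    triggered |= newly_hit
    price -= impact_per_sale * newly_hit.sum()   # selling pressure lowers the price further

print(f"final price: {price:.2f}, stop-loss orders triggered: {triggered.sum()} of {len(stops)}")
```

With these invented numbers, a 2% shock ends up triggering every stop-loss order and the price settles roughly 12% below where it started – the same self-reinforcing pattern seen in the flash crashes above.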

This means that the flash crashes mentioned above could have been expected. Indeed, The Story of Wall-Street, a book published back in the 1930s, refers to Wall Street as a place where people could make a fortune in a matter of days and lose it in a matter of hours. Today, nearly a hundred years later, we would say: lose it in a matter of minutes. But think about it: although no automated trading systems existed in the 1930s, stock markets faced the same problems then as they do today. So I would not blame algorithms, because they do what humans tell them to and thus only reflect human behaviour.

In fact, algorithms, like humans, can be fooled. In 2017, MIT researchers managed to confuse Google Cloud Vision (GCV), a cloud-based machine learning product offered by Google for detecting objects, faces and other content. By manipulating the patterning on a turtle, the MIT engineers tricked the object-recognition AI into classifying it as a rifle. It might seem funny, until the same trick is played on systems that make real decisions.
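
The MIT attack itself relied on specially crafted textures, but the underlying idea – nudging an input in the direction that most changes the model's output – can be shown at a much smaller scale. The sketch below is a minimal, self-contained illustration with a synthetic two-class 'image' classifier; it is not the researchers' method, and the dimensions, class names and perturbation size are all invented for the example.

```python
import numpy as np

# Minimal adversarial-perturbation sketch (not the MIT attack): train a tiny
# logistic-regression classifier on synthetic "images" from two classes, then
# shift every feature of one input by a small amount in the direction that
# raises the other class's score, and watch the prediction flip.

rng = np.random.default_rng(0)

n, d = 500, 784                                  # examples per class, features per "image"
X0 = rng.normal(-0.1, 1.0, size=(n, d))          # class 0 (call it "turtle")
X1 = rng.normal(+0.1, 1.0, size=(n, d))          # class 1 (call it "rifle")
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Train logistic regression with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

x = X0[0]                                        # a class-0 example
print(f"original:  P('rifle') = {1 / (1 + np.exp(-(x @ w + b))):.3f}")

# FGSM-style step: each feature moves by at most eps, in the direction that
# favours class 1 (the sign of the gradient of the logit, i.e. sign(w)).
eps = 0.25
x_adv = x + eps * np.sign(w)
print(f"perturbed: P('rifle') = {1 / (1 + np.exp(-(x_adv @ w + b))):.3f}  (per-feature change <= {eps})")
```

The point is that many tiny, individually harmless changes line up with the model's weights and add up to a large change in its output – the same weakness, at toy scale, that lets a carefully patterned turtle register as a rifle.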

The model fooled in Massachusetts was a standard product, ready for commercial use and trained on Google's data. A client using GCV does not need to develop and train their own neural network – a process that takes considerable time and requires large amounts of labelled data. The convenience of the ready-to-use solutions offered by Google, Amazon and Microsoft, which spare customers the need to think, write code and deal with the mathematics, explains their popularity. But then we should not be surprised when stock markets crash or turtles turn into rifles.
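
To see just how little effort such a ready-made service demands, here is a sketch of labelling an image with the Google Cloud Vision Python client, following the library's documented quickstart pattern. It assumes the google-cloud-vision package is installed, application credentials are configured, and a local file named turtle.jpg exists – the file name is made up for the example.

```python
# Sketch of calling the ready-made Google Cloud Vision service instead of
# training one's own network. Assumes the google-cloud-vision package is
# installed, credentials are configured, and "turtle.jpg" exists locally
# (the file name is invented for this example).
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("turtle.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)   # Google's pre-trained model does the work
for label in response.label_annotations:
    print(label.description, round(label.score, 3))
```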

It is virtually impossible to examine the inner workings of machine-learning algorithms: they operate by their own rules and learn from human-labelled data. To understand those rules, researchers today have to observe the behaviour of such systems in much the same way as they observe animal behaviour – by placing them in a digital version of Skinner's box, a controlled environment from which all irrelevant factors have been eliminated. Creating such a controlled environment for studying artificial intelligence takes roughly as much effort as writing the neural network code itself.
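
In practice, the 'Skinner box' amounts to controlled experiments on the model: hold every input fixed at a neutral value, vary a single input, and record how the output responds. The sketch below does this for a deliberately opaque model trained on synthetic data; the lending scenario, the features and their distributions are all invented for the illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# A toy "digital Skinner box": treat a trained model as an opaque subject,
# hold every input except one at a neutral reference value, vary that single
# input and record how the output responds. The lending scenario, features
# and their distributions are invented for the illustration.

rng = np.random.default_rng(42)

# Synthetic applicants: income, debt and age; "default" depends mainly on debt vs income.
n = 5_000
income = rng.normal(50, 15, n)
debt = rng.normal(20, 10, n)
age = rng.uniform(20, 70, n)
default = (debt - 0.5 * income + rng.normal(0, 5, n)) > 0

X = np.column_stack([income, debt, age])
model = GradientBoostingClassifier().fit(X, default)   # the black box under study

# Controlled experiment: fix debt and age at their medians, sweep income alone.
reference = np.median(X, axis=0)
for inc in (20, 40, 60, 80):
    probe = reference.copy()
    probe[0] = inc                                     # income is the first column
    p = model.predict_proba(probe.reshape(1, -1))[0, 1]
    print(f"income={inc:>2}: predicted probability of default = {p:.2f}")
```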

The ethics of algorithms is yet another aspect to consider. Critics say that artificial intelligence is inhuman and immoral, and that its use can lead to unfair discrimination. But AI is neither moral nor immoral; it is just a machine that uses input data to make decisions. In 2018, UC Berkeley researchers published a paper on a model they had developed for predicting creditworthiness. Although race was not used as an input variable, the algorithm was more likely to support positive lending decisions for White non-Hispanic borrowers than for Black and Hispanic borrowers. Based on this finding, the authors concluded that the technology may disproportionately and negatively affect the latter groups of borrowers – in other words, lead to unfair discrimination that society finds unacceptable.

But a careful study of the data suggests a different conclusion. While the model does not use race as a variable, it does use income. In the study sample, African American borrowers had lower incomes as a group and, correspondingly, a higher probability of default; the model therefore categorised them as higher-risk borrowers.
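
This proxy effect is easy to reproduce. In the toy example below, the model is given only income – never the group label – yet its approval rates differ across groups simply because income is distributed differently between them. All distributions, thresholds and group labels are invented for the illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy illustration of a proxy effect: the model never sees the group label,
# only income, yet approval rates differ across groups because income is
# distributed differently between them. All numbers are invented.

rng = np.random.default_rng(7)
n = 10_000

group = rng.integers(0, 2, n)                      # 0 or 1; never shown to the model
income = np.where(group == 0,
                  rng.normal(60, 15, n),           # group 0: higher average income
                  rng.normal(40, 15, n))           # group 1: lower average income

# True default risk depends only on income, identically for both groups.
p_default = 1 / (1 + np.exp((income - 45) / 8))
default = rng.random(n) < p_default

model = LogisticRegression().fit(income.reshape(-1, 1), default)
risk = model.predict_proba(income.reshape(-1, 1))[:, 1]
approve = risk < 0.3                               # approve applicants with low predicted risk

for g in (0, 1):
    print(f"group {g}: approval rate = {approve[group == g].mean():.1%}")
```

Remove the income gap in the synthetic data and the difference in approval rates disappears: the disparity comes from the data, not from any notion of group built into the algorithm.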

There have been studies suggesting that AI-based models could lead to unfair discrimination against women in hiring. But the reason, once again, was that the training samples in these studies were dominated by men, leading the machine to predict that a higher proportion of men would be offered jobs.

These outcomes do not mean that algorithms are biased against certain people. They only mean that machines learn from and make decisions based on input data. If the input data is biased or inconsistent with new realities and expectations, the machine will produce equally biased or unacceptable decisions, and the reverse is also true.

Actually, discrimination in its literal sense (from the Latin discrimino – I differentiate or distinguish) is something any model is supposed to do. When you ask a model which borrowers or employees you should or should not accept, it must be able to distinguish 'the good ones' from 'the bad ones'. Otherwise, what's the point of having a model?

Ultimately, the examples of the 'flash crash' and unfair discrimination are not about 'evil robots' but about people's mistakes. We may not be fully aware of our own reality – or we may distort it – but these stories help us see our own biases clearly.
IQ

Author: Henry I. Penikas, January 26, 2020