Being able to read situations decides between winning and losing
Can you read the green like Northern Ireland’s golf pro Rory McIlroy? He showed us how the professionals do it, most recently at the Omega European Masters in Crans-Montana a few days ago: changing perspective by studying the starting position from different physical locations, anticipating the line his ball will roll along before it drops into the hole with a satisfying dull “clack”.
Rory McIlroy finished the tournament in a shared second place. Another player was able to read the green even better: the lucky winner of the 2019 European Masters in Crans-Montana was Sebastian Soderberg.
Most of us are not Rory McIlroy or Sebastian Soderberg, and very few of us are professional athletes in any other discipline. Nevertheless, we can learn from them all: the ability to analyze an initial situation objectively is essential, especially in a competitive situation. Yet we lose this professional objectivity several times a day, in both our private and our professional environments.
Biased: How do we judge people?
Having grown up professionally in auditing and consulting, my answer is predictable: “it depends”. And that is precisely the crux of the matter. It depends on how we
(a) have been socialized,
(b) have been shaped in our character, and
(c) reflect on ourselves along the way.
Our current physical and mental condition also plays an important role. According to studies, sleep deprivation has a similar “bias-promoting” and “judgement-reducing” effect as an elevated blood alcohol level: the ability to judge objectively is massively impaired.
We are all biased, consciously or unconsciously, and to different degrees. The three most important components of bias are:
– stereotypes
– prejudices
– discrimination
Stereotypes are exaggerated opinions, images or distorted perceptions of facts about people or groups of people. They can, of course, be positive as well as negative.
Prejudices, on the other hand, are opinions, (pre-)judgements or attitudes towards a person or a group of people.
Discrimination is behavior that treats people unequally compared with others based on their (group) membership. This behavior often arises from stereotypes or prejudices.
This short overview is only a foundation; more on it later. Through my expert work in the field of integrity, non-compliance and white-collar crime, I deal with this topic daily and am very aware of how explosive it is. The core question that arises from it is this:
“How do we get our human bias under control to the point where it does no harm?”
One possible answer: if we humans are so biased, perhaps we can bring this human trait under control technically, with artificial intelligence. True? Let’s take a closer look.
Artificial intelligence learns from Big Data
Artificial intelligence learns from data. Large amounts of data. This data often reflects how people behaved in past situations. If the data pool mirrors human behavior patterns, that says a lot about what we are feeding the artificial intelligence. Are we talking about automated bias?
Let us assume that these algorithms acquire the selection criteria for their analyses through machine learning on Big Data. The basis is an image of human behavior: patterns in large amounts of data. The machine draws its conclusions from these patterns; it cannot evaluate them.
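To make this tangible, here is a minimal sketch in Python, assuming scikit-learn is available and using an invented toy data set (my own illustration, not from the case study below): the model derives its selection criteria purely from patterns in historical records, and it can report those criteria, but it cannot judge them.

```python
# Minimal sketch: a model's "selection criteria" are nothing more than
# patterns extracted from historical data. All features and numbers
# below are invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical past records: [years_of_experience, had_internal_mentor]
X = [[1, 0], [2, 0], [3, 1], [5, 1], [7, 1], [8, 0]]
y = [0, 0, 1, 1, 1, 0]   # historical outcome: 1 = promoted, 0 = not

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The machine can state the pattern it found ("had a mentor"), but it
# cannot evaluate whether that criterion is fair or even meaningful.
print(export_text(tree, feature_names=["experience", "had_mentor"]))
```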
Algorithm: Risks and side effects
There are already many examples of algorithms producing discriminatory results. A case study from recruiting shows it in a very simplified way:
The company is looking for a sales executive. The historical data available to the machine spans the company’s entire history in sales. Until now there has never been a female manager in sales, nor an executive with Swiss nationality. What does the machine learn? That these two characteristics are not part of the company’s recruitment pattern.
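How this can play out is shown by the following hedged sketch in Python with scikit-learn. The data is synthetic, and I simplify the case to a single protected attribute, gender: because the historical labels contain no female hires, the model learns a strong penalty on that attribute, and two otherwise identical candidates receive very different scores.

```python
# Simplified, synthetic illustration of the recruiting case: the
# historical data contains no female sales executives, so the model
# learns exactly that pattern. All data below is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
experience = rng.uniform(0, 15, n)   # years of sales experience
is_female = rng.integers(0, 2, n)    # protected attribute (0 or 1)

# Historical hiring decisions: experienced AND male. The company has
# never hired a woman as a sales executive.
hired = ((experience > 7) & (is_female == 0)).astype(int)

X = np.column_stack([experience, is_female])
model = LogisticRegression().fit(X, hired)

# Two candidates, identical in every respect except gender:
print("male candidate:  ", model.predict_proba([[10.0, 0]])[0, 1])
print("female candidate:", model.predict_proba([[10.0, 1]])[0, 1])
print("learned weight on 'is_female':", model.coef_[0, 1])
```

Running the sketch yields a markedly negative weight on the protected attribute. The machine did not invent that weight; it is simply the historical pattern made executable.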
It is not the machine’s fault; it draws its conclusions from the Big Data available to it. I am convinced that human intelligence today no longer wants (or should no longer want!) such stereotypes, prejudices and discrimination. But the machine will not notice this very often unconscious bias on its own and raise its hand, because the machine learns from the data it is fed.
Digital integrity of human and artificial intelligence
Human and artificial intelligence can complement each other perfectly, if we want them to. This requires an understanding of the possibilities, risks and side effects, as well as a morality that takes the ethical aspects into account, from both an analog and a digital point of view.
Based on this, the #digitalintegrity of humans and machines, in the sense of how we implement morality in our actions both digitally and analogously, will continue to grow in importance.
With this in mind, let’s start programming our personal digital assistants in such a way that they don’t inherit our unconscious biases.
And if you want to read situations the way Rory McIlroy reads greens: symbolically walk around the situation you find yourself in, and don’t let a seemingly flat green deceive you. Otherwise, you might miss one hill or another.
Yours,
Sonja Stirnimann
PS: I am just wondering how to teach my personal digital assistant that, on the one hand, I am happy for it to take my biases in musical taste seriously (as Spotify does), but not those biases that would jeopardize my professional independence. It could be a challenge. I will stay on it…