AlgorithmWatch: Small tools with big consequences
Algorithms help shape our lives: They control our traffic lights and social media timelines, help forecast the weather and provide streaming tips. The organisation AlgorithmWatch keeps an eye on their problematic aspects.
Algorithms affect our daily lives. And yet they are so inconspicuous that many people aren't aware of their presence. What exactly is an algorithm? Simply put, an algorithm is a series of instructions followed step by step to complete a task. If you use a recipe to cook a pasta dish, you are following the algorithm of the cooking instructions. And when Google Maps calculates the shortest route to a given destination and shows it on the screen of your mobile phone, the app does so with the aid of a programmed algorithm.
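To make this concrete: route-finding of the kind Google Maps performs is classically done with Dijkstra's algorithm, a fixed sequence of steps that always visits the closest unexplored point next. The following is a minimal sketch on an invented toy road network, not the actual algorithm any navigation app uses.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: a step-by-step set of instructions that
    repeatedly visits the closest unvisited place until the goal is
    reached - in other words, an algorithm."""
    # Each queue entry: (travel time so far, current place, path taken)
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        time, place, path = heapq.heappop(queue)
        if place == goal:
            return time, path
        if place in visited:
            continue
        visited.add(place)
        for neighbour, cost in graph.get(place, []):
            if neighbour not in visited:
                heapq.heappush(queue, (time + cost, neighbour, path + [neighbour]))
    return None  # no route exists

# A made-up road network: travel times in minutes between places.
roads = {
    "Home":    [("Bridge", 5), ("Tunnel", 2)],
    "Bridge":  [("Station", 4)],
    "Tunnel":  [("Station", 9)],
    "Station": [],
}

print(shortest_route(roads, "Home", "Station"))
# → (9, ['Home', 'Bridge', 'Station']) - via the bridge, not the shorter-looking tunnel
```

The point of the example is the "series of instructions" idea from the text: the same fixed steps, applied to any road network, always produce the shortest route.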
Algorithms are currently one of the most important topics in IT and mathematics. Clever minds develop them and determine how they should control computers and machines in the form of programs and electronic circuits. Algorithms take their name from the Persian mathematician Mohammed al-Khwarizmi, whose teachings were translated into Latin in the Middle Ages.
They decide what we see
Social networks are dominated by algorithms. They work in the background, deciding what and who should appear on Facebook news feeds and on Instagram. And what and who should not. These decisions often reflect clear commercial interests that the algorithm is programmed to take into account.
But there are also more problematic side effects, such as when Google Maps diverts masses of cars onto tiny side streets to avoid a traffic jam. Or when facial recognition on smartphones and other devices identifies pale-skinned men more reliably than darker-skinned people or women. Or when job portals, based on stereotypes, suggest different jobs to women than to men - mostly less well paid ones.
That's why AlgorithmWatch was founded a few years ago: a European non-profit organisation that raises public awareness of ethical conflicts, scrutinises algorithm-based decision-making processes and advises authorities and governments on what action to take.
Keeping a close eye on algorithms
Since September, the organisation has also had a branch in Switzerland, thanks to the Migros Pioneer Fund, which will guarantee its funding until the end of 2022. "We try to keep a close eye on algorithms," says Anna Mätzener (42), a mathematician and the Managing Director of AlgorithmWatch Switzerland. "The aim is to ensure that these technologies don't have negative consequences for society, that everyone benefits from them and that they don't simply entrench existing power relationships." To this end, the organisation publishes regular reports - including, at the end of January, its first on the situation in Switzerland (see interview). It also shares its findings via the media.
At present, AlgorithmWatch Switzerland consists only of Mätzener and a colleague. However, a third person is due to come on board by the end of the year. There is also close cooperation with the head office in Berlin.
Many people know too little about what lies ahead
Nadja Braun Binder (46) is a professor of public law at Basel University. She has been studying algorithms for 20 years, especially with regard to the digitisation of public administration.
Expert Nadja Braun Binder believes Switzerland is currently on the right path in dealing with algorithms. Nevertheless, she urges caution.
Algorithms simplify many aspects of our everyday lives. That's good, isn't it?
Absolutely. For instance, it's great what you can do with voice recognition nowadays. I myself regularly use dictation software, which transforms the spoken word into writing. That's how I write my e-mails nowadays, for example.
But there are also problematic aspects, as you also pointed out in your Switzerland report for AlgorithmWatch. Which would you mention in particular?
Several. Depending on the application, it's problematic that you can't always tell how these systems operate. That doesn't really matter in the case of the speech recognition software I just mentioned. But if an algorithm at a bank evaluates my creditworthiness, I want to know how that decision is made. What's more, these systems require huge amounts of high-quality data, and if there isn't enough, this can lead to bad decisions, for instance when predicting recidivism among criminal offenders.
Are there any others?
There's also a risk of discrimination. Let me give you an example: A labour market service uses patterns in its data to assess the chances that unemployed people will find jobs. Historically, women with children have had a harder time finding work than men without children. There is therefore a risk that such an algorithm will give men preferential treatment because they allegedly have better chances.
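The mechanism behind this risk can be shown in a deliberately simplified, entirely hypothetical sketch: a "job-chance" score derived purely from historical placement rates reproduces past disadvantage, even when two applicants are equally qualified. All numbers below are invented for illustration.

```python
# Invented historical data: past placement rate by (gender, has_children).
# A score built on such data bakes the historical pattern into every
# future assessment - this is the discrimination risk described above.
historical_placement_rate = {
    ("man",   False): 0.70,
    ("man",   True):  0.65,
    ("woman", False): 0.62,
    ("woman", True):  0.45,   # historically disadvantaged group
}

def job_chance_score(gender, has_children):
    """Score a job-seeker by how their demographic group fared in the past.
    Equally qualified people receive different scores purely because of
    group membership: the algorithm has learned the bias in the data."""
    return historical_placement_rate[(gender, has_children)]

# Two equally qualified applicants, both with children.
applicants = [("man", True), ("woman", True)]

# Ranking by score puts the father ahead of the equally qualified mother.
ranked = sorted(applicants, key=lambda a: job_chance_score(*a), reverse=True)
print(ranked)  # → [('man', True), ('woman', True)]
```

This is why the interview stresses using such scores only as support material: the score faithfully summarises the past, but treating it as a decision perpetuates the inequality it measured.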
Should such systems therefore only be used with caution?
First and foremost, they should only be used for support purposes, not for making decisions. That should remain in human hands, based on the material provided by the algorithm. And these people should be trained in critically evaluating this material.
Are there currently any algorithmic applications in Switzerland that worry you?
It seems that the state and the administration are well aware that there are questionable elements. Here in Switzerland, the state has so far only used such systems with caution and after extensive clarification. For example, a predictive policing tool is in use in Aargau, Basel-Land and the city of Zurich, but only to a very limited extent. It currently only serves to identify areas in which there could be a higher incidence of break-ins and thefts. There is therefore little risk of discrimination.
What about the private sector?
I can't give a comprehensive answer to that. We have fewer legal restrictions on private-sector use. But there too, data protection directives and legal limits must be complied with. However, the motivation is different there: The state wants to act in a more efficient, citizen-friendly and legally compliant manner. The private sector wants to be innovative, develop new applications and attract customers. Nevertheless, one thing is clear: both sides will be more interested in such systems in the future. That's why we need to be more vigilant whenever people are being evaluated.
So we're generally heading in the right direction in this respect in Switzerland?
We're on the right track at the moment. We are conducting investigations, we are aware of the challenges and - at least in the case of the state - we are generally doing without algorithmic systems rather than rushing into introducing them. However, I believe that we will need to create the legal framework in the medium term. Even though automated decision-making is covered by the new data protection legislation, for example, it doesn't address the risk of discrimination. Generalised scepticism would be wrong, but we must weigh up the opportunities and the risks and keep the latter under control. I also think that many people still know too little about what lies ahead.
Are there any developments that alarm you and that we should avoid here in Switzerland?
I am shocked by developments like facial recognition in the public sphere, in which databases are immediately searched in the background for sensitive information. Such systems are already in use in the name of security and prevention in countries like China. However, in several US cities pilot studies of this kind have already been halted or the use of facial recognition software has been forbidden.
How can algorithms be prevented from taking discriminatory decisions? Doesn't that depend not only on state regulations but also the way in which algorithms are programmed?
Correct. We must therefore raise awareness among programmers and users. In the end, it can be a delicate balance between getting results as quickly as possible and getting results that are as good and fair as possible. That's why we shouldn't simply leave the issue to IT. We need input from many other sides. The key factor for learning systems, for example, is to select the right training data for the algorithm. After all, data from countries in which the composition of the population is very different could lead to wrong decisions here.
Are there any areas of our everyday lives in which algorithms will never be deployed?
Yes. In our political institutions, for example. Parliamentarians, federal councillors, judges and voters will continue making their decisions without algorithms.
What algorithms control
- The order of Google search results
- Online advertising banners
- Facebook timelines
- Online shopping
- Partner search portals
- Voice-activated assistants
- Chat bots
- Streaming suggestions
- Computer games
- The autocorrect function in Word
- Assessments of creditworthiness by credit-rating agencies
- Cashpoint machines (e.g. deciding the mix of banknote denominations to dispense by time and place)
- Stock exchange transactions
- Customer loyalty programmes (suggesting suitable products)
- (Pre)selecting applications submitted by job-seekers
Transport and the office
- Traffic lights and lifts (e.g. knowing when it's particularly busy)
- Navigation systems or park pilots in cars
- Building technology (e.g. for heating or cooling depending on the time of day and usage)
- Risk assessment for prisoners (case screening tool) used for an initial triage regarding who could benefit from which reintegration measures
- Break-in prediction (pre-crime observation systems) to assess the areas in which the risk of a break-in is particularly high in the next 72 hours
- Gambling machines (which ensure that the casino always wins)
- Weather forecasts (model simulations based on available data)
- Support for doctors treating cancer patients at Geneva University Hospital (Watson for Genomics) in finding suitable therapies