Should we trust the algorithms that rate us every day?

A few years ago, China launched its new social credit system, aimed at rating individuals’ trustworthiness in society. The algorithms behind it assign each citizen a score based on available user data – from social interactions to consumer behavior. The project raises legitimate questions about social control by the Chinese government, and a great number of Western media outlets have described the initiative as “dystopian” or “Orwellian”. However, in the age of big data and social media, the algorithms used in the Chinese project are not so far removed from the ones already classifying and rating us every day.


Tinder, Uber and Facebook are examples of platforms where algorithms have replaced human judgement. Automation and the massive availability of personal data are giving algorithms more and more tools to analyse, rate and classify us. For decades already, computer programs have been used in financial scoring all around the world. In the US, the system that determines whether you are trustworthy enough to get credit is called FICO. Through its secret computer programs, it rates individuals based on past data, such as whether you have paid your debts, along with the rest of your credit history.

Until recently, no social data was involved in classifying individuals. However, platforms and services are increasingly widening the sources they use to provide an accurate classification of their users. In the financial sector, start-ups are integrating data from social media in order to financially score individuals. For instance, Lendoo, a Hong Kong-based start-up, uses any social data available on the Internet to score individuals who don’t have enough credit history, claiming to “expand access to credit”. And the trend is growing: three years ago, Facebook patented a technology that would enable it to financially assess its users.

Algorithms are currently assessing our value far beyond financial scoring. For example, the dating app Tinder uses similar programs to sort its users’ profiles by attractiveness in order to match them and increase the likelihood of a match. This system, called the Elo score, has an obvious impact on who you might end up with. It further raises critical questions: on what terms can an algorithm judge something as subjective as attractiveness based on someone’s profile?
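
Tinder has never published the details of its system, but the name points to the standard Elo rating used in chess. Below is a minimal sketch of how such an update works, assuming – and this is an assumption, not Tinder’s disclosed method – that a “right swipe” counts as winning a pairwise contest; the K-factor and starting ratings are purely illustrative.

```python
# Minimal sketch of a standard Elo update. Under this (assumed) reading,
# being preferred over a highly rated profile raises your score far more
# than being preferred over a low-rated one.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A 'wins' (is preferred) against B under Elo."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return new ratings for A and B after one pairwise outcome."""
    e_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - e_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

# An upset – a low-rated profile "beating" a high-rated one – moves both
# scores sharply, because the outcome was unexpected.
print(update(1200, 1600, a_won=True))  # roughly (1229, 1571)
```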

Nowadays, the proliferation of big data models has given algorithms an alarming power over the future of individuals. And the problem is that we tend to blindly trust them. They define who gets a loan, who will be your next partner, who will get an interview for a specific job, or who might be considered by the police as a likely criminal. According to Cathy O’Neil, author of “Weapons of Math Destruction” and a big data sceptic, you might not have noticed it, but your interactions with almost any bureaucratic entity are likely to pass through an “algorithm in the form of a scoring system”.

But what if algorithms are wrong?

Algorithms are sets of rules written in computer code that a computer understands and executes. They can be trained to predict future events by analysing patterns in historical data and comparing them to a pre-coded definition of the target – whether it is attractiveness, trustworthiness or success.
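
In practice, “training” often means fitting a statistical model to labelled historical examples. The toy classifier below sketches this idea with invented loan data; real scoring systems are vastly more complex and opaque, but the principle – past patterns plus a pre-coded definition of success – is the same.

```python
# A minimal sketch of "training on historical patterns": a toy classifier
# that learns from past loan outcomes. All features and numbers here are
# invented for illustration.
from sklearn.linear_model import LogisticRegression

# Historical data: [income in $1000s, years of credit history, missed payments]
X_history = [
    [45, 10, 0],
    [30,  2, 3],
    [80, 15, 0],
    [25,  1, 4],
    [60,  8, 1],
]
y_repaid = [1, 0, 1, 0, 1]  # the pre-coded definition of "trustworthy"

model = LogisticRegression().fit(X_history, y_repaid)

# The model now scores new applicants by how much they resemble past payers.
applicant = [[40, 3, 1]]
print(model.predict_proba(applicant)[0][1])  # estimated probability of repayment
```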

There is a commonly accepted idea that algorithms are maths, and therefore true and objective. This tendency is what developer Fred Benenson calls “mathwashing”: assuming algorithms are objective simply because they have math at their core. In a recent Q&A, Benenson clarified:

“Algorithm and data-driven products will always reflect the design choices of the humans who built them and it’s irresponsible to assume otherwise”.

The threat also lies in the uncertainty about the source of the bias: both the code itself and the historical data used to run the program can be at the origin of discrimination. Furthermore, most algorithms are opaque “black boxes” with limited access. It is therefore difficult for the average individual, who typically lacks the technical knowledge required to understand an algorithm, to know whether they have been rated fairly.

Can a bridge be racist? The question might seem odd, but it is the argument made by political theorist Langdon Winner in “Do Artifacts Have Politics?”. He claims that all technologies are embedded with their creators’ biases, following from the choices made during their creation. Thus, they carry political implications and embody a certain form of power. In his essay, he references the social and racial prejudices of Robert Moses, a 20th-century New York urban planner. The overpasses Moses built were too low for public buses, restricting access to public parks to individuals who owned cars – mostly white, upper-class individuals. If a bridge can be racist, then an algorithm most certainly can be too.

The automation of human judgment by algorithms will inevitably create winners and losers, accentuating existing inequalities. And the human beings behind the software are often unaware of the moral dimension of their work, and rarely given the background in social issues that it requires.

An example of algorithmic bias is a US study of Compas, a program aimed at predicting a criminal’s likelihood to reoffend. The algorithm was adopted by courts in order to achieve more objectivity in their judgments. And guess what? According to the investigation, black defendants were nearly twice as likely to be misclassified as future reoffenders. In the racial context of the United States, the use of computer programs could have been a powerful tool against human racial bias, but as it turns out, racism can also be coded.

Now, think how the same patterns could reappear in similar situations – say, an algorithm scanning through the resumes of job applicants. Algorithms have internal dictionaries called “word embeddings” that allow the computer to associate words, such as capitals with their corresponding countries. Research from Princeton University has shown that while some male names were associated with “boss” and “computer programmer”, female names were linked to “housekeeper” or “artist”. And whether the bias enters through the code or through the historical data fed to the program, the outcome also depends heavily on the definition of a “successful application”. As a result, female applicants might be less likely to be selected for a job in a technology firm.
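
What a word embedding “knows” is just geometry: words that appear in similar contexts end up close together. The sketch below uses invented three-dimensional vectors to show how a gendered association would surface; real embeddings such as word2vec or GloVe have hundreds of dimensions learned from large text corpora.

```python
# A minimal sketch of bias in word embeddings. The 3-d vectors are
# invented for illustration only.
import math

embeddings = {
    "john":        [0.9, 0.1, 0.3],
    "mary":        [0.1, 0.9, 0.3],
    "programmer":  [0.8, 0.2, 0.5],
    "housekeeper": [0.2, 0.8, 0.5],
}

def cosine(a, b):
    """Cosine similarity: closer to 1.0 means the words share contexts."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# If the training text associates male names with technical jobs, the
# geometry of the embedding space preserves that association.
print(cosine(embeddings["john"], embeddings["programmer"]))  # high (~0.97)
print(cosine(embeddings["mary"], embeddings["programmer"]))  # lower (~0.45)
```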

Thankfully, it does not have to be this way, and individuals should not feel powerless against the automated value judgments impacting their lives. As Cathy O’Neil claims, “data scientists are not ethical deciders”, and technology should be a tool working for us rather than against us. To accomplish that, she suggests algorithmic auditing. The first step is to make algorithms more accessible and to allow individuals to challenge the data likely to significantly affect their lives. Then there is an increasing demand for algorithmic accountability through government awareness and regulation. In that sense, the European Union has recently adopted a measure giving citizens the right to ask for explanations of data-driven decisions, especially in online credit applications and e-recruiting processes.

The critical difference between Western algorithmic scoring and the Chinese system of social rating lies in who sets the variables and the definitions: in China it is the government; here, we can still challenge them. As data increasingly means money for many sectors, it is important to ask ourselves: where is my data going, and for what purpose? Law professor Lawrence Lessig put it well:

“We should interrogate the architecture of cyberspace as we interrogate the code of Congress”.

To map or to be mapped

In 2013, Google’s former vice-president Amit Singhal laid out the company’s vision for the future of one of its most popular services, Google Maps: “A perfect map of the world is foundational to delivering exactly what you want, when you want, and where you want it”. In the age of big data, Google Maps has created, and controls, a digital matrix closely interlinked with reality. It has the power to shape not only our perceptions but also the real world.

Illustration by Christoph Niemann

We take them for granted, but digital mapping apps have changed the way we navigate a city. For most of us, using a smartphone to map the quickest way to a restaurant, a show or a meeting has become a habit. We tend to forget how recently digital mapping entered our lives. Launched in 2005, Google Maps has already mapped 28 million miles of roads. It has established itself as the most popular digital mapping app: 77% of smartphone users regularly use navigation apps, and 70% of smartphone owners use Google Maps most frequently.

Singhal’s quote, however, echoes far back in time. Throughout history, humans have constantly tried to map their world: the quest for a perfect representation of our physical environment is not new. In antiquity, the Greek astronomer Ptolemy’s interpretation of a “perfect” map placed the Mediterranean Sea at the centre, as every culture beyond those borders was seen as barbaric. In the Middle Ages, European maps put less emphasis on scientific evidence and more on religious meaning. Later, after the discovery of the New World, Gerardus Mercator’s famous projection stretched the poles to unrealistic proportions – trade at the time ran from East to West, and Mercator made his representation choices in accordance with the interests of his era.

Mercator’s projection, stretching the poles

All these historical maps were aligned with the technological capabilities, as well as the political and social context of their times. They all claimed to be “perfect” whilst reflecting the hopes and fears of their audience. As Mark Graham puts it: “There is no such thing as a true map, every single map is a misrepresentation of the world, every single map is partial, every single map is selective. And every single map tells a particular story from a particular perspective”. Digital maps are not an exception to this rule.

The emergence of digital tools has taken mapmaking possibilities to new levels, as it allows the superposition of several layers of data on the same map, combining real-time information with geographical data. The software behind the compilation of the data is called the “Deep Map”. It is a hidden, more complex map containing the logic of places, such as traffic conditions, speed limits and no-left-turns. The cartographic historian Jerry Brotton compares the transformation from physical maps to Geographical Information Systems to a jump from “the abacus to the computer”.

The difference today is that the mapmakers are algorithms, and each of us individually is at the centre of the map. As the digital world evolves, where you are looking from has become almost as important as what you are looking for. Consequently, Google Maps uses all the available data and makes it relevant to you, while using your activity and location to improve its database and algorithms. When you search on a digital mapping platform, the algorithm returns a selected set of results from its index, making the selection based on your search history and the language you use. You and a friend might come across two completely different interpretations of your city.
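
Google’s actual ranking is proprietary, but the mechanism described above can be sketched in a few lines. Everything below – the places, the scoring bonuses, the `personalised_results` helper – is invented for illustration.

```python
# A toy sketch of personalised selection: results are boosted when they
# match the user's past searches and language. All data is made up.

places = [
    {"name": "Ramen Bar",     "category": "restaurant", "lang": "en"},
    {"name": "Jazz Cellar",   "category": "music",      "lang": "en"},
    {"name": "Librairie Sud", "category": "bookshop",   "lang": "fr"},
]

def personalised_results(places, user_history, user_lang):
    """Rank places higher when they match the user's history and language."""
    def score(place):
        s = 0
        if place["category"] in user_history:
            s += 2  # boost categories the user has searched before
        if place["lang"] == user_lang:
            s += 1  # prefer results in the user's language
        return s
    return sorted(places, key=score, reverse=True)

# Two users, two different "interpretations" of the same city.
print([p["name"] for p in personalised_results(places, {"music"}, "en")])
print([p["name"] for p in personalised_results(places, {"bookshop"}, "fr")])
```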

The real question to be asked is: who has control over the different filters that are increasingly shaping the way we look at our world? How are their values and interests impacting our lives?

According to Brotton, Google Maps is “perfect” for the present time – for “maximising online profits”. In a world driven by efficiency, individualism and profitability, the digital geographical space becomes an ideal ground for trade and advertising. Google Maps projects an accurate image of the modern global economy in the age of big data and social media.

The problem is that we don’t intuitively question maps, and it is hard to spot bias in cartography. Google Maps displays information in a way that suggests accuracy and trustworthiness, and it heavily influences what we know and how we move around a city. When it suggests the “fastest route” to get somewhere, we are very likely to take it without questioning it.

That is exactly what map researcher Daniele Garcia wants us to do: challenge the idea of the “single path” in our daily lives. The app only considers a few ways to go from A to B, and it has the power to make them the definitive directions to a destination. In his TED Talk, Garcia explains how he creates alternative maps that emphasise different elements of a route, such as nature, security or history, to contrast with Google’s “efficiency filter”.
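
The underlying idea is easy to demonstrate: a routing algorithm only ever optimises the weight it is given. In the sketch below, the same toy street graph yields different “best” routes depending on whether the edges are weighted by travel time or by an invented “scenery” penalty; the graph and all its numbers are made up for illustration.

```python
# The same graph, two different "best" routes, depending on the weight.
import networkx as nx

G = nx.Graph()
# Each edge carries two costs: travel time (minutes) and a "scenery"
# penalty (lower = greener, more pleasant).
G.add_edge("home", "highway",  time=5,  scenery=9)
G.add_edge("highway", "office", time=6,  scenery=8)
G.add_edge("home", "park",     time=9,  scenery=2)
G.add_edge("park", "office",   time=10, scenery=1)

# Google-style "efficiency filter": minimise travel time.
print(nx.shortest_path(G, "home", "office", weight="time"))     # via highway
# An alternative map in Garcia's spirit: minimise the scenery penalty.
print(nx.shortest_path(G, "home", "office", weight="scenery"))  # via park
```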

But Google Maps’ power goes beyond influencing the way we see the world; it is also shaping reality. For example, it has, to a certain extent, some influence in establishing borders. In 2010, Google Maps almost caused an international conflict when it misplaced the border between Nicaragua and Costa Rica. In recent years, Google has been renaming places around the globe, considerably impacting the identities of entire neighbourhoods. That is what happened to a district in San Francisco, now popularly known as the East Cut.


Google’s biggest real-world impact is on the globe’s commercial landscape. As Matt Zook put it: “There is a huge power within Google Maps to just make some things visible and some things less visible”. One of its newer features, called “areas of interest”, shows the highest concentrations of restaurants, bars and shops in a given area. Not only does this enhance Google Maps’ commercial interface, it also influences which places people will visit most, and thus local commercial activity.

However, information is not spread equally on the Internet, and Google’s map diverges from the real streets in many places. For example, Google’s database contains 100 times more indexed information per person in Scandinavia than in the Middle East, and Tokyo has more geospatial data than the entire African continent.

But what about those who stay out of the loop? There are some worrying consequences to being invisible on Google Maps. Ask New York florist Greg Psitis: his shop was marked “closed” on Google Maps on Valentine’s Day, and he lost what should have been his busiest day of the year. In the US, 19% of small businesses are invisible on the Internet, and many of them are not aware of the consequences of this modern form of anonymity. But the truth is that an insufficient digital footprint will increasingly cause real-world losses, especially when you know that 97% of people searching online act on the results. If you are not on the map, do you even exist?

Google Maps, and digital mapping services more broadly, have made moving around and travelling so much easier. Some even argue that it is easier to wander around and “get lost” now, since at the end of the day you know you’ll be able to get home. However, there is a legitimate concern to be raised when the filters we almost exclusively use are chosen by a few big high-tech companies.