Frictionless bigotry: Uber’s star rating system & the effects of a lack of accountability

It’s five past four on a Sunday morning. I am standing outside Heaven, the largest, and perhaps most famous, gay club in London. My date waits with me for my Uber to arrive; when it does, I turn and kiss him farewell, and quickly get into the car.

My driver is a young man who says nothing as I climb in the back. He tries to speed off, but a red light quickly stops us before Trafalgar Square. A few drunken party-goers cross in front, the last of them in drag, struggling to cross (in heels) before the light changes. My driver mutters something I cannot fully catch; I make out the word “shameful”. I’m completely sober, and find myself wondering what was shameful: the drunkenness? The crowds? The drag? There were thousands of people on the streets making their way home on a pretty typical Saturday night in the capital.

As I reach my destination, I thank the driver and climb out of his vehicle. Before I close the app, Uber asks whether I want to give him a tip (glad to see this new feature); I give him £2, more than 10% of the fare, and it asks me to rate the ride. I give the ride a shining 5 stars, as I often do; after all, the service was prompt, and he got me there in one piece. These criteria are more than sufficient in my eyes, knowing well that anything lower could irreparably damage his future by giving the invisible algorithms grounds to punish him. A low rating could set off a chain of events sending him into a downward spiral: Uber’s opaque algorithms might start to pair him with worse passengers, who, in turn, rate him more harshly, sinking his reputation ever lower. Since Uber drivers in London already struggle to make a living wage, of course I wouldn’t want that.

After I close the app, I casually flick to my profile to check the fare. As I do, I catch sight of my updated rating: 4.7, a significant 0.1 points lower than it was 15 minutes earlier. A drop of that size, given my extensive use of Uber, suggests he must have given me a 1-star rating. I feel violated; shocked. How could he do that to me?

Systematic Algorithmic Oppression

My reaction surprised me; I felt digitally violated. My formerly pristine rating singularly represented, to both Uber’s algorithm and to potential future drivers, how good a passenger I was, determining how I was likely to be viewed and treated. These scores are incredibly granular and sensitive: passengers with a 4.5 are considered by drivers to be slightly suspect, and those rated 4.2 or lower strongly suspicious. Those below 4.0 are the degenerates of the Uber ecosystem, often ignored altogether. And note that there is often no recourse or reconciliation for a tarnished Uber reputation: once outcast, there you remain.
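The downward spiral I feared for my driver can be made concrete with a toy simulation. To be clear, this is a sketch, not Uber’s logic: the feedback between a driver’s rolling average and passenger harshness (the p_harsh formula), the window size, and the threshold are all invented for illustration, since Uber’s actual matching and rating mechanics are not public.

```python
import random
from collections import deque

def simulate_driver(trips=400, window=100, seed=7):
    """Toy model of a rating death spiral. The feedback below
    (lower average => harsher passengers) is an assumption made
    for illustration; Uber's real matching logic is not public."""
    rng = random.Random(seed)
    ratings = deque([5] * window, maxlen=window)  # healthy history
    trajectory = []
    for _ in range(trips):
        avg = sum(ratings) / len(ratings)
        trajectory.append(avg)
        # Assumed feedback: the further the rolling average falls
        # below 4.8, the likelier a harsh (1-3 star) rating next.
        p_harsh = min(0.9, 0.05 + max(0.0, 4.8 - avg) * 1.5)
        ratings.append(rng.choice([1, 2, 3]) if rng.random() < p_harsh else 5)
    return trajectory

traj = simulate_driver()
print(f"start: {traj[0]:.2f}, end: {traj[-1]:.2f}")
```

Whether a given run collapses depends on chance, but in this model, once the average slips below the assumed threshold, each harsh rating makes the next one likelier and the driver never recovers: exactly the “once outcast, there you remain” dynamic.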

My mind flashed back to my ride. Did I do or say something that might have offended my driver? My mind raced; I revisited the entire 8-minute ride, jumping back and forth to figure out what might have upset him. Nothing. I was polite, quiet, and sober, and he had waited less than 30 seconds before I was in his car. The ride was brief; it was a surge fare (1.5x), and I quickly paid my dues and was courteous as always.

Downrating as a Future Hate Crime

Ah. But there it was. It dawned on me that right before boarding his vehicle, I had briefly kissed my date, a young man, in front of the biggest gay club in London. Shortly thereafter, the driver made some comment about a poor drunk reveller in drag. Was I the victim of a homophobic violation, a crime (wait, it’s probably not even a crime yet) of being downrated because of my sexuality?

In the first episode of Season 3 of Black Mirror, “Nosedive”, a young woman lives in a world where every human interaction is rated on a scale of 1 to 5. In this world, every individual’s score is visible to everyone else and is used by algorithms to determine whether they are eligible for loans, to make particular purchases, or to live in particular neighbourhoods. The score is strongly associated with a person’s social status, and determines, to a large extent, those with whom they are allowed to associate.

It is no coincidence that this brilliant episode, based on a story by Charlie Brooker, strongly resembles the world of Uber; Brooker himself is a regular Uber user and has often thought about the ways it accelerates people’s misery.

Although the episode does not comment on racism or homophobia specifically, it does highlight how fragile such a system would be, and, as the title implies, how easily even the most privileged can descend to a state deprived of basic respect and decency, all through human cruelty accelerated by the frictionlessness of technology.

Ratings as a Reification of Hatred

This led to my obvious question: what about minorities? Does Uber’s star rating system allow people to rate others based on their race, gender, sexual orientation, religion, or any other demographic or physical attribute? Absolutely. Uber doesn’t require anyone to specify why they rated someone a particular way, and even if it did, it is doubtful this alone would deter hateful ratings. Does Uber protect raters from the consequences of their ratings? Yes. Those who are rated are never shown who gave them particular ratings; one’s only way to work out how one was rated is coarse analysis of the movement of one’s rating over time.
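And coarse is the operative word. As a sketch, suppose (and this is only a supposition; Uber has never published its formula) that the displayed score is a simple mean over one’s last N rated trips, rounded to one decimal place. One can then enumerate which star ratings are even consistent with an observed movement:

```python
def possible_latest_ratings(shown_before, shown_after, n_trips):
    """Enumerate the star ratings (1-5) that could explain a change
    in the displayed average. Assumes the display is a plain mean
    over n_trips ratings, rounded to one decimal -- an assumption,
    since Uber's actual aggregation is not public."""
    candidates = []
    for star in range(1, 6):
        # Try every underlying mean consistent (in coarse 0.01
        # steps) with the rounded "before" value.
        for k in range(-4, 5):
            true_before = shown_before + k * 0.01
            true_after = (true_before * n_trips + star) / (n_trips + 1)
            if round(true_after, 1) == shown_after:
                candidates.append(star)
                break
    return candidates

# A 4.8 that becomes a 4.7 after roughly 40 rated trips:
print(possible_latest_ratings(4.8, 4.7, 40))  # -> [1, 2, 3, 4]
```

Under these assumptions, several different ratings can explain the same visible drop; the rated party can never be sure, which is precisely what hands a hateful rater their deniability.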

So we have a situation where people can rate others anything they please, without any likely recourse, especially in a city as large as London, where picking up the same passenger twice is exceptionally uncommon. And even if the same driver did pick up the same passenger twice, the fact that Uber obscures the ratings one has received offers ample plausible deniability: “no, I didn’t give you a low rating, it must have been someone else”.

If we had access to the raw rating data, we could see whether women, people of colour, Muslims, Jews, gays, lesbians, transgender people, or any other visible minority had average ratings different from the norm. We could measure how significant such differences were, and thereby quantify how people treat others in society. For example, we could see how much people penalise the elderly, the disabled, or the poor, and further quantify how the resulting systematic, frictionless oppression exerted by the algorithm affects their lives.
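Were such data ever released, the first-pass analysis would be straightforward. A minimal sketch follows, with an invented dataset and hypothetical field names, since no such data is public:

```python
# Sketch of the disparity analysis described above, using a
# hypothetical dataset: the records and field names are invented.
import statistics

def rating_gap(records, group_key, group_value):
    """Compare the mean rating of one group against everyone else."""
    in_group = [r["rating"] for r in records if r[group_key] == group_value]
    rest = [r["rating"] for r in records if r[group_key] != group_value]
    # A real study would add a significance test and control for
    # confounders (trip length, time of day, neighbourhood, ...).
    return statistics.mean(in_group) - statistics.mean(rest)

records = [
    {"rating": 5, "perceived_group": "majority"},
    {"rating": 4, "perceived_group": "minority"},
    {"rating": 5, "perceived_group": "majority"},
    {"rating": 3, "perceived_group": "minority"},
]
print(rating_gap(records, "perceived_group", "minority"))  # -1.5
```

The hard part is not the arithmetic; it is that the platform alone holds the data, and with it the ability (and the responsibility) to look.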

Technology and Responsibility

One of the most exciting aspects of working and doing research in tech is its ability to have such dramatic effects on people’s lives. We have countless examples of how the sharing economy and mobile apps have brought people joy and even saved lives during unfolding crises. But as we become increasingly digital, we are also seeing less favourable effects that result from the interplay between people’s natural instincts as social (and anti-social) beings and the technology in question.

In many situations, digital technology reduces the friction of the real world, making routine tasks, such as retrieving information, coordinating complex social activities, or satisfying particular needs, easier. Sometimes, however, friction is useful: the normal social pressures of face-to-face interaction discourage those inclined to express or spread hateful, racist or bigoted views from doing so in the company of those whose respect they seek. But when the interactional mechanics, sometimes called the social physics of interaction, are changed, as they are by digital platforms, new social kinematics come into play. Under such new rules, the social checks and balances that have held society together for centuries may no longer work or bear any significant effect. It is for this reason that we must carefully study any technology and understand its first and second-order effects, looking beyond short-term convenience to potential long-term consequences for all members of society.

  • Writer’s note: The story in this article is a fictionalised account of a real event where all important details have been changed.