Algorithms Are Chaotic Neutral

Carina Zona gave the Sunday keynote for PyConAU 2015. It was a very interesting talk about the ethics of mining insights from data, and of algorithms. She gave examples of data mining fails: Target discovering a teenage girl was pregnant before her parents even knew; machine-learned Google search matches implying that black people were more likely to be arrested. It was her last few points that raised the ethical dilemmas I found most interesting, and it is those points that I want to focus the discussion on.

One of the key points that I took away (not necessarily the key point she was trying to communicate – it could just be that I have shitty comprehension, hence rendering this entire blog post moot) was that the newer and more powerful machine learning algorithms out there inadvertently discriminate along the various power axes (think race, socioeconomic background, gender, sexual orientation, etc.). There was an implicit notion that we should be designing better algorithms to deal with these sorts of biases.

I have experience designing these things, and I quite disagree with that notion. I noted on Twitter that in her examples, the machine learning algorithms were basically exposing/mirroring what they had learned from the data.

Carina did indeed point out that the data is biased – she noted, for example, that film stock in the 1950s was tuned for fairer skin, and therefore photographic data for darker-skinned people was lacking (this NPR article seems to be the closest reference I have, and it is, by the way, fascinating as hell).

But before we dive in deeper, I would like to bring up some caveats:

  • I very much agree with Carina that we have a problem. The point I'm disagreeing on is how we should go about fixing it.
  • I'm not a professional ethicist, nor am I a philosopher. I'm really more of an armchair expert.
  • I'm not an academic dealing with these topics – I consider myself fairly well read, but I am by no means an expert.
  • I am moderately interested in inequality, inequity and injustice, but I am absolutely uninterested in the squabbles of identity politics, and I only have a passing familiarity with the field.
  • I like to think of myself as fairly rational. It is from this point of view that I'm making my arguments. However, in my experience I have been told that this can come across as quite alienating/uncaring/insensitive.
  • I will bring my biases to this argument, and I will disclose my known biases wherever possible. However, I may have missed some, so please tell me if I have.

On Discriminating Machines

The point on which I quite disagree with Carina was her saying that we should fix machine learning algorithms that inadvertently discriminate along the various power axes out there. The example she gave was based on Latanya Sweeney's paper – Discrimination in Online Ad Delivery – which coincidentally I had read ages ago.

While the rest of the paper is generally uninspiring, the point of Latanya's abstract (and indeed her conclusion) was this:

…raising questions as to whether Google’s advertising technology exposes racial bias in society and how ad and search technology can develop to assure racial fairness.

Having once been in the advertising industry, I can quite confidently say that it is in fact the actual advertising system – a combination of the machine learning system (which figures out which ads would have the highest eCPM) and the advertisers, whose job is to optimize for their own profit (after all, they could have chosen not to use that ad template) – that exposes the inherent racial bias of society.

A better way to frame Latanya’s question would be this: if we were to collect the first names of all the people who are arrested in the US, what is the proportion of black-sounding names vs white-sounding names?

We then take this prior information and create a posterior rate, which we can compare with the results Latanya acquired in her paper. Given the rate of black incarceration in the US (which in my opinion is a fairly unjust problem on its own, but is completely out of scope for this blog post), I'd wager with some amount of confidence that there is indeed a higher proportion of black-sounding names among people who are incarcerated (this back-of-the-envelope calculation could very well be wrong, though).
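To make that concrete, here is a minimal sketch of the kind of back-of-the-envelope update I mean, using Bayes' rule in odds form. Every number in it is a made-up placeholder – the base rate and the likelihood ratios are assumptions for illustration, not real statistics – so treat it as a template to plug real figures into, not a result.

```python
# Back-of-the-envelope Bayesian update: how much should knowing the name group
# shift our estimate that a person has an arrest record?
# ALL numbers below are invented placeholders, not real statistics.

def posterior_rate(prior_rate: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior_rate / (1 - prior_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

base_rate = 0.03          # assumed overall arrest-record rate (placeholder)
lr_black_sounding = 2.5   # P(name group | arrest) / P(name group | no arrest), placeholder
lr_white_sounding = 0.7   # placeholder

print(posterior_rate(base_rate, lr_black_sounding))  # shifts above the base rate
print(posterior_rate(base_rate, lr_white_sounding))  # shifts below the base rate
```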

Let's consider just the algorithm bits for now. Let's simplify the process and say that the algorithm optimizes which ads to show so as to return the highest earnings for Adwords, and that the advertiser has provided two variant templates – arrestRecord and contactRecord. Adwords randomly chooses which template to fill in. Over time, Adwords learns that people are more likely to click on arrestRecord when it is paired with a black-sounding name, guaranteeing more revenue. So the obvious move is to show it more (earn more!).
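A toy simulation of that dynamic might look like the sketch below. The click-through rates are invented purely to encode the population's bias, and the epsilon-greedy chooser is just one simple stand-in for whatever Adwords actually does – the point is the feedback loop, not the numbers.

```python
import random
from collections import defaultdict

TEMPLATES = ["arrestRecord", "contactRecord"]

# Hypothetical click-through rates encoding the population's bias (placeholders).
CTR = {
    ("black-sounding", "arrestRecord"): 0.09,
    ("black-sounding", "contactRecord"): 0.05,
    ("white-sounding", "arrestRecord"): 0.04,
    ("white-sounding", "contactRecord"): 0.06,
}

shows = defaultdict(int)
clicks = defaultdict(int)

def choose_template(name_group, epsilon=0.1):
    """Epsilon-greedy: usually exploit the template with the best observed CTR."""
    if random.random() < epsilon or not all(shows[(name_group, t)] for t in TEMPLATES):
        return random.choice(TEMPLATES)            # explore
    return max(TEMPLATES,                          # exploit
               key=lambda t: clicks[(name_group, t)] / shows[(name_group, t)])

for _ in range(100_000):
    group = random.choice(["black-sounding", "white-sounding"])
    template = choose_template(group)
    shows[(group, template)] += 1
    if random.random() < CTR[(group, template)]:   # the "population" clicks
        clicks[(group, template)] += 1

for key in sorted(shows):
    print(key, shows[key])  # arrestRecord ends up dominating for black-sounding names
```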

Really then, the question is: who is at fault? The algorithm designers? Or the people who trained the algorithm by clicking more on arrestRecord when it is paired with a black-sounding name?

We can say that the training data (i.e. the live population actively clicking on ads) is biased. I consider the outcome of the system to be a mirror of society. And in my opinion, this is a Good Thing, and we shouldn’t be changing that part. Instead, I argue, we should be working harder to change the underlying data (i.e. the inherent mental bias of the population).

Garbage In, Gospel Out

Aurynn brought up a point in the conversation, saying that developers tend to think data and algorithms are impartial.

Which exactly highlights my point – algorithms ARE impartial. They work on the given dataset – garbage in, garbage out, as they say. It's in fact one of the keystone principles in designing algorithms.

Related to the GIGO concept is the idea of the GIGO fallacy, also commonly known as Garbage In Gospel Out. It is the fallacy where “…the advocate treats conclusions leading from some flawed data, unsubstantiated evidence, unfounded assumption or baseless theory, as gospel.”

The GIGO fallacy is very common, and is the foundation of the anti-algorithm sentiments I got from the talk.

Throughout her keynote, Carina notes that these algorithms being inadvertently discriminatory has consequences. In fact, most of those consequences can be attributed to the population in general not being critical enough of the data presented to them.

In the examples she gave – particularly the one where black people were tagged as gorillas and animals – it would be extremely easy to get offended. But if people realized that machines are deeply flawed, there wouldn't be as much outrage. When I wrote EyeMap, I myself wrote an algorithm that wouldn't detect my own eyes when they don't form a crease at the eyelid, which happens only when I'm tired. Did I get mad at the algorithm? No. I simply realized that there are limits to the algorithm. I believe the general attitude to issues like these should be amusement, not outrage.

Of course, this does not mean that the consequences are not real. It does not mean that the consequences do not hurt people, nor that they cannot be triggering.

Fixing algorithms to accommodate the general population's logical fallacies does not fix the underlying problem – people still aren't treating the information that comes out of their screens critically.

Take traumatic triggers, for example – they're real, and they have effects. It would be exceedingly terrible if a machine learning system output something that triggers a traumatic flashback. The conventional wisdom on the Internet is to provide trigger warnings – to be sensitive, so to speak. But research has shown this to be useless and, depending on the study, possibly even counterproductive to the healing process. In fact, Metin Basoglu, in his books on torture and trauma research, points out that exposure works better than avoidance (and CBT is in fact one of the best treatments available). (Side note: /r/scholar is a good place to ask for research papers and books you cannot afford.)

While traumatic triggers are not at all like a logical fallacy – they aren't – the same analogy can be applied: it is the receiving end that should be critical of the results. Fixing the algorithms would be exactly the same as plastering trigger warnings everywhere – useless and unproductive.

On Inequity

Then you say to me, "but Chewxy, surely you cannot expect to fix everyone's individual issues!", or "you're teetering on the edge of victim blaming!". Here, I shall try to convince you that fixing the algorithm would yield even more harmful consequences. We shall do this with a thought experiment, set up thus:

First, we suspend our own morality and enter a realm with a new morality. In this realm, there are categories of people who should be approved for loans, and categories of people for whom it would be immoral to approve a loan. Now we set up the idea of an inherent difference along an axis – say, anatomical sex (as in, dependent on physical genitals). The reason sex is chosen is that it's pretty much binary for our purposes – you either have a male sex organ or a female sex organ (yes, I am aware that intersex people exist, and that intersex bodies are highly varied on their own, ranging from ambiguous genitals to multiple genitals, but for the purposes of this thought experiment we cannot be that inclusive, for brevity's sake). There are no other options in this realm.

Let's say that, for some reason (the reason doesn't have to be known, and may or may not be correlated with having male genitals – it doesn't matter), people with male genitals are more likely to default on loans than people with other genitals. And, to drill in the idea of this new morality further, anyone who defaults on a loan suddenly suffers constant physical pain for a large majority of their life. Hence, it is a moral imperative not to approve anyone for a loan if they do not qualify, or else you would be doing them harm.

On the flip side, getting a loan would improve a person's life and future immensely. Getting a loan means a person can acquire assets. By now you should be able to see that we have set up a pretty inequitable situation: anyone with female genitals has access to tremendous improvement of their life, while anyone with male genitals lags in access to loans.

In this scenario, think about what the appropriate empathetic and sensitive response would be.

Now, to make things a bit more difficult. Let's say that in this hypothetical situation there is one other key factor that determines whether a loan might be defaulted on – whether a person has assets or not (interestingly, such a strong predictor of credit default kind of already exists in real life: if you defaulted once in the past, you are more likely to default in the future). A person with assets is far less likely to default on a loan than a person without assets. You run A_Firm, a firm that provides credit-qualifying analysis. Your machine learning algorithm discovers this fact, and starts asking whether applicants have assets.

The problem, of course, lies with the proportion of the male population that has assets versus the proportion of the female population that has assets. Because it's easier for females to get loans (they don't default as much), it's easier for females to acquire assets, which in turn makes it easier for them to get loans. We're in a classic "privileged" situation.

OK, so we're pretty close to the situation Carina mentioned in her talk – there will be a lot of people who are rejected for reasons that have nothing to do with their ability to pay, and everything to do with replicating privilege.

So the question becomes this: if we modify our algorithm to not take into account whether a person has assets – in the name of inclusivity – would we be causing more harm than good? Comparatively, is the harm from modifying our algorithm greater than the harm of the lack of inclusivity?

My quick back of the envelope calculations indicate yes. But I’m tired so please do your own.
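For the curious, here is roughly the shape such a back-of-the-envelope calculation might take, under the thought experiment's made-up morality. Every number in it – the harm weights, the approval rates, the default rates – is a placeholder assumption, and changing them can flip the conclusion, which is rather the point of doing your own.

```python
# Sketch of a harm comparison under the thought experiment's morality.
# Every constant below is an invented placeholder, not data.

HARM_OF_DEFAULT = 10.0    # assumed: a lifetime of pain is very bad
HARM_OF_REJECTION = 1.0   # assumed: lost opportunity is bad, but less so

def expected_harm(approval_rate: float, default_rate_if_approved: float,
                  population: int) -> float:
    """Total expected harm for a group: defaults among the approved,
    plus the blanket harm of rejection for everyone turned away."""
    approved = population * approval_rate
    rejected = population - approved
    return (approved * default_rate_if_approved * HARM_OF_DEFAULT
            + rejected * HARM_OF_REJECTION)

# Scenario A: the algorithm uses the asset feature (fewer approvals for the
# asset-poor group, but far fewer defaults among those approved).
harm_with_feature = expected_harm(0.30, 0.05, 1000)

# Scenario B: the asset feature is removed in the name of inclusivity (more
# approvals, but the default rate among the approved climbs).
harm_without_feature = expected_harm(0.60, 0.20, 1000)

print(harm_with_feature, harm_without_feature)  # with these placeholders, B is worse
```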

Obviously the example above is just a model. It's not meant to be an analogy of the real world; rather, it guides us in thinking about how we deal with these things in the real world. Our real-world definition of harm is a lot more subtle – bankruptcy may not be as bad as a lifetime of pain, but I would definitely consider it "harmful" as well – and real-life morality is also more heavily coupled with profit (i.e. it may be considered moral to do certain things because it'd be profitable to do so), so it's a bit more difficult to extricate pure moral intentions in Carina's case.

Following that thought experiment, you will quickly realize that the best way to fix the issue would be to address the real-life inequity – the proportion of male asset owners being far smaller than the proportion of female asset owners. A far superior solution to modifying the algorithm. A far more complex one too, no doubt.

Either way, it should be food for thought about modifying algorithms in these sorts of scenarios. And that’s only a short term effect. What about long term effects?

Chaos

A related point (and the source of the pun in this post's title) is that machines learning from human responses, and humans responding to what the machines produce, form a somewhat tight feedback loop. And we know what happens when things get into feedback loops and are extremely sensitive to initial conditions – there is an entire branch of math dedicated to it: chaos theory!

I had also wanted to go down this path, to see where it leads the argument, but I realized I don't have any model in mind that could capture our relationship with machines, and it was getting late, so I abandoned it for the time being.

The general gist of the idea is that we are not able to predict the long term effects because we don’t know the starting condition well enough. Modifying the algorithms could have really really really weird results in the long run.
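For a feel of why I'm wary, here's the classic toy example of a chaotic feedback loop – the logistic map. It is emphatically not a model of human-machine interaction (as I said, I don't have one); it just shows how a dead-simple feedback rule can make long-run prediction hopeless when you don't know the starting condition exactly.

```python
# The logistic map: the textbook example of sensitivity to initial conditions.
# Not a model of humans and machines -- just an illustration of chaos.

def logistic_trajectory(x0: float, r: float = 3.9, steps: int = 50) -> list:
    """Iterate x -> r * x * (1 - x), the simplest chaotic feedback loop."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)   # a tiny nudge in the starting condition...

# ...and within a few dozen iterations the two trajectories have nothing
# to do with each other.
print(a[-1], b[-1])
```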

This of course doesn't mean that we should be paralyzed by our inability to predict the future and not take steps. I'm just pointing out that it may be difficult to figure out what's happening in the long run (by now you should have realized that I'm quite risk-averse, and this is that bias speaking).

AI! Teh Horr0rs!

The final point I want to make is one of the original points that I wrote to Betsy on Twitter.

I will admit that the response wasn’t well thought out. In fact the whole line of reasoning wasn’t well thought out, and I was indeed caught up by the moment. The general gist of it is something like this:

Machines that figure out biases on their own would be superhuman – in a very literal sense. Human beings have problems enough dealing with their own biases. If machines can figure out biases of humans, that would make them more human than human. That would be the danger point.

Of course, we're nowhere near that scenario, so we don't have to worry about it. Any form of debiasing for the time being would come from human input, and that's… just imparting a known set of human morality into machines, which we will then force upon a world that may or may not share our ethics. Totally not a problem at all.

Like I said, the AI angle of this is poorly thought out.

Conclusion

Throughout this whole post, it may appear that I am ragging on Carina's talk. Au contraire – I'm actually supportive of her idea of being more empathetic and sensitive developers. She gave a very good talk called Schemas for the Real World (thanks to Caleb Hattingh, who pointed me to that video), which shows the depth of what Carina talks about.

I merely disagree with two very specific parts of her talk – specifically, how to deal with the problem. My opinion is that these inequities should not be handled at the software/reflection layer. They should be handled at the basic level: real life. We should fix inequities in reality, and let the mirrors (i.e. machines) show us what we really are.

Towards the end of her talk, she did somewhat echo the sentiments I have above: after auditing your algorithms, if you find that they do indeed cause inequity, what do you do about it? In the sections above, I laid out a model of how to think about modifying algorithms to handle such issues. I didn't give an answer on what we should do about it. Neither did Carina. I guess this is one of those Hard Things.

The final takeaway from her keynote that I really, really, really agree with is that we have to have diverse ways of anticipating how things will fuck up. This cannot be stressed enough.

TL;DR – I liked the Sunday Keynote for PyConAU2015. I disagree with the speaker in 2 out of her 10 or so points. I wrote a 3000 word rambling essay on those two examples, and why changing the software is worse than changing people. Lastly I agree with the rest of her steps to reducing these sorts of issues.
