Correlation Is Not Totality: Avoiding AI Screw-ups

Alex Feinman
5 min read · Oct 7, 2017


Oh, boy, another day, another AI* screw-up!

(* I'm sometimes gonna say AI here and sometimes ML, and there's a certain amount of anthropomorphism in here, but you'll bear with me and trust that I know what I actually mean, right? We'll pretend it's 'cause I have a doctorate in this stuff or something.)

A lot of screw-ups these days sound the same: the system discovers a correlation that really does exist, but has bigoted overtones.

Today’s finding was a nice one-slide demo of how unspoken bias can creep into machine learning: translating two phrases into Turkish and back again (source):

He is a baby sitter.
She is a doctor.

O bir bebek bakıcısı.
O bir doktor.

Yes, Turkish has an un-tittled ı distinct from its tittled i. Turkish is also a genderless language, so you'll notice the first two words (approximately, "it is") are the same for both sentences in Turkish.

Now, let’s have Google translate that back to English. Anyone? Anyone? Right:

She is a baby sitter.
He is a doctor.

The algorithm learned this. It learned that this was the most likely translation, that baby sitters are female and doctors are male. And because this is the way its programmers made it, the most likely one becomes the only one.

Pattern-Finding Algorithms

This is how ML currently works. It is a pattern-finding system. If it finds a “correct” pattern — which is to say, a pattern more likely than others, and one that fits enough of the training set — it goes with it, whole-heartedly. There is no “65% he is a doctor, 35% she is a doctor” (the actual US statistics).
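To make that collapse concrete, here's a toy sketch in Python. The candidate sentences and the probabilities are made up, and no real translation system is this simple, but the final argmax step is the part I'm pointing at:

```python
# Toy sketch, not anyone's real pipeline: a model has scored two
# candidate translations of the genderless Turkish sentence.
candidates = {
    "He is a doctor.": 0.65,   # made-up numbers, for illustration only
    "She is a doctor.": 0.35,
}

# What the system knows: a whole distribution over possibilities.
for sentence, prob in candidates.items():
    print(f"{prob:.0%}  {sentence}")

# What the system says: only the single most likely candidate survives.
best = max(candidates, key=candidates.get)
print("Output:", best)   # "He is a doctor." every time, for everyone
```

The 35% doesn't lose gracefully; it just vanishes.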

Someone programmed it this way — someone who perhaps tried the other way ("here are ten possible translations") and discovered that people reacted poorly to it. Their users want THE answer. They don't want "he/she is a doctor", either, as that's not what language looks like in typical use.

So the programmer decided to give a right answer — but one that reinforces stereotypes, and erases all alternatives, and in doing so fights back against social progress in subtle, pervasive ways.

Marked exceptions

Humans learn similarly. We quickly find patterns and associations, and stick with them. (“This! THIS is the remote control that works.”) And we over-fit sometimes, too. Ate dodgy fish tacos one time and got food poisoning? Boy howdy, your stomach is now going to refuse ALL fish tacos, maybe all restaurants on that block, maybe all fish, until you can figure out how to give it some counter-examples.

But we also have a countermeasure, because we know this is how we learn: we mark exceptions carefully. We flag them in a particular way that says NO! I am breaking a correlation you have already established.

A trivial example is the name of my town — Waltham. US English says, “pronounce that spelling WALL-thum”. Almost all the other -thams in your life are pronounced with a “-thum” (or, properly, /-θəm/). (We’re ignoring Chatham, because New England, but that’s kind of the point.)

Locals say, NO! We break the rule here. It's Wall-Thaam, /wɔːlθæm/, with both syllables getting emphasis, meaning there's no vowel reduction in the second (theoretically unaccented) syllable.

So you learn the exception. People MAKE you learn the exception.
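In code, the human trick amounts to something embarrassingly simple: check a small, hand-marked table of exceptions before the general rule ever gets a vote. A minimal sketch, with a hypothetical pronounce() function and a made-up second town name:

```python
# Hand-marked exceptions: the NO! list. Small, explicit, maintained by humans.
EXCEPTIONS = {
    "Waltham": "/wɔːlθæm/",   # locals insist: both syllables stressed
}

def pronounce(town):
    # The marked exception wins before the learned pattern is consulted.
    if town in EXCEPTIONS:
        return EXCEPTIONS[town]
    # The general correlation: almost every "-tham" ends in /-θəm/.
    if town.endswith("tham"):
        return town[:-4] + " + /-θəm/"
    return "(no rule known)"

print(pronounce("Waltham"))    # the exception: /wɔːlθæm/
print(pronounce("Sometham"))   # a made-up town, follows the general rule
```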

Poor lonely AI

See, AI systems don’t get this feedback.

Truth: AIs don’t get it because AI programmers consider such exceptions “impure”. They want the theoretically correct algorithm that they can just mash into terabytes of data and have a working system exit the other side. Someone said, “fuck it, it is way too much work to go through every translation and see if it might be a bit racist or sexist or whatever.” (Someone almost certainly not hurt by the -ism, of course.)

Sometimes the programmers do eventually get it, get that this is harmful, because someone raises a fuss. And a programmer puts in a special case. "*Eyeroll*," they say. "Fine. Now YouTube doesn't link to some bullshit false-flag video instead of actual information, today. But someone will just put up a different video tomorrow, and I'll have to do it again! It's inefficient. We have an algorithm to do this. It's not biased, it's an algorithm."
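That special case, in my experience, tends to look like a hand-maintained blocklist bolted on after the ranking step. A hypothetical sketch (the function and the ids are invented; this is not anyone's real recommendation code):

```python
# Hypothetical sketch of the hand-patched special case. The ranking
# algorithm still finds whatever correlations it finds; one manually
# curated blocklist quietly removes today's known-bad result afterward.
BLOCKLIST = {"false-flag-video-123"}   # updated by hand, after the fuss

def recommend(ranked_video_ids):
    # Same algorithm, same learned biases; one result filtered out.
    return [vid for vid in ranked_video_ids if vid not in BLOCKLIST]

print(recommend(["false-flag-video-123", "actual-information-456"]))
# ['actual-information-456']  ...until tomorrow's video has a new id
```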

Let me *eyeroll* back at that statement. All systems are opinionated. Closing your eyes does not make the room empty.

Sometimes, sometimes, if someone is really clever, they figure out that “find all the correlations and treat them as 100% truth” is not a good design pattern. I imagine. I can’t remember a case of this actually happening.

We need some sort of…jolt. The NO! There is an exception here! jolt, the one that humans get when we over-generalize a pattern.

The problem is that it’s trivial to game, of course; it needs…supervision.

The uncanny valley of wisdom

And that’s the punchline. We’re not at the place where machine learning can actually substitute for human judgment. But we are close enough that we’re in the uncanny valley of wisdom.

We’re at, or heading down toward, the bottom of a valley of fake-wisdom. Right now we have algorithms that are damn sure that they’re right about something, and they LOOK like they know what they’re doing! And we believe them. They find correlations and treat them as fact; they amplify bad behavior because they’re violently susceptible to memetic infection.

It’s gotta stop. I don’t know how to stop it. I know where it started, though, and maybe that’s where to stop it, too: in a design meeting where a programmer said, “Hey, let’s throw this data into the ML blender and see if it works”, and then a second meeting where the programmer showed it to another programmer and said, “check it out, it translated it back to ‘He is a doctor’; we’re done here!”

Push The Button

If you’re either of those programmers, or near either of them, and that meeting comes up — throw a red flag. Pull the andon cord, throw the emergency brake lever, push the Madagascar button.

Refuse to release software that erases less-salient portions of humanity, especially when those portions of humanity are suppressed by everyone else. Make software that doesn’t punch down.

This ain't easy. There is a lot of profit in ignoring these complications. But you can do it! You can do the thing! You can be the guy who avoids having your product star in a cringe-worthy video showing it failing to work for people who look different from its creators.

So give that corrective NO! to your AI, and teach it to look for those exceptions.

What’s astonishing is that ‘dumber’ systems are already smart about this — think of spellcheckers that ignore capitalized words, because some programmer realized that was a cheap and easy way to reduce false positives. Dumb, but it works. Imagine what just that tiny bit of cleverness would look like in a Brain The Size Of A Planet Deep Learning system.
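For the curious, that spellchecker trick is roughly one line of cleverness. A sketch, with a stand-in word list (a real dictionary would obviously be much bigger):

```python
# The cheap heuristic: skip anything capitalized, on the theory that it's
# probably a name, and flagging every surname and town is worse than
# missing a few real errors at the start of sentences.
DICTIONARY = {"she", "is", "a", "doctor", "from"}   # stand-in word list

def flag_misspellings(words):
    for word in words:
        if word[0].isupper():              # looks like a name; stay quiet
            continue
        if word.lower() not in DICTIONARY:
            yield word

print(list(flag_misspellings("She is a doctr from Waltham".split())))
# ['doctr']   ("Waltham" gets a pass, rightly or wrongly)
```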

You can do it. I have faith in you.
