Are Human-Tinted Lenses Causing Biased Machines?


01/25/2017


Computers are slowly learning to think and process information in ways that humans developed naturally over thousands of years. Machine learning is being used in fields as disparate as medicine (searching for an HIV vaccine) and website analytics (tracking and predicting user behavior on websites). It seems like every day there's a new story about how computers are performing tasks that used to be the exclusive domain of human thought and critical thinking.

As the directors of this teaching, are we simply imbuing our own biases and limitations into the algorithms, or are we creating machines that can step outside of our old and misbegotten habits and moral failings? Through a series of case studies, I'll present evidence that the more we teach computers to think like we do, the more they simply take on and absorb our own biased thinking.

Racial Bias

So, what kind of biases and deficiencies am I talking about? Well, there's the big daddy of them all: racial bias. Machine learning is built upon the idea that algorithms can learn from the input data they receive. As the data changes, the algorithm adjusts its output to match the evidence it sees.

A simple example is the advertising that typically shows up alongside Google searches. In one experiment, researchers ran searches on stereotypically white names and stereotypically black names. They found that about 80% of the time, a search for a black name was accompanied by an advertisement featuring the word "arrest." When the same search was performed for a white name, ads with the word "arrest" appeared only about 30% of the time. No one coded this bias into the algorithms; it was learned by observing trends and patterns in the everyday searches that we all do.
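To make the mechanism concrete, here is a minimal, hypothetical sketch of how a system that learns purely from click history can absorb the biases of its users. The ad texts, names, and click counts are all invented for illustration; this is not how Google's ad system actually works.

```python
from collections import defaultdict

# A hypothetical click-trained ad selector (illustrative only; not how
# any real ad system is implemented). It shows whichever ad has the
# highest observed click-through rate for a given query.
class AdSelector:
    def __init__(self):
        self.stats = defaultdict(lambda: [0, 0])  # (query, ad) -> [clicks, views]

    def record(self, query, ad, clicked):
        entry = self.stats[(query, ad)]
        entry[0] += int(clicked)
        entry[1] += 1

    def pick_ad(self, query, ads):
        def ctr(ad):
            clicks, views = self.stats[(query, ad)]
            return clicks / views if views else 0.0
        return max(ads, key=ctr)

selector = AdSelector()
ads = ["public records search", "arrest record lookup"]

# If past users clicked "arrest" ads more often for one name than the
# other, the selector faithfully learns and reproduces that pattern.
for _ in range(80):
    selector.record("DeShawn", "arrest record lookup", clicked=True)
for _ in range(80):
    selector.record("Geoffrey", "public records search", clicked=True)

print(selector.pick_ad("DeShawn", ads))   # -> arrest record lookup
print(selector.pick_ad("Geoffrey", ads))  # -> public records search
```

Nothing in the code mentions race; the bias lives entirely in the click data the system was trained on.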

Language

Another prime example occurs in the field of natural language processing. The GDELT Project has the goal of cataloging and analyzing global news articles, Twitter feeds, and other news sources to create a global picture of current events. By using natural language processing to extract information from these sources, the computer can understand the who, what, when, where, how, and why of a news article or post. Then, through machine learning algorithms, the program can examine past events and form connections between people, places, and events to create a geographic map outlining the forces shaping our world.
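As a rough illustration of the extraction step, here is a minimal sketch using the open-source spaCy library. This is a generic example, not GDELT's actual pipeline, and the headline is invented.

```python
# A minimal named-entity extraction sketch using spaCy (a generic
# illustration, not GDELT's actual pipeline). Requires the model:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

headline = ("Crowds gathered in Hiroshima, Japan on August 6 to mark "
            "the anniversary of the atomic bombing.")

for ent in nlp(headline).ents:
    # Labels such as GPE (countries/cities) and DATE roughly give the
    # "where" and "when" of the story.
    print(ent.text, "->", ent.label_)
```

Notice that the raw entities ("Hiroshima," "Japan," "August 6") look the same whether the article reports a new event or commemorates an old one, which is exactly the trap described next.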

This works astonishingly well, except for one small detail. In an early iteration of the code, the project reported an alarming annual occurrence: every August, the United States was decimating the nation of Japan by dropping atomic bombs on its cities. It seems the algorithm concluded that WWII never quite ended. The main issue was that the algorithms lacked awareness of the historical context of the events they were reading about. With further refinements to the language processing and by expanding the dataset used to train these algorithms, the problem has been alleviated, but not entirely eliminated.

Image Recognition

Even in such a seemingly mundane area as image processing for barcode recognition, machine learning can stumble. To see how, let's start with a primer on the basics. Standard 1D barcodes are composed of a series of alternating black and white bars of varying widths. The algorithms that read barcodes from images measure the widths of the black and white bars to determine the value that's encoded. Machine learning algorithms can refine the process and lead to higher accuracy rates.
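As a toy illustration of width-based decoding, here is a sketch of a two-of-five style scheme, where each digit is five bars, exactly two of them wide. This is a simplified illustration, not the algorithm behind Barcode Xpress or any other product.

```python
# A toy width-based decoder for a two-of-five style symbology
# (illustrative only; real decoders are far more sophisticated).
# Each digit is five bars: exactly two wide (W) and three narrow (N).
DIGIT_PATTERNS = {
    "NNWWN": "0", "WNNNW": "1", "NWNNW": "2", "WWNNN": "3", "NNWNW": "4",
    "WNWNN": "5", "NWWNN": "6", "NNNWW": "7", "WNNWN": "8", "NWNWN": "9",
}

def classify(widths, threshold=1.5):
    """Label each measured bar width as narrow (N) or wide (W)."""
    return "".join("W" if w > threshold else "N" for w in widths)

def decode_digit(widths):
    """Map a classified five-bar pattern to a digit, or None if invalid."""
    return DIGIT_PATTERNS.get(classify(widths))

# Clean measurements: narrow bars near 1 unit, wide bars near 2 units.
print(decode_digit([1.0, 1.1, 2.1, 2.0, 0.9]))  # -> 0
```

With crisp measurements like these, thresholding the widths recovers the digit unambiguously.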

However, in images where the widths of the bars are ambiguous, these algorithms, no matter how much they've learned from previous experience, can determine the encoded value no better than a human can. The issue is not that the widths are measured inaccurately; the measurements can be perfect. The problem is that there may be multiple valid barcode values consistent with the detected widths, and there's no way to determine which one is correct without external information.
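Continuing the toy sketch above (with the same illustrative pattern table), we can see the ambiguity directly: when a measured width falls between clearly narrow and clearly wide, trying both labels can leave more than one valid decoding.

```python
from itertools import product

# Continuing the toy two-of-five sketch: when a measured width falls in
# the ambiguous band between clearly narrow and clearly wide, try both
# labels and collect every decoding that forms a valid digit pattern.
DIGIT_PATTERNS = {
    "NNWWN": "0", "WNNNW": "1", "NWNNW": "2", "WWNNN": "3", "NNWNW": "4",
    "WNWNN": "5", "NWWNN": "6", "NNNWW": "7", "WNNWN": "8", "NWNWN": "9",
}

def possible_digits(widths, low=1.3, high=1.7):
    options = []
    for w in widths:
        if w <= low:
            options.append("N")   # clearly narrow
        elif w >= high:
            options.append("W")   # clearly wide
        else:
            options.append("NW")  # ambiguous: could be either
    valid = set()
    for labels in product(*options):
        digit = DIGIT_PATTERNS.get("".join(labels))
        if digit is not None:
            valid.add(digit)
    return valid

# Distortion shrank one wide bar and stretched one narrow bar, leaving
# both in the ambiguous band; two different digits remain possible.
print(possible_digits([1.0, 1.0, 2.0, 1.5, 1.5]))  # -> {'0', '4'}
```

Both "0" (NNWWN) and "4" (NNWNW) fit the measurements equally well, and no amount of prior training can break the tie without outside information such as a check digit or a known value length.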

In the world of barcode recognition, the only information available is the widths of the black and white bars in an image. In the following image, both barcodes encode exactly the same value. However, the unreadable barcode has distortion that causes some modules to become smaller and others to become larger, making it impossible to determine whether a module in between should be classified as wide or narrow.

[Image: two barcodes encoding the same value; distortion in the second makes its module widths ambiguous]

De-tinting the Glasses

At the end of the day, computers know what we tell them to know and process information in the manner we instruct them to. As we continually improve learning algorithms with deep learning, neural networks, advanced clustering, and so on, machines may someday be able to reason about and analyze information in the same manner we do. We must be vigilant in observing our own biases, short-sightedness, logical fallacies, and the like to ensure that future algorithms are free of the same problems.


Michael Archambault is a Software Engineer on the Recognition team. He joined the company in 2014 as a Software Engineer in Support after obtaining his Bachelor's in Computer Science from USF. Michael now plays a key role on the Recognition team, working on Barcode Xpress, OCR Xpress, ScanFix Xpress, and FormSuite.
