Bias in the Algorithm
2019 Michigan Tech Magazine: Issue 1


Algorithms are more than equations. They redefine us.

A few years back, rumor had it that a multinational technology company was making
great strides to finalize and implement a computerized hiring tool. Using artificial
intelligence (AI), the program scored job applicants on a scale of 1 to 5 and predicted
which of the top candidates would be best for the job.

About a year into the tool’s development, progress halted when programmers discovered
the software was blatantly discriminating against women. Applicants were penalized
for graduating from all-women’s colleges or even simply using the word “women’s” on
their résumé.

Jennifer Daryl Slack, at a desk with stacked books. Slack's research explores how everyday life interconnects with technology.

After some investigation, programmers discovered the bias stemmed from the data inputs—the
tool was trained to vet applicants based on résumés the company had received over
the preceding decade. The majority of those résumés had come from men, and the majority
of resulting hires had been men. Because the hiring tool had been programmed to teach
itself which candidates were preferable, it analyzed the data and “learned” that men
were preferable to women.
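Public reporting described this mechanism only in general terms, but the dynamic is easy to reproduce. The toy Python sketch below is hypothetical (invented résumés and a made-up scoring scheme, not the company's system); it trains a word-level résumé scorer on skewed historical outcomes, and because past hires skew male, the word "women's" ends up with a low learned weight.

```python
# A minimal, hypothetical sketch (invented résumés, not the company's system):
# a toy scorer "trained" only on skewed historical outcomes. Because most past
# hires were men, words that co-occur with women's résumés get low weights.

import string
from collections import defaultdict

# Toy history: (résumé text, 1 = hired, 0 = not hired). The skew mirrors the
# article: most résumés, and most resulting hires, came from men.
history = [
    ("captain of men's chess club, software intern", 1),
    ("men's rugby team, built web apps", 1),
    ("software projects, hackathon winner", 1),
    ("women's coding society president, software intern", 0),
    ("women's college graduate, built web apps", 0),
]

def tokenize(text):
    """Lowercase and strip surrounding punctuation from each word."""
    return [w.strip(string.punctuation) for w in text.lower().split()]

def train(history):
    """For each word, learn the fraction of past résumés containing it that led to a hire."""
    hires, seen = defaultdict(int), defaultdict(int)
    for text, hired in history:
        for word in set(tokenize(text)):
            seen[word] += 1
            hires[word] += hired
    return {word: hires[word] / seen[word] for word in seen}

def score(resume, weights):
    """Score a new résumé as the average learned weight of its known words."""
    words = [w for w in tokenize(resume) if w in weights]
    return sum(weights[w] for w in words) / len(words) if words else 0.5

weights = train(history)
print(score("software intern, men's chess club", weights))       # ~0.83: scores high
print(score("software intern, women's coding society", weights))  # ~0.23: scores low
```

Note that the scorer never sees gender directly; the bias rides in on whatever words happen to correlate with the historical outcomes.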

Programmers attempted to make the tool more gender-neutral, but quickly realized there
was no way to prevent the program from discriminating on other grounds. The project
was quietly disbanded. When word of the experiment leaked to the press, the company
publicly stated the tool was never used to make real-life hiring decisions.

If nothing else, the experiment demonstrated an important lesson: in the human world,
AI has its limits. So if AI is here to stay, how do humans remain in the driver’s
seat?

Algorithm and Blues

The abandoned hiring tool is what’s known as a machine-learning algorithm. An algorithm
is simply a computational recipe, a process to achieve a specific result, a set of
rules to solve a problem. With a machine-learning algorithm, the rules aren’t driven
by human logic; they’re continuously revamped by the computer itself.

Computer algorithms range from the simple (a computer user confirming they are 13
or older in order to set up an Instagram account) to the complex—large, decision-making
software systems rapidly assessing a vast array of data for a variety of purposes
or outputs.
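To make that distinction concrete, here is a minimal, hypothetical sketch (invented names and values) contrasting a fixed, human-written rule with a rule the program derives from data and can revise whenever the data changes:

```python
# A minimal sketch of the distinction above. The first check is a fixed,
# human-written rule; the second "rule" is a threshold derived from data,
# so it shifts whenever the data does. All values are invented.

def old_enough(age):
    # Ordinary algorithm: the rule is stated by a person and never moves.
    return age >= 13

def learn_cutoff(past_scores, past_outcomes):
    # Machine-learning flavor: pick the score cutoff that best separates
    # past "good" outcomes from "bad" ones; retraining can move it.
    candidates = sorted(set(past_scores))
    def accuracy(cutoff):
        return sum((s >= cutoff) == outcome
                   for s, outcome in zip(past_scores, past_outcomes))
    return max(candidates, key=accuracy)

print(old_enough(14))                                                    # True, by decree
print(learn_cutoff([2, 4, 5, 7, 9], [False, False, True, True, True]))   # 5, by data
```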

The promise of mathematical objectivity has resulted in algorithmic decision-making
for loans, benefits, job interviews, school placement (both higher ed and K-12), and
even who should get bail, parole, and prison time.

“Algorithms are a mathematical manipulation of the complexities of life,” says Jennifer Daryl Slack, distinguished professor of communication and cultural studies in Michigan Tech’s
Department of Humanities. “An algorithm allows you to manage complexity, but it does so by simplifying, prioritizing,
and valuing some things over others. It’s a fundamentally biased process.”

“What we do when we create algorithms isn’t an entirely new process,” says Stefka Hristova, associate professor of digital media and Slack’s colleague in the Department of
Humanities. “It’s part of the historical trajectory that emerged out of 19th century
data sciences. Algorithmic culture is grandfathered in by things like anthropometrics
(measuring the human body for identification and variation) and phrenology (studying
the shape and size of the skull to predict character and mental capacity). Our efforts
in translating populations into data have now been converted into mechanisms of machine
learning.”

Machine-learning algorithms function on association—they group data into insular categories,
connecting only the seemingly related and disregarding difference. This results in
what Hristova calls solipsistic homogeneity, where the algorithm works within itself
to create a structure of sameness and then builds on that structure—basically, your
Netflix queue.
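A toy recommender makes that homogeneity easy to see. In this hypothetical sketch (invented titles and genre scores), new suggestions are ranked purely by similarity to what the user has already watched, so the list can only converge on more of the same:

```python
# A minimal, hypothetical sketch of the "more of the same" effect: a toy
# content-based recommender that ranks titles by similarity to the user's
# viewing history. Titles and genre vectors are invented for illustration.

from math import sqrt

# Each title is described by a tiny genre vector: (sci-fi, romance, documentary).
catalog = {
    "Space Marines 4":  (1.0, 0.0, 0.0),
    "Galaxy Outlaws":   (0.9, 0.1, 0.0),
    "Paris in Spring":  (0.0, 1.0, 0.0),
    "Planet Earth Now": (0.0, 0.0, 1.0),
}

watched = ["Space Marines 4"]  # the user's history so far

def cosine(a, b):
    """Cosine similarity between two genre vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recommend(watched, catalog):
    """Rank unwatched titles by similarity to the average of what was watched."""
    profile = [sum(catalog[t][i] for t in watched) / len(watched) for i in range(3)]
    candidates = [t for t in catalog if t not in watched]
    return sorted(candidates, key=lambda t: cosine(profile, catalog[t]), reverse=True)

print(recommend(watched, catalog))
# ['Galaxy Outlaws', 'Paris in Spring', 'Planet Earth Now'] -- the sci-fi
# lookalike ranks first; nothing ever pushes the user toward something different.
```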

“It’s a system that precludes creativity and innovation because you get more of the
same,” Hristova says. “It’s also a problematic structure when an algorithm is employed
within society. What happens when you have a machine determining who will make a good
employee? It connects what is similar, and that’s one of the places where bias comes
in.”

Imagine an algorithm created to determine who should be invited to apply to Michigan
Tech. The machine analyzes the data of who’s been invited to apply in the past, who
did apply, and who was ultimately admitted. From that analysis, it learns who is most
likely to enroll at Tech.

“As you narrow down the field to specific features, the algorithm starts targeting
those who ‘look like’ a Michigan Tech student,” Slack says. “You lose out on diversity,
because it’s not an expansive process. And there’s nobody to notice who gets left
out if everything gets turned over to machines and algorithms.”

The Mind and the Machine

In the world of finance and insurance, whether people can get funding or coverage is often decided by an algorithm, not by an expert who reads the file or meets with the applicants in person. The algorithm sets up "points" or markers to determine which applicants are the best (or worst, as the case may be). As decisions are made on each application, the algorithm learns more about who is desirable and who is not, and those points or markers are further reinforced.
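That reinforcement is a feedback loop, and a few lines of hypothetical code show how quickly it compounds. In the sketch below (invented numbers, not any insurer's model), each round's approvals are fed back in as training data, so a mild initial skew between two groups of applicants hardens into a near-total split:

```python
# A minimal, hypothetical sketch of the feedback loop described above: a toy
# scorer whose own approvals become next round's training data, so an initial
# skew against one group of applicants widens over time. Numbers are invented.

# Historical records: (group, approved) pairs with a mild initial skew.
records = ([("A", True)] * 60 + [("A", False)] * 40 +
           [("B", True)] * 45 + [("B", False)] * 55)

def approval_rate(records, group):
    """Fraction of past records from this group that were approved."""
    outcomes = [approved for g, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

for round_no in range(1, 6):
    # The "model": approve a new applicant only if their group's historical
    # approval rate clears a fixed bar.
    decisions = [(g, approval_rate(records, g) >= 0.5)
                 for g in ("A",) * 50 + ("B",) * 50]
    records.extend(decisions)  # today's decisions become tomorrow's history
    print(round_no,
          round(approval_rate(records, "A"), 2),
          round(approval_rate(records, "B"), 2))
# Group A's rate climbs toward 1.0 while group B's sinks, even though nothing
# about the individual applicants ever entered the loop.
```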

“Technology is a new culture, it’s not just a backdrop.”
Soonkwan Hong, Associate Professor of Marketing

“If you live in a poorer neighborhood with more crime, your car insurance will be
higher,” Slack says. “The purpose of insurance is to spread out costs, but that’s
not what happens. The algorithm is analyzing the data, not the person at the table.
It punishes people who have fewer resources and benefits the better off.”

Obviously, Slack says, the more diversity you have amongst the people designing the
algorithm, the less biased the algorithm will be. “But if you only focus on the design
level, you’re missing a myriad of other issues that are going to come into play regardless
of diversity in creation,” she says.

And many of those other issues stem from bias in an algorithm’s implementation.

Earlier this year, governments around the globe grounded the Boeing 737 Max aircraft
after two crashes killed hundreds of people. The fourth generation of the 737 model,
the Max was upgraded with larger engines and greater fuel efficiency in order to compete
with the Airbus A320neo.

In creating the 737 Max, designers worked to ensure the plane was as similar as possible
to previous versions of the 737 in order to bypass costly training for pilots on flight
simulators. (It’s estimated that such training would cost tens of millions of dollars.)
The newer, larger engines, however, had to be mounted differently, changing the aerodynamics
and weight distribution of the plane. The new engine placement caused the plane’s
nose to push upward in certain circumstances, so designers created an algorithm that
would automatically bring the plane’s nose down.
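Details of the actual flight-control software are not public. The deliberately simplified sketch below is hypothetical and only illustrates where implementation choices, such as thresholds and whether the pilot can override, enter this kind of automatic nose-down logic:

```python
# A deliberately simplified, hypothetical sketch -- not Boeing's software -- of
# the kind of automatic nose-down logic described above, written to show where
# implementation choices (thresholds, pilot override) enter the picture.

NOSE_UP_LIMIT_DEG = 10.0   # invented threshold: when the automation intervenes
TRIM_STEP_DEG = 2.5        # invented amount of automatic nose-down trim

def auto_trim(sensor_angle_deg, pilot_override):
    """Return the trim adjustment the automation would command this cycle."""
    if pilot_override:
        return 0.0                      # a design choice: does the pilot win?
    if sensor_angle_deg > NOSE_UP_LIMIT_DEG:
        return -TRIM_STEP_DEG           # push the nose back down
    return 0.0

# A single faulty sensor reading is enough to trigger repeated nose-down
# commands if the loop trusts one input and the crew doesn't know it exists.
for reading in [8.0, 12.5, 12.5, 12.5]:
    print(reading, auto_trim(reading, pilot_override=False))
```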

“We need to interrogate whether these are the cultural values we want to support.”
Jennifer Daryl Slack, Distinguished Professor of Communication and Cultural Studies

Media reports indicate that because Boeing was competing with Airbus, it wanted to
get the plane to market as quickly as possible and didn’t want to pay for training.
While it’s not publicly known what conversations and processes went on behind closed
doors, what is known is that Boeing delivered its product without informing pilots
of the algorithm.

“In implementing the algorithm, it appears the company failed to adequately take into
account interaction with pilots and possible circumstances where the pilots may need
to act,” says Slack. “It was a financial decision. A value decision.”

No matter how excellent the design of the algorithm may have been, the problem was
in the implementation, where someone made a determination about acceptable risk. Slack
stresses that you don’t have to attribute ill will to Boeing. “This is not a matter
of a heinous corporation—this is a matter of algorithms. You can’t code real life.
You transcode life. And that entails necessary reduction and simplification.”

Rage Against the Machine Learning

“Algorithms impact every human system; they’re unavoidable,” says Soonkwan Hong, associate professor of marketing in the School of Business and Economics.

From dating services to navigation apps, from résumé screening and employee evaluations
to police patrols that target specific neighborhoods, Hong—who studies consumer culture—says people don’t realize the extent to which algorithms are present in their everyday
lives. “People tend to take extreme stances—they celebrate technology or they criticize
it. But the best path forward is a participatory stance, one where people—not algorithms—make
choices about when to use technology, when to unplug, and what data is or isn’t shared.”

Slack notes that this can be tricky because, in many instances, “you have no right
to know about the algorithm. There’s a lack of transparency in the algorithmic environment,
and the formula is often proprietary. And no one fully understands the machine learning
process,” she adds. “It’s unsupervised learning.”

Slack and Hristova say we must take a look at how easily we hand decisions over to
algorithms and ask what we’re prioritizing.

“What’s not being looked at,” Slack emphasizes, “is the part of our culture that values
and glorifies the process of shifting to algorithms to do certain kinds of work. Our
mindset is to take risks and fix it later. Is that acceptable?”

Hristova and Slack say we must create a process for asking those questions and consider
how different moments of design and implementation relate to one another. Together,
they’re developing a methodological approach for intervening in the design and implementation
of algorithms in a way that allows humans to contemplate ethical issues, cultural
considerations, and potential policy interventions. Their research will be housed
in Michigan Tech’s new Institute for Policy, Ethics, and Culture.

“Every stage of algorithmic design and implementation offers different opportunities
for intervention,” Hristova says. “And an intervention would typically be more like
a tweak, not an overhaul. We’d be fine-tuning and accounting for the equation.”

“We need a more democratic, open, and ethical culture around the creation and deployment
of algorithms,” she continues, “but algorithms are just a recipe. An equation. They
can be redefined. More importantly, we need to become active seekers of difference.
We must seek out alternative views, and communicate with each other to find shared
ground.”

Tomorrow Needs: The Institute for Policy, Ethics, and Culture

In April 2019, Michigan Tech began planning for a new Institute for Policy, Ethics, and Culture (IPEC), which will explore the policy implications, ethical considerations, and cultural
significance of the extensive technological changes and disruptive forces of the 21st
century. IPEC researchers—including Slack, Hristova, and Hong—will address issues
like algorithmic culture; medicine and biotechnology; technology and autonomy; surveillance
and privacy; and reconfiguring human relationships in and with the environment.

“Technological advances are necessary, but not sufficient to address global challenges
related to human well-being, ecosystem health, and a changing climate,” says Sarah Green, professor of chemistry at Michigan Tech. Green co-chaired the Science Advisory Panel for the United Nations’
Sixth Global Environmental Outlook (GEO-6) report and is a member of the University
working group that developed IPEC. “IPEC will foster innovative and forward-thinking
policies, grounded in science and cultural insight. A primary goal of IPEC is to guide
the ethical development and deployment of technology toward the ‘future we want.’”

Dirty, Dangerous Environments

Despite the recent tragedies of the Boeing 737 Max, flying continues to be one of the safest forms of transportation. And this, according
to Jeff Naber, the Richard and Elizabeth Henes Professor in Energy Systems in Michigan Tech’s Department of Mechanical Engineering–Engineering Mechanics, is largely due to algorithms we commonly refer to as autopilot. But, Naber points
out, flying is also one of the most structured and planned forms of transportation,
with relatively few obstacles for a plane to bump into. Automating the navigation
of passenger cars and other terrestrial vehicles is an entirely different animal.

“It requires much more understanding, recognition, and decision-making—all of which
are the traditional purview of human beings,” says Michael Bowler, associate professor of philosophy and associate chair of the Department of Humanities.
Once a vehicle attempts to take over these functions, unintended consequences can
arise even from the proper functioning of an automated system.

“People are rightly concerned about the ethical and social impacts of automation and
the construction of intelligent systems,” says Bowler. “Engineering and perfecting
these systems in dirty and dangerous environments—like extreme weather conditions
and off-road settings—is precisely the right way to explore and demonstrate to the
public the capabilities of automated and intelligent systems in a safe context; that
is, one in which you would not want to risk human life to begin with.”

