Last year, we saw a steady drumbeat of stories examining bias in artificial intelligence (A.I.) and machine learning. It turns out that folks’ preconceptions have a way of creeping into the logic of machines, tainting the results of facial recognition software, recruiting platforms and law enforcement tools.

Even while they’re building A.I.-powered systems intended to remove human prejudice from one process or another, technologists may unwittingly contribute to the core problem. Developers and software engineers, after all, have biases of their own, however unconscious. “You would have to have zero interactions with other humans to not have experienced bias, with or without A.I.,” said Meg Bear, senior vice president of products for SAP SuccessFactors in South San Francisco, CA.

While many users hear “A.I.” and imagine some kind of ultra-intellectual machine, most of today’s solutions simply perform advanced mathematical analysis, industry observers suggest. By speeding the organization and analysis of data, A.I.-driven technology efficiently performs tasks that would take humans much longer to complete (if they didn’t lose their minds to boredom first).

Coder and Dataset Bias

But executing algorithms isn’t the same thing as making judgments. A.I. solutions only process the information they’re given in the way they’re programmed to process it. And because they’re coded by people, they’re bound to include the biases of the programmers creating them, as well as the biases of any datasets used as input.

This can have real ramifications. According to Reuters, Amazon once walked away from developing a machine-learning tool to identify promising tech candidates. The tool relied on a dataset of Amazon’s past hires, the bulk of whom were men; unfortunately, its algorithms used that information to begin preferring men over women. Résumés that included the word “women” were downgraded, as were the graduates of two all-women’s colleges.
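To see how that can happen, consider a minimal sketch in Python with scikit-learn (the résumés, labels and group terms here are invented, and this is in no way Amazon’s actual system): a text classifier trained on historically skewed hiring outcomes learns a negative weight for a gendered word, even though that word says nothing about skill.

    # Toy illustration: train a classifier on skewed historical hiring outcomes.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical past resumes and whether each candidate was hired.
    # Because most past hires were men, "women's" co-occurs with rejection.
    resumes = [
        "software engineer, men's chess club captain",
        "software engineer, built distributed systems",
        "software engineer, women's chess club captain",
        "software engineer, women's coding society lead",
    ]
    hired = [1, 1, 0, 0]  # skewed outcomes, not a measure of qualification

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(resumes)
    model = LogisticRegression().fit(X, hired)

    # The token "women" ends up with a negative coefficient purely because of
    # the biased labels; the model has "learned" the historical prejudice.
    for word, coef in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
        print(f"{word:12s} {coef:+.2f}")

Nothing in that code is malicious; the bias rides in entirely on the training data, which is exactly why such patterns are easy to miss.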

Such episodes, together with increasing awareness of bias in general, have pushed tech companies to pay attention to how they’re architecting A.I. and machine-learning solutions. Many have begun developing A.I. codes of ethics and working with officials to develop “thoughtful” regulation, said Montra Ellis, senior director of product innovation at Ultimate Software in Weston, FL.

Ellis believes that initiatives such as codes of ethics are good first steps; this early stage of A.I. development “is a crucial time” for the industry to get ahead of the challenges involved when technology displays bias.

Watch Carefully

Broadly speaking, executives think most technologists are aware of these issues. And as it turns out, awareness is one of the most important things that programmers, developers and others can bring to bear in day-to-day efforts to address bias in A.I.

That “awareness” involves recognizing a challenge that’s as much about nuts and bolts as about wider corporate, business or societal concerns. Because the algorithms used in machine learning are almost sure to reflect the subconscious biases of those doing the coding, “the challenge is that we don’t recognize our own biases or when specific subsets of data are biased,” Bear said. “So even though there’s a lot of discussion around bias in A.I., tech professionals are not necessarily aware when that bias is added.”

Because people can’t see what they’re not looking for, the solution for tech pros is to double down on awareness. “Bias in machine learning and A.I. will never be eliminated, since bias in humans will always exist,” Bear added. However, it can be mitigated when a project’s team members actively look out for warning signs and “make sure that the technology is serving its intended purpose.”

For example, bias emerges when machine-learning applications recognize “unintended patterns,” such as Amazon’s screening tool linking gender to technical qualifications. Technologists should be on the lookout for such instances by developing and following specific methodologies based on a core set of ethical guidelines, Bear said.
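What might such a methodology check in practice? One common warning-sign test (our illustration, not a procedure either executive prescribed) is to compare a model’s selection rates across groups, in the spirit of the “four-fifths rule” U.S. regulators use to flag adverse impact in hiring:

    # Illustrative check: compare a model's selection rates across groups.
    def selection_rate(predictions, groups, group):
        """Fraction of candidates in `group` that the model marks as positive."""
        picked = [p for p, g in zip(predictions, groups) if g == group]
        return sum(picked) / len(picked)

    # Hypothetical screening output: 1 = advance the candidate, 0 = reject.
    predictions = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
    groups      = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]

    rate_m = selection_rate(predictions, groups, "m")
    rate_f = selection_rate(predictions, groups, "f")
    ratio = rate_f / rate_m  # disparate-impact ratio

    print(f"selection rate (m): {rate_m:.0%}")
    print(f"selection rate (f): {rate_f:.0%}")
    # The four-fifths rule treats ratios below 0.8 as a red flag.
    print(f"impact ratio: {ratio:.2f} {'FLAG' if ratio < 0.8 else 'ok'}")

A failing ratio doesn’t prove a model is biased, but it’s exactly the kind of signal that should trigger a closer look before release.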

Ellis agrees with that idea. “Programmers and developers should be having conversations with their managers and their teams about potential blind spots early and often in the development process,” she said. “Being intentional from the start about what’s being fed into your system is often far more critical to your future success than any clever work on algorithms afterward.”

What Goes In…

In addition, pay attention to the data used to train a system, Ellis said. The old saying “garbage in, garbage out” may sound corny, but it applies: project teams should scrutinize the data they feed into their machines. Since A.I. can amplify the biases found in most of today’s data, organizations must rigorously train, test and correct their systems from the time their purpose is defined until they’re released.
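In practice, that kind of data hygiene can start with something as simple as an audit of who is represented in the training set, and how, before any model is trained. A minimal sketch (illustrative only; the records and group labels here are hypothetical):

    # Pre-training audit: how are groups represented in the training data?
    from collections import Counter

    # Hypothetical training records: (features, group label, outcome)
    training_rows = [
        ("resume text ...", "m", 1),
        ("resume text ...", "m", 1),
        ("resume text ...", "m", 0),
        ("resume text ...", "f", 0),
    ]

    group_counts = Counter(group for _, group, _ in training_rows)
    positive_rates = {
        g: sum(1 for _, grp, y in training_rows if grp == g and y == 1) / n
        for g, n in group_counts.items()
    }

    print("group counts:   ", dict(group_counts))
    print("positive rates: ", positive_rates)
    # Large gaps in either number will be inherited by any model trained on
    # this data unless the team rebalances or corrects it first.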

“Ultimately, the best-performing A.I. systems must be trained on vast and diverse inputs,” Ellis said. That will “ensure the machine represents all the voices you’ll expect it to understand in the future.”

The emphasis on data quality will only increase, Bear believes. “As A.I. becomes more pervasive in applications, there will also be a lot more emphasis on the validity of the data introduced to drive these decisions,” she said. “We’re already seeing this today as people start to question the efficacy of historical and social data.”
