Mathematician Cathy O'Neil's humanely skeptical mind is worth reading

© 2017 Peter Free

 

10 October 2017

 

 

If you hate pseudoscience

 

Put algorithm-basher Dr. Cathy O'Neil on your reading list.

 

She's mathbabe in the blogosphere.

 

Her book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2016), exposes the ways in which pretend-science — in the form of mathematical algorithms commonly used in business — subjugates swaths of people.

 

 

A representative example of a dangerous algorithm

 

O'Neil, writing in Bloomberg View:

 

 

Artificial intelligence keeps getting creepier. In one controversial study, researchers at Stanford University have demonstrated that facial recognition technology can identify gay people with surprising precision, although many caveats apply.

 

The lead author of the “gaydar” study, Michal Kosinski, argues that he’s merely showing people what’s possible, so they can take appropriate action to prevent abuse. I’m not convinced.

 

This is an abuse of the public’s trust in science and in mathematics. Data scientists have an ethical duty to alert the public to the mistakes these algorithms inevitably make -- and the tragedies they can entail.

 

Consider the gaydar algorithm. A government could use it to target civilians, declaring certain people “gender atypical” and “criminally gay” because the black box says so -- with no appeals process, because it’s “just math.”

 

We’ve already seen this very scenario play out in other contexts . . . [referring to examples explained in Weapons of Math Destruction.]

 

© 2017 Cathy O'Neil, 'Gaydar' Shows How Creepy Algorithms Can Get, Bloomberg View (25 September 2017) (excerpts)

 

 

Why would someone be experimenting with a Gay Person Recognition System in the first place?

 

Kindness of heart?

 

Love for all humankind?

 

 

And a clever categorization of artificial intelligence futurists

 

O'Neil separates futurists into four quadrants, defined by two axes:

 

 

(a) their confidence that an artificial intelligence (AI) leap forward (singularity) is near

 

and

 

(b) their concern that such a development might go awry from a societal perspective.

 

 

She starts with a reminder:

 

 

[A]t the heart of the futurism movement lies money, influence, political power, and access to the algorithms that increasingly rule our private, political, and professional lives.

 

[In Quadrant 1]

 

. . . people who believe in the singularity and are not worried about it.

 

With enormous riches and very few worldly concerns, these futurists focus their endeavors on the only things that could actually threaten them: death and disease.

 

[W]hether it is the nature of existence in the super-rich bubble, or something distinctly modern and computer-oriented, futurism of this flavor is inherently elitist, genius-obsessed, and dismissive of larger society.

 

[In Quadrant 2]

 

. . . people who believe in a singularity but are worried about the future.

 

As a group these futurists are fundamentally sympathetic figures but woefully simplistic regarding current human problems. If they are not worried about existential risk, they are worried about the suffering of plankton, or perhaps every particle in the universe.

 

[In Quadrant 3]

 

. . . the techno[-]utopianists. They don’t specifically talk about the singularity, but they are willing to extol the many virtues of technology to people who want to believe them.

 

Far from actually fixing problems, they are the type of futurist that is most obviously selling something: a corporate vision, blind faith in the titans of industry, and the sense of well-deserved success.

 

[In Quadrant 4]

 

Finally, the people who do not believe in singularities, but who are still worried. This is my group.

 

In the fourth quadrant, we have no money and not much influence. We are majority women, gay men, and people of color.

 

For the average person there is no difference between the singularity as imagined by futurists in Q1 or Q2 and a world in which they are already consistently and secretly shunted to the “loser” side of each automated decision.

 

For the average person, it doesn’t really matter if the decision to keep them in wage slavery is made by a super-intelligent AI or the not-so-intelligent Starbucks Scheduling System.

 

The algorithms that already charge people with low FICO scores more for insurance, or send black people to prison for longer, or send more police to already over-policed neighborhoods, with facial recognition cameras at every corner—all of these look like old fashioned power to the person who is being judged.

 

© 2017 Cathy O'Neil, Know Thy Futurist, Boston Review (25 September 2017) (excerpts)

 

 

The moral? Human stupidity is infinite, and so is its nastiness

 

If you like knowing the math-clothed tricks that the Avarice Establishment plays on us, put Cathy O'Neil on your reading list.