Book Review: O’Neil’s Weapons of Math Destruction

O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown, 2016.

Cathy O’Neil tried to warn us. Through her book Weapons of Math Destruction (WMD for short), published nearly two months to the day before the 2016 US presidential election, O’Neil sounded the alarm over the ways social media platforms, political parties, and corporations were using data mining, machine learning, and digital methods for hyper-individualized voter targeting to alter US elections. Unintentionally prophetic, O’Neil fixated on Facebook, railing against the platform for its experiments to influence voter behavior. She even wrote, “The activity of a single Facebook algorithm on Election Day, it’s clear, could not only change the balance of Congress but also decide the presidency” (191).

Yet many of us, myself included, spent the time between September 6, 2016 (WMD’s publication date) and November 8, 2016 scrolling past campaign ads to click on the 538.com article predicting the 2016 US electoral map, the article that Facebook kept promoting in our feeds. We paid no attention to the corrosive damage wrought on our civic lives and institutions by our political parties micro-targeting voters, by Facebook circulating “news” because users were likely to click on it (rather than because of its veracity), and by equations and digital procedures attempting to predict and direct voters’ behaviors, all of which occurred during the 2016 presidential election (see Carole Cadwalladr’s “The Great British Brexit Robbery” for more) and all of which O’Neil cautions readers about in WMD. Fortunately, we can still read WMD, and we can thus still become informed about one of the most destructive forces in our civic and public lives, the force O’Neil tries so desperately to warn us about: algorithms.

O’Neil, drawing upon her own experiences as an algorithm programmer (experiences she details in Chapter 2 of WMD), begins WMD by explaining the basic design and functioning of digital algorithms. She explains that algorithms are intended to be neutral strings of procedures, calculations, and assessments. Algorithms collect data about a range of actors—businesses, shoppers, traffic, wildlife, other algorithms—and feed that data into their procedures and calculations. The algorithms then use those calculations to make assessments of and predictions about those actors. At base, algorithms attempt to provide their users (that is, those who can afford to create or buy algorithms) with tools to improve their existing services and procedures for the assessed actors.

The problem is that algorithms have increasingly been designed to evaluate us as citizens, making assessments of and predictions about our daily activities. Drawing from her own experiences, O’Neil explains that programmers design their procedures and calculations to collect data on, assess, and make predictions about human behavior. More precisely, programmers draw upon their personal assumptions and prejudices about human behavior in order to design their algorithms. Programmers, it should be pointed out, are not trained in directing or evaluating human behavior; they are trained to design equations and code. So, despite their intentions of neutrality, programmers code into algorithms their biases, perceptions, stereotypes, and ideologies: they assume, for example, that their own behaviors should be the baseline for normal or acceptable behavior in their algorithms’ calculations and predictions. O’Neil adds that most programmers are white, cisgender males from middle- and upper-class backgrounds.
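To make this mechanism concrete, here is a minimal sketch, my own and not from WMD, of a toy hiring-score function. Every field name, weight, and threshold is hypothetical, chosen only to show how a programmer’s assumptions about “normal” behavior become an algorithm’s seemingly neutral math:

```python
# A deliberately simplified, hypothetical scoring function. Every field
# name, weight, and threshold below is invented for illustration. The
# point: each rule reads as neutral math, but each one encodes the
# programmer's assumptions about what "normal" behavior looks like.

def hiring_score(applicant: dict) -> float:
    score = 0.0
    # Assumption: a "reasonable" commute is under 30 minutes -- a proxy
    # that quietly penalizes people priced out of housing near work.
    if applicant["commute_minutes"] < 30:
        score += 1.0
    # Assumption: unbroken employment signals reliability -- penalizing
    # caregivers, the formerly ill, and the formerly incarcerated.
    if applicant["employment_gap_months"] == 0:
        score += 1.0
    # Assumption: school prestige predicts job performance.
    score += 0.5 * applicant["school_rank_percentile"]
    return score

# The output looks objective (a single number), but every branch above
# is a value judgment, now hidden inside the "neutral" algorithm.
print(hiring_score({"commute_minutes": 45,
                    "employment_gap_months": 6,
                    "school_rank_percentile": 0.4}))  # -> 0.2
```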

Exacerbating the potential problems generated by algorithms assessing human behavior based on programmers’ biased assumptions, corporations black-box their algorithms, shielding them from any external oversight. As a matter of policy, most corporations do not reveal any information about their algorithms. O’Neil notes that members of the general public do not understand how and why they and their behaviors are being measured and assessed, or even what data about them is being collected and how. Compounding the public’s lack of knowledge and oversight, corporations pitch their algorithms as tools for eliminating human bias from decision-making. Thus, algorithms monitor and influence our behavior as citizens while we have little understanding of what the algorithms are doing and how they are influencing those who make vital decisions about our lives and roles as citizens.

Based on the above-outlined general concerns about algorithms (further detailed in Chapters 1 and 2 of WMD), O’Neil examines our civic institutions and the algorithms they have increasingly adopted. For O’Neil, countless civic institutions have turned to algorithms to lower their operating costs, to make themselves more efficient, and to introduce neutral measures for their assessments of and decisions about citizens. O’Neil studies several key civic institutions, dedicating a chapter of WMD to each. She examines:

  • Higher education, both university rankings (Chapter 3) and for-profit college marketing programs (Chapter 4)
  • The US justice system (Chapter 5)
  • HR hiring systems (Chapter 6)
  • Programs for managing minimum-wage work schedules (Chapter 7)
  • The banking industry (Chapter 8)
  • The insurance industry (Chapter 9)
  • Political parties (Chapter 10)

In nearly every institution, O’Neil discovers that biased and black-boxed algorithms have been given control over central functions, particularly decisions about how citizens will be evaluated and what services citizens will receive based on those evaluations.

Many algorithms that are designed for civic institutions aim to measure and make predictions about citizens’ behaviors, such as an individual’s potential for recidivism or for paying back loans. Based on programmers’ biases and assumptions, the algorithms view the activities and behaviors of the most privileged as the “safest” and those of the already disadvantaged as the “riskiest.” Once they start using the algorithms, the institutions, designed to engender equality and to balance society, become biased against the very individuals they were constructed to help, assigning them longer prison sentences and higher interest rates. Put differently, algorithms take many of the human biases that civic institutions were designed as bulwarks against and sneak them, via programmers and corporations, back into the core services those institutions provide.

For example, one of the key institutions that O’Neil examines is local criminal justice systems (Chapter 5). Facing rising costs and pressure to prevent more crimes, many police departments have purchased algorithms to make patrolling more efficient and precise. The algorithms direct the patrolling and stationing of police officers in neighborhoods, placing officers in areas at the times when the algorithm predicts a higher likelihood of crimes occurring. Once activated, the algorithm’s biases get multiplied by the biases of those using the algorithms.

Once police forces turn over their patrolling systems to the algorithms, they start patrolling the neighborhoods they already “believe” to be more crime-riddled: that is, less affluent, minority neighborhoods. As officers make arrests in those neighborhoods, they create data suggesting those neighborhoods are more dangerous. The algorithms then recommend sending more officers into those neighborhoods (and thus even fewer officers into affluent neighborhoods). The seemingly neutral increase in patrols in less affluent neighborhoods, and decrease in more affluent ones, produces more arrests in less affluent areas, which in turn strengthens the algorithms’ recommendations to send even more officers into those neighborhoods. From there, the algorithms provide analyses to judges suggesting that individuals from less affluent neighborhoods (that is, the recently more-policed areas) live in higher-risk areas and thus should receive longer sentences to help alleviate the high crime rates in those neighborhoods. Through the dual (and black-boxed) prejudices of the designers and users of algorithms, the algorithms create destructive feedback loops that contribute to economic and social inequality and injustice.
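The feedback loop O’Neil describes can be illustrated with a toy simulation, again my own rather than the book’s. Assume two neighborhoods with an identical underlying crime rate and a patrol-allocation “algorithm” that sends officers wherever past recorded arrests are highest; every number below is invented for illustration, and what diverges is recorded crime, not actual crime:

```python
import random

random.seed(42)

# Two neighborhoods with the SAME underlying crime rate per patrol.
TRUE_CRIME_RATE = 0.1

# Recorded arrests start with a small historical bias against one area.
recorded_arrests = {"affluent": 5, "less_affluent": 10}

for year in range(10):
    total = sum(recorded_arrests.values())
    for hood, past in list(recorded_arrests.items()):
        # The "algorithm": allocate 100 patrols in proportion to past arrests.
        patrols = round(100 * past / total)
        # More patrols mean more crimes get RECORDED in that neighborhood,
        # even though the underlying rate is identical in both.
        new_arrests = sum(1 for _ in range(patrols)
                          if random.random() < TRUE_CRIME_RATE)
        recorded_arrests[hood] += new_arrests

print(recorded_arrests)
# Typical output: the less affluent neighborhood's recorded arrests stay
# roughly double the affluent neighborhood's, and the absolute gap keeps
# widening -- the data "confirms" the initial bias it was seeded with.
```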

Fortunately, O’Neil provides research methods for critiquing the destructive effects of algorithms. First, O’Neil renames algorithms “Weapons of Math Destruction,” or WMDs for short. She associates algorithms with the panic-inducing WMDs that were the focus of the early-2000s invasion of Iraq. O’Neil draws the comparison to elicit the same level of public panic about algorithms. Through her research and the provocative renaming, O’Neil wants average citizens to be as scared of algorithms as they were of the (fictional) explosives they were told Saddam Hussein possessed. The only difference O’Neil sees between the (fictional) WMDs in Iraq and the algorithms employed by civic institutions is that algorithms might spread into our economy and culture and leave all of us as collateral damage.

Building upon her provocative re-situating of algorithms, O’Neil provides a framework for examining algorithms. The framework, which she uses in her own research, enables individuals to outline what she calls an algorithm’s “bomb parts”: its central data-collection points, measurements, procedures, and assessments. After identifying an algorithm’s bomb parts, O’Neil explains that individuals should analyze the algorithm according to three factors (a toy encoding of this rubric follows the list):

  • Opacity – how much, or how little, those affected by an algorithm know about the algorithm and how it functions
  • Scale – how many people an algorithm might impact
  • Damage – the amount of social, emotional, and/or financial harm an algorithm can cause (31-33).
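As a reading aid, O’Neil’s three factors might be captured in a small data structure one could fill in while auditing a given algorithm. This is my own hypothetical rendering, not O’Neil’s; the numeric scales and the flagging rule are invented:

```python
from dataclasses import dataclass

@dataclass
class WMDAssessment:
    """O'Neil's three factors (31-33) as a checklist. The numeric
    scales and the flagging rule are my own invention, not hers."""
    name: str
    opacity: int   # 0 = fully documented, 10 = total black box
    scale: int     # rough number of people the algorithm touches
    damage: int    # 0 = negligible harm, 10 = life-altering harm

    def is_wmd(self) -> bool:
        # Hypothetical rule: opaque, wide-reaching, and harmful.
        return (self.opacity >= 7
                and self.scale >= 1_000_000
                and self.damage >= 7)

# Example: a recidivism-risk score of the sort discussed in Chapter 5.
recidivism_score = WMDAssessment("recidivism risk score",
                                 opacity=9, scale=2_000_000, damage=9)
print(recidivism_score.is_wmd())  # True
```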

If there is one gap in WMD, it is the book’s lack of engagement with the digital platforms—such as Twitter and HR hiring portals—that algorithms are part and parcel of. When examining the algorithms shaping Facebook and the portals used by banks, insurance providers, and the justice system, the book never considers the actual front-end platforms that algorithms collect data from and affect citizens through. This gap matters not only because platforms are the focus of this special issue but because one has to wonder if the general public’s indifference toward, and potential ignorance of, the inner workings of algorithms stems from the ways the platforms, which algorithms manage and feed off, have entangled themselves in our lives. Digital platforms have, as O’Neil herself notes, become dominant public forums. They have also become crucial tools for businesses and institutions. The algorithms used by businesses and institutions have lowered costs, increased efficiency, and made many activities simpler and more accessible. The convenience of algorithms makes us more apt to unquestioningly integrate them into our civic, social, and professional lives. Can we see beyond retweet-driven protests, car insurance provided in minutes, and “likes” of puppy videos? O’Neil herself notes that one survey found that 62 percent of people were unaware that Facebook manipulated its algorithms for showing posts in feeds (184). Algorithms and digital platforms are deeply intertwined. As activists, scholars, and teachers attempt to, as O’Neil calls us to do, “impose human values on algorithms” (207), we also need to critique digital platforms and the general public’s affinity for them. Only when critiqued together can algorithms and digital platforms be exposed for their damage and destruction.

Regardless of WMD’s shortcomings when it comes to directly engaging digital platforms, O’Neil provides methods for re-situating and researching algorithms that we, as rhetoric scholars, should adopt. We should look to re-situate algorithms both in our classrooms and in our public advocacy, and we should critique, and advocate about, the real and potential harms of black-boxed algorithms as they take over our key institutions. We can do so by weaving O’Neil’s methods into our public and digital rhetoric pedagogies, research methods, and criticism. The book provides researchers working on digital rhetorics avenues to expand their concerns into civic issues and institutions. It similarly offers researchers working on public rhetorics tools for incorporating concerns about algorithms into their work on civic and public rhetorics. Through such work, we can, much like O’Neil, produce scholarship that heightens public awareness of algorithms’ and digital platforms’ civic, social, and material damage.

In sum, Weapons of Math Destruction expands our methods and pedagogies for critically engaging with digital platforms and algorithms. The book enables, and urges, us to engage new sites and new issues: civic institutions, and the effects of algorithms on those institutions, on the people those institutions serve, and on the democracy those institutions enable. More generally, the book forwards a vital yet largely unnoticed civic issue, the corrosive effects of algorithms. If we engage with O’Neil’s criticisms and adopt her methods, we can prepare ourselves for the next time algorithms threaten our democracy.

Works Cited

Cadwalladr, Carole. “The Great British Brexit Robbery: How Our Democracy Was Hijacked.” The Guardian, 7 May 2017.

O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown, 2016.