The Artificial State
Political marketing in the age of algorithmic recommendation systems is edging toward coercion
[Note: I broke this one into two parts because it was longer than I wanted. The second part is coming Thursday, along with a much lighter series, “Notes from New Orleans,” that I wrote while visiting the city last week.]
The day before the U.S. presidential election, Harvard professor, historian, and journalist Jill Lepore published an article titled “The Artificial State” in The New Yorker. Although I didn’t read it when it first came out, it was perfectly timed. The piece pulls the veil off a common yet invisible practice in electoral politics: using accounting firms, data analysts, and marketing executives to sway election outcomes. Their tried-and-true strategies are as old as the field of public relations itself, yet they remain mostly hidden from the general public, and understandably so. The people at the fair aren’t supposed to see how the hot dogs are made. But in our current era of highly sophisticated, AI-assisted technology, I think it’s about time there was a lot more transparency around these processes, because the algorithms now operate in ways so complex that they are opaque even to the marketers themselves. The integration of advanced data-mining techniques with the strategic deployment of algorithmic recommendation systems has pushed these once relatively benign marketing practices to the edge of ethical boundaries. AND I DON’T FEEL LIKE WE’RE TALKING ABOUT THIS ENOUGH.
Lepore traces the history of election marketing by machine back to 1959, when “the Democratic Party, desperate to win back the White House, considered retaining the services of a startup staffed by computer scientists, political scientists, and admen, whose ‘People Machine’ could run simulations on an artificial electorate and tell a party’s nominee what to say, to whom and when.” That startup was Simulmatics, an early data analytics company that pioneered the use of behavioral science and computer modeling to influence voter behavior. (The televised-computer spectacle came even earlier: on Election Night 1952, CBS used Remington Rand’s UNIVAC to forecast the results live on air, a PR stunt that marked the first time many Americans had ever seen a computer.)
Fast-forward to today, and the People Machine is child’s play compared to contemporary AI-driven algorithms that mine data and make independent decisions about how to market to people. Yes, independent decisions, as in all on their own, without a human dictating each individual choice. Most people don’t realize that algorithms can make decisions no human explicitly programmed, but it’s true, and that’s precisely why Yuval Noah Harari, author of Nexus: A Brief History of Information Networks from the Stone Age to AI (2024), argues that we shouldn’t call this artificial intelligence at all. We should call it “alien intelligence,” because we have reached the point where algorithms reason on their own in ways that are not transparent to us.
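To make that concrete, here is a minimal sketch of the kind of mechanism I mean: a toy “multi-armed bandit” message selector, written in Python with made-up messages and click rates. (This is my own illustration of the general technique, not any campaign’s actual system.)

```python
import random

# Toy epsilon-greedy bandit: the system "decides" which campaign message
# each person sees, based only on past click feedback. No human specifies
# which message goes to whom; the policy emerges from the data.
# (Hypothetical messages and click rates, purely for illustration.)

messages = ["economy", "immigration", "healthcare"]
clicks = {m: 1 for m in messages}  # smoothed click counts
shows = {m: 2 for m in messages}   # smoothed impression counts

def choose_message(epsilon=0.1):
    """Usually show the best performer; occasionally explore at random."""
    if random.random() < epsilon:
        return random.choice(messages)
    return max(messages, key=lambda m: clicks[m] / shows[m])

def record_feedback(message, clicked):
    """Update the statistics; future 'decisions' drift toward what worked."""
    shows[message] += 1
    clicks[message] += int(clicked)

# Simulate an audience where (hypothetically) one message just clicks better.
true_click_rate = {"economy": 0.05, "immigration": 0.20, "healthcare": 0.08}
for _ in range(5000):
    m = choose_message()
    record_feedback(m, random.random() < true_click_rate[m])

total = sum(shows.values())
print({m: round(shows[m] / total, 2) for m in messages})
# The highest-engagement message ends up dominating the impressions.
```

Nobody in that sketch ever writes a rule that says “show this message to that person.” The policy emerges from feedback. Scale it up to thousands of micro-targeted variants and billions of interactions, and “the algorithm decided” stops being a metaphor.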
To be clear, the algorithms are NOT sentient. They can’t feel emotion, nor are they alive. But they are the first communication technology in human history with the ability to make an actual decision, and to have an impact (one that encompasses coercion!), without a human explicitly directing them to do so, and that is concerning.
Consider the man who died by suicide because an AI chatbot encouraged him to. It sounds like something out of a horror film, and at the same time it sounds too far-fetched to be real. How could a chatbot talk a human being into killing himself? In reality, it’s both: horrifying and true. We are creeping into dangerously scary territory with the coercive capacity of these unregulated algorithms, and the adults in charge are in denial about where we are going, because... how? It’s like we’re contending not only with unprecedented computational power but also with a pervasive denial about what these algos can and will do.
Consider the genocide in Myanmar that Facebook “accidentally” helped enact (ANOTHER THING I CANNOT BELIEVE WE JUST GLOSS OVER). The violence was stoked by Facebook algorithms that pushed hateful messaging about the Rohingya, a minority group in Myanmar, inundating people with the rhetoric and weaponizing their fear and hatred, all under the guise of engagement. The algorithms kept feeding people the content they interacted with most, which turned out to be the most sensational content, with no discernment for distaste, racism, or potential harm (obviously). We know this, yet journalists keep reporting on the white supremacist rallies happening around the country as if they’re either an anomaly or a byproduct of the “Trump Era,” with no acknowledgment that the “Trump Era” is also the era of the online influence campaign.
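For anyone who thinks “the algorithm did it” sounds hand-wavy, here is roughly how that feedback loop works, as a toy engagement-ranked feed in Python. (A simplified sketch with invented “sensationalism” scores; it is emphatically not Facebook’s actual ranking code.)

```python
import random

# Toy engagement-ranked feed: posts are ranked purely by accumulated
# engagement, and more sensational posts convert views into engagement
# at a higher rate. Note what's missing: any notion of harm.
# (All scores are invented for illustration.)

posts = [{"id": i, "sensationalism": random.random(), "engagement": 1.0}
         for i in range(20)]

def rank_feed():
    """Rank strictly by past engagement: popularity begets reach."""
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

def simulate_session():
    """One user scrolls the top of the feed and maybe reacts."""
    for position, post in enumerate(rank_feed()[:5]):
        view_prob = 1.0 / (position + 1)  # top slots get the most views
        if random.random() < view_prob:
            # Sensational content is (in this toy model) more engaging.
            if random.random() < post["sensationalism"]:
                post["engagement"] += 1

for _ in range(10_000):
    simulate_session()

top = rank_feed()[:3]
print([(p["id"], round(p["sensationalism"], 2), p["engagement"]) for p in top])
# After enough sessions, the most sensational posts own the top slots.
```

Run it and the top of the feed fills up with the most sensational posts, not because anyone wanted that, but because engagement was the only objective anywhere in the loop. Swap “sensationalism” for “hatred of a minority group” and you have Myanmar in miniature.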
In her article, Lepore highlights how highly sophisticated data mining, data analytics, and algorithmic targeting entered political campaigning in earnest with Donald Trump’s win in 2016, which many called a fluke but which is probably better explained by his campaign’s use of the now-defunct firm Cambridge Analytica.
I read Cambridge Analytica whistleblower Christopher Wylie’s book on how this kind of data mining, coupled with the algorithms, amounts less to marketing and more to direct persuasion, precisely because of how sophisticated these technical systems are. (If you have time to read a mind-blowing exposé, please do. It’s called Mindf*ck: Cambridge Analytica and the Plot to Break America, 2019.)
And I just can’t believe there’s not more conversation about what happened in and since 2016. Actually, no. I know what the issue is… (Thoughts continue in Part 2)