Artificial Intelligence to Examine Us

The hype over the last few months regarding generative AI has been quite interesting. I’ve facilitated a variety of discussions (and presented some) with faculty, staff, and other librarians regarding these tools, and I’ve been following the public discourse. The thing I keep coming back to, and which I don’t feel gets the attention it merits, is the potential to consider these newer AI systems, LLMs, and the like as tools for examining human cultures, behaviours, and society.

Most conversations I’ve seen about the utility of these tools revolve around the novelty of what they produce (text outputs, image outputs, coding assistants, etc.) and the ramifications. That garners debate on the ethics of how they work and how those outputs may be used, legitimately or otherwise, and it certainly deserves critical analysis. In many ways, though, these systems are banal: they’re essentially instructions for identifying and repeating patterns.

What is incredible about them, to me, is not just that they identify patterns in human behaviour (all the material they’re trained on), but that once those patterns are identified, it becomes possible to generate new content that believably replicates them. That seems enormous for better studying and understanding ourselves.
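To make that “identify patterns, then replicate them” idea concrete, here is a deliberately tiny sketch: a bigram Markov chain that learns which word tends to follow which in a toy corpus, then samples new text from those learned patterns. The corpus and names here are purely illustrative, and real LLMs use neural networks trained on vastly more data, but the principle is the same.

```python
import random
from collections import defaultdict

# A toy corpus standing in for the human-produced text a model is trained on.
corpus = (
    "the library hosts a discussion about generative ai . "
    "the library hosts a workshop about ai tools . "
    "faculty join a discussion about ai and society ."
)

# Identify the patterns: record which words follow which.
transitions = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

# Replicate the patterns: start with a word and keep sampling plausible successors.
def generate(start="the", length=10):
    word = start
    output = [word]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate())
# Might print, for example: "the library hosts a workshop about ai and society ."
```

Even this trivial model produces sentences that look like its training text. The interesting part, as argued above, is what the learnability of those patterns says about us, not about the algorithm.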

Yesterday, when the Center for AI Safety posted its Statement on AI Risk, I found its underlying premises and motivation concerning. The statement (signed by many business leaders, academics, and others with significant stakes or interests in AI) says:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

On the surface this seems agreeable; however, there’s a lot to unpack in that statement. I’m not interested in attempting that now; I think AI is murkier than it’s presented there. Instead, I’d like to return to my initial point: being human-created, AI is essentially mirroring us. The statement’s formulation is disingenuous and abdicates responsibility. It starts off on the wrong foot.

If AI systems produce results or get used in ways that create a real possibility of exterminating humankind, then we need to seriously examine and change the patterns of human behaviour that enable that. What is it about the hundreds of thousands of social media conversations scraped to train a model that repeats in such regular ways that it is possible to predict a new utterance humans accept as the way we’d expect a human to talk? What is that in all the Wikipedia entries? What is that in all the digitized magazine and newspaper advertisements? Who would have thought that human culture is so incredibly predictable that non-living, non-thinking, non-feeling algorithms can give the appearance of something indistinguishable from a human?

The fear expressed in the statement is not a fear of AI at all. It’s a fear of humans destroying ourselves. If the people developing AI treat it as an object of otherness, promote it as such, and encourage discourse about it as such, and if we all adopt that discourse, then we distance ourselves from ourselves and cannot understand what we are doing. I hope to see more serious research projects in the humanities and social sciences that treat AI as the lens it is and use it to examine our cultures and behaviours.