2024i22, Monday: Commonplace #3

"I'll just stick it here for later reference...". Photo by Kelly Sikkema on Unsplash.

There's a bunch of LLM/AI stuff this time. It just happens that this is what crossed my desk and caught my eye.


I made this: by John Siracusa. John used to write insanely long reviews of new Mac OS releases. They were essential reading for Mac nerds (as if it weren't obvious: guilty as charged). Always thoughtful, and a great writer; what's not to like? This piece is characteristic, on the point (which seems to be this issue's focus) of moving beyond purely technical discussions of AI and LLMs and necessarily looking at them from a broader perspective, arising from the way that "in its current state, generative AI breaks the value chain between creators and consumers".

Where is the act of creation?
This question is at the emotional, ethical (and possibly legal) heart of the generative AI debate. I’m reminded of the well-known web comic in which one person hands something to another and says, “I made this.” The recipient accepts the item, saying “You made this?” The recipient then holds the item silently for a moment while the person who gave them the item departs. In the final frame of the comic, the recipient stands alone holding the item and says, “I made this.”
This comic resonates with people for many reasons. To me, the key is the second frame in which the recipient holds the item alone. It’s in that moment that possession of the item convinces the person that they own it. After all, they’re holding it. It’s theirs! And if they own it, and no one else is around, then they must have created it!
This leads me back to the same question. Where is the act of creation? The person in the comic would rather not think about it. But generative AI is forcing us all to do so.


ChatGPT is an engine of cultural transmission: by Henry Farrell. Henry's another go-to writer (I seem to have a lot of them). Here, he talks about recent academic thought about the way that "LLMs operate in a space of information that is disconnected from base reality": in other words, it's inaccurate to talk about them "hallucinating" because there's no "real" to contrast with what appears "unreal" to us in the hallucination. And herein of course lies part of the risk:

That is why there are sharp limits on the ability of LLMs to map and understand the world. LLMs are perfect Derrideans - “il n'y a pas de hors-texte” is the most profound rule conditioning their existence. If a piece of information isn’t available, at least by implication, somewhere in the corpus of text and content that they have been trained upon, then they are incapable of discovering it. And some things that are very obvious to humans are very hard for them to discover.
You can see some of the consequences if you provide LLMs and humans with descriptions of real world physical problems, and ask them to describe how these problems might be solved without the usual tools. For example, Gopnik and her co-authors have investigated what happens when you ask LLMs and kids to draw a circle without a compass. You could ask both whether they would be better off using a ruler or a teapot to solve this problem. LLMs tend to suggest rulers - in their maps of statistical associations between tokens, ‘rulers’ are a lot closer to ‘compasses’ than ‘teapots.’ Kids instead opt for the teapot - living in the physical universe, they know that teapots are round.
This nicely deflates a lot of the more exuberant public OMG-AGI rhetoric. Of course, LLMs can improve their ability to answer this kind of question. Sometimes this is because non-obvious causal relationships are turned into text they can assimilate as, e.g., people start to write on social media about the ruler-teapot example. Sometimes it is because the text contains more latent information about such problems than you might think, and LLMs are getting better at uncovering that information. But - and this is Gopnik’s point - they can’t ever discover these relationships in the ways that human beings can - by trying things out in the real world, and seeing what works. Again - all they can see are the tokens for ‘soft’ and ‘drink’ and ‘stand’ - not the soft drink stand itself (other forms of ML - especially combined with robotics - are not subject to the same fundamental limitations).


AI as Algorithmic Thatcherism: by Dan McQuillan. A bit closer to the political bone here, perhaps. But thought-provoking:

Real AI isn't sci-fi but the precaritisation of jobs, the continued privatisation of everything and the erasure of actual social relations. AI is Thatcherism in computational form. Like Thatcher herself, real world AI boosts bureaucratic cruelty towards the most vulnerable. Case after case, from Australia to the Netherlands, has proven that unleashing machine learning in welfare systems amplifies injustice and the punishment of the poor. AI doesn't provide insights as it's just a giant statistical guessing game. What it does do is amplify thoughtlessness, a lack of care, and a distancing from actual consequences. The logics of ranking and superiority are buried deep in the make-up of artificial intelligence; married to populist politics, it becomes another vector for deciding who is disposable.
But what about all the potential 'AI for good' - should we abandon all that hope just because AI has this dark side? The problem with the promised bounty of AI is that, like a mirage, it starts to fade from view the closer you get. The claimed generalisation from computation to the shifting complexity of our lived experience never seems to quite stack up. What comes into focus instead is AI's material dependencies. Thanks to its insatiable appetite for data, current AI is uneconomic without an outsourced global workforce to label the data and expunge the toxic bits, all for a few dollars a day. Like the fast fashion industry, AI is underpinned by sweatshop labour. Above all, AI is a very physical technology: it consists of vast server and data centres, packed with computers that burn energy and generate heat. These hyperscale warehouses suck up vast quantities of cooling water, depleting whatever communities and ecologies are unlucky enough to host them.


Specific gratitude is not the same thing as wholesale approval: McKinley attempts to hack together some sort of atheist spirituality: by McKinley Valentine. Moving away from LLMs, and onto thankfulness. The Whippet, irregularly put out by McKinley Valentine, never fails to delight and provoke thought. This bit - right down at the bottom, after stuff about rats with metacognition, ancient Greek mouse names, and the way the moon was created - is great. As a confirmed godbotherer, one of the things that angers me most about some (thankfully not too many) of my fellow believers is an apparent refusal to recognise that wisdom is found everywhere, and not just in those who profess what we do. Stupid, arrogant and frustrating. McKinley's thoughts on gratitude here represent an approach I esteem and support.

If I was at the beach, swimming between the flags [the lifeguard-patrolled zone], and I started drowning, and a lifeguard saved me, I would be deeply, intensely grateful to them, as is normal.

This lifeguard has no personal feelings towards me, doesn't like or dislike me, is just doing the exact thing that's expected of them – indeed they would have actively got in a huge amount of trouble and been publicly shunned if they hadn't tried to save me. They chose to be a lifeguard (it's an imperfect analogy) but by the day of my near-drowning, they basically didn't have a choice but to intervene - they're following the rules of the system they operate in.
They didn't save my life out of personal kindness to me – but none of that changes how grateful I would be.
And if I later found out they were a garbage person, an abusive spouse or a mob enforcer, I would feel very weird about that, I would not defend them, but I don't think it would stop me from being permanently grateful that they saved my life. That's locked in. The gratitude and the weirdness would just have to live with each other.

McKinley's words remind me, too, of a bloke I used to play music with when I was in my 20s. We'd be walking down the street and get to a zebra crossing. As I still do, I'd nod thanks to the cars that stopped for us. And this would actively annoy him: "Why are you thanking them for something they have to do?" Bonkers. Of course I was thanking them. They had to legally, sure; but they didn't have to in practice.


More in the queue. Maybe Wednesday, if I can?