AI Idealism
I’m really curious about framing the critique of technology through the Idealist vs Materialist lens, as per Jathan Sadowski. So after listening to another raft of news about AI valuations, I was trying to think about the Idealism of AI – and I’m wondering if AI's boosters forgot to do the homework.
The early Internet formed around the idea of being the frontier – this new digital world, a chance to reimagine and reinvent everything. Social Media had the idea of connectedness – of sharing, inclusion and global togetherness. While neither lived up to these ideals, there was at least something to graft the hype onto.
With AI, what is the Ideal? I can’t spot it.
Every proposed ideal future feels like a dystopia. Then there are the apocalyptic overtones and alternatives that AI prophets dole out in equal measure. The result is a complete lack of Idealism to drive the hype machine or connect with people’s consciousness (or their sub- or unconsciousness).
There is nothing in the AI futures that inspires, except for the fantasy of some congress of human and machine[1]. Every proposed rosy future has an equivalent Black Mirror episode where it all goes horribly wrong.
What you hear from AI prophets is an ideal world free from labour and effort, where AI and robots are capable of doing everything. Where all endeavour has become so commoditised that we need to rethink society. It’s also a world where human effort is worthless, and we are unmoored from our place in society and the world. It is a neocolonial state where we, the people, become enslaved by the robot masters – that elite sphere of men who control the machines from their underground bunkers. And that’s the ideal?!?
The big sell is that you have no place in the world anymore, that anything that provides you with the slightest shred of worth or value is worthless and easily replaced by the ever-creeping colonial power. You write to make meaning? AI can do that. You draw and create images? AI can do that. It's that world, or the apocalypse. One where the world is destroyed by some paperclip-replicating AI, or one where an AI takes control of nuclear missiles and destroys us all because someone misprogrammed the concept of peace and balance into the algorithm.
The ideal of AI is not the one of the early Internet out on the frontier, where you can explore some great expanse. No, AI has explored for you and deemed you unworthy. AI does not connect you with the world; it provides summaries of it so you don't have to engage, keeping you isolated and unquestioning. The AI ideal is not one where the person or their experience is valued – it's one where you have no value.
Then there’s the Materialist critique – what can the AI do? Well, so far, it can do parlour tricks. It’s a modern-day Mechanical Turk[2] where, in very specific circumstances, it can generate text (closely based on the training data it ingested) and provide passable responses to a flawed evaluation system. What it’s able to do is trick us into believing that there is an inherent “ability”, as opposed to a statistically “more probable guess”. They have fooled us into believing that the ability to respond to a prompt correctly[3] is somehow equivalent to intelligent behaviour. The only materialist success AI can claim is its ability to delude and fool people into assuming this is anything more than fancy If-This-Then-That programming. That the “chat” window is anything more than a fancier command-line prompt. That somehow this “industry” is worth hundreds of billions of dollars of investment and expenditure, while the planet fucking burns. That’s the material success of AI – getting us to accept and invest(!) in this new colonial project where the most likely outcome is financial ruin and environmental collapse.
Epilogue
Look, I get this interest in AI. There are some cool things it enables people to do. Personally, I don’t engage in the kinds of tasks that it’s good at or where outsourcing this kind of labour benefits me. I’ve used AI, and on a few occasions, I’ve benefited from the textual analysis that LLMs are very proficient at doing. Is it Intelligent? No, it’s an app, like the dozen or so I interact with daily, except I have no daily requirement to use it. I don’t find the generative functions helpful or useful - but that’s because I use words to communicate thoughts, and choosing them is part of the cognitive process of thought - not an abstraction away from the thought itself.
I've found the language tools that LLMs provide useful, helping to translate and transpose language. I've seen some fantastic experiments using AI to write code, which is great because, among other things, we lack the language skills to effectively use these big ol' digital machines and make them do what we want. But none of this is intelligence – it's programming. It's prompt engineering, which is what most of modern computing is anyway.
What exactly do you see when you run it past the idealist and materialist lenses? I see ... just another app. I think this is what AI looks like when the sheen rubs off. It's just another technology that could be helpful, if we used it for what it was developed for and what it can functionally do. Let's not oversell it.
Which is quite literally the fantasy of having sex with robots so that the transaction can be free of guilt, emotion and commitment. ↩︎
No, not the colonialist and non-ironic low-paid labour hire service where you pay third-world workers a barely subsistence wage to be the “smarts” behind every application and service that has “Smart” baked in. I mean the o.g. man-pretending-to-be-a-robot trick. ↩︎
And the more defined the prompt, the “better” it’s able to respond – I mean, come on! It's on easy mode and you're activating cheat codes at this point. An intelligence would be able to do more with less – and does! ↩︎