Intellectual Outsourcing

One of the things I fear about the AI hype, and the desire to inject AI into all aspects of life, is that it is a process of outsourcing.

I've been trying to connect a few thoughts together to get to this point. I put together a note on what I feel about outsourcing, which frames my thinking about it as a more general concept, and in doing so, I dug up an old post in which I discussed outsourcing and centralising as typical features organisations use to find "efficiency". In that same way, I've started to see AI implementation as a form and extension of outsourcing.

I feel it’s important to note that the AI tools currently available are not “intelligence” but language analysis tools. They are sophisticated and in many ways masquerade as lifelike[1]. Still, they are not intelligent beyond language analysis. They are not fact machines. They don’t understand concepts or meaning beyond statistical analysis and probability. They are derivative machines, highly capable of simulating thought only because we’ve associated output and product with thinking.

Humans have developed language as a way to communicate thoughts. Language is a way we can share life's intangible aspects, alongside various artistic endeavours - from painting to music to dance. The words aren’t the thinking or emotions themselves; they are descriptions of what we experience - attachments and appendages to our lived reality. They express meaning, but the meaning itself is something else, not the language or the words - the meaning is the experience.

As I listen to the latest murmurings about “AI’s” applications, I keep hearing exclamations that confuse the ability to produce language with the ability to think. They are not the same. An ability to recite numbers does not correspond with an ability to complete mathematical tasks.

The media have unquestionably fed the hype that AI will be a replacement for people, jobs and entire industries. Yet when we look at the abilities AI currently has, its scope and success only go as far as demonstrating a statistical ability to generate words correctly to a predefined set of parameters. Sure, the accuracy of the words and language chosen is significantly better than in previous iterations, but the AI isn’t developing the prompt. It’s not creating the parameters, assessing the output or correcting its mistakes. It’s not thinking.

If a job fits those parameters (generate words correctly to a predefined set of parameters), then yes, AI will likely replace it. But if that is the job, then it probably falls into what David Graeber calls Bullshit Jobs, and replacing it may actually be of social benefit (or, to put it a better way, these jobs are superfluous to society and a waste of resources). Beyond that, though - what is it that AI will disrupt?

What I fear is the other “sell” – that these tools will allow us to outsource thinking[2]. Just imagine a world where you don’t have to think! To some, that is the goal - to outsource thinking completely. The bluff, though, is that they can’t. They can generate words, but generating words is too easily conflated with crafting a story, developing a solution or exploring a dilemma.

In my job, we have been presented with an AI tool to “help” with constructive alignment in courses. The idea is that you feed it learning outcomes, and it provides you with a grading rubric for the course. The actual results are less than perfect - mostly unsuitable and unusable because they are filled with errors, misconceptions and conflations. By engaging with the tool, the thinking required to do constructive alignment has been outsourced. A grading rubric is usually the result of thinking about aligning skills, knowledge and activities with evidence of learning; of analysis and reflection; of considering the course as a whole; and of pulling together expert knowledge and teaching experience. When you hand the production of the artefact to the AI, you circumvent the thought that goes into the artefact. The content the AI spits out is thoughtless - usually thoughtless slop - because the prediction of language is not a substitute for the thinking required to do constructive alignment properly.

What I’ve found helpful in this situation is to avoid the tool altogether and to guide the thinking of alignment through a stepped process. By doing this instead, we have words that are considered and meaningful rather than merely probable. The rubric is thoughtful, accurately reflecting the intentions and expectations of the course.

In a bit of coincidence, Miguel Guhlin posted a link to an interesting study, AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. It concludes:

"Our research demonstrates a significant negative correlation between the frequent use of AI tools and critical thinking abilities, mediated by the phenomenon of cognitive offloading. This suggests that while AI tools offer undeniable benefits in terms of efficiency and accessibility, they may inadvertently diminish users’ engagement in deep, reflective thinking processes."

My feelings align with this research, but I'd take it further. The only benefits of efficiency and accessibility are in text analysis and generation. If that is not the aim or purpose of the task, then there is no benefit, especially if it is an attempt to outsource or offload thinking.


  1. Built as an interface that resembles a conversation - as if we interact with it and it understands us - but in reality it is an obfuscated command line, with a language tool doing what it was made to do: analyse language ↩︎

  2. in a somewhat murky and distant future that could be tomorrow, but probably not, because they certainly can't now ↩︎

