AI as UI

While the generative capabilities of AI capture the headlines, the chat interface may turn out to be the more interesting part of the story.

Over the past year, much of the conversation in higher education has centred on what we do about AI. I think AI has actually done a service by exposing some deep problems within the educational system. The ability of a large language model to generate plausible content that could get students a passing mark is a genuine challenge. But many of these flaws already existed. What AI has done is supercharge problems that were already there.

That said, I want to step away from the generative AI debate for a moment and consider some other use cases. Towards the end of last year, I had the opportunity to attend a few conferences and came across some examples I found genuinely compelling. Several of them centred on using the chat interface not for generation, but for navigation.

Chat as a User Interface isn't something new; what's new is that it actually works! Chatbots have been deployed before, but they were universally terrible experiences. Without sophisticated language processing, if you misspell something or phrase a question poorly, you quickly hit a dead end. That's a significant problem in a learning context, because a student doesn't necessarily know what they don't know - that's the whole point of learning!

What made the examples I saw different was the deliberate decision to constrain the AI's generative component. Rather than letting the model roam freely, these implementations used it as a tool for textual and structural analysis of source materials – essentially creating a smarter and more specific search engine. The chat interface allows students to ask questions that resemble Socratic dialogue, rather than reading through structured paragraphs of text.
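To make that pattern concrete, here is a minimal sketch – not any of the systems I saw, and the passages, prompt wording, and function names are purely illustrative assumptions. The idea is simply to retrieve the most relevant passages from a fixed body of source material and then instruct the model to answer only from them:

```python
# Illustrative sketch of "chat as navigation": retrieve passages from fixed
# source material, then build a prompt that anchors the model to them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical course content; in practice this would be the module's own texts.
passages = [
    "Photosynthesis converts light energy into chemical energy stored in sugars.",
    "The light-dependent reactions take place in the thylakoid membranes.",
    "The Calvin cycle fixes carbon dioxide into sugars in the stroma.",
]

vectorizer = TfidfVectorizer().fit(passages)
passage_vectors = vectorizer.transform(passages)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), passage_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [passages[i] for i in top]

def grounded_prompt(question: str) -> str:
    """Build a prompt that keeps the model anchored to the retrieved source text."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the source passages below. "
        "If the answer is not in them, say so.\n\n"
        f"Source passages:\n{context}\n\nQuestion: {question}"
    )

print(grounded_prompt("Where do the light-dependent reactions happen?"))
```

The point of the design is that the retrieval step and the instruction to stay within the passages do the constraining; the model's generative capability is only used to reshape existing source text into an answer, not to produce new content.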

When I think about what kind of learning is actually happening here, it's less innovative than it might first appear. Applying my own typology of learning to what's going on: despite the "chat" element, this isn't really discursive learning. It's not about presenting different ideas, opinions, or competing positions. In fact, most of the examples I saw actively tried to prevent that; they wanted the AI to stay anchored to the source content and not drift into generative territory. What this amounts to is a new way of accessing information, not a new form of learning itself. So while on the surface it looks like a discussion, it is really assimilative learning masquerading as something discursive.

This reminded me of an argument from human-computer interaction that most of what we call "interaction" — a click event, for instance — isn't really interaction at all. It's simply navigation. And that's where I land on chat in this particular context: the AI becomes the UI – the user interface through which we access content. Typing a question into the chat is, in effect, the same as clicking "next".

That framing matters because it shifts the conversation. We move away from the grandiose promises of AI and towards something more modest and more useful: an evolution of the user interface. Like every interface evolution before it, this one comes with both gains and losses.

The most obvious recent parallel is the shift from mechanical interfaces to touchscreens. Touch was a genuine breakthrough for visual navigation – panning, pinching, and zooming through a map on a mobile device feels completely natural in a way that using a mouse never did. But text on a touchscreen is a different story. Every slight movement triggers a scroll or an accidental text selection. There's no cursor to orient yourself with, and editing text is a constant source of frustration. Anyone who has browsed the reviews for the Kindle Paperwhite knows there are hundreds of requests to bring back physical page-turn buttons. The interface that's brilliant for one task can be actively hostile for another.

AI as UI is no different. There are contexts where I can see it being genuinely powerful – legal documentation, for instance, where navigating dense text with a traditional interface is painful, and being able to ask direct questions could be a real improvement. For review and study – generating practice questions, finding answers to specific queries – I can see clear value. These are tasks that play to what large language models are actually good at: textual analysis and retrieval.

But for first encounters with new material, the case is much weaker. Text structured as a book or a chapter isn't just a container for information – it has a structure, a narrative, a sequence, and a flow that are deliberately designed to develop understanding. Concepts build on each other. The structure is part of the pedagogy. Replacing that with a chat interface, where a novice has to know what questions to ask before they even understand the terrain, doesn't remove friction – it just relocates it to a more dangerous place.

So where does that leave us? AI as UI is real, and it's new – but it's just a new user interface, not a revolution in learning. And like every interface before it, it will be genuinely useful in some contexts and actively counterproductive in others. The problem is that we don't yet have enough evidence to reliably distinguish between the two. We're in the middle of a hype cycle, and higher education has a long history of chasing the next medium as the thing that will finally replace or transform teaching, only to discover that the new medium works brilliantly in some situations but not at all in others.

The answer isn't to dismiss AI, nor to adopt it wholesale. It's about thoughtfully picking the right interface for the right situation — and remembering that no UI, however elegant, works well for everything.

