Artificial Unintelligence
We are so inured to using AI that we believe we need it
Before people around the world were connected by the internet, new ideas and technologies spread slowly. This was bad for their originators, who had no means of quickly gauging where a product stood without taking on risk for long stretches early on. It was only when the dust settled, as more people across more continents slowly took to a new invention, that the companies producing it could rest easy.
Part one
Consider the vacuum cleaner. A common household appliance today, it was owned by less than 10% of households during the first few decades after its invention. Between 1908 and the 1920s the device was marketed as a means of keeping the house clean; only after that point, having been able to gauge the market, did manufacturers start making them feature-rich; and it was not until the mid-20th century that industrial designers and architects were employed to design vacuum cleaners that looked good. In other words it took half a century before companies realised people wanted appliances that matched the character and feel of their houses, and only then did vacuum cleaners start to spread into households across economic strata.
Admittedly there are other factors that influenced the adoption of vacuum cleaners in that era—notably WWII and the standardisation and availability of household electricity—but the example nevertheless demonstrates the difference between product adoption today and a century ago.
There are three key outcomes from this difference:
- Products end up being ‘tested’ and a niche is created for them if one does not exist
- Products end up being developed more callously because no adoption risk is involved, i.e. if something does not work, discard it and make a new ‘version’ because there are always people looking to buy one
- Product sales rely heavily on novelty because untapped markets rarely exist for long after a product is introduced, which pivots the company’s focus from coaxing more people to buy a product to coaxing existing customers to upgrade, often by overstating the value of new features
Artificial Intelligence ticks all three boxes. It arrived with a flourish of novelty, seemingly capable of ‘understanding’ humans in a way no computer had before, while in reality all it was doing was statistically assessing greater dumps of information than any computer had before. Its upgrades seemingly leapt forward even as the questions we asked it remained similar, and it gained artistic skills simply because what counts as art is understandably vaguely defined. It developed hallucinations and it spewed nonsense, but none of that mattered because, with no worries about adoption, all companies had to do was discard one version and shove a new one in our faces. Development was now by brute force.
Being constantly, inescapably exposed to LLMs meant we fell into the trap of finding reasons to use them, forcing them everywhere we looked, even where they barely appeared to function. These LLMs were not created as solutions to a problem; they were created as the supposed natural successors to increasing computational power, turning them into hammers in search of nails. Like some sort of technological Stockholm syndrome, we began thinking explicitly about where we could use this shiny new technology. It no longer mattered what it did or how meaningfully. By its sheer insistence upon itself, Artificial Intelligence made us grow accustomed to mediocrity.
Consequently we stopped questioning what it was doing and how. All that mattered was that it was doing something. If you fed it a question and it regurgitated an answer, it was successful. This is arguably the lowest bar for success in any machine. But, more dangerously, this meant we stopped questioning what it regurgitated. Its typewritten word looked so believable—lies printed out with glaring confidence—that we never bothered to question it, let alone critically examine the things it said or made up along the way.
The novelty was so great, its proliferation so overwhelming, that we submitted to it. And arguing against it becomes harder as a result. People see it doing something, so you cannot describe it as useless. People see it answering to their liking, so you cannot describe it as inadequate. It never fails to produce an answer, which makes it look clever but really speaks to how statistics always yields a value1. It has always been up to the interpreter to make sense of that value, but with LLMs the interpreter is often lost in their fascination with their subject.
All this recognition leads us to an important distinction: process versus result. Where and how we use LLMs should be decided based on whether the process or the result is critical. In some ways this harkens back to the idea that we need AI to do the menial, repetitive tasks while we enjoy the pleasurable ones; but today the thing uses its mediocre ability to engage in traditionally pleasurable, creative tasks as a badge of its technological progress.
Part two
A singular example, one that I frequently come across in my daily work, is making notes. There are numerous AI note-taking apps2 today. There are apps for recording minutes of meetings, as though doing that ever took someone competent an unnaturally long time in the first place. These are typical examples of using LLMs just for the sake of it. At least outside the boardroom, when making notes is important rather than merely procedural or for record-keeping, the use of LLMs is not only pointless, it is antithetical to the purpose of note-taking to the point of being detrimental.
It is important to clarify, because it is so often mistaken this way, that I am not referring to typing up notes versus handwriting them. Studies from back in 2015 as well as more recent ones from 2023 agree that no discernible advantage exists to handwriting compared to typing.3 So, assuming we are all typing things up at some point—even if we originally handwrote our notes—the question is whether we should engage with AI at all during this process.
To better understand this, ask yourself what the point of your note-taking is. Is it to serve an end result or for the process itself? As with any activity there can be indirect end results to note-taking, but these are not what we are talking about here. Taking notes may ultimately help you ace an exam, learn a skill or produce a brilliant essay, but the immediate result of note-taking is, as it has always been, understanding.
Transcribing a meeting and then summarising its minutes is something it makes sense to relegate to an app, simply because understanding does not matter. The outcomes of a meeting, the decisions taken during it (assuming there were any at all), are what matter. But there is a problem here too: an app may discard several discussions irrelevant to summarising minutes that are nevertheless relevant to someone in that meeting who is tasked with writing a lengthier report on it. Such an individual cannot be helped beyond a point by simply noting the decisions taken during a meeting. The specific reasoning that led to those decisions—an understanding of things—becomes increasingly relevant.
Sitting in a university lecture using an app that transcribes what the professor says for an hour or so can quickly leave you with plenty of content but no context. More often than not, context is incredibly important in lectures. The note-taking equivalent of this is blind replication in class: jotting down notes in a trance as your hands write what your ears hear but your brain never quite comprehends because you are bored, sleepy or disinterested. A week or two later nothing you wrote down makes sense.
All of this is to say that note-taking is about comprehension and understanding. These are non-negotiable aspects of note-taking. As any undergraduate student learns within a week of starting a new course, you take notes as you understand them. Not a replica of textbooks, not a transcription of lectures, and certainly not AI summaries of existing content. Summaries like this just produce a second, poorer copy of a text for you to not read or understand. And even if you do understand, your note-taking is not done; note-taking begins at this point because your understanding began here too. You have AI producing notes for you that you will have to read for the very first time, just as if you had read the original text, which you should have done in the first place. In sum, AI can summarise for you, it can simplify for you (hopefully correctly), but it can never understand for you.
Today’s LLMs are unreliable narrators. They do things, but we will never quite know how much of it is rubbish unless we do our due diligence anyway. Often, then, they are additional steps in a process we are already looking to reduce. But they are not obviously so, making them misleading to the point of becoming dangerous.
We can go a step beyond this. If note-taking is about understanding the thing you are taking notes about, then the notes themselves should exist within a context. This context would be your broader understanding of a subject, a specific subfield, or life itself—however narrow or broad you might want it to be. Building these bridges out of thin air is a critical aspect of note-taking; it is why what you read or learnt last month still makes sense in the face of your new learning from earlier today. A bit of this is memory, or having what I like to call an ‘approximate picture’ of your knowledge, and the rest is drawing creative connections, which your brain does mostly passively.
Proponents of AI will argue that it can do this for you, and indeed it might. But just because AI can make some connections (however exhaustive these may seem), and assuming it has made meaningful ones, it does not mean AI has made all possible connections. You will never know; you can never know. And if you use an AI-powered app to write your notes for you, and make connections for you, you end up losing track, context and understanding to the point where you can never take over and make new knowledge yourself, never properly appreciate any new connections your AI has made, and—perhaps worst of all—you have to take for granted whatever the AI says because you have no way of knowing its truth or effectiveness, one way or another.
In the process of making notes, you are capable of making mistakes while AI (thinks it) is not4. A substantial chunk of human knowledge was created out of mistakes. Nobody intends to make mistakes of course, but when you do make one in the process of taking notes you still have an opportunity to start thinking about something entirely unplanned and unexpected—something that, by definition, would be beyond the scope of the AI-powered app because it strives to keep getting things statistically and logically right and continues on the assumption that it has, in fact, done just what it planned to. Unplanned mistakes can be beneficial, if not while you are taking notes then long after the fact.
Finally, writing is about more than getting ideas on a page or understanding something. Most writing results in an argument, even when that was never the intention. Mostly these arguments are mental and are how we convince ourselves we understood something. When making notes you are forced to concretise your thoughts and you end up really confronting your own ideas. Suddenly, things that made sense in your head stop making sense. You start to spot holes in your argument and gaps in your understanding. The process of note-taking is what gets you here. Outsourcing this to an AI makes you a passive reader (if you bother reading this new tertiary source at all) in the same way that a lot of people passively scroll through social media. We drown in a sea of ‘content’ giving us the illusion of knowledge, and before we know it the chances of swimming out and saving ourselves shrink to zero.
I first started thinking about this essay shortly after Juhis asked me about my thoughts on the use of AI in note-taking many months ago—thanks, Juhis. However, I still have a few thoughts I have not addressed here and hope to expand upon them in a follow-up essay after thinking about them a bit (or a lot) more.
For classic examples, think of correlation versus causation, or of means influenced by outliers. In both cases running a relevant statistical test will provide a value, but statisticians are trained to question just how meaningful that value is and whether an entirely different test needs to be run. ↩
I have an irresistible compulsion to clarify that I am using the term ‘AI’ reluctantly in these contexts. ↩
Of course I omit the fine print here, but with reason: these studies specifically gauged literary comprehension and recognition, and they tested a certain well-defined group of learners. There is no indication that this is not generalisable to some extent, as I have done in the main text; the generalisation is moreover supported by other studies showing typing may be marginally better for comprehension and long-term recall even as handwriting was slightly better for short-term recall, which still other studies confirm. The long and short of it is that there is no need to conflate the use of AI with typing and thereby compare typing with handwriting in our current discussion. ↩
Before you come at me with a rebuttal here: the fact that AI is unaware of its mistakes makes it impossible for it to capitalise on them. And the fact that it continues regardless makes it increasingly susceptible to senseless outputs and hallucination. ↩