I remain very skeptical that the modern take on artificial intelligence (highly parameterized transformer architectures used to create implicit probability distributions over sequences of strings, and sometimes sequences of strings and images and digitized sound) will broadly transform society. There are technical reasons for this that I will mostly not get into here; instead, I will focus on culture. There are three essentially cultural reasons for my position: one has to do with model behavior by itself, and two have to do with my own (biased!) view of human social networks and tendencies.
First up: novelty and motivation. No modern AI model has ever decided to do something novel by itself. To a select sub-culture of humans these days this statement is wildly controversial, but it's really pretty basic. What I mean is: even the wild west of Moltbook is a bunch of highly parameterized models just copying standard online human behavior. It's not new. It's not interesting. Even in the highly publicized case of an AI-generated pull request (PR) getting rejected (by a human) on matplotlib's GitHub repo, and the agent that submitted the PR then "getting mad" (anthropomorphization!) about it and writing a snarky blog post to complain, there are approximately eight zillion examples of human developers doing this exact thing on the internet. It's still rote copying of existing behavior. That copying is just happening at a much more sophisticated level than it used to, due to increased model parameterization and the existence of (human-designed!) feedback loops.
In short, everything that AI does today - and, as far as I can see, forever - starts with a human-directed goal. Even if it's extremely high level, it's there. That may seem like a tiny distinction, but I think it's a chasm that cannot be crossed. Setting your own goals is true agency. Deciding to do something completely different is true agency. A recent vignette illustrates this point handily: there was much ado regarding Donald Knuth's revelation that an AI model was able to solve a problem he had been working on (regarding Hamiltonian cycles) when he could not, and did it relatively quickly at that. To which I reply: but who posed the problem?
The second and third reasons are really one reason and its dual or enantiomer reflected in a colored mirror. A true economist, a great man once said, is one who knows that the vicissitudes of time and place often dominate; and so, without pretending to control my own bias, I will describe two worlds that I straddle, neither of them seeming to be affected by modern AI in the slightest.
World 1 is the DARPA program manager world. If I believed half the hype I heard about AI eliminating creative jobs or white-collar jobs or technical jobs or research jobs, I'd be quaking in my boots. Yet I have never seen anything close to a half-baked DARPA program concept from the most advanced AI I can find to turn loose on an idea. Put frankly, the quality of what is created when I (even interactively!) ask one or more frontier AI models to work on a Heilmeier Catechism paper is horrendous. It's as if a mid-tier college student blew chunks onto a piece of paper after getting stoned all weekend. (Because, you know…who's creating the training data, after all…) All the leaps and bounds in "model capability" have resulted in precisely doodly-squat relevant to "taking my job," because anything that's created has been, and as far as I can tell always will be, like what has already happened or what has already been created. Model output will sound good if the model can use the contents of the DARPA program vault, sure. But what's created won't be new. And my job is by definition to make something new.
World 2 is a farm community in rural Massachusetts - well, two of them, one a land farm community (Carver, MA - cranberry bogs) and one an ocean farm community (Wareham, MA - oysters, mussels, quahogs, fish). People here are pretty evenly politically divided (I should not have to explain why that's relevant to a discussion at the intersection of AI and culture); candidly, no one really gives a damn about politics or most national issues writ large. They fish, they farm cranberries, they bitch and moan about the weather. They see their grandkids. They eat pancakes or drink a beer down at the Narrows or D's Omelette Shop. Maybe they pull up ChatGPT to make a dumb picture on their phone sometimes - my brother-in-law did this to prank his wife on their one-year-old's birthday. It was kind of funny! And then they move on and do…literally anything else. AI occupies almost zero brainspace for anyone. When you go to the doctor, maybe she says, "Hey, I am going to use an AI note-taking app so I don't have to write and can instead just talk to you more, is that okay?" You say, "Sure," and then she does talk, and you talk about your aching back, or you talk about your kidney stone, and again, no one gives a damn. No one's job is being replaced. Human augmentation? Fantastic, if a queer curiosity in most cases, and even a curiosity to be admired when used to help those around one. Human replacement? "No thanks, I'd rather have a person," is the default sentiment, when there even is one. I'd love to see someone from Anthropic or OpenAI go and tell any of my cranberry farmer friends that their jobs will ever be replaced, even by robots with transformer-based AI inside. "Who are you? Why are you here? Go away. Don't get stuck in the mud or I'll have to pull you out."
Man is nothing if not the product of his environment. What I described is mine. And I suspect, but cannot prove, that the breathless voices in the news and in B2B SaaS firms and, yes, even in the AI companies, are products of their environments: most went to good schools, most never had to work hard manual jobs, most have not actually done anything novel or meaningful or different in their lives (gasp!). Yes, maybe they are at risk of being automated.