I am a data scientist, machine learning enthusiast, and a devout follower of the tech religion. I strongly believe that technology makes the world a better place by creating abundance where there previously was none. I fully subscribe to Peter Thiel’s sentiment that technology, specifically software, turns labor into capital. Over the past decade, I have dedicated my life (okay just my career) to working at places that make software. But when I read articles about AI, which are pretty difficult to avoid these days, I find myself completely skeptical of the purported impacts these tools are going to have.
AI is being hailed as a paradigm shift, a fundamental change in the way that humans and technology interact. We can now turn a few words into more words, many words into fewer words, and ideas into software at the drop of a hat. We can turn words into pictures, and pictures into words (typically fewer than a thousand, mind you). We can even make words into movies. These are all fantastic developments, to say the least, but I don’t believe they are the four horsemen of the labor apocalypse.
Technological barriers are often just a small piece of a much more complex puzzle.
Turning these nascent technologies into businesses will take a very, very long time. The reasons are often regulatory, but sometimes it’s that the final 1-5% of a project can take decades to hammer out. The time horizon for many of these things will span entire careers, and by the time they are realized, most of the people whose jobs were supposedly being replaced will have long since moved on.
Look no further than the long list of failed robo-taxi companies founded in the mid-2010s, hailed (lol) as a paradigm shift for human society. Not only have most of these companies folded by now; the ones that remain are still more expensive and less effective than human drivers. Sure, there are companies like Tesla that are approaching something like adoption, but it’s still cheaper for most people to drive themselves around than to pay $12,000 for Full Self Driving. Let that sink in. Fifteen years after the technology was first introduced, the cost of deploying it is still so high, and its function still dodgy enough, that most people will still choose to drive themselves.
The reality is that creating a lawyer, or a doctor, is more complicated than creating a thing that can say what a lawyer or doctor might say. People often have extreme optimism for technology because they can only see the problems 1-2 steps ahead of them. When Alexa was released, techies and regular people alike thought it would be another paradigm shift: a key component of IoT, and something that would be enabled by 5G. The reality is, a refrigerator or laundry machine that can text me, or a thermostat I can control with my phone, just isn’t that compelling. It’s not a fundamental change in how I live my life; at most it helps me skip 1-2 steps from time to time. We are not living in a world where my refrigerator talks to my microwave (and thank god for that).
The internet as we know it dates to 1983, when ARPANET adopted TCP/IP. Yes, it has completely changed our way of life, but most of the people who had jobs in 1983 are now pretty close to retirement. They learned to use the internet for work to the best of their ability, and it was fine. People could not have predicted the ways the internet would transform life, and many of the promises from that era never came to pass. That is because it takes decades for an invention to be meaningfully applied at scale across society. By the time blockchains, AI, or VR are producing any real economic value, you’ll probably be 70 years old, literally pooping in your pants.
Large Language Models (LLMs) are basically just bullshit artists.
At a high level, the model predicts the word, or sequence of words, most likely to follow the text that came before. At first glance it may appear intelligent because it sounds intelligent, but anyone who has sat in a room full of executives knows that it’s easy to sound intelligent. It’s easy to use the right words and talk for long enough that you’ve successfully distracted everyone. I would use it for my performance reviews, because I know no one is reading those, but if I had something I actually cared about I would never GPT it.
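To make the "predicts the next word" point concrete, here is a deliberately tiny sketch. Real LLMs use transformer networks over subword tokens, not word-count tables, and the toy corpus below is made up for illustration; but the core objective is the same in spirit: given what came before, emit the most probable next token.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which.
# (Hypothetical three-sentence corpus, purely for illustration.)
corpus = (
    "the model predicts the next word "
    "the model sounds intelligent "
    "the executive sounds intelligent"
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Return the most frequent follower of `word` seen in the corpus.
    return follows[word].most_common(1)[0][0]

print(predict("the"))     # -> "model" (seen twice, vs. "next"/"executive" once)
print(predict("sounds"))  # -> "intelligent"
```

The point of the sketch is that nothing in it understands anything; it just reproduces whatever sounded most plausible in the training data, which is exactly the "bullshit artist" quality described above, scaled down a few billion parameters.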
I ask a simple question: what is the value of something that can speak endlessly, will respond to any prompt, and cannot truly agree or disagree with anyone? I’m still new to management, and to corporations in general, but I understand these entities in charge of producing goods and services to be large, somewhat decentralized decision-making machines. At each layer of the organization, trust between individuals governs the flow of information. Every party is responsible for making the “right” choice by interrogating the options and choosing the path forward that seems most reasonable. It doesn’t matter if your machine can spew out an infinite number of words. What matters is convincing this decentralized machine that your vision, or a vision, for what it should do is the right one. To do that you must rely on more than saying the right words. To do that job effectively, you must understand the motivations of other human beings.
AI Thought Leaders probably don’t remember what it’s like to have a real job.
When’s the last time Mark Zuckerberg knew anything about how a regular person does anything? The man rides a hydrofoil, owns half of Hawaii, and thinks people still want to use Facebook to connect with their friends. He thought we’d all want to sit around with goggles over our eyes instead of talking to our family members. Does that sound like anyone you know that has a job?
Marc Andreessen posts memes and thinks they are funny.
Elon Musk named his kid after alphabet soup.
Jeff Bezos is actually kind of cool IMO, but he still hasn’t had a job in like 4 years.
Sam Altman thinks we should raise 7 trillion dollars for some reason.
I’m not saying these people are dumb; they might be some of the smartest people alive. I’m saying that once you’ve achieved this level of professional pontification, you may not know much about what it means to be an accountant, or even a secretary. You might think the people around you are just word boxes that turn more words into fewer words, or fewer words into more words. But that’s only the surface of what the people who seem like word boxes do. Even if LLMs can do what we word boxes do, making them do it might cost more than it’s worth. OpenAI is losing users each month, and Microsoft is losing money on each Copilot subscriber. I understand the concept of capturing the market, but there’s no indication that future iterations of ChatGPT will improve the usefulness of OpenAI’s models enough to charge people what these services actually cost to provide. And that’s just for the wordbox. I suspect cobbling together a series of APIs, LLMs, and other services to replace office workers who cost you ~$70K-$200K per year to employ is probably not that strong a tradeoff.
Exceptions that prove the rule
There are some cases where the wordbox is more immediately replaceable: specifically in user support operations, and in areas of software development where contextually copying macros (or code from other places) was basically what people were already doing. I believe those changes are temporary. Yes, these LLMs function incredibly well in a space where a lot of the work was fancy copy-paste. But even in the engineering world, there are many, many more kinds of software that need to be written. I think, quite intuitively, software engineers can spend more time thinking about how things are architected and operate at a higher level. LLMs may allow software engineering to become more abstract. Similarly, perhaps customer support will evolve into a more human-centric function, focused on improving the user experience more so than reducing liability and dealing with disgruntled customers.
In conclusion, AI isn’t real. At least the thing that billionaires and software engineers with a god complex suggest will replace everyone’s jobs isn’t real. Change is slow, and as a student of history I’ve learned that seismic shifts rarely happen overnight. New technologies are always very exciting, and I’m sure these technologies will continue to grow in their impressiveness and their applications. I’m sure LLMs will do things I never imagined possible. I’m sure they will help humanity in ways that make everyone’s lives better. I just simply don’t believe it will happen tomorrow, and by the time it does we (the humans) will be off to bigger and better things.