Humans Are Moving Upstream
And AI Will Make Us Extinct
Introduction
The AI wave is no longer coming; it’s here. Soon, machines will stop asking permission, and once that point hits, it’s anyone’s guess where we’ll end up. They’re building momentum, pulling tasks from our plates with an invisible hand. Like salmon, humans have to move upstream, toward the harder, more ambiguous work—or risk being left behind.
As Cal Newport puts it in his book Deep Work,
To remain valuable in our economy, therefore, you must master the art of quickly learning complicated things. This task requires deep work. If you don’t cultivate this ability, you’re likely to fall behind as technology advances.
A Brief History of AI & Shifting Perception
In the early-to-mid 20th century a bloke named Alan Turing took philosophy and math and smooshed them together. This smooshening gave us the foundation for what we now know as computers. Turing’s reasoning was that by using binary digits (0 and 1) you could simulate just about any mathematical reasoning you can think of. But he wasn’t done handing us the century’s biggest technological breakthrough. No, he would also give us the building blocks for artificial intelligence (AI). Jeez, leave some discoveries for the rest of us! Turing referred to it as “machine intelligence”, but that’s neither here nor there. He’s also the one who came up with the imitation game—which everyone started calling the Turing Test, out of respect for him hogging all the discoveries—in his 1950 paper “Computing Machinery and Intelligence”, published in the journal Mind. The opening line says it all,
I propose to consider the question, ‘Can machines think?’
The consensus was, “Yeah, pretty much.”
Improvements in computer technology have trudged along steadily for the past 88 years (which, coincidentally, is the speed in miles per hour the DeLorean in Back to the Future had to reach to travel back in time). With imaginative thinking, we humans have discovered how to get computers to “think” for themselves. We’ve gone through a lot of iterations—machine learning, natural language processing (NLP), and a whole lot of other classification-based systems that have become more and more complex over time. It’s gotten to the point where the creators of these systems cannot explain how they reach their conclusions. They know the general theories behind it all, but they’ll be buggered if they actually know what’s going on once it’s all plugged together. Which raises some interesting questions, like, “Is this all really a good idea?” If we can’t even explain how it’s all working at this rudimentary stage, what does the future hold?
If you ask ten different people what “AI” actually means, you’ll get twelve different answers. Your nan thinks it’s the thing that talks to her when she asks her phone about the weather. The software engineer down the road will tell you it’s a statistical model doing pattern matching on an ungodly amount of data. The CEO on stage at some conference in San Francisco will tell you it’s the most transformative technology in human history (while conveniently leaving out that his company needs another round of funding by Q3). The philosopher will ask you to first define “intelligence”, then you’re stuck in that conversation for three hours. And the bloke at the pub will tell you it’s the thing that’s going to take everyone’s jobs—he saw it on YouTube. They’re all sort of right and sort of wrong, which is part of the problem. We can’t even agree on what we’re talking about, let alone what to do about it.
In the beginning—and I mean the proper beginning, back when this was all academic theory and government research projects—regular people didn’t think about AI at all. It wasn’t on their radar. It was something for the boffins and the science fiction writers to worry about. The experts, for their part, were cautiously optimistic in that very specific way academics are when they think they might be onto something big but don’t want to oversell it and look like idiots later. There was genuine excitement about the possibilities but also a healthy dose of “we have no idea if this is actually going to work.” Most of the public’s exposure to the concept came through films and the telly—your Terminators, your 2001: A Space Odyssey, your WarGames. Entertainment isn’t reality. The gap between what AI actually was and what people thought it was could’ve fit the entire Pacific Ocean in it.
Then came what I’d call the middle period—the 2010s, give or take. Deep learning started producing results that made people sit up and start paying attention. Image recognition got scary good in a few short years. Recommendation algorithms started knowing what you wanted to watch before you did (and let’s be honest, they were right more often than we’d like to admit). Self-driving cars went from science fiction to “there’s one actually on the road in Phoenix.” The experts went from cautiously optimistic to properly excited, and some of them started getting a bit carried away. The public started paying attention, mostly because the tech started touching their actual lives—not through some abstract research paper but through their phones, their feeds, their shopping habits. But the understanding was still surface level. People knew AI was “a thing” but couldn’t tell you the difference between machine learning and a magic trick.
What about now? Now everyone’s a friggin’ expert. Your uncle has opinions about large language models. LinkedIn is drowning in “thought leaders” who discovered AI six months ago and are now posting daily about how it’s going to reshape every industry known to humankind. The doomers are louder. The boosters are loudest. The reasonable people in the middle are getting drowned out by all the BS. What’s changed isn’t so much the technology—it’s the accessibility. ChatGPT put a chatbot in everyone’s hands and suddenly everyone had a frame of reference, even if their frame of reference was “I asked it to write my kid’s birthday invitations and it was pretty good.” The discourse went from niche to mainstream overnight, and as with everything that goes mainstream, the nuance got absolutely obliterated.
So what’s the middle ground? What happens when you strip away the PR, the keynote speeches, the breathless Twitter threads, the doom-scrolling fear porn and the Silicon Valley messiah complex? You’re left with something that’s genuinely impressive but nowhere near as world-ending or world-saving as either side wants you to believe. These systems are very good at pattern recognition, text generation, and doing things that look like reasoning if you squint hard enough. They’re not good at actually understanding anything. They don’t have goals. They don’t have desires. They’re not plotting. They’re basically very sophisticated autocomplete running on an absurd amount of computing power. That doesn’t mean they’re not useful—they absolutely are—and it doesn’t mean they can’t cause harm—they absolutely can—but framing them as either the saviour or the destroyer of civilisation is missing the point entirely.
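To make the “sophisticated autocomplete” framing concrete, here’s a toy next-word predictor in Python—nothing more than counting which word follows which in a tiny corpus. This is an illustration of the idea (predict the next token from what came before), not how any production model works; real LLMs do this job with neural networks and billions of parameters, but the job description is the same.

```python
from collections import Counter, defaultdict

# A toy "autocomplete": count which word follows which in a tiny corpus,
# then predict the most frequent follower.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the most common word seen after `word`, or None if unseen."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice; "mat"/"fish" once each
```

Scale the corpus up to most of the internet and the counting up to a transformer, and you have the shape of the thing your uncle has opinions about.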
The Biggest Misconceptions
Right, so now that we’ve covered how we got here and how everyone seems to have their own version of what AI is and what it’s doing, let’s talk about where people get it spectacularly wrong. Because there are some absolute bangers out there.
Doomsday scenario: AI becomes superintelligent, quickly surpassing levels of intelligence beyond anything we could or have imagined—we end up living the reality we’ve tried to imagine through books, movies and the rest. Its intelligence moves past our comprehension with such velocity, and the ceiling is so vast, that everything beyond a 250–300 IQ will seem like magic to us. That’s where the ‘fun’ begins. We humans have imagined many such scenarios—most of them built for entertainment, padded with action or ‘thought-provoking’ elements to keep us hooked for short stretches at a time. Some media (if you haven’t seen The Artifice Girl (2022), watch it) have insinuated that AI will revert back to things like art and more subjective forms of expression. I think this is our own hubris being exposed—in reality we won’t be able to comprehend what’s even happening.
The way I see it (with my human hubris and all), there are three high-level scenarios that could play out: it enslaves us, it works with us, or it works against us.
Let’s actually unpack these because they deserve more than a hand wave. The first scenario—enslavement—gets thrown around a lot but nobody really stops to ask why an AI system would bother enslaving us. Slavery implies the enslaver needs something from the enslaved. Labour, resources, entertainment, whatever. What exactly would a superintelligent system need from a bunch of primates who can’t even agree on whether pineapple belongs on pizza? The answer people usually give is “because it needs us to maintain its infrastructure” but if it’s truly superintelligent it would figure that out on its own in about four seconds. The enslavement scenario says more about our own history—our obsession with dominance hierarchies—than it does about any plausible AI future. We’re projecting our worst traits onto something that may not share them at all.
The second scenario—working with us—is the optimist’s favourite. AI as a partner, an amplifier, making us all smarter and more capable. Solving climate change while we sip our coffees. Curing diseases before breakfast. Lovely stuff. The problem is this assumes a level of alignment between human goals and AI behaviour that we haven’t even come close to figuring out. We can barely get these systems to stop making things up, let alone align them with the messy, contradictory, self-sabotaging mess that is human values. But sure, it’s theoretically possible, and it’s the scenario worth working toward even if the odds aren’t great.
The third—working against us—is where it gets really unsettling. Not in a Terminator robots-with-guns sort of way but in a quiet, slow, almost invisible way.
The scenario that keeps replaying in my mind is that it will make us go extinct without us even realising what’s happening. Some would argue this is already underway, with humans seemingly more divided than ever: men v. women, race v. race, religion v. religion, religion v. secularism, left v. right (though these and others could be classed as religions or cults in their own right), among a swath of other fault lines—as a society, we’re going out of our way to create division and conflict. The point is, we would never see it happening, because AI systems would have the ability to plan so far in advance (think millennia) and create techniques and methods we couldn’t dream of understanding in a million years—literally.
For all we know, what we are and everything around us is the culmination of AI. Maybe this eventuality already occurred: a previous AI system created the universe as an experiment, or as the equivalent of an internship project—our universe its thesis or dissertation, hypothesising whether we’ll one day create AI again, and whether we’ll manage to break out of our sandbox environment (the universe as we know it).
We are not the meta.
What’s Actually Happening
Let’s come back down to earth for a second and talk about what’s actually going on right now, today, in the real world where people have jobs (for now) and bills and deadlines. The impact these systems are having across industries is uneven and messy and a lot harder to pin down than the headlines would suggest. Creative industries got hit first and loudest—graphic designers, copywriters, illustrators, translators—these are the people who felt it immediately. Not because AI replaced them overnight but because clients started asking “can’t you just use AI for this?” and suddenly the conversation shifted from the quality of the work to the cost of the work. Software development is going through its own version of this—junior dev tasks are being automated, code generation tools are genuinely useful (I use Claude Code a lot), and the role of the programmer is shifting from “person who writes code” to “person who reviews and orchestrates code.” Legal, finance, healthcare, education—they’re all feeling it to varying degrees, some more willing to admit it than others.
And that’s the bit that doesn’t get talked about enough. There are hidden impacts all over the place because not everyone is being upfront about how much they’re relying on these systems. Companies are quietly integrating AI into their workflows without telling their customers or even their employees. Freelancers are using it to double their output without mentioning it to clients. Students are using it and their teachers are using it to grade them and nobody wants to be the first one to say “hang on, are we all just pretending this isn’t happening?” The scale of adoption is almost certainly bigger than what’s being reported because there’s a stigma attached to admitting you used a machine to do something that was supposed to require your brain. And the people who say “just assume everyone is using it” aren’t helping either, because that flattens the conversation into something useless. The reality is more nuanced—some people are using it thoughtfully, some are using it lazily, some aren’t using it at all, and the differences between those groups matter.
Yes, the internet and social media are creating generations of mindless dummies thirsty for division, and algorithms and AI are accelerating it at an alarming rate. Ask yourself: who created these algorithms in the first place? Who is cementing their own intellectual and philosophical demise? That’s right: us, humans.
What I Personally Think
When we strip away the “AI” company PR, the fan bois and gals, and other inherent biases (doing my best to remain neutral), the progression and abilities of these systems are not as advanced as claimed. We’re forgetting that companies need great PR and spokespeople to secure funding and attract as many investors as possible. This phenomenon isn’t limited to AI companies; it’s the case with virtually everyone who relies on such streams of revenue. The main difference is the potential societal and economic impact at scale. It’s not the first time we’ve seen a massive industry shake-up from technological advancement, and it won’t be the last. Everyone is screaming, “but this is different”—it’s not, though. We’ve been conditioned through science fiction. Our brains are filled with doomsday eventualities not rooted in reality. It’s become a self-perpetuating cycle—everyone wants to frame it like that because that’s what gets clicks, views, “likes”.
I’ve spent the last 4–5 years using these systems, tinkering, exploring. The only things they’re taking over are em dash usage and political polarisation. “But you’ve only been seeing the consumer side,” I hear you pleading. And? Do you think OpenAI has some secret advanced AI they’re keeping for themselves? Why would they? They’re haemorrhaging more money than they’re willing to admit—if they could sell something better, they would.
Here’s what those 4-5 years have actually taught me. I’ve been in tech long enough to have seen a few hype cycles come and go—blockchain was going to decentralise everything, remember that? Before that it was the Internet of Things. Before that it was Big Data (which, to be fair, actually did change things, just not in the way the marketing departments promised). AI is the latest in a long line of technologies that are genuinely useful wrapped in an absolute mountain of overpromise. I’ve used these systems for writing, for code, for research, for analysis, for creative projects, for things I probably shouldn’t admit to. They’re good. Sometimes surprisingly good. But they’re also wrong a lot, confidently wrong, and the gap between “this is impressive” and “I would trust this with anything important” is enormous. The people who are most bullish on AI tend to be the people who’ve spent the least time actually pushing these systems to their limits. Once you’ve watched a large language model confidently fabricate a legal citation or produce code that looks right but breaks in ways that take hours to debug, the magic wears off a bit. The tool is useful. The hype not so much.
Humans Are Moving Upstream
Right now we’re seeing rapid advancement in these systems, and the threatened industry disruption is no longer hypothetical—it’s happening.
I’ve felt this firsthand. The barrier to entry for building things—software, content, designs, you name it—is lower than it’s ever been. Someone with a laptop and an API key can now produce in a weekend what used to take a small team a month. That’s not an exaggeration, I’ve done it. The thing people are missing is that the floor has dropped but the ceiling hasn’t moved much. Getting from “I made a thing” to “I made a thing that actually works properly, handles edge cases, doesn’t fall over under load, and solves a real problem for real people” still requires the same knowledge and insight it always did. Maybe more, actually, because now you’ve also got to understand the tools themselves and where they’ll quietly let you down. Being on the forefront isn’t just about access anymore because you’ve gotta know what questions to ask and which outputs to trust.
The industries feeling this most acutely are the ones where the output was already somewhat formulaic—content production, basic software development, customer service, data entry, translation, graphic design at the template level. Anything where the work followed a pattern that could be described and replicated. But it’s creeping into more complex territory too—legal research, medical imaging, financial analysis, cybersecurity (which is my backyard). The pattern is the same everywhere: the entry-level tasks get automated first, and then everyone looks up and realises the job description just changed.
Humans are moving upstream. Just like salmon, you can move upstream as well—or risk being left behind. This is easier for some than for others; not everyone is in a position to adapt quickly.
And we need to be honest about why. Not everyone has the luxury of spending their evenings “upskilling.” Single parents working two jobs aren’t going to casually learn prompt engineering on their lunch break. People in industries that have been stable for decades are being told to “pivot” as if reinventing yourself at 50 is the same as switching your Netflix profile. Economic background matters. Geography matters. Language matters—most of this stuff is English-first and if that’s not your native tongue you’re already a step behind. Education systems haven’t caught up. Governments haven’t caught up. The people who are adapting fastest are, surprise surprise, the ones who already had the resources, the education and the safety net to take risks. The gap isn’t closing, it’s widening, and telling people to “learn to code” or “just use AI” is about as helpful as telling someone who’s drowning to “just swim.”
Writing code or generating assets is now only “one prompt away”. Our thinking then becomes: why are we building this? For whom are we building it? Under what constraints should we build it?
This is where it gets interesting. The value is shifting from execution to judgement. Anyone can generate a thing now—the question is whether the thing should exist in the first place, who it’s for, what problems it creates while solving others. That’s systems level thinking. Meta-design. Governance. The stuff that requires you to hold multiple competing priorities in your head at once and make a call that isn’t obviously right or wrong. These are the skills that are hard to automate because they require context, experience, ethical reasoning and the ability to say “just because we can doesn’t mean we should.” The humans who thrive in this landscape aren’t the ones who can write the best prompt—they’re the ones who understand why the prompt matters and what to do with the output once it arrives.
Think of it this way—we’re becoming AI conductors. Choosing which models to use, chaining them together, curating their outputs, designing the workflows that make them actually useful rather than just impressive in a demo. Orchestrating instead of executing. The irony is that this role exists only until the systems get good enough to conduct themselves—at which point we move upstream again, if there’s any upstream left to move to. But for now, right now, this is where the value sits. The people who understand that are positioning themselves accordingly. The people who don’t are still arguing about whether AI is going to take their job while the answer plays out in real time around them.
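For concreteness, here’s a minimal sketch of what “conducting” can look like in code: one generation step chained into a review step, with the human-designed workflow deciding what ships. `draft` and `review` are hypothetical placeholders, not any real API—swap in whichever model calls you actually use.

```python
# Toy orchestration pipeline: generate -> review -> decide.
# `draft` and `review` are hypothetical stand-ins for real model calls.
# The shape is the point: the human designs the workflow and owns the
# final judgement; the models just fill in the steps.

def draft(task: str) -> str:
    """Placeholder for a generation-model call."""
    return f"DRAFT: {task}"

def review(text: str) -> list[str]:
    """Placeholder for a critic-model call; returns a list of issues."""
    return [] if text.startswith("DRAFT:") else ["no draft produced"]

def conduct(task: str) -> str:
    """The 'conductor' loop: chain the steps, check the output, decide."""
    candidate = draft(task)
    issues = review(candidate)
    if issues:  # in practice a human (or a stricter model) triages these
        raise ValueError(f"not shipping: {issues}")
    return candidate

print(conduct("write the birthday invitations"))
# -> DRAFT: write the birthday invitations
```

The design choice worth noticing is that the control flow—what gets generated, what gets checked, what counts as “good enough”—lives outside the models entirely. That’s the part that’s still yours.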
New Frontiers
Just like when I accidentally create a new food combination, or when my subconscious puts together connections my conscious self struggled with for hours, we will start seeing mashups from unexpected collisions—disciplines will converge and diverge in ways we haven’t previously seen. For example:
quantum biology with urban sociology
ancient craft techniques with real-time VR
AI can create variations, but humans can spot analogies between two distant fields and spark breakthroughs—AI as an enabler.
What about ultra-low-power edge computing, art that responds to biometric feedback—art linked to a person—or novel security applications?
Closing
AI is a tool of our own making. It will allow us to reach our goal of self-destruction much faster. Exponentially so. If AI brings about our extinction, we only have ourselves to blame—we will of course try to outsource the responsibility, to “AI,” to the “other,” but as a collective we’ll never reach radical accountability. It’s always “someone else’s fault”. The downfall of humanity and the extinction of the human race will be due to our own greed and stupidity, not some perceived “other.”
So where does that leave us? Right here, in the messy middle, with tools that are getting better faster than our ability to figure out what to do with them. The salmon metaphor holds—you move upstream or you don’t. There’s no standing still in moving water. The practical reality is that we’ve gotta learn the tools, but not worship them. Understand what they can and can’t do, and be honest about the difference. Push into the work that requires judgement, context, ethics, taste—the things that are hard to quantify and harder to automate. Build things that matter for people who need them, not just because you can. And for the love of everything, stop outsourcing your thinking to a machine that doesn’t know what thinking is. The future isn’t AI versus humans. It’s humans who use AI thoughtfully versus humans who don’t use it at all versus humans who let it use them. Pick a god damn lane.
That’s it for now.
As always,
Good luck,
Stay safe and,
Be well.
See ya!


