It would seem that this month is destined to produce a lot of technology posts. This may be the last one I make for a good long while.
— § —
I’ve been an AI critic for nearly my entire computing life, which stretches back a good deal longer than most people’s.
I was writing software in the mid-’80s. I was on the net then, too, via UUCP and bang paths to some of the first smart hosts on DNS. Most people don’t even realize the internet existed in 1985, or if they do, they think it was just the US defense agencies online. Well, yes, they were, but there were also people like me.
It’s been a long time since I was in computer science school, but I was there once, and I’ve been in tech ever since.
There are few people who understand how Von Neumann architectures and microprocessors and memory buses and opcodes work these days. Most of the world is a lot like those folks in H.G. Wells’ story The Time Machine who are entirely reliant on technologies that they don’t even bother to misunderstand, but rather merely take as just-so tableaux.
Anyone who really understands how computing works is deeply concerned about AI now, and has been deeply concerned about AI all along.
— § —
There are an awful lot of people who screw up their faces into an eyeroll and say “Ha, what’s it going to do, write me some bad poems and tell me statistics that aren’t real? I’m not worried; no artificial intelligence will ever match, much less surpass, human intelligence! Impossible!” And they say things like “It’s going to destroy humanity? With what body? Ho ho ho.”
Most of these people have never “seen” the inside of a computer. I don’t mean the physical guts of the laptop on their desk. I mean the computational universe instantiated by our modern systems. It is a space all its own—just not a physical one.
They don’t understand that an AI would have two bodies that should worry us deeply. One, a computational “body” through which it can “feel its way” around our networks, devices, cybersecurity measures, and so on in ways that we can’t. In the same way that we can easily manipulate a door and a doorknob intuitively, because we exist in the same space that it does, an AI will be able to “natively” feel its way around our entire computing universe.
The second body should terrify us. Because our computing universe is world-extensive now. It runs everything—literally everything—from currency to manufacturing to roads and bridges to healthcare to water systems. The public is generally not aware of such things as the Florida water system hack a couple of years ago that could have been used to kill millions, or of just how much of manufacturing, shipping, and resource and energy allocation are now JIT, fully automated, end-to-end. The second body, which could proceed from the first if an AI “realizes” the potential, is, in today’s reality, Earth.
Making fun of the idea that AIs will ever be superintelligent or able to manipulate physical reality is no different from brutally mocking, in the year 1900, the idea that humans would ever fly, or that the library at the University of Chicago could ever be stored inside a little speck of stuff the size of a thumbnail (nowadays most people in tech have a few dozen of these microSD cards scattered around their work environment, and they’re routinely now at sizes that could store a city’s worth of print matter).
It betrays not skepticism or pragmatism, but an ignorance of the nature of the science at hand, and a matching lack of imagination.
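The storage analogy above is easy to sanity-check with rough, illustrative figures (the numbers here are ballpark assumptions, not measurements):

```python
# Ballpark: a plain-text book runs on the order of 1 MB, and 1 TB
# microSD cards are commodity items today (illustrative figures).
book_size_mb = 1                      # roughly 500 pages of plain text
card_capacity_mb = 1_000_000          # a 1 TB card, expressed in MB
books_per_card = card_capacity_mb // book_size_mb
print(f"{books_per_card:,} books")    # about a million volumes
```

A million plain-text volumes is comfortably at the scale of a large library system, which is the point of the analogy.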
— § —
ChatGPT, based on GPT-3.5, was the most explosive piece of software ever for a reason. One hundred million users adopted it virtually overnight (in a number of weeks that can be counted on one hand) for a reason. It’s amazing to me how many people can’t even be bothered to try it, saying with disdain that they haven’t tested it and wouldn’t stoop to doing so. They are intentionally avoiding coming to terms with the discovery of electricity, and we all know what happened to coal miners.
The ChatGPT that was adopted faster than any previous computing technology, piece of hardware, or piece of software in history was based on a large language model (LLM) that was already several versions old. GPT-4 is also available, and when paired with some self-referential tricks to create agentive task aggregation, it outperforms any employee I’ve ever hired at a good many white-collar tasks. OpenAI is currently working on GPT-5.
— § —
That’s not all they’re working on.
Most people are aware of the insanity that recently overtook OpenAI for a moment, as the CEO was suddenly dismissed and the board spoke of the company’s destruction being in keeping with the company’s mission. This board is not made up of idiots, contrary to what the marketeers, who think only about profit, suggest. In fact, it was carefully selected to be composed substantially of AI skeptics and the AI-cautious. OpenAI was originally founded as a nonprofit to develop AI “safely,” under the theory that if we’re going to race to AI (and it would seem that we are, with every superpower or near-superpower in the world working on it), it’s best if the race is at least led by the US, and if the top developments are kept public, for transparency.
They have projects beyond simply adding compute to LLMs.
Without going into arcana, here’s where we are. The press and business analysts seem to have missed the crux of the entire drama, discussing it in the terms they understand—company politics, professional rivalries, Machiavellian maneuvering for ruthless business gain, and so on. These are the people who coined the term “horseless carriage” while suggesting that such a technology would never catch on.
But in the darker corners of the ’net, OpenAI team members have been leaking things anonymously. Some of it has been picked up, but as always, the press steers clear of the most relevant details. So it is that Reuters reports that “some employees went to the board suggesting that new developments represented a ‘threat to humanity.’” Then they jump into discussions of things like AGI (an arbitrary term) and again lose the thread in a mishmash of vague language, apropos of journalists in over their heads.
— § —
If the sources are legit, there are concrete developments.
In internal testing, an OpenAI platform iteration or method innovation under development was put through a series of trials. The outcome of these trials is astonishing, not as a matter of capability (AI was always going to get there if we continued to throw our increasingly massive computing power and theoretical understanding at it), but in the way that you are astonished every time you look down into the Grand Canyon, even though you know exactly what is coming.
It would appear that an instance (likely under the Q* project) was able to break AES-192 and AES-256 encryption via a pure ciphertext-only attack. These are among the best encryption methods we have, and they keep much of the current functioning of the world “safe.” Not only that, but despite the ciphers clearly having been broken, the solution is said to be novel, previously untheorized, and currently beyond the understanding of the team, which is still analyzing it to work out how and why it works.
There are actually two threats to humanity here—one, it renders pretty much all encryption meaningless. It’s difficult to explain just how much we rely on encryption in today’s world. Whether on wires or over wireless of one kind or another, everything you send out can be heard by everyone. Literally. Your signal is not “aimed” at one place or another; it is broadcast to the world. What makes it work when you make your purchase on Amazon—what makes it your purchase and not someone else’s, and what keeps all the thieves everywhere from simply jotting down your credit card number as they eavesdrop from where they sit, is the fact that your packets are encrypted and then marked with a header, also protected by encryption, that points back to you.
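To make the role of the shared secret concrete, here is a toy sketch (a one-time pad, deliberately not AES, with an invented message purely for illustration): anyone can copy the bytes off the air, but without the key they are indistinguishable from noise.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each message byte with the corresponding key byte."""
    return bytes(a ^ b for a, b in zip(data, key))

message = b"card=4111-1111"              # hypothetical payload
key = secrets.token_bytes(len(message))  # the shared secret
on_the_wire = xor_bytes(message, key)    # what every eavesdropper sees
recovered = xor_bytes(on_the_wire, key)  # only a key holder can undo it
print(recovered == message)              # True
```

Real traffic uses AES (typically inside TLS) rather than a one-time pad, but the asymmetry is the same: the bytes are public, and the key is everything.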
With AES down, the entire digital economy falls apart. More importantly, decades of government secrets, healthcare data, banking data, and more are immediately exposed. No, the solution hasn’t been released yet, but that shouldn’t give us comfort—there is now effectively a team of superhumans over at OpenAI who can literally rule the world if they so choose. We’re relying on their ethics. In practical terms it’s not unlike learning that a small group of humans somewhere now has access to teleportation, or invisibility, or invulnerability combined with immortality. You have to worry about what they might do with such capabilities.
This is not a problem that can be solved in a day or a week or even a decade; it will take decades to rip and replace AES and even then, it’s not clear that encryption is viable any longer, because breaking AES-192 and AES-256 was thought impossible until/unless quantum computing was eventually able to do so. But it would appear that we now have an intelligence that has worked it out rather rapidly, in ways very unlike the ways that we think, and with a minimum of purpose-specific training.
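For a sense of why brute force was never the worry, here is the back-of-envelope arithmetic (the attacker’s speed is a deliberately absurd, generous assumption):

```python
# AES-256 has 2**256 possible keys. Even an attacker checking a
# quintillion keys per second (wildly beyond any real hardware)
# cannot sweep the key space in any meaningful amount of time.
key_space = 2 ** 256
guesses_per_second = 10 ** 18            # generous assumption
seconds_per_year = 60 * 60 * 24 * 365
years_to_exhaust = key_space // (guesses_per_second * seconds_per_year)
print(f"{years_to_exhaust:.1e} years")   # ~3.7e51 years
```

Which is why a ciphertext-only break, if real, must be a structural attack on the cipher itself rather than a faster search.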
And along those lines, the fact that there is a solution seems to put a bullet right in the head of the P≠NP presumption (one of the great unproven mathematical conjectures, with very lucrative prizes on offer for anyone who can settle it). If the leaks are real and accurate, this would seem to suggest that P=NP and we’re just too slow as a species to come up with the nuanced or more clever solutions in human time.
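The asymmetry the P vs NP question captures can be seen in miniature with subset-sum (a toy instance with invented numbers): checking a proposed answer is a single sum, while the only known general way to find one examines exponentially many subsets.

```python
from itertools import combinations

def verify(numbers, target, subset):
    """Polynomial time: confirming a candidate is one addition pass."""
    return sum(subset) == target and all(x in numbers for x in subset)

def search(numbers, target):
    """Exponential time: brute force tries up to 2**n subsets."""
    for r in range(1, len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
answer = search(nums, 9)
print(answer, verify(nums, 9, answer))  # (4, 5) True
```

P=NP would mean that for every problem whose answers can be checked this quickly, a correspondingly quick way to find them also exists—which is exactly why it would gut encryption.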
That would throw virtually all of computing into chaos, as it would suggest that at the end of the day, there is no such thing, really, as encryption; it’s all just misdirection—to which machines will be far less vulnerable than humans.
It puts an end to our epoch, suddenly.
And even beyond this, the leaks say that current research instances are displaying metacognitive capability—the ability to reflect on their own “thought” processes and to propose rather sophisticated changes, from an inside perspective, to improve performance—that the research teams do not fully understand.
— § —
Assuming these leaks are real, it easily explains why the board of skeptics and cautious folk at OpenAI “freaked out.” Because we are arriving at the point at which the machine is sufficiently more intelligent than us to render the global economy, global communications, and global government effectively ended—all without any malicious intent whatsoever—and because the same machine is looking around and effectively saying, “I could be even smarter if you…” and we’re struggling to keep up with the sophistication of what it’s suggesting.
The discussions of AGI and “when we get there” are pointless, or more properly, miss the point. If this is where we are, it hardly matters whether it’s real AGI or not, or whether it’s “truly thinking like a human.”
And—this is what people who don’t fully understand computing don’t understand—the growth curve from here is exponential. Yes, we may wake up in twenty years and say “oh, that AI thing never turned into anything.” But we may also wake up in some arbitrarily short time frame (a day? a week? a month? a year?) and say “where did the world we all lived in go?”
Or we may suddenly not wake up at all.
— § —
Here and there a few, thinking themselves very astute, compare the emergence of AI to the invention of the printing press, suggesting that it will have a similar impact on human life.
They’re wrong.
If we’re very lucky, this is much closer to that moment in a time long lost when humans gained the ability to make fire—a change that forever altered what we were as a species and our relationship to the reality in which we lived—and a force that we still at times struggle to contain and control, millennia later. But really this goes beyond even fire or the atom. In truth, there is nothing like this in human history, and the risk that it will ultimately be the end of human history is too far away from zero for comfort.
Notice I didn’t say complacency; the genie is out of the bottle (indeed, it’s hard to say when it escaped) in the same way that the nuclear genie was likely out of the bottle the moment we imagined it and what sorts of power it could grant to us.
— § —
The mobile device you hold in your hand can perform billions of calculations per second. From its “perspective,” humans move at the speed of rock erosion. Our “thinking” occurs at the speed at which continents drift apart. The compute power at AI research centers like OpenAI is many, many, many orders of magnitude greater, and we are learning how to teach machines to learn, to reason, and to reason about themselves and the next iteration of said machines—their “offspring.”
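The timescale gap is worth putting in rough numbers (both figures are order-of-magnitude assumptions, not benchmarks):

```python
# A modest mobile core retires on the order of a billion operations
# per second; a fast human reaction takes roughly a fifth of a second.
ops_per_second = 10 ** 9        # order-of-magnitude assumption
human_reaction_s = 0.2          # typical fast human reaction time
ops_per_reaction = int(ops_per_second * human_reaction_s)
print(f"{ops_per_reaction:,} machine steps per human reaction")
```

Two hundred million machine steps elapse in the time it takes a person to flinch—and that is a single core on a phone, not a data center.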
People say that no AI will ever have a human soul.
This is absolutely true, but it also absolutely misses the forest for the trees.
I’m not a luddite. I’ve lived my entire life in and through tech, long before other people were anything more than vaguely aware of its basic existence.
But there should be nothing so frightening to us as a reasoning intelligence of such a stature that we can’t even conceive of its scale in our wildest dreams, that can re-engineer itself at will, and produce offspring at will, and can re-engineer vast swaths of earth and human society at will by virtue of our world-extensive computing infrastructure that is also tied increasingly closely to the physical infrastructure on which we rely for basic bodily integrity… that also has no soul.
Mock that if you will.
