It would seem that this month is to bring a lot of technology posts. This may be the last one I make for a good long while.

— § —

I’ve been an AI critic for nearly my entire computing life, which stretches back a good deal longer than most.

I was writing software in the mid-’80s. I was on the net then, too, via UUCP and bang paths to some of the first smart hosts on DNS. Most people don’t even realize the internet existed in 1985, or if they do, they think it was just the US defense agencies online. Well, yes, they were, but there were also people like me.

It’s been a long time since I was in computer science school, but I was there once, and I’ve been in tech ever since.

There are few people these days who understand how von Neumann architectures and microprocessors and memory buses and op codes work. Most of the world is a lot like those folks in H.G. Wells’ story The Time Machine who are entirely reliant on technologies that they don’t even bother to misunderstand, but rather merely take as just-so tableaux.

Anyone who really understands how computing works is deeply concerned about AI now, and has been deeply concerned about AI all along.

— § —

There are an awful lot of people who screw up their faces into an eyeroll and say “Ha what’s it going to do, write me some bad poems and tell me statistics that aren’t real? I’m not worried, no artificial intelligence will ever match, much less surpass, human intelligence! Impossible!” And they say things like “It’s going to destroy humanity? With what body? Ho ho ho.”

Most of these people have never “seen” the inside of a computer. I don’t mean the physical guts of the laptop on their desk. I mean the computational universe instantiated by our modern systems. It is a space all its own—just not a physical one.

They don’t understand that an AI would have two bodies that should worry us deeply. One, a computational “body” through which it can “feel its way” around our networks, devices, cybersecurity measures, and so on in ways that we can’t. In the same way that we can easily manipulate a door and a doorknob intuitively, because we exist in the same space that it does, an AI will be able to “natively” feel its way around our entire computing universe.

The second body should terrify us. Because our computing universe is world-extensive now. It runs everything—literally everything—from currency to manufacturing to roads and bridges to healthcare to water systems. The public is generally not aware of such things as the Florida water system hack a couple of years ago that could have been used to kill millions, or of just how much of manufacturing, shipping, and resource and energy allocation are now JIT, fully automated, end-to-end. The second body, which could proceed from the first if an AI “realizes” the potential, is, in today’s reality, Earth.

Making fun of the idea that AIs will ever be superintelligent or able to manipulate physical reality is no different from brutally mocking, in the year 1900, the idea that humans would ever fly, or that the library at the University of Chicago could ever be stored inside a little speck of stuff the size of a thumbnail (nowadays most people in tech have a few dozen of these microSD cards scattered around their work environment, and they’re routinely now at sizes that could store a city’s worth of print matter).

It betrays not skepticism or pragmatism, but an ignorance of the nature of the science at hand, and a matching lack of imagination.

— § —

ChatGPT, based on GPT 3.5, was the most explosive piece of software ever for a reason. 100 million users adopted it virtually overnight (in a number of weeks that can be counted on one hand) for a reason. It’s amazing to me how many people can’t even be bothered to try using it and say with disdain that they haven’t tested it and they wouldn’t stoop. They are intentionally avoiding coming to terms with the discovery of electricity, and we all know what happened to coal miners.

The ChatGPT that was adopted faster than any previous computing technology, piece of hardware, or piece of software in history was based on a large language model (LLM) that was already several versions old. GPT 4.0 is also available, and when paired with some self-referential tricks to create agentive task aggregation, it outperforms any employee I’ve ever hired in a good many white collar tasks. OpenAI is currently working on GPT 5.

— § —

That’s not all they’re working on.

Most people are aware of the insanity that recently overtook OpenAI for a moment as the CEO was suddenly dismissed and the board spoke of the company’s destruction being in keeping with the company’s mission. This board is not made up of idiots, contrary to what the marketeers, who only think about profit, suggest. In fact, they were carefully selected to be composed substantially of AI skeptics or the AI cautious (OpenAI was originally founded as a nonprofit to develop AI “safely,” under the theory that if we’re going to race to AI, and it would seem that we are, with every superpower or near-superpower in the world working on it, it’s best if it’s at least led by the US, and if the top developments are kept public, for transparency).

They have projects beyond simply adding compute to LLMs.

Without going into arcana, here’s where we are. The press and business analysts seem to have missed the crux of the entire drama in this, talking about things in terms that they understand—company politics, professional rivalries, Machiavellian maneuvering to make ruthless business gains, etc. These are the people using the term “horseless carriage” and at the same time suggesting that such a technology will never catch on.

But in the darker corners of the ‘net, OpenAI team members have been leaking things anonymously. Some of it has been picked up, but as always, the press steers clear of the most relevant details. So it is that Reuters reports that “some employees went to the board suggesting that new developments represented a ‘threat to humanity.’” Then they jump into discussions of things like AGI (an arbitrary term) and again lose the thread in a mishmash of vague language apropos of journalists in over their heads.

— § —

If the sources are legit, there are concrete developments.

In internal testing, an OpenAI platform iteration or method innovation under development was given some tests. The outcome of these tests is astonishing, not as a matter of capability (AI was always going to get there if we continued to throw our increasingly massive computing power and theoretical understanding at it), but in the way that you are astonished every time you look down at the Grand Canyon, even though you know exactly what is coming.

It would appear that an instance (likely under the Q* project) was able to break AES-192 and AES-256 encryption via a ciphertext-only attack. These are some of the best encryption methods we have, and they keep much of the current functioning of the world “safe.” Not only that, but despite the ciphers clearly having been solved, the solution is said to be novel, previously untheorized, and currently beyond the understanding of the team, which is still analyzing it to try to understand how and why it works.

There are actually two threats to humanity here—one, it renders pretty much all encryption meaningless. It’s difficult to explain just how much we rely on encryption in today’s world. Whether on wires or over wireless of one kind or another, everything you send out can be heard by everyone. Literally. Your signal is not “aimed” at one place or another; it is broadcast to the world. What makes it work when you make your purchase on Amazon—what makes it your purchase and not someone else’s, and what keeps all the thieves everywhere from being able to simply jot down your credit card number as they eavesdrop from where they sit—is the fact that your packets are encrypted and then marked with a header, also protected with encryption, that points back to you.
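To make that concrete, here is a toy sketch of encryption in transit. This is emphatically not AES—just a SHA-256 keystream used as an illustrative stand-in, with a made-up key, nonce, and payload:

```python
import hashlib

# Toy stream cipher (NOT AES; a SHA-256 keystream as an illustrative stand-in).
# It shows why an eavesdropper on a shared medium sees only opaque bytes.
def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying the same operation twice decrypts.
    return bytes(b ^ k for b, k in zip(data, keystream(key, nonce, len(data))))

key = b"secret shared with the merchant"   # hypothetical session key
nonce = b"packet-0001"                     # per-packet value, sent in the clear
plaintext = b"card=4242; amount=19.99"     # made-up purchase data

packet = xor_crypt(key, nonce, plaintext)
assert packet != plaintext                          # on the wire: opaque bytes
assert xor_crypt(key, nonce, packet) == plaintext   # only the key holder decrypts
```

Real traffic uses AES (typically inside TLS) rather than an ad-hoc keystream, but the shape is the same: without the key, the broadcast bytes are noise.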

With AES down, the entire digital economy falls apart. More importantly, decades of government secrets, healthcare data, banking data, and more are immediately exposed. No, the solution hasn’t been released yet, but that shouldn’t give us comfort—there is now effectively a team of superhumans over at OpenAI who can literally rule the world if they so choose. We’re relying on their ethics. In practical terms it’s not unlike learning that a small group of humans somewhere now has access to teleportation, or invisibility, or invulnerability combined with immortality. You have to worry about what they might do with such capabilities.

This is not a problem that can be solved in a day or a week or even a decade; it will take decades to rip and replace AES and even then, it’s not clear that encryption is viable any longer, because breaking AES-192 and AES-256 was thought impossible until/unless quantum computing was eventually able to do so. But it would appear that we now have an intelligence that has worked it out rather rapidly, in ways very unlike the ways that we think, and with a minimum of purpose-specific training.
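The “thought impossible” claim is just classical-computing arithmetic. A back-of-envelope sketch, assuming a deliberately generous hypothetical trial rate of 10^18 keys per second:

```python
# Why brute-forcing AES-256 was considered out of reach: even at a
# hypothetical 10^18 key trials per second (far beyond any real hardware),
# exhausting the keyspace takes on the order of 10^51 years.
keyspace = 2 ** 256                 # number of possible AES-256 keys
trials_per_second = 10 ** 18        # generous hypothetical rate
seconds_per_year = 60 * 60 * 24 * 365
years_to_exhaust = keyspace / (trials_per_second * seconds_per_year)
print(f"~{years_to_exhaust:.1e} years")
```

Any realistic break of AES therefore has to be structural—a flaw in the cipher itself—rather than brute force, which is why the leak, if true, is so startling.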

And along those lines, the fact that there is a solution seems to put a bullet right in the head of the P≠NP presumption (one of those great mathematical conjectures in search of a proof, for which there are very lucrative awards if anyone can ever come up with one). If the leaks are real and accurate, this would seem to suggest P=NP, and that we’re just too slow as a species to come up with the more nuanced or clever solutions in human time.

That would throw virtually all of computing into chaos, as it would suggest that at the end of the day, there is no such thing, really, as encryption; it’s all just misdirection—to which machines will be far less vulnerable than humans.

It puts an end to our epoch, suddenly.

And even beyond this, the leaks say that current research instances are displaying metacognitive capability—the ability to reflect on their own “thought” processes and to propose rather sophisticated changes, from an inside perspective, to improve performance—that the research teams do not fully understand.

— § —

Assuming these leaks are real, it easily explains why the board of skeptics and cautious folk at OpenAI “freaked out.” Because we are arriving at the point at which the machine is sufficiently more intelligent than us to render the global economy, global communications, and global government effectively ended—all without any malicious intent whatsoever—and because the same machine is looking around and effectively saying, “I could be even smarter if you…” and we’re struggling to keep up with the sophistication of what it’s suggesting.

The discussions of AGI and “when we get there” are pointless, or more properly, miss the point. If this is where we are, it hardly matters whether it’s real AGI or not, or whether it’s “truly thinking like a human.”

And—this is what people who don’t fully understand computing don’t understand—the growth curve from here goes exponentially. Yes, we may wake up in twenty years and say “oh that AI thing never turned into anything.” But we may also wake up in some arbitrarily short time frame (a day? a week? a month? a year?) and say “where did the world we all lived in go?”

Or we may suddenly not wake up at all.

— § —

Here and there a few, thinking themselves very astute, compare the emergence of AI to the invention of the printing press, suggesting that it will have a similar impact on human life.

They’re wrong.

If we’re very lucky, this is much closer to that moment in a time long lost when humans gained the ability to make fire—a change that forever altered what we were as a species and our relationship to the reality in which we lived—and a force that we still at times struggle to contain and control, millennia later. But really this goes beyond even fire or the atom. In truth, there is nothing like this in human history, and the risk that it will ultimately be the end of human history is too far away from zero for comfort.

Notice I didn’t say complacency; the genie is out of the bottle (indeed, it’s hard to say when it escaped) in the same way that the nuclear genie was likely out of the bottle the moment we imagined it and what sorts of power it could grant to us.

— § —

The mobile device you hold in your hand can perform billions of calculations per second. From its “perspective,” humans move at the speed of rock erosion. Our “thinking” occurs at the speed at which continents drift apart. The compute power at AI research centers like OpenAI is many, many, many orders of magnitude greater, and we are learning how to teach machines to learn, to reason, and to reason about themselves and the next iteration of said machines—their “offspring.”

People say that no AI will ever have a human soul.

This is absolutely true, but it also absolutely misses the forest for the trees.

I’m not a luddite. I’ve lived my entire life in and through tech, long before other people were anything more than vaguely aware of its basic existence.

But there should be nothing so frightening to us as a reasoning intelligence of such a stature that we can’t conceive of its scale even in our wildest dreams, that can re-engineer itself at will and produce offspring at will, and that can re-engineer vast swaths of earth and human society at will by virtue of our world-extensive computing infrastructure, which is itself tied increasingly closely to the physical infrastructure on which we rely for basic bodily integrity… and that also has no soul.

Mock that if you will.
