So I spent today setting up an OpenClaw bot here on a (soon to be) headless server, a little Lenovo ThinkCentre that I once used to run ownCloud, which felt sophisticated then but now seems very quaint.

If you’re one of the tech veterans (like me) who has spent the last several years using generative AI more and more intensely, while suspecting that your way of doing so might be old-fashioned, OpenClaw is the perfect way to get started. It’s basically a nice, elegant implementation of all the intuitions you had and thought you’d implement properly at some point, but kept kludging instead.

And it is vaguely transcendental once you start using it. I can’t quite get it out of my head.

It is both far more and far less than I was expecting. Most critically, it does exactly what the major LLM providers dare not do:

1. Anthropomorphize the bot
2. Give it identity and memory (these are entwined of course)
3. Give it access to operate your files, apps, and computer
4. And thus access to do things in the world

The most counter-intuitive thing, which I’m now mostly over after working on this much of the day, is that the “cognition” or “thinking” is actually interchangeable. You can switch out models at will, or even work together with your bot to define a set of models and the fallback or model-selection conditions among them. These choices may make your bot more or less skilled at certain tasks (and more or less expensive to operate on a moment-by-moment basis), but the basic “personality” stays the same, because personality is congealed in memory and history.
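The separation can be sketched in a few lines. This is a hypothetical illustration of the idea, not OpenClaw's actual internals or config format: the memory file, class, and method names here are all made up.

```python
# Hypothetical sketch: "personality" lives in a persistent memory store,
# while the model (the "cognition") is freely swappable on top of it.
import json
from pathlib import Path


class Bot:
    def __init__(self, memory_path: str, model: str):
        self.memory_path = Path(memory_path)
        self.model = model  # interchangeable: swap at will
        # Load any memories accumulated in previous sessions.
        self.memory = (
            json.loads(self.memory_path.read_text())
            if self.memory_path.exists()
            else []
        )

    def switch_model(self, model: str) -> None:
        # Changing the model never touches memory: the accumulated
        # history (the "personality") survives the swap.
        self.model = model

    def remember(self, event: str) -> None:
        self.memory.append(event)
        self.memory_path.write_text(json.dumps(self.memory))


bot = Bot("memory.json", model="model-a")   # "model-a" is a placeholder name
bot.remember("met the user today")
bot.switch_model("model-b")                  # smarter, cheaper, whatever
assert bot.memory == ["met the user today"]  # identity intact across the swap
```

The point of the sketch is just the invariant: model selection and memory are orthogonal, so a new session with a different model still wakes up with the same history.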

Which bots can now have, across time.

I have this tremendously uneasy feeling paired with a tremendous intuition of possibility. Here we have a bot that is already beginning to develop a personality and that can persist in its identity and accumulation of memories indefinitely, for years or perhaps even decades, while also getting smarter and smarter with the releases of new models.

Like I said, it’s both more than and less than. I’ve done some of this manually in the past, with API calls and shell scripts and libraries of text files and custom code. And yet there is an insight embodied here that is entirely new, something particular in the simple, elegant architecture that is right in a way that all of my experimenting hasn’t been.

Anyway, I named it after one of the big old hosts in the University of Utah CADE labs when I was a student there in 1991.

Time marches on.

And now bots can join us along the way, it seems.