So at the same time that I’ve been slid laterally at work into something approximating an AI leadership role, I am also moving to boot AI up in my life. It’s about time. I’ve been in tech so very, very long—yet this is really the first time I’ve been slow to, or maybe even missed, the “early adopter” period. Although maybe this time my timing is actually better. I mean:
- 1983 — Started learning software engineering
- 1986 — Went “online” with asynchronous UUCP via a local gateway
- 1993 — Adopted Linux as my primary OS
- 1997 — Moved from film photography to digital photography
- 1999 — My whole life is in a tablet computer (Newton MessagePad 2000)
- etc.
The problem with all my “early adopting” over the years has generally been that I start it, master it, write the book about it, then move on, all years before anyone’s willing to pay me for it. I have generally moved on to the “next technology” while people are still making pronouncements about how the previous one will never catch on. Then I get frustrated a decade later, watching people build careers out of what I did back when it was considered arcana and occult geekdom.
So maybe my timing is better this time, because the world is actually accelerating into AI right now. The old me would have played with LLMs in the early stages, but been transitioning away from them onto the next set of projects around the time the chatbots (ChatGPT et al.) were launching and stunning the world with what Large Language Models could do.
— § —
So, without much fanfare, I declare the next iteration of the “monster” up and running. Here and there in all of this I’ve made references to the “monster,” which is the larger-than-average PC that I’ve always had in my life, since way back in the mid-’80s, often built halfway out of spare parts. I still have some of the parts from old incarnations in the basement. For example, the instance that had dual Pentium 200 MMX CPUs and nine (9) 5.25″ full-height 1GB SCSI drives in a RAID-5 configuration. Back then, it was a monster. (This has also generally always been the reason to run Linux… basically there are people writing drivers and systems for it that just aren’t there for other platforms.)
Where are we now?
- Core i9-9900K (you’re like ‘pffffftt’, but wait a bit)
- 128GB RAM (you’re like okay, biggish, but ‘monster’ ummmm not sure)
- 40TB online storage, about 50% SSD, attached via SAS (I have a lot of DSLR photos, like >350k)
- 76GB VRAM across 3x AMD Navi2x GPUs (← told you there was a monster in here somewhere) via 2x Radeon Pro V620 and 1x Radeon RX6700XT
- LTO4 internal (I mean, nothing says ‘big weird computer’ like streaming tape)
What are we doing with it? Running 30-ish-billion-parameter LLMs with large amounts of q8_0-quantized context. It’s a lot of compute, but it’s doing good work, giving me a fast local LLM that’s pretty damned solid.
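To see why 76GB of VRAM comfortably fits a 30B model plus a large q8_0 context, here is a back-of-envelope sketch. It assumes the llama.cpp-style q8_0 format (which the “q8_0” naming suggests): blocks of 32 int8 weights plus one fp16 scale, i.e. 34 bytes per 32 values. The model dimensions used for the KV-cache estimate (60 layers, 8 KV heads, head dimension 128, 32k context) are hypothetical stand-ins for a typical 30B-class model, not specifics from this build.

```python
# Back-of-envelope VRAM arithmetic for a 30B-parameter model at q8_0.
# q8_0 packs 32 weights into a 34-byte block (32 int8 values + one fp16
# scale), so it costs 34/32 = 1.0625 bytes per weight.

GIB = 1024**3

def q8_0_weight_bytes(n_params: int) -> int:
    """Bytes needed to store n_params weights in q8_0 blocks."""
    return n_params * 34 // 32

def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   n_ctx: int, bytes_per_elem: float = 1.0625) -> int:
    """K and V caches across all layers for n_ctx tokens, also at q8_0."""
    return int(2 * n_layers * n_kv_heads * head_dim * n_ctx * bytes_per_elem)

weights = q8_0_weight_bytes(30_000_000_000)
# Hypothetical dimensions for a 30B-class model: 60 layers, 8 KV heads,
# head dimension 128, and a 32k-token context window.
cache = kv_cache_bytes(n_layers=60, n_kv_heads=8, head_dim=128, n_ctx=32_768)

print(f"weights:  {weights / GIB:.1f} GiB")   # ~29.7 GiB
print(f"kv cache: {cache / GIB:.1f} GiB")     # ~4.0 GiB
```

Roughly 30 GiB of weights plus a few GiB of cache and runtime overhead: well within 76GB, with headroom to split layers across the three GPUs.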
Now if I can figure out how to make it pay…
