So I made a post a couple of days ago about “the monster,” but following that post, a few things had become clear:

  • ROCm was unstable, maybe even very unstable

  • Vulkan was much slower, about half as fast, but far more stable

  • I didn’t really know what I was doing (okay, I knew this going in)

I was happy to know that there was a path to booting into macOS without having to tear the system apart again, and that it only required editing and recompiling Radeon driver kexts 😛 but meanwhile, inference was crashing more often than I’d like. Sometimes very often. I won’t tell you everything I threw at it, but among the many tactics I tried to identify and/or solve the error:

  • Multiple models, at different quants, and from different providers

  • Multiple versions of ROCm, including 6.2, 6.3, 6.3.4, 6.4, 6.4.4, 7.0, 7.1.1, and 7.2

  • Multiple versions and forks of llama.cpp

  • Aphrodite and vLLM, and even, for a minute, MLC LLM

  • Rearranging card order

  • An entire library of environment variables, llama command line options, and kernel command line arguments

  • Excluding the RX 6700XT card as the odd-man-out and just running the v620s in tandem

  • Many different context sizes and quantizations

Sometimes I would think I had it fixed because it had been 5 or maybe even 10 turns without a crash on ROCm!

…and then it would crash on turn 6 or 11.

Some things worked better than others. For example, llama.cpp pre-b8353 with a Q8 model from Bartowski was much more stable than the Q8 from Unsloth or llama-current. And running it on ROCm 6.2 or 6.3 was more stable than running it on ROCm 7.1.1 or 7.2. Also, using the software SMU via kernel parameter seemed to help some things. But we’d always end up in the same place: ROCm crash, back to Vulkan.

I must have decided to “just use Vulkan” 100 times, but the thing is, when ROCm is giving you 45-50 tokens/second, it’s hard to settle for 21-25 even if it’s stable. And stable it was…throughout it all, Vulkan never crashed.

— § —

The two key problems, as far as I could tell, were:

  • PCIe bus errors which I had sort of grumblingly admitted were just the result of trying to run data center hardware on consumer mainboards

  • Page faults that were by far the most common thing to take down llama when using an ROCm build of it

The solutions in my case were different for each of the two items above.

1. PCIe Bus Errors

It turns out that the, um, less expensive (though very highly reviewed) Montech Century 1200W power supply I’d acquired from Amazon left something to be desired. I’d been so wrapped up in the exotic lack of understanding that I felt around LLMs, and in the “it’s not going to be quite right, it’s not consumer PC hardware” mentality, that I hadn’t been monitoring voltage. When I did, I found that this PSU, which claimed to deliver 100A on the 12V rail, was running at 11.5V-11.6V at idle, when the cards were, according to rocm-smi, only drawing about 5 watts each.

It should not surprise anyone that once we pushed into load, the voltage sagged further.
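If you want to keep an eye on this yourself, something like the following works on most Linux boxes (assuming lm-sensors is installed and your board’s sensor chip is supported; the rocm-smi flags are as in recent ROCm releases):

watch -n 1 sensors       # motherboard sensor readout; watch the +12V rail under load
rocm-smi --showpower     # per-GPU average package power draw
rocm-smi --showvoltage   # per-GPU core voltage (not the 12V rail, but useful context)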

I’ve replaced that unit with a Corsair unit and now we’re at around 12.05V at idle and around 11.9V under full load. I can live with that. And the PCIe bus errors have gone away.
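(If you’re wondering whether your own rig is throwing these, PCIe bus errors show up in the kernel log as AER messages, assuming Advanced Error Reporting is enabled on your platform. A quick check:

sudo dmesg | grep -i aer   # look for lines like "AER: Corrected error received"

If that output is scrolling past under load, your bus, or your power, has a problem.)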

The lesson here is that “gamer power supplies” rated at 1200W are mostly marketing. They don’t really expect you to draw the 1200W, they expect you to run one really fat graphics card and want the “1200” number to be showing through the clear sides of your gamer case so that people can be impressed. When you actually try to pull 80-90 amps, they just can’t do it.

I keep having to re-learn this lesson build after build… don’t try to get off cheaply on the power supply, especially if you’re going to be running 3x RDNA2 cards, including two server cards that each want 25 amps on 12V. For those of you who are new to this: also be sure not to put both PCIe power connectors of a fat card like this on the same cable. Run two cables from the PSU, or you’ll fry the PSU-side connector on the single cable.

2. (THE BIGGIE) Page faults

These were the bane of my existence, and I’ve spent multiple nights this week up until the wee hours trying to find *anything* that would reduce the “crashiness” of ROCm. So . many . environment . variables. And llama arguments. And kernel arguments. And new builds of ROCm, of llama.cpp, and of half the libraries that they rely on.

The solution: credit where it’s due, I stumbled across this guy’s posts: https://medium.com/@agentz/how-to-fix-rocm-pytorch-memory-faults-on-amd-gpus-segmentation-fault-page-not-present-544b9f62f627

I almost didn’t try this suggestion that he makes:

ttm.pages_limit=25165824

It looks sketch and random, not really related to amdgpu or amdgpu-dkms or to anything else I’d been working on. But I decided to look up what it did, and once I did, it made a sort of sense, so I tried it. About three hours ago now. After battling this all week.

And voilà… no more crashes. At least not yet. And we’re running inference on the fast Lemonade version of llama.cpp that is based on its own fork of ROCm and is pretty exotic. Previously this would page fault on the first turn, maybe the second. But we’ve been up for several hours now and all appears well.

The above raises the ceiling on the number of pages that the kernel’s TTM memory manager is allowed to allocate for the GPUs. I don’t know exactly what the default is (reportedly half of system RAM), but at this point I believe I know that it’s too small for 76GB VRAM + 128GB DRAM, which is what I’ve been trying to make work.
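To put numbers on that: these are standard 4KiB pages, so the two values I’ve used work out to

25165824 pages × 4096 bytes = 96GiB
30000000 pages × 4096 bytes ≈ 114GiB

which is the right neighborhood for a box with 128GB of DRAM feeding 76GB of VRAM.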

Note that there is also:

ttm.page_pool_size

This one sizes the page pool that TTM keeps on hand for GTT allocations, in case you expect memory pressure. I’m not using it right now, but if I were planning to run a really big model, I’d probably set both of these values. Right now I’ve just got ttm.pages_limit set to 30000000, which is just a nudge up from where he had it.
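You can confirm after a reboot that the values actually took, assuming your kernel exposes the TTM module parameters in sysfs:

cat /sys/module/ttm/parameters/pages_limit      # should echo back the value you set
cat /sys/module/ttm/parameters/page_pool_size   # whatever the kernel is currently using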

— § —

The amazing thing is that in a week of talking to every frontier hyperscaler LLM (Claude, ChatGPT, Gemini, etc.) and extensive Googling, I didn’t come across anyone discussing ttm.* parameters in relation to ROCm. Until tonight.

I think maybe this doesn’t come up all that often since it’s basically going to impact people who:

  • Are using multiple AMD GPUs on a consumer system

  • In a configuration (e.g. v620s) where you can end up with quite a bit of VRAM (more than triple the typical 8-24GB)

But many thanks to AgentZ on Medium, and hopefully this helps someone else who’s banging their head against a wall.

And, just in case any of them matter (I am *not* messing about with this now that we have ROCm whirled up and stable), here are my kernel command line args:

pci=realloc=off amdgpu.gpu_recovery=1 amdgpu.mcbp=0 amd_iommu=off intel_iommu=off pcie_aspm=off pcie_port_pm=off amdgpu.swSMU=1 amdgpu.cswr_enable=0 ttm.pages_limit=25165824

With this I am running 2x Radeon Pro v620s and 1x RX 6700XT (total 76GB VRAM, 3x Radeon RDNA2 GPUs) and, finally, they appear to be… stable.
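(For anyone newer to this who wants to make these args stick: on a GRUB-based distro they go on the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub, roughly as sketched below; the regeneration command depends on your distro.)

# /etc/default/grub (excerpt; paste the full arg list from above between the quotes)
GRUB_CMDLINE_LINUX_DEFAULT="<full arg list from above>"
sudo update-grub   # Debian/Ubuntu; Fedora-style distros: sudo grub2-mkconfig -o /boot/grub2/grub.cfg

Then reboot and verify with the sysfs check earlier in this post.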

— § —

Important addendum: https://leapdragon.net/2026/03/23/more-was-needed/
