My build with a Core i9-13900K and dual 5090s

I power-limited the GPUs to 400W via nvidia-smi and the 13900K to 150W. All temperatures stayed under 70°C while running giant-context prompts on QwQ 32B, which was my main criterion. Peak power draw just topped 1 kW during prompt processing, when both GPUs were at 100% utilization.
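For anyone wanting to replicate the GPU power caps, stock `nvidia-smi` can do it (the 150W CPU cap is set separately, e.g. via BIOS PL1/PL2 limits). A minimal sketch, assuming two GPUs at indices 0 and 1; the 400W value matches this build:

```shell
# Enable persistence mode so the limit stays applied between CUDA contexts
sudo nvidia-smi -pm 1

# Cap each GPU at 400 W (-i selects the GPU index, -pl sets the power limit in watts)
sudo nvidia-smi -i 0 -pl 400
sudo nvidia-smi -i 1 -pl 400

# Verify the enforced limits
nvidia-smi --query-gpu=index,power.limit --format=csv
```

Note the limit resets on reboot unless you re-apply it (e.g. from a systemd unit or startup script).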

Yes, at first glance this PSU looks like a bad generic brand, but it actually performed very well in HWBusters' tests. It's also the most powerful PSU available at 150mm or less, which let me fit the front fan I considered crucial. If anyone wants to attempt this kind of build in this case, the Cooler Master V Platinum 1600 V2 is the most powerful PSU at 160mm or less that will fit.

Be warned, though: if you go with that one, the bottom row of power connectors will be blocked (I've attached a screenshot to show it) because of the thickness of the front fan. With a 150mm or 140mm ATX PSU, there's no blocking issue. I probably would also have gone with Phanteks T30s front and rear if I weren't so obsessed with the black-and-white aesthetic.

Sorry, I didn't do much performance or thermal testing before tearing it all down to install dual 3090 components. I'm building a PC for a colleague for whom portability matters more than it does for my own setup. My parts are now in an open-frame rig (I made a post about it a few weeks ago).

I had ordered a set of custom black-and-white power cables, but they didn't arrive in time before the component swap.

PCPartPicker part list

Type | Item | Price
:----|:----|:----
**CPU** | Intel Core i9-13900K 3 GHz 24-Core Processor | $300.00
**CPU Cooler** | Thermalright Peerless Assassin 140 77.8 CFM CPU Cooler | $43.29 @ Amazon
**Motherboard** | Asus ROG MAXIMUS Z790 HERO ATX LGA1700 Motherboard | $522.99
**Memory** | TEAMGROUP T-Create Expert 32 GB (2 x 16 GB) DDR5-7200 CL34 Memory | $108.99 @ Amazon
**Storage** | Crucial T705 1 TB M.2-2280 PCIe 5.0 X4 NVME Solid State Drive | $142.99 @ Amazon
**Video Card** | NVIDIA Founders Edition GeForce RTX 5090 32 GB Video Card | $3200.00
**Video Card** | NVIDIA Founders Edition GeForce RTX 5090 32 GB Video Card | $3200.00
**Power Supply** | Super Flower LEADEX VII XG 1300 W 80+ Gold Certified Fully Modular ATX Power Supply | $219.99
**Case Fan** | Thermalright TL-B14 82.5 CFM 140 mm Fan | $11.06 @ Amazon
**Case Fan** | Thermalright TL-B14 82.5 CFM 140 mm Fan | $11.06 @ Amazon
**Case Fan** | Thermalright TL-K12 69 CFM 120 mm Fan | $11.90 @ Amazon
**Case Fan** | Scythe Grand Tornado 97.82 CFM 120 mm Fan | $19.98 @ Amazon
**Case Fan** | Scythe Grand Tornado 97.82 CFM 120 mm Fan | $19.98 @ Amazon
**Case Fan** | Scythe Grand Tornado 97.82 CFM 120 mm Fan | $19.98 @ Amazon
**Case Fan** | Thermalright TL-K12RW 69 CFM 120 mm Fan | $11.90 @ Amazon
**Case Fan** | Thermalright TL-H12015 56.36 CFM 120 mm Fan | $10.59 @ Amazon
**Case Fan** | Thermalright TL-H12015 56.36 CFM 120 mm Fan | $10.59 @ Amazon
**Case Fan** | Thermalright TL-H12015 56.36 CFM 120 mm Fan | $10.59 @ Amazon
**Custom** | Mechanic Master c34plus | $200.00
| *Prices include shipping, taxes, rebates, and discounts* | |
| | **Total** | **$8075.88** |
| Generated by PCPartPicker 2025-06-02 19:47 EDT-0400 | |

67 replies

  1. Alix Jacquet · 1 month ago

    It's interesting to see how you manage power to avoid overheating. The 13900K at 150W seems well controlled. Temperatures must be very good with this configuration.

  2. Jörg Hartmann · 1 month ago

    Make sure everyone is already evacuated before gaming

    1. Iris Holland · 3 weeks ago

      LOL. Won’t be any gaming on this rig. Strictly AI doing my job for me.

  3. Joseph Bruns · 1 month ago

    Not sure why you’re saying the PSU is generic crap at first glance. Super Flower is well known to be extremely good, if not one of the best.

    1. Iris Holland · 6 days ago

      My bad, I am starting to realize that. I just never heard of the brand before

      1. Carlotta Neumann · 6 days ago

        All good! I actually wanted to get one for my build but it was more on the expensive side. Enjoy your build 😀

  4. Makiko Ishiguro · 4 weeks ago

    Semi-unrelated, but what's your job that you need such a setup?

    1. Iris Holland · 4 weeks ago

      Government work not too far from content that would be covered by HIPAA, so it's gotta be on-prem to be compliant.

  5. Olivia Foster · 3 weeks ago

    Silverstone HELA 1300R Platinum is 150mm as well, arguably a bit more ‘premium’ but also more expensive.

      1. Evelyn Soto · 2 weeks ago

        If you’re only ever using one model, sure. But, if you want to switch from model to model, you’ll probably be better served with more system RAM for caching. Sure, your NVMe drive can do ~3GB/second, but do you want to have to wait 20 seconds for a response to even start, if you’re using a ~60 gig model?
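The back-of-the-envelope math in this comment is simple streaming time; a quick sketch (the ~3 GB/s and ~60 GB figures are the commenter's round numbers, not benchmarks, and the RAM-cache throughput is an illustrative guess):

```python
def load_time_seconds(model_gb: float, throughput_gb_s: float) -> float:
    """Time to stream a model's weights into VRAM at a given throughput."""
    return model_gb / throughput_gb_s

# ~60 GB model read from an NVMe drive at ~3 GB/s
print(load_time_seconds(60, 3))   # → 20.0 seconds before a response can start

# Same model already cached in system RAM at, say, ~40 GB/s
print(load_time_seconds(60, 40))  # → 1.5 seconds
```

That 20-second figure is the wait the comment warns about every time you swap models without enough RAM to cache them.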

        1. Iris Holland · 2 weeks ago

          You guys successfully shamed me. 96gb came in today

          1. Marc Barthelemy · 2 weeks ago

            Well, since I only showed up ~4 hours ago, I don't know if I count among the "guys" XD But I'd love to hear in a reply here if this actually makes a practical difference! When it comes to model-switching (and probably load times), I have to guess it will.

      2. Airi Hattori · 1 week ago

        You could load larger models or additional context that'll spill into system RAM, dude. 32GB is how much RAM you'd get with a build an eighth of the price lol

        1. Tristan Frank · 1 week ago

          You’re stuck on system RAM while the machine has 64GB of vram. VRAM is much faster for this use case.

          1. Jude Walker · 1 week ago

            Yeah but 64gb VRAM isn’t much in the AI space. Most builds I’ve seen also have 96, 128, or 192gb RAM (256/512 for DDR4 systems) because you can offload layers with acceptable speeds onto the CPU. Really important for longer memory contexts or MoE models. 32gb is legit surprisingly low for an $8k budget. This person literally spent more on just case fans than RAM, haha

          2. Jonathan Griffin · 3 days ago

            That's definitely a trade-off, but if you can already load the models you want in 64GB of VRAM, running the system with just 32GB of system RAM means all 64GB is there just for the model. 32GB is fine for GPU offloading. You're not going to have fun with 96GB of RAM; the token speed is going to be slow as shit compared to the 5090s here.

            The 5090 has an insane bandwidth of 1.79 TB/s. You can't even come close to that with DDR4 or DDR5. The closest is the M4 Max @ 546 GB/s, which is _still_ more than 1 TB/s short of the 5090. The M4 Max is probably the best bang-for-buck option, though, unless the AMD Ryzen AI Max delivers on compatibility (still only 273 GB/s).

            The only downside here is the PCIe 5.0 bandwidth between the two GPUs, which is just 128 GB/s. Unless there's some direct GPU linking that I'm not able to find info on, it's going to limit the token speed. But qwq:32b is a 20GB model, so they're probably loading that on one GPU and something else on the other and doing agentic workflows, instead of loading one giant model to do everything, which is going to be subpar anyway.
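The bandwidth gap this comment describes can be put side by side; a small sketch using only the figures quoted above (manufacturer spec numbers, not measurements):

```python
# Memory / interconnect bandwidths quoted in the comment, in GB/s
bandwidths = {
    "RTX 5090 GDDR7":        1790,  # 1.79 TB/s
    "M4 Max unified memory":  546,
    "Ryzen AI Max":           273,
    "PCIe 5.0 x16 link":      128,
}

ref = bandwidths["RTX 5090 GDDR7"]
for name, bw in bandwidths.items():
    # Token generation is largely bandwidth-bound, so the ratio is a rough speed proxy
    print(f"{name:24s} {bw:5d} GB/s  = {bw / ref:.2f}x the 5090's bandwidth")
```

The ratios are why spilling layers from VRAM into system RAM (or across the PCIe link) costs so much token speed.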

  6. Lily Little · 3 weeks ago

    Are the dual GPUs linked? Are you doing some agentic workflows? Would love to read more on the tools used and the workflows with this setup. I’m looking into a cheaper, M4 Max 128GB setup later this year.

  7. Maxwell Harrison · 3 weeks ago

    May I ask what you’re doing with LLMs? And which ones?

    1. Iris Holland · 2 weeks ago

      Government work not too far from content that would be covered by HIPAA, so gotta be on-prem to be compliant. Until recently QwQ 32b, but just found that Devstral 14b is really good for the long and very structured reports I have to do with a mix of headings and tables and lists and summaries and long form narratives.

      1. Ida Geiger · 2 weeks ago

        Thanks for the detailed response! Very interesting!

  8. Amy Andrews · 3 weeks ago

    Cool. I wager that middle 5090 is going to be cooking, heat-wise. Why not invest in a larger case for 1% more cost?

  9. Hendrik Paul · 3 weeks ago

    It's been a long time since I last saw a dual-GPU build. Back then it was CrossFire and SLI with the little bridging cable. Now you just plug into the two x16 slots and activate it from the BIOS.

    1. Iris Holland · 2 weeks ago

      Got the NVLink bridge you're talking about in a dual 3090 rig in this same case in my last mygoodcool post

      1. Chloe Byrd · 1 week ago

        Those were the days, when the GPU really got a boost from adding one more. I had mine with an HD 6870 and it was fast. Back then. 🤣 Now it's crap compared to modern PC games.

  10. Iris Holland · 3 weeks ago

    Mainly, I start by transcribing long confidential interviews with aTrain (Whisper Turbo). I then run a complicated prompt that gives the model the first-draft transcript along with the reports that form the context surrounding the interviews, so it can produce a second draft that corrects typos where the transcription misheard what was said (having the context should help fix those errors) and assigns speaker labels, yielding a final, most accurate transcript.

    Then I run an extra long and complicated prompt that uses XML tags to separate sections covering the role, general format, style and jargon guidelines, and desired output examples, to teach it my very specific format, style, and language patterns. I give it the transcripts and all the new reports that led to those interviews, which may be up to 200 pages, and finally ask the model to reformat all the reports plus interviews into a final report in the style of the examples. Generally, the prompts run 30,000 to 60,000 words.

    The output style is very difficult for these models because it's a mix of formats: some sections are summaries, some are bullet lists, some are tables, and some are long narrative form. Local AI models tend to be good at any one format but have trouble outputting documents with multiple styles and formats. I'm starting to realize, though, that models like Devstral that are built for coding are better at these long mixed-format outputs.
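For readers wondering what "XML tags to separate sections" looks like in practice, here is a minimal sketch of such a prompt skeleton. All tag names and contents here are illustrative guesses, not the author's actual prompt:

```xml
<role>
  You are an analyst who drafts long, structured reports from interview transcripts.
</role>
<format_guidelines>
  Section 1 is a narrative summary; section 2 is a bullet list of findings;
  section 3 is a table; section 4 is long-form narrative.
</format_guidelines>
<style_and_jargon>
  Required terminology, phrasings, and heading conventions go here.
</style_and_jargon>
<examples>
  <example>...one complete desired-output report...</example>
</examples>
<transcripts>
  ...the corrected interview transcripts...
</transcripts>
<source_reports>
  ...the reports that led to those interviews (up to ~200 pages)...
</source_reports>
```

Delimiting sections this way helps the model keep instructions, examples, and source material from bleeding into each other in very long contexts.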

    1. Iris Holland · 2 weeks ago

      And I forgot to mention: for now I'm just using QwQ 32b q4m and Devstral 14b through Ollama via AnythingLLM.

    2. Iris Holland · 2 weeks ago

      Has built-in automatic halon gas fire suppression system

  11. Iris Holland · 2 weeks ago

    Sick. Love the extra rigidity built into the GPU bracket. BTW, I couldn't get a dual 3090 rig to fully utilize both GPUs in a ROG Z690 Maximus Hero, and I suspect it was because they were completely different brands.

    1. Julia Soto · 2 weeks ago

      That doesn't bode well for the TUF + FE combination I want to try until I replace the TUF…

      1. Iris Holland · 1 week ago

        Keep me updated. I even tried with an NVLink bridge and couldn't get it to work, so there might be something going wrong other than the motherboard or different-brand GPUs. I tried a fresh install of Windows and a clean install of the latest Nvidia drivers. Could just be an RTX 30 series issue, and the modern cards might work fine.

  12. Dorothy Caldwell · 2 weeks ago

    310 x 315. I'll check my files to see if I can send it to you in one-piece segments or two-piece segments; I think you should be good. Give me a few hours, gotta get home from work.

  13. Isamu Numata · 1 week ago

    Nice, I have a C28 and love it, only problem I have is that the fan screws bow out the top and bottom panels because they are not flush with the case.

  14. Cora Owens · 1 week ago

    I hope you get at least 60 FPS in Minecraft with this rig.

    1. Iris Holland · 1 week ago

      Somewhat ironically it will likely never run a single game

  15. Mizuki Kurita · 1 week ago

    Super Flower is a prestigious brand, they make some of the best power supplies on the market. Glad that one is being used to create more souped-up spell correct garbage. Enjoy having a machine do a bad job of thinking for you.

    1. Ava Braun · 1 week ago

      I was gonna say, is that power supply even big enough to handle two 5090s?

    2. Iris Holland · 1 week ago

      My machine does an excellent job of proving that I am 90% replaceable

    3. Steven Walker · 5 days ago

      Super Flower has some crazy PSUs too (Leadex 2800W Platinum), and their quality is usually on par with Seasonic. They aren't as well known in the West, though, and their availability here isn't as good either.

      1. Serenity Stewart · 5 days ago

        Wow, I didn’t know they went all the way up to 2800 watts!

        1. Takashi Yamamoto · 5 days ago

          It is, likely, the PSU you'd want in a TR Pro 9995WX system with four RTX Pro 6000s. And maybe enough RGB to be seen from the moon.

      2. Ami Hattori · 5 days ago

        Think they're also the OEM for a lot of Western brands like Corsair

  16. Claire Frazier · 1 week ago

    Any chance you have the STL for this? Driving, or I'd Google it myself.

    1. Eleanor Lawrence · 4 days ago

      I have the STL, yes! What board do you have? Mine's very precise due to the backplate for the X870E Hero.

      1. Maxwell Fields · 4 days ago

        That should work, X670 Crosshair Extreme

        1. Oskar Arndt · 4 days ago

          What's your printer? I have a P1S, so the bed size is like 240×240, so some of the pieces I had to cut in half basically (they have keys and holes that latch together).

  17. Valentin Reichert · 1 week ago

    Do you have a personal nuclear power plant?

    1. Iris Holland · 1 week ago

      No, but I built the rig to have AI teach me how to build a nuclear power plant /s

  18. Takato Asano · 1 week ago

    Just saw the request, thanks a bunch. I’ll start some of it up after I get back from errands today!

  19. Isabella Wallace · 5 days ago

    Hey man, I came across this post through a link on another submygoodcool. I’ve been wondering, what do you gain from a machine like this? Like, this isn’t going to bring in any money, right? So what’s the deal with these crazy expensive LLM rigs?

    1. Iris Holland · 2 days ago

      It compiles and processes sensitive reports and documents that can't touch the internet, and creates summary reports that used to take me 8 hours and now take about 15 minutes.

  20. Jennifer Baker · 5 days ago

    Damn I didn’t know this case existed. Next logical step is dual RTX Pro 6000s. Nice build.

    1. Ashley Greene · 5 days ago

      Their case is what got me to move from mATX to SFF lol.

  21. Léna Lemaire · 3 days ago

    Why did you get rid of your awesome 3D-printed ATX frame! 😂

    1. Iris Holland · 3 days ago

      That’s actually where these components are now. Moved a dual 3090 system into this c34plus case, which you can see in another post I made.
