{"product_id":"dc-ai-multi-gpu-montjuic-amd-ryzen-threadripper-9960x-rtx-5090-32gb-x2-128gb-ddr5-2tb-gen5-nvme-ubuntu-ai-workstation","title":"DC AI Multi-GPU Montjuïc – AMD Ryzen Threadripper 9960X | RTX 5090 32GB x2 | 128GB DDR5 | 2TB Gen5 NVMe | Ubuntu AI Workstation","description":"\u003cp\u003eThe \u003cstrong\u003eDC AI Multi-GPU – Montjuïc\u003c\/strong\u003e is where AI workstations evolve into true high-performance compute systems. Designed for advanced developers, AI engineers, and teams working with demanding models and large-scale workflows, this system delivers \u003cstrong\u003eparallel GPU power, massive memory bandwidth, and workstation-class stability\u003c\/strong\u003e.\u003c\/p\u003e\n\u003cp\u003eInspired by the scale and spectacle of the Magic Fountain of Montjuïc, this machine represents synchronised power: multiple forces working together in perfect harmony to deliver results at a level single-GPU systems simply cannot match.\u003c\/p\u003e\n\u003chr\u003e\n\u003ch3\u003e⚙️ \u003cstrong\u003eThreadripper-Class Processing Power\u003c\/strong\u003e\u003c\/h3\u003e\n\u003cp\u003eAt the core is the \u003cstrong\u003eAMD Ryzen Threadripper 9960X\u003c\/strong\u003e, a high-core-count workstation processor built for extreme multitasking, data processing, and parallel workloads.\u003c\/p\u003e\n\u003cp\u003eThis enables:\u003c\/p\u003e\n\u003cul\u003e\n\u003cli\u003eFaster data preprocessing and pipeline execution\u003c\/li\u003e\n\u003cli\u003eSeamless handling of multiple AI environments\u003c\/li\u003e\n\u003cli\u003eHigh-throughput development and simulation workflows\u003c\/li\u003e\n\u003c\/ul\u003e\n\u003cp\u003eThis is the foundation required for true multi-GPU computing.\u003c\/p\u003e\n\u003chr\u003e\n\u003ch3\u003e🔥 \u003cstrong\u003eDual GPU Performance – A New Level of Capability\u003c\/strong\u003e\u003c\/h3\u003e\n\u003cp\u003eEquipped with \u003cstrong\u003e2× NVIDIA RTX 5090 32GB GPUs\u003c\/strong\u003e, this system unlocks a new tier of AI performance:\u003c\/p\u003e\n\u003cul\u003e\n\u003cli\u003eRun multiple models simultaneously\u003c\/li\u003e\n\u003cli\u003eAccelerate training and inference workloads\u003c\/li\u003e\n\u003cli\u003eDistribute workloads across GPUs for efficiency\u003c\/li\u003e\n\u003cli\u003eHandle larger and more complex AI pipelines\u003c\/li\u003e\n\u003c\/ul\u003e\n\u003cp\u003eThis is where performance stops being linear and starts to scale.\u003c\/p\u003e\n\u003chr\u003e\n\u003ch3\u003e🧠 \u003cstrong\u003eMassive System Memory for Heavy Workloads\u003c\/strong\u003e\u003c\/h3\u003e\n\u003cp\u003eWith \u003cstrong\u003e128GB of DDR5 RAM\u003c\/strong\u003e, the Montjuïc is built to handle large datasets, multiple containers, and memory-intensive processes without compromise. Perfect for developers working across multiple environments or managing large-scale AI projects.\u003c\/p\u003e\n\u003chr\u003e\n\u003ch3\u003e🚀 \u003cstrong\u003eUltra-Fast Gen5 Storage\u003c\/strong\u003e\u003c\/h3\u003e\n\u003cp\u003eA \u003cstrong\u003e2TB Crucial P510 Gen5 NVMe SSD\u003c\/strong\u003e delivers up to \u003cstrong\u003e10,000 MB\/s sequential read speeds\u003c\/strong\u003e, so datasets, models, and applications load almost instantly. Whether you're working with massive training data or large AI models, your workflow remains fast and uninterrupted.\u003c\/p\u003e\n\u003chr\u003e\n\u003ch3\u003e🧊 \u003cstrong\u003eWorkstation Cooling for Sustained Load\u003c\/strong\u003e\u003c\/h3\u003e\n\u003cp\u003eThe \u003cstrong\u003eARCTIC Liquid Freezer WS360-SP6\u003c\/strong\u003e is designed specifically for workstation-class CPUs, ensuring the \u003cstrong\u003eThreadripper 9960X\u003c\/strong\u003e maintains peak performance under sustained workloads. Combined with the high-airflow \u003cstrong\u003ePhanteks Enthoo Pro II Server Edition chassis\u003c\/strong\u003e, this system is built for continuous operation under heavy load.\u003c\/p\u003e\n\u003chr\u003e\n\u003ch3\u003e🧩 \u003cstrong\u003eBuilt for Multi-GPU Infrastructure\u003c\/strong\u003e\u003c\/h3\u003e\n\u003cp\u003eHoused on the \u003cstrong\u003eASUS PRO WS TRX50-SAGE WIFI motherboard\u003c\/strong\u003e, this platform is engineered for stability, expansion, and professional workloads, providing the bandwidth and reliability required for dual-GPU AI systems.\u003c\/p\u003e\n\u003chr\u003e\n\u003ch3\u003e🌐 \u003cstrong\u003eConnected \u0026amp; AI-Ready\u003c\/strong\u003e\u003c\/h3\u003e\n\u003cp\u003eWith \u003cstrong\u003eGigabit Ethernet + WiFi connectivity\u003c\/strong\u003e, the system integrates seamlessly into local or hybrid environments. Pre-installed with \u003cstrong\u003eUbuntu\u003c\/strong\u003e, it is ready out of the box for CUDA, AI frameworks such as PyTorch and TensorFlow, and distributed workloads.\u003c\/p\u003e\n\u003chr\u003e\n\u003ch3\u003e🔌 \u003cstrong\u003eHigh-Capacity, Ultra-Stable Power\u003c\/strong\u003e\u003c\/h3\u003e\n\u003cp\u003eThe \u003cstrong\u003eCorsair HX1500i (2025) PSU\u003c\/strong\u003e delivers ultra-efficient, low-noise power for dual-GPU operation and sustained compute workloads, ensuring stability even under peak demand.\u003c\/p\u003e\n\u003chr\u003e\n\u003ch2\u003e🚀 \u003cstrong\u003eScale Your AI Performance\u003c\/strong\u003e\u003c\/h2\u003e\n\u003cp\u003eThe \u003cstrong\u003eDC AI Multi-GPU – Montjuïc\u003c\/strong\u003e is built for those who need more than raw power: it is designed for \u003cstrong\u003escalability\u003c\/strong\u003e. With dual GPUs working in parallel and a workstation-class platform underneath, this system enables faster iteration, larger workloads, and more efficient AI development.\u003c\/p\u003e\n\u003cp\u003eInspired by the Magic Fountain of Montjuïc, it represents synchronised performance at scale, where multiple systems work as one to deliver something greater.\u003c\/p\u003e\n\u003chr\u003e\n\u003ch3\u003e🧠 \u003cstrong\u003ePerfect For\u003c\/strong\u003e\u003c\/h3\u003e\n\u003cul\u003e\n\u003cli\u003eMulti-GPU AI development \u0026amp; training\u003c\/li\u003e\n\u003cli\u003eRunning multiple models simultaneously\u003c\/li\u003e\n\u003cli\u003eFaster inference and iteration cycles\u003c\/li\u003e\n\u003cli\u003eAdvanced machine learning pipelines\u003c\/li\u003e\n\u003cli\u003eAI engineers, studios, and development teams\u003c\/li\u003e\n\u003c\/ul\u003e\n\u003chr\u003e\n\u003ch2\u003e💡 \u003cstrong\u003ePart of the DC AI Fountain Series\u003c\/strong\u003e\u003c\/h2\u003e\n\u003cp\u003eFrom Trevi to Swarovski, capability grows, but Montjuïc is where performance scales. This is the step into \u003cstrong\u003etrue high-performance AI computing\u003c\/strong\u003e, where systems don’t just run models; they accelerate them.\u003c\/p\u003e","brand":"DIRECT COMPUTERS","offers":[{"title":"Default Title","offer_id":56699118813564,"sku":"DC-AI-Montjuïc","price":13999.99,"currency_code":"GBP","in_stock":true}],"thumbnail_url":"\/\/cdn.shopify.com\/s\/files\/1\/0580\/6886\/1080\/files\/e7f4ff02100313bb87f529f1cff39c11.avif?v=1777473559","url":"https:\/\/directcomputers.co.uk\/products\/dc-ai-multi-gpu-montjuic-amd-ryzen-threadripper-9960x-rtx-5090-32gb-x2-128gb-ddr5-2tb-gen5-nvme-ubuntu-ai-workstation","provider":"Direct Computers","version":"1.0","type":"link"}