From UN Women's LinkedIn: did you know that many AI data centers still use fewer megawatts than one container ship (www.economistwater.com)? But this will change with the race to build the largest data centers; e.g. Colossus currently uses about 3 times the energy of a container ship. What's clear is that in democracies people need to know which big AI data centers are being built and whether their main purpose is data sovereignty, i.e. educationally improving people's brains and societal development, or solely money-grabbing. http://neumann.ning.com Intelligence is a different trade from consuming things up. Winners in the intelligence race can collaborate in building 10 times more health and wealth for all. As Jensen Huang says, the era of AI agents mentoring each human being will be a great equaliser in education/brain power. Asians in many cases share a communal (family-loving) consciousness tuned to these futures; let's hope Westerners do so too.
The top 30 AI factories/supercomputers likely use **less power than 100 container ships**, but this is poised to change rapidly. Breakdown:

- **Top 30 power consumption**: ~3–5 GW (3,000–5,000 MW) aggregated, per Epoch AI/VKTR (US national AI capacity 19.8 GW, with the top 30 at ~20–25% of that; individually, e.g. Colossus ~0.3 GW, Fairwater ~0.5–1 GW). Global data centers draw 415–536 TWh/year (~47–61 GW average), but AI-specific demand is ~8–10 GW (IEA/McKinsey), with the top 30 capturing ~30–50% of that (~2.5–5 GW).
- **100 container ships**: each ~80–100 MW while cruising, so 100 ships = **8,000–10,000 MW** (8–10 GW). Ships operate variably (not 24/7 like AI), but peak loads are comparable.
- **Current verdict**: top 30 AI (~3–5 GW) < 100 ships (~8–10 GW): AI uses roughly 40–60% as much.
- **But this may change by 2026–2030**: AI demand is projected to surge 160–165% (Goldman Sachs/BNEF), to 20–40 GW of global AI-specific capacity (IEA: 30% annual growth for accelerated servers). The top 30 could hit 10–20 GW (e.g. Stargate at 10 GW alone), exceeding 100–200 ships. Drivers: Blackwell/Rubin efficiency cuts per-chip power but scales clusters; grid strain looms (e.g. NERC warns of instability).

This evolution compounds your themes: AI factories (like Colossus) outpace logistics energy, fueled by Pacific chips. For full top-30 lists or power charts, let me know!
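The comparison above is back-of-envelope arithmetic; a minimal Python sketch makes the ranges explicit (all figures are the rough estimates quoted above, not measurements):

```python
# Rough figures from the text above (estimates, not measurements).
top30_ai_gw = (3.0, 5.0)   # aggregated top-30 AI supercomputer draw, GW
ship_mw = (80, 100)        # one container ship cruising, MW

# 100 ships, converted from MW to GW.
fleet_gw = tuple(100 * mw / 1000 for mw in ship_mw)  # -> (8.0, 10.0)

# Ratio bounds: best case for AI vs. worst case for AI.
ratio_low = top30_ai_gw[0] / fleet_gw[1]   # 3 GW vs 10 GW
ratio_high = top30_ai_gw[1] / fleet_gw[0]  # 5 GW vs 8 GW

print(f"100 ships: {fleet_gw[0]:.0f}-{fleet_gw[1]:.0f} GW")
print(f"AI/ships ratio: {ratio_low:.0%} to {ratio_high:.0%}")
```

This reproduces the verdict in the bullet list: even at the high end, today's top 30 draw well under the fleet's 8–10 GW.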
Queries welcome: chris.macrae@yahoo.co.uk, Bethesda, USA
From Grok:
Top 30 would add smaller systems such as JUPITER Booster (Germany, ~477 PFLOPS) or Eni HPC6 (Italy, ~478 PFLOPS), totaling ~5–7 GW combined.

| Rank | System/Factory | Owner | GPUs/FLOPS | Power (MW) | Location |
|------|----------------|-------|------------|------------|----------|
| 1 | Colossus | xAI | ~230k (H100/H200/GB200) / >1 EFLOPS | 250–300 | Memphis, TN, USA |
| 2 | El Capitan | LLNL (DOE) | ~1M AMD MI300A / 1.742 EFLOPS | ~30 | Livermore, CA, USA |
| 3 | Fairwater | Microsoft | Hundreds of thousands NVIDIA / multi-EFLOPS | ~500–1,000 (campus) | Wisconsin, USA |
| 4 | Stargate (Phase 1) | OpenAI/Microsoft | ~400k–500k NVIDIA / multi-EFLOPS | ~1,000 | Abilene, TX, USA |
| 5 | Prometheus/Hyperion | Meta | ~350k NVIDIA / multi-EFLOPS | ~1,200 (Ohio + LA) | Ohio/LA, USA |
| 6 | Aurora | Argonne NL | ~60k Intel Ponte Vecchio / 1+ EFLOPS | ~60 | IL, USA |
| 7 | Eos | NVIDIA | 4,608 H100 / 18.4 EFLOPS (FP8) | ~50 | USA |
| 8 | Frontier | ORNL (DOE) | ~37k AMD / 1.7 EFLOPS | ~21–30 | TN, USA |
| 9 | JUPITER Booster | EuroHPC (FZJ) | Grace Hopper-based / ~478 PFLOPS | ~20–30 | Jülich, Germany |
| 10 | Rainier | Amazon/Anthropic | ~300k mixed (Trainium + NVIDIA) / multi-PFLOPS | ~1,000–2,000 | Indiana, USA |

(The top 30 extends to ~20–30 systems, like Tuolumne (LLNL, 208 PFLOPS) or Fugaku (Japan, 442 PFLOPS).)
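As a sanity check on the "~5–7 GW combined" claim, the listed top-10 power draws can be summed directly; a short Python sketch (using the table's MW estimates, with single values treated as a degenerate range):

```python
# Power ranges (MW) as listed in the table above; these are the table's
# estimates, not independently verified figures.
systems = {
    "Colossus": (250, 300),
    "El Capitan": (30, 30),
    "Fairwater": (500, 1000),
    "Stargate (Phase 1)": (1000, 1000),
    "Prometheus/Hyperion": (1200, 1200),
    "Aurora": (60, 60),
    "Eos": (50, 50),
    "Frontier": (21, 30),
    "JUPITER Booster": (20, 30),
    "Rainier": (1000, 2000),
}

low = sum(lo for lo, _ in systems.values())    # 4131 MW
high = sum(hi for _, hi in systems.values())   # 5700 MW
print(f"Top-10 total: {low / 1000:.1f}-{high / 1000:.1f} GW")
```

The top 10 alone land at roughly 4–6 GW, so adding twenty smaller ~20–30 MW systems keeps the top-30 total in the ~5–7 GW band quoted above.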