Z-Angle Memory: The Next-Generation Stacked DRAM for AI and HPC

The rapid growth of large-language models, generative AI, and high-performance computing (HPC) has pushed traditional memory architectures to their physical limits. High Bandwidth Memory (HBM) has long been the gold standard for AI accelerators, but it faces growing bottlenecks in capacity, power consumption, and thermal management as stack heights and densities rise.

Intel and SoftBank Subsidiary SAIMEMORY Partner on ZAM

In early February 2026, Z-Angle Memory (ZAM) emerged as a purpose-built solution to these challenges. Co-developed by Intel and SAIMEMORY, a wholly-owned subsidiary of SoftBank Corp., ZAM is a next-generation stacked DRAM architecture designed to redefine memory scaling for data-intensive workloads. This guide breaks down ZAM’s core technology, advantages, market position, and roadmap in clear, accessible terms for storage and computing professionals.

What Is Z-Angle Memory

Z-Angle Memory (ZAM) is a 3D-stacked DRAM architecture built to overcome the scaling limits of conventional HBM. Its name comes from its defining innovation: a diagonal, Z-shaped interconnect topology that replaces the vertical through-silicon vias (TSVs) used in all current HBM designs.
Unlike traditional memory stacks that route signals straight up and down, ZAM uses staggered diagonal wiring to move data through the stack. This small but radical change addresses three critical pain points: insufficient capacity for large AI models, excessive power draw in data centers, and unmanageable heat buildup in dense packages. ZAM is not an incremental upgrade. It is a ground-up redesign of stacked memory, targeting commercial deployment for AI data centers and HPC systems in the 2030 timeframe.

Core Technical Innovations

ZAM’s performance gains come from five tightly integrated technical breakthroughs, each designed to work within modern semiconductor manufacturing rules.
Diagonal Interconnect Topology. The foundation of ZAM is its shift from vertical TSVs to staggered diagonal interconnects. This structure spreads mechanical stress and heat evenly across the stack, rather than concentrating both along narrow vertical columns. It also shortens average signal paths, reducing latency and power loss.
Copper-Copper Hybrid Bonding. ZAM replaces legacy microbumps and solder connections with direct copper-copper hybrid bonding. This atomic-level connection lowers resistance and inductance, improves signal integrity, and allows the stack to behave like a single, monolithic silicon block rather than a series of discrete dies.
Via-in-One Manufacturing. ZAM uses a simplified via-in-one process to form its diagonal interconnects in a single production step. This cuts manufacturing complexity, improves yield, and reduces production costs compared to the multi-step TSV process required for HBM.
Capacitorless Design. ZAM eliminates on-die capacitors entirely. This frees up valuable silicon area for memory cells, directly boosting storage density without shrinking process nodes. It also simplifies chip design and improves electrical efficiency.
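The density benefit of reclaiming capacitor area can be sketched with simple area arithmetic. The 15% figure below is a hypothetical placeholder for illustration, not a disclosed ZAM number:

```python
# Back-of-envelope: density gain from reclaiming on-die capacitor area.
# The reclaimed fraction is an assumption, not a published ZAM figure.
reclaimed_fraction = 0.15                    # assumed share of die area freed up
density_gain = 1 / (1 - reclaimed_fraction)  # same die, more area for cells

print(f"{(density_gain - 1) * 100:.1f}% more cell area per die")  # → 17.6%
```

The point of the sketch is that even a modest reclaimed fraction compounds into a meaningful density gain without any process-node shrink.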
EMIB Integration. ZAM is optimized for Intel’s Embedded Multi-die Interconnect Bridge (EMIB) packaging. This enables high-speed, low-latency connectivity between ZAM stacks and AI processors, creating a cohesive, high-performance compute complex.

ZAM vs. HBM

The table below distills how ZAM compares to widely deployed HBM3e and upcoming HBM4 solutions, using publicly disclosed prototype and design targets.
Metric              | ZAM                                  | HBM3e (Current)                    | HBM4 (Upcoming)
Capacity per Stack  | Up to 512GB                          | 24–36GB                            | 24–48GB
Max Stack Layers    | 50+ layers                           | 12–16 layers                       | 16–20 layers
Power Consumption   | 40–50% lower than HBM3e              | Baseline                           | ~20% lower than HBM3e
Interconnect Type   | Diagonal Z-angle copper              | Vertical TSVs                      | Vertical TSVs
Thermal Performance | Central thermal pillar; low hotspots | High hotspots at high layer counts | Moderate improvement
Target Use Case     | Large AI training, HPC               | Cloud AI inference                 | Mid-to-large AI workloads

Key Advantages of ZAM

Unmatched Memory Capacity. ZAM targets up to 512GB per stack, roughly an order of magnitude beyond today's HBM stacks. This allows larger foundation models to run on fewer accelerators, simplifying system design and lowering total cost of ownership.
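To make the capacity claim concrete, here is a rough sizing sketch. The bytes-per-parameter and overhead figures are illustrative assumptions, not vendor data:

```python
import math

def stacks_needed(params: float, stack_capacity_gb: float,
                  bytes_per_param: float = 2, overhead: float = 1.5) -> int:
    """Memory stacks required to hold a model plus working state (illustrative).

    Assumes FP16 weights (2 bytes/param) and a 1.5x overhead factor for
    activations and KV cache -- both placeholders, not measured values.
    """
    total_gb = params * bytes_per_param * overhead / 1e9
    return math.ceil(total_gb / stack_capacity_gb)

# A hypothetical 1-trillion-parameter model (~3,000 GB total footprint):
print(stacks_needed(1e12, 512))  # ZAM target capacity → 6 stacks
print(stacks_needed(1e12, 36))   # top-end HBM3e stack → 84 stacks
```

Under these assumptions, the same model fits in 6 ZAM stacks versus 84 HBM3e stacks, which is the system-simplification argument in numeric form.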
Dramatic Power Efficiency. Power use is reduced by 40–50% compared to HBM3e. For large-scale AI clusters, this cuts energy costs, reduces cooling demands, and helps meet sustainability targets.
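The operational impact of a 40–50% power reduction can be estimated with a simple energy-cost model. The per-stack wattage, cluster size, and electricity price below are all illustrative assumptions:

```python
HOURS_PER_YEAR = 8760

def annual_memory_cost_usd(watts_per_stack: float, stacks: int,
                           usd_per_kwh: float = 0.10) -> float:
    """Yearly electricity cost for the memory stacks alone (illustrative)."""
    kwh = watts_per_stack * stacks * HOURS_PER_YEAR / 1000
    return kwh * usd_per_kwh

# Assume ~30 W per HBM3e stack and a 10,000-stack cluster (placeholders):
baseline = annual_memory_cost_usd(30, 10_000)
zam      = annual_memory_cost_usd(30 * 0.55, 10_000)  # 45% reduction midpoint

print(f"Annual memory energy savings: ${baseline - zam:,.0f}")  # → $118,260
```

Even before counting the reduced cooling load, the memory power delta alone is a six-figure annual line item at this (assumed) cluster scale.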
Superior Thermal Management. Traditional HBM is capped at roughly 16–20 layers due to thermal bottlenecks. ZAM’s diagonal routing creates a central thermal pillar that distributes heat across the entire stack, enabling reliable stacking of 50+ layers without dangerous hotspots.
Enhanced Mechanical Stability. Diagonal interconnects spread stress evenly across the die, reducing warping and failure risks in tall stacks. This improves long-term reliability in enterprise and data center environments.
Simplified Manufacturing. The via-in-one process and capacitorless design streamline production. Early estimates suggest ZAM can be manufactured at a lower cost than complex HBM stacks while delivering far higher capacity.

Development Background & Industry Partnerships

The technology draws from Intel’s Next Generation DRAM Bonding (NGDB) program, developed with support from the U.S. Department of Energy’s Advanced Memory Technology (AMT) project and Sandia National Laboratories. This research focused on breaking the power–capacity–bandwidth tradeoffs that limit conventional DRAM.
SAIMEMORY was founded in December 2024 as a SoftBank subsidiary with a single mission: develop next-generation memory for AI. The formal Intel–SAIMEMORY partnership was announced on February 2, 2026, and ZAM made its global prototype debut one day later at Intel Connection Japan 2026. Under the collaboration, Intel contributes advanced packaging and bonding expertise, while SAIMEMORY leads architecture development and commercialization.

Real-World Use Cases

ZAM is engineered for the most demanding workloads in modern computing:
  • Large-Scale AI Model Training. Massive per-stack capacity removes memory bottlenecks for trillion-parameter foundation models, allowing faster training and simpler cluster design.
  • Cloud AI Inference at Scale. Lower power consumption reduces operational costs for hyperscale cloud providers running continuous inference workloads.
  • High-Performance Computing. Scientific simulations, weather modeling, and financial modeling benefit from higher capacity and stable, low-latency memory access.
  • CXL Memory Pooling. ZAM’s efficient stacking and high bandwidth make it a natural fit for Compute Express Link (CXL) memory pooling, enabling flexible, shared memory resources in modern data centers.
  • Edge AI & Autonomous Systems. Improved power efficiency supports AI deployments in power-constrained edge environments, from industrial automation to autonomous vehicles.

Current Status & Future Timeline

As of early 2026, ZAM remains in active development with a clear, public roadmap:
  • February 2026: First prototype demonstration at Intel Connection Japan, focused on thermal management.
  • 2027: Engineering samples and test chips expected to be released to hardware partners.
  • 2030: Target mass commercial deployment for AI data centers and HPC systems.
The platform is still being refined, but early prototype results validate its core claims around capacity, power, and thermal performance. ZAM is widely viewed as a leading candidate to succeed HBM in the post-2030 AI memory landscape.

Z-Angle Memory represents a paradigm shift in stacked DRAM design. By replacing vertical TSVs with a diagonal, Z-shaped interconnect topology, it tackles HBM’s most persistent constraints. But the competitive landscape for AI memory is dynamic. Rival technologies, such as Samsung’s recently announced zHBM, are also targeting the post-HBM4 era with aggressive performance claims. Furthermore, the successful commercialization of any new memory architecture depends on achieving high manufacturing yield, competitive cost structures, and—critically—adoption by major AI accelerator and system vendors. Therefore, while ZAM presents a compelling blueprint, its journey from prototype to industry standard will be contingent on overcoming these real-world engineering and ecosystem challenges.
