On February 25, 2026, SK Hynix and SanDisk held a joint launch event at SanDisk’s headquarters in Milpitas, California, to announce High Bandwidth Flash (HBF)—a next-generation memory architecture built for the AI inference era. The two companies also kicked off a global standardization effort under the Open Compute Project (OCP) framework, with a dedicated workstream to define universal HBF specifications across the industry. This move marks a major step toward solving one of the most pressing challenges in modern AI infrastructure: balancing speed, capacity, and cost for large-scale inference deployments.
What Is HBF
High Bandwidth Flash (HBF) is a new storage tier designed to sit between High Bandwidth Memory (HBM) and traditional SSD storage. It is not intended to replace either technology, but to act as a high-performance bridge that eliminates bottlenecks in AI systems. HBF uses 3D NAND flash as its core medium, while adopting the advanced stacking and packaging techniques used in HBM. This combination lets it deliver far greater bandwidth than SSDs and much higher capacity than HBM at a more accessible cost. HBF can be understood as a “middle-ground memory” that brings HBM-like speed and SSD-like scale to real-world AI services.
The AI Storage Gap HBF Solves
The AI industry has rapidly shifted from model training to large-scale inference, where millions of users access generative AI, cloud services, and intelligent applications simultaneously. This transition has exposed a critical gap in today’s storage hierarchy.
HBM offers exceptional bandwidth for real-time computing, but it is limited in capacity and expensive to scale; using HBM alone to store full large language models is impractical. SSDs provide massive capacity at low cost, but their bandwidth is too low to keep up with AI inference throughput, creating performance bottlenecks.
HBF was developed to close this gap. It supports large model datasets without the high cost of expanded HBM, while delivering speed far beyond traditional SSDs. This balance makes it ideal for data centers, edge AI systems, and mainstream AI inference hardware.
First-Generation HBF Key Specifications
The first generation of HBF sets clear performance and physical targets to ensure compatibility and real-world usability. Below are the official specifications announced by SK Hynix and SanDisk.
| Parameter | Specification |
|---|---|
| Max Read Bandwidth | Up to 1.6 TB/s |
| Single Die Capacity | 256 GB |
| Max Stack Capacity | 512 GB per stack |
| Physical Compatibility | Matches HBM4 footprint, height, and power |
| Power Trait | Non-volatile, no refresh power needed |
| Real-World Performance | Within 2.2% of “unlimited HBM” setup in LLM tests |
These numbers confirm HBF’s role as a high-capacity, high-bandwidth workhorse for AI. At 1.6 TB/s, it is more than 50 times faster than top-tier PCIe 5.0 SSDs, while offering 8–16 times the capacity of comparable HBM stacks.
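The ratios quoted above can be sanity-checked with a few lines of arithmetic. The SSD bandwidth (~14 GB/s sequential read for a top PCIe 5.0 drive) and the HBM stack capacity range (32–64 GB) used here are assumed reference points, not figures from the announcement.

```python
# Sanity check of the comparison ratios (assumed reference figures:
# ~14 GB/s for a high-end PCIe 5.0 SSD, 32-64 GB per comparable HBM stack).
HBF_BANDWIDTH_GBS = 1600          # 1.6 TB/s max read bandwidth
PCIE5_SSD_GBS = 14                # assumed top PCIe 5.0 sequential read
HBF_STACK_GB = 512
HBM_STACK_GB_RANGE = (32, 64)     # assumed comparable HBM stack capacities

bandwidth_ratio = HBF_BANDWIDTH_GBS / PCIE5_SSD_GBS
capacity_ratios = tuple(HBF_STACK_GB / c for c in HBM_STACK_GB_RANGE)

print(f"~{bandwidth_ratio:.0f}x SSD bandwidth, "
      f"{capacity_ratios[1]:.0f}-{capacity_ratios[0]:.0f}x HBM capacity")
# -> ~114x SSD bandwidth, 8-16x HBM capacity
```

With these assumptions the bandwidth advantage comfortably clears the "more than 50 times" claim, and the capacity advantage lands in the stated 8–16x range.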
Core Technical Innovations
HBF’s performance comes from targeted engineering that combines strengths from both parent companies. SK Hynix contributes its industry-leading HBM packaging and 3D stacking expertise, using through-silicon via (TSV) and vertical stacking to achieve dense, reliable multi-die assemblies. SanDisk provides advanced BiCS NAND and CMOS Bonding Array (CBA) architecture, which optimizes NAND for low-latency, high-bandwidth access.
A key design choice is full physical compatibility with HBM4. HBF uses the same pin layout, dimensions, and power profile as next-generation HBM, meaning hardware makers can adopt it without major system redesigns. HBF’s non-volatile nature also cuts power use compared to DRAM-based HBM, which requires constant power to retain data. Together, these innovations create a drop-in companion for HBM in AI servers.
Standardization and Commercial Timeline
Standardization is central to HBF’s success. By launching a dedicated workstream within OCP, SK Hynix and SanDisk aim to build an open, cross-industry ecosystem rather than a closed proprietary solution. This will encourage adoption by GPU makers, server vendors, cloud providers, and data center operators worldwide.
The commercial roadmap is clearly defined:
- Second half of 2026: First HBF samples to be delivered by SanDisk
- Early 2027: Initial AI inference devices with HBF enter sampling
- 2027–2028: Small-scale commercial deployment
- 2030 and beyond: Widespread adoption as a standard AI inference component
This timeline reflects a realistic path from prototype to mass production, supported by mature manufacturing and packaging capabilities from both partners.
Industry Impact and Market Outlook
HBF is poised to reshape AI infrastructure by enabling HBM+HBF hybrid architectures that optimize performance and total cost of ownership (TCO). By offloading capacity-heavy tasks to HBF, system designers can reduce the amount of HBM required, lowering costs while maintaining near-peak inference speed.
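The TCO argument can be made concrete with a toy cost model: keep only the hot fraction of a model's working set in HBM and offload the rest to HBF. All dollar figures and the hot-fraction split below are hypothetical assumptions for illustration, not vendor pricing.

```python
# Hypothetical TCO comparison for holding 1 TB of model data.
# All prices and the 10% hot-data split are illustrative assumptions.
MODEL_GB = 1024
HBM_COST_PER_GB = 15.0   # assumed $/GB for HBM
HBF_COST_PER_GB = 2.0    # assumed $/GB for HBF
HOT_FRACTION = 0.10      # share of data kept in HBM for peak-speed access

pure_hbm_cost = MODEL_GB * HBM_COST_PER_GB
hybrid_cost = (MODEL_GB * HOT_FRACTION * HBM_COST_PER_GB
               + MODEL_GB * (1 - HOT_FRACTION) * HBF_COST_PER_GB)
savings = 1 - hybrid_cost / pure_hbm_cost

print(f"pure HBM: ${pure_hbm_cost:,.0f}, hybrid: ${hybrid_cost:,.0f}, "
      f"savings: {savings:.0%}")
```

Even with generous assumptions for HBF pricing, the point of the sketch is the structure of the trade-off: because the bulk of capacity moves to the cheaper tier while performance stays within a few percent of an all-HBM setup (per the 2.2% figure above), hybrid designs can cut memory cost substantially.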
Market analysts expect HBF-related demand to accelerate around 2030, as AI inference scales globally. The technology also levels the playing field, allowing more companies to deploy large models without investing in extreme HBM configurations. In the broader storage landscape, HBF adds a new optimized layer between volatile memory and block storage, creating a more efficient pyramid for AI and data center workloads.
The launch of High Bandwidth Flash (HBF) by SK Hynix and SanDisk is more than a product announcement—it is a foundational shift in how AI storage will be built for the next era of computing. By standardizing an open, balanced tier between HBM and SSD, the two companies are addressing a real industry pain point with clear, measurable benefits.