Digital Storage Projections For 2020, Part 2

Dec 23 2019

Tom Coughlin

This is the second in our set of three blogs about projections for the digital storage and memory industry for 2020 (the start of a new decade).  Our last blog dealt with magnetic storage.  This one focuses on the current state of, and projections for, solid state storage and memory.  It also covers how these storage and memory options, together with new connection protocols such as NVMe and NVMe-oF, are shaping future computer architectures, enabling IoT, big data and AI applications in the data center, at the edge and in smart connected endpoints.

Solid state storage has driven enormous changes in the storage and memory hierarchy and enabled many of our most valuable applications.  For instance, an article from 2019 points out that average smart phone capacity would exceed 80 GB by the end of 2019, and smart phones have been announced with 1 TB of storage capacity using NAND flash memory.  Even with a lot of data moving to the cloud, smart phone storage capacity will continue to grow.  Flash memory technology has also enabled new SSD form factors not possible with hard disk drives, since SSDs are basically a collection of chips on a circuit board.  At the 2019 Flash Memory Summit, Toshiba presented a slide showing the proliferation of SSD form factors, shown below.  These new form factors have enabled smaller client computers, tablets and enterprise storage systems that pack a lot of storage into a small space.  At the 2019 FMS, Toshiba also introduced a very compact NVMe storage form factor, XFMEXPRESS (about the size of an SD card), targeted at mobile applications, automobiles and gaming consoles.

NAND flash memory applications are being driven by its higher performance and decreasing $/TB prices.  The figure below shows recent history and projections out to 2022 for NAND flash memory.  Current products have up to 128 3D NAND flash cell layers, although volume shipments are for 96 layers or fewer.  By 2020, 128-layer 3D NAND products will be in volume production, with perhaps 192-layer 3D NAND sampling.  By 2022, 3D NAND flash with over 200 layers will be available.


However, the cost of manufacturing goes up with the number of flash layers, so adding layers will not translate linearly into lower storage capacity costs.  For that reason, NAND flash companies are pushing to use three-bit-per-cell and four-bit-per-cell technologies in their 3D NAND flash for more applications, particularly enterprise and client applications.  At the 2019 Flash Memory Summit, Toshiba, the inventor of flash memory, discussed a five-bit-per-cell technology.

But the more bits per cell, the lower the endurance of the NAND flash memory cells against erase/write-induced wear.  Since many enterprise and client applications are write intensive, this has limited the use of four-bit-per-cell 3D NAND flash memory.  However, with clever write aggregation, tuning of write voltages to early-life and late-life requirements, improved garbage collection and other methods implemented in the flash memory controllers, three-bit-per-cell flash SSDs are widely available, and these methods may help introduce more four-bit-per-cell NAND flash SSDs in 2020.
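The endurance trade-off can be made concrete with some back-of-the-envelope arithmetic.  The program/erase (P/E) cycle ratings below are illustrative ballpark figures, not vendor specifications, and the simple model ignores over-provisioning:

```python
# Rough sketch of how bits per cell affect SSD lifetime, using illustrative
# (not vendor-quoted) P/E cycle ratings for each cell type.
PE_CYCLES = {"SLC": 100_000, "MLC": 10_000, "TLC": 3_000, "QLC": 1_000}

def drive_lifetime_years(capacity_tb, host_writes_tb_per_day, cell_type,
                         write_amplification=2.0):
    """Estimate years until the rated P/E cycles are exhausted.

    Total writable data = capacity * rated P/E cycles; the controller's
    write amplification turns every host write into extra NAND writes.
    """
    total_nand_writes_tb = capacity_tb * PE_CYCLES[cell_type]
    nand_writes_per_day = host_writes_tb_per_day * write_amplification
    return total_nand_writes_tb / nand_writes_per_day / 365

# A 4 TB drive absorbing 2 TB of host writes per day:
print(round(drive_lifetime_years(4, 2, "QLC"), 1))  # 2.7 years
print(round(drive_lifetime_years(4, 2, "TLC"), 1))  # 8.2 years
```

This is also why the controller techniques above matter: write aggregation and smarter garbage collection lower the effective write amplification, and dropping it from 2.0 to 1.2 in this model stretches the QLC figure to roughly 4.6 years.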

Some storage systems companies, such as VAST Data, are using Intel’s Optane Memory SSDs (3D XPoint) as a write cache to reduce the write frequency to four-bit-per-cell flash memory, claiming many years of extended endurance for enterprise applications.  This brings us to the growing importance of so-called non-volatile emerging memory technologies, such as magnetic random access memory (MRAM), ferroelectric RAM (FRAM), resistive RAM (RRAM) and phase change memory (PCM), of which 3D XPoint is an example.  3D XPoint products in the form of Intel’s Optane memory are available in NVMe SSDs as well as in DIMMs.
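The write-cache idea can be illustrated with a toy model: hot logical blocks are overwritten in place in the fast cache tier, so repeated host writes to the same block reach the flash only once per flush.  This is a hypothetical sketch of the general technique, not any vendor's actual design:

```python
# Toy model of a fast write cache (e.g. an Optane tier) in front of QLC flash.
class WriteCache:
    def __init__(self):
        self.buffer = {}       # logical block -> latest data (dirty blocks)
        self.flash_writes = 0  # writes that actually hit the QLC tier

    def write(self, block, data):
        self.buffer[block] = data  # overwrites coalesce in the cache

    def flush(self):
        self.flash_writes += len(self.buffer)  # one flash write per dirty block
        self.buffer.clear()

cache = WriteCache()
for i in range(1000):
    cache.write(i % 10, b"x")  # 1000 host writes to only 10 distinct blocks
cache.flush()
print(cache.flash_writes)      # 10 flash writes absorbed 1000 host writes
```

With a write-intensive, overwrite-heavy workload like this one, the flash tier sees only a small fraction of the host writes, which is where the claimed endurance extension comes from.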

The DIMMs in particular are intended for use with Intel’s advanced server processors, and Intel is using this technology to differentiate itself from competitors like AMD in the next generation of server CPUs.  As pointed out earlier, other companies such as Dell and HPE are using Optane memory in their storage systems as an intermediate storage tier that allows them to use less DRAM, as a write cache to achieve higher endurance from NAND flash, and for other applications.

The financial return on selling these enterprise CPUs more than compensates Intel for the loss on making Optane chips.  The plan is that higher production volumes for 3D XPoint, and its use in more applications, will drive down the cost per capacity and reduce or eliminate the net loss on its production.  Note that Optane memory sells at a per-capacity price between those of NAND flash and DRAM.

Standalone MRAM chip sales from Everspin now exceed 100 million units.  In addition, many major semiconductor foundries are offering MRAM (as well as RRAM) as an embedded memory option to replace NOR flash, higher-level (slower) SRAM and some DRAM.  Some embedded products using MRAM as inference engine weighting memory appeared in 2019.

We expect to see a lot more MRAM (and perhaps some RRAM) embedded memory used in SoCs for industrial and consumer IoT and data center applications.  Both standalone and embedded MRAM (as well as 3D XPoint) will find more applications in 2020 and further into the decade, driving total shipped capacity growth as shown in the figure below.

All of these solid-state memory applications are enabled by interface technologies that exploit the inherently higher performance of these memories.  In particular, the NVMe protocol, based upon the PCIe bus, and its use over various storage fabric technologies (NVMe over Fabrics, or NVMe-oF), combined with software and firmware that support the protocol, are becoming instrumental in the development of the modern storage and memory hierarchy.  With the move from PCIe generation 3 to PCIe generation 4, the data transfer rate of the PCIe bus, and thus of NVMe interfaces, will double.
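That doubling falls straight out of the link arithmetic: PCIe Gen3 signals at 8 GT/s per lane and Gen4 at 16 GT/s, both with 128b/130b encoding, and an NVMe SSD typically uses a x4 link:

```python
# Raw per-lane bandwidth for PCIe generations 3 and 4 (both use 128b/130b
# line encoding, so ~98.5% of raw transfers carry data).
def pcie_lane_gbps(transfer_rate_gt_s, encoding=128 / 130):
    """Usable GB/s per lane: GT/s * encoding efficiency / 8 bits per byte."""
    return transfer_rate_gt_s * encoding / 8

for gen, gt_s in {"Gen3": 8, "Gen4": 16}.items():
    lane = pcie_lane_gbps(gt_s)
    print(f"PCIe {gen}: {lane:.2f} GB/s per lane, {4 * lane:.2f} GB/s for x4")
```

So a Gen3 x4 NVMe SSD tops out near 3.9 GB/s, while the same x4 link on Gen4 approaches 7.9 GB/s, before protocol overhead.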

With new PCIe generations now being announced about every three years, interface data rates will continue to climb, cementing the use of this interface for solid-state primary storage.  NVMe is built around the characteristics of solid-state storage and thus doesn’t have the built-in latencies of SATA and SAS.  For this reason, NVMe interfaces will displace other interfaces for solid state storage over the next few years.  In 2019 many companies, including IBM, Dell, Hitachi and HPE, introduced storage systems using NVMe and NVMe-oF technology.  The figure below shows the NVMe specification roadmap, from the 2019 Flash Memory Summit, with the latest developments and current plans out to 2021.  In 2019, the TCP transport binding created a standardized way to use NVMe-oF in Ethernet-based storage systems.  Work in 2020 and 2021 will expand the capabilities of NVMe and NVMe-oF.

The development of NVMe-oF has also brought new applications.  One of particular note is the rise of computational storage, which aims to bring some computational capability closer to the storage.  Today this is accomplished with specialized processing capability in storage devices (such as SSDs from NGD) or embedded in the storage fabric (such as Eideticom).  SNIA now has a working group on computational storage.  Computational storage could be a precursor to true in-memory computing, which creates an even closer (lower latency) connection between memory/storage and processing power.  At the 2019 IEDM conference there were many papers on using emerging non-volatile memory for neural network computing and logic applications that could allow a closer connection between processing and storage/memory.  There are also various efforts to create heterogeneous chip modules that bring memory and processing power closer to each other.  These technologies were discussed at various IEEE and industry events during 2019 and offer additional ways to bring memory and processing closer together.

In our 2019 Part 1 blog we mentioned Western Digital’s work to incorporate processing in their storage devices (SSDs and HDDs) using the RISC-V architecture.  This provides an open, programmable approach to storage and processing integration that could be useful for future applications.  At the 2019 Open Compute Summit there was also discussion of Open-Channel SSDs, where management of the physical solid-state storage is done in the computer’s operating system, an approach which, on the face of it, moves storage management away from the storage device.  It is likely that actual storage system management will combine host management and on-device processing, each applied where it is most useful.

Overall demand for storage technology of all types continues to grow as the total amount of stored data increases.  This capacity demand will drive growth for all types of storage technology.  The chart below shows our historical trends and projections for the growth in shipped storage capacity for magnetic tape, HDDs and SSDs.

Solid state storage is now primary storage in many facilities.  This is driving a move from SATA and SAS to NVMe and NVMe-oF for storage networking.  Computational storage and other approaches that bring memory/storage and processing together are growing, and non-volatile emerging memories should become more commonplace in IoT and AI applications at the edge and in endpoints, as well as in data centers.

