Apr 04 2019
Our last blog explored the challenges that content delivery networks (CDNs) encounter with content encryption, and how those challenges affect the performance and capacity of CDN content servers. In this blog, we will examine how computational storage and in-situ processing can eliminate those latency challenges and provide CDNs with processing that scales linearly with storage capacity.

In computational storage systems, compute resources are embedded directly in storage systems or devices, allowing data to be acted upon while it is still in the device. This is the concept of “in-situ” (in-place) processing. The primary benefit of in-situ processing is that it eliminates the need to move data from storage to a server’s main memory and CPU complex. For data sets in the petabyte range, in-situ processing eliminates the latency associated with this data movement, as well as the power and cooling required to move the data (a recent data center study showed that up to 50% of data center power and cooling is consumed moving data between storage systems/devices, servers, and access points).
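To make the in-situ idea concrete, here is a minimal Python sketch of the two data paths. The `Drive` class and its methods are hypothetical, invented purely for illustration: the conventional path copies the whole object across the bus to the host for processing, while the in-situ path runs the function on the device so only the small result crosses the bus.

```python
class Drive:
    """Toy model of a storage device; tracks bytes moved over the host bus."""

    def __init__(self, data: bytes):
        self.data = data
        self.bus_bytes = 0  # bytes transferred between drive and host

    def read_all(self) -> bytes:
        """Conventional path: copy the whole object into host memory."""
        self.bus_bytes += len(self.data)
        return self.data

    def execute(self, fn):
        """In-situ path: run fn on the device; only the result moves."""
        result = fn(self.data)
        self.bus_bytes += len(result)
        return result


payload = bytes(8 * 1024 * 1024)  # an 8 MiB stand-in for a large object

# Conventional: all 8 MiB cross the bus just to count the zero bytes.
d1 = Drive(payload)
count = d1.read_all().count(0)
print(d1.bus_bytes)  # 8388608

# In-situ: the count happens on the device; only 8 bytes cross the bus.
d2 = Drive(payload)
result = d2.execute(lambda data: data.count(0).to_bytes(8, "little"))
print(d2.bus_bytes)  # 8
```

The asymmetry is the whole point: the reduction in bus traffic is what eliminates both the transfer latency and the power spent moving data.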
Computational storage turns this dynamic on its head by eliminating the need to move data from storage devices to the server and back. That round trip adds a total of roughly four seconds of latency for a typical 2-hour HD movie (approximately 8 GB of data). Computational storage avoids it by performing the encryption of protected data, key management, and other tasks inside the computational storage SSD itself.

For instance, the NGD Systems Newport NVMe Computational Storage SSD, the most advanced U.2 SSD available today, combines an advanced multi-core ARM processor, accelerators for encryption and artificial intelligence, and 16 TB of data storage in a 15 mm-thick U.2 package. And in case you thought the power consumed by encryption was simply moved from the server processor to the SSD, the Newport NVMe U.2 SSD draws less than 10 watts.
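The four-second figure is consistent with a quick back-of-envelope calculation. The ~4 GB/s effective transfer rate below is our own assumption for a fast NVMe drive, not a number from the post; the 8 GB movie size is:

```python
# Back-of-envelope latency for moving a movie to the host and back.
# 4 GB/s effective bus throughput is an ASSUMED figure for a fast NVMe SSD;
# the 8 GB movie size comes from the post.
movie_gb = 8.0
bus_gb_per_s = 4.0  # assumed effective transfer rate

one_way_s = movie_gb / bus_gb_per_s   # storage -> host
round_trip_s = 2 * one_way_s          # plus host -> storage after encryption
print(round_trip_s)  # 4.0
```

Under these assumptions, each direction costs about two seconds, for the four seconds of round-trip latency cited above.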