At one point, mechanical hard drives were the single biggest bottleneck in every computer system. With transfer speeds capped at around 150 MB/s, a faster alternative became necessary.
From the introduction of the first SSD (solid-state drive), it was evident that it was going to be an industry changer. Today, we have advanced even further with the mass adoption of NVMe drives.
The goal of this article is to provide an overview of the development of storage options, from traditional SATA storage and its mechanical beginnings to SATA SSDs, and then to dive into the NVMe interface and the PCIe SSDs that offer unmatched performance.
This article describes different types of flash-based storage. It provides an in-depth look into NVMe and Intel® Optane, which is changing how modern data centers approach storage.
Traditional SATA Storage
Serial ATA (SATA) was introduced in the year 2000 as a successor to Parallel Advanced Technology Attachment (PATA). SATA uses the Advanced Host Controller Interface (AHCI) to access data. Among AHCI's most significant features are Native Command Queuing (NCQ), explicitly designed to speed up mechanical hard drives, and support for hot-swapping.
Note: Hot-swapping is the process of removing or replacing hard drives without powering off the system. This is very useful in enterprise use cases.
However, it became evident that AHCI is one of two factors limiting the progress of SSDs. Namely, AHCI works on a single queue at a time and can store up to 32 pending commands. What seemed like a sensible number for the slow-moving heads of mechanical hard drives is just a fraction of what NAND storage can process.
The second factor limiting transfer speeds is the latency caused by SATA's indirect connection to the CPU. Since its launch, the SATA host controller interface has been upgraded several times, boosting transfer rates from 150 MB/s to up to 600 MB/s. That is the maximum theoretical speed of most high-quality solid-state drives over AHCI (in real-world workloads, most top out at around 550 MB/s).
Note: The SATA Express protocol breaks through the 600 MB/s barrier. It still uses the SATA physical connector, but thanks to its use of two PCIe lanes, it performs better than regular SATA 3.0. However, its bulky connector and lack of drive support meant it was never really adopted by the market.
The U.2 interface is an upgraded alternative. It uses four PCIe lanes in total, twice as many as SATA Express, and has established itself as an excellent option for dedicated server hosting.
Although 550 MB/s is a lot faster than a 7200 RPM hard disk drive, flash storage devices could do much better with the right interface and physical connection. The solution: PCI Express, a general-purpose bus directly connected to the CPU that offers read/write speeds of roughly 1 GB/s on a single lane. PCIe was already in use for components such as GPUs, and it fit the bill perfectly. In parallel, NVMe was being developed as the interface for accessing data on PCIe SSDs.
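To see why PCIe scales so well, the per-lane figure can simply be multiplied by the link width. A rough sketch (the per-lane throughput values below are approximate usable rates after encoding overhead, not figures from this article):

```python
# Back-of-the-envelope PCIe bandwidth estimate (illustrative figures only).
# Approximate usable per-lane throughput in MB/s, after encoding overhead.
PER_LANE_MB_S = {
    "PCIe 2.0": 500,   # 8b/10b encoding
    "PCIe 3.0": 985,   # 128b/130b encoding
}

def aggregate_bandwidth(gen: str, lanes: int) -> int:
    """Theoretical one-direction bandwidth for a given link width."""
    return PER_LANE_MB_S[gen] * lanes

# A single PCIe 3.0 lane already outpaces the 600 MB/s SATA 3.0 ceiling:
print(aggregate_bandwidth("PCIe 3.0", 1))  # 985 MB/s
# A typical x4 NVMe link:
print(aggregate_bandwidth("PCIe 3.0", 4))  # 3940 MB/s
```

The takeaway is that bandwidth grows linearly with lane count, which is why an x4 NVMe link lands in the multi-gigabyte-per-second range discussed below.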
Types of SSDs
Computer engineers had to work with obsolete physical standards and interfaces that were inadequate for SSDs. Over time, new protocols and standards were developed to get the most out of modern storage devices.
The existence of multiple physical and interface standards for flash-based storage has led to some confusion. SSDs come in all shapes and sizes, and their performance may vary. Let’s take a closer look at the various types of SSDs and their corresponding interfaces.
2.5-Inch SSD (Supports the SATA Controller and AHCI)
The oldest type of SSD uses the SATA host controller interface, allowing theoretical transfer speeds of up to 600 MB/s. This is the cheapest and most widely available type of SSD on the market.
mSATA SSD (Supports the SATA Controller and AHCI)
Roughly the size of a credit card, these solid-state drives are smaller than 2.5-inch SSDs. mSATA SSDs are usually found in portable devices, such as laptops, netbooks, and tablets.
M.2 SSD (Supports PCIe NVMe, AHCI, and SATA)
M.2 is a physical standard that defines the shape, dimensions, and the connector itself. For storage devices, the goal is usually to offer the largest capacity in the smallest possible form factor at the highest possible read and write speeds. The "stick of gum" M.2 drive currently offers just that.
It offers by far the smallest form factor, connects seamlessly without any cables, and is powered by the bus itself. M.2 is backward compatible with SATA/AHCI, and some M.2 ports support SATA only. Most commercially available NVMe drives use the M.2 port.
U.2 SSD (Supports PCIe NVMe, AHCI, and M.2)
The U.2 interface is an upgraded version of SATA Express. It uses up to four PCI Express lanes and was launched for enterprise use cases. U.2 SSDs are compatible with M.2 ports and can be plugged into an M.2 port using an adapter. Most importantly, NVMe solid-state drives may be manufactured as U.2 drives.
A U.2 SSD is enclosed in a 2.5-inch case, just like a traditional SATA SSD. That offers superior physical protection and cooling when compared to M.2 drives. These drives also support hot-swapping, an option not supported by M.2 drives.
What is NVMe?
NVMe stands for Non-Volatile Memory Express, a host controller interface designed explicitly for PCIe SSDs. It features low latency, typically under 250 microseconds. In simple terms, non-volatile memory does not lose its data when powered off. This interface introduced an entirely new way of accessing data, allowing superior command queuing for solid-state drives.
NVMe makes use of four PCIe lanes for data transfer, allowing speeds of up to 4,000 MB/s, over five times faster than SATA flash-based drives.
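Comparing the interface ceilings directly shows how conservative the "five times faster" figure is, since real-world SATA throughput tops out below its 600 MB/s theoretical limit. A quick sketch using the numbers quoted earlier in the article:

```python
# Theoretical-ceiling comparison between a SATA 3.0 SSD and an x4 NVMe SSD,
# using the figures quoted earlier in this article.
SATA3_MB_S = 600        # SATA 3.0 / AHCI theoretical ceiling
SATA3_REAL_MB_S = 550   # typical real-world SATA SSD ceiling
NVME_X4_MB_S = 4_000    # x4 PCIe NVMe link

print(round(NVME_X4_MB_S / SATA3_MB_S, 1))       # 6.7x the SATA interface ceiling
print(round(NVME_X4_MB_S / SATA3_REAL_MB_S, 1))  # 7.3x typical real-world SATA throughput
```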
NVMe Command Queuing Made Better
Command queuing is the number of data requests a drive can process at a time. As mentioned above, the Advanced Host Controller Interface (AHCI) can handle 32 pending commands in a single queue, while Serial Attached SCSI (SAS) can process 256 commands in a single queue. That makes perfect sense for a mechanical hard drive with slow-moving parts, but modern flash-based memory can do so much better. This is where the NVMe interface can help.
Unlike AHCI, NVMe allows up to 64 thousand queues, and each queue can hold up to 64 thousand commands at the same time. As a result, NVMe needs fewer CPU cycles than SATA or SAS. For example, in video rendering, the faster your storage feeds your CPU, the faster it renders.
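The difference in total outstanding commands is easiest to appreciate as plain arithmetic. A sketch using the NVMe specification's exact ceilings (65,535 queues of 65,536 commands, which the article rounds to "64 thousand"):

```python
# Maximum commands in flight under each host controller interface.
# NVMe figures are the spec ceilings that "64 thousand" rounds from.
AHCI_QUEUES, AHCI_DEPTH = 1, 32
NVME_QUEUES, NVME_DEPTH = 65_535, 65_536

ahci_total = AHCI_QUEUES * AHCI_DEPTH
nvme_total = NVME_QUEUES * NVME_DEPTH

print(ahci_total)   # 32 pending commands, total
print(nvme_total)   # 4,294,901,760 pending commands, in theory
print(nvme_total // ahci_total)  # roughly 134 million times more
```

In practice no drive sustains billions of in-flight commands; the point is that the interface never becomes the queuing bottleneck the way AHCI's single 32-deep queue does.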
NVMe chunks a single task into smaller actions and runs them in parallel, speeding up the process, much like a multi-threaded CPU splits work into smaller actionable items. This translates to up to 440,000 random read IOPS and 360,000 random write IOPS at a queue depth of 32.
Note: IOPS stands for input/output operations per second, a unit in which drive performance may be measured. Unlike transfer speeds, it counts operations rather than bytes. Bear in mind that IOPS numbers may vary depending on the workload and, much like transfer rates, vendors usually quote maximum theoretical scores.
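IOPS and transfer rates are two views of the same workload, related through the block size. A minimal sketch of the conversion (assuming 4K blocks and 1 MB = 1024 KB, a common convention in drive benchmarks):

```python
def iops_to_mb_s(iops: int, block_size_kb: float = 4) -> float:
    """Convert an IOPS figure to an equivalent data rate in MB/s,
    assuming every operation moves one block of block_size_kb kilobytes."""
    return iops * block_size_kb / 1024

# The 4K random-read figure quoted above for NVMe drives:
print(iops_to_mb_s(440_000))  # 1718.75 MB/s of 4K random reads
```

This is why small-block random IOPS is the more telling metric for databases and virtualization, while MB/s matters more for large sequential transfers.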
Command queuing and IOPS scores are even more impressive when it comes to Intel Optane NVMe drives based on 3D XPoint technology.
Intel Optane (Supports PCIe NVMe, M.2, and U.2 Form Factors)
Developed by Intel, this top-of-the-line solid-state drive with support for NVMe bridges the gap between RAM and flash-based storage. It offers DRAM-like performance while being less expensive per MB of storage. It features very high data density and low latency, much like DRAM, but saves and accesses data like flash storage, making it the most efficient option on the market.
The read/write speeds of Intel Optane are on par with what you can expect from NVMe SSDs. However, Optane has several other strong points:
- Input/Output Operations per Second (IOPS). Intel Optane delivers excellent performance, with up to 550,000 IOPS in 4K random reads and an impressive 500,000 IOPS in 4K random writes. Of course, this may vary depending on the workload, but this SSD is becoming well known for its quality of service.
- Very Low Latency. This might not be so significant for home users, but enterprises can benefit from the low latency times of Intel Optane memory. Read speeds are consistently high, regardless of the write operations that may be running in parallel. According to Intel, read response times are below 30 microseconds on average while maintaining a 70% read and 30% write workload.
- Reliability. It is predictably fast, making it ideal for critical workloads. It maintains up to 63x better read response times than flash-based drives. Additionally, the Intel Optane DC series is designed to handle a high number of read/write cycles, displaying high endurance in Intel’s benchmarks.
- Endurance. Due to its unique nature, Intel Optane SSDs have very high endurance of up to 60 Drive Writes per Day (DWPD). A traditional SSD with over-provisioning applied could potentially offer up to 10 DWPD, a sixth of what Intel's Optane solution delivers. Excellent storage endurance is one more reason why Intel Optane excels at high-performance caching, one of the most demanding write workloads at approximately 3+ DWPD.
Note: Drive Writes per Day (DWPD) is the most common unit in which storage drive endurance is measured. DWPD is the number of times you could overwrite the drive’s entire storage capacity each day of its life. A crucial parameter for enterprise write workloads.
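A DWPD rating translates directly into total bytes written over a drive's life. A sketch of the arithmetic, using a hypothetical 750 GB drive and an assumed five-year warranty period (both illustrative values, not specifications from this article):

```python
def total_writes_tb(dwpd: float, capacity_tb: float, warranty_years: int = 5) -> float:
    """Total terabytes written (TBW) implied by a DWPD rating:
    full-capacity writes per day, every day of the warranty period."""
    return dwpd * capacity_tb * 365 * warranty_years

# Hypothetical 750 GB drive at the 60 DWPD and 10 DWPD ratings discussed above:
print(total_writes_tb(60, 0.75))  # 82125.0 TB over five years
print(total_writes_tb(10, 0.75))  # 13687.5 TB over five years
```

The six-fold DWPD advantage carries straight through to total endurance, which is what makes sustained write workloads such as caching practical on Optane.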
What can NVMe and Intel Optane Do for Data Centers?
NVMe and Intel Optane are sure to establish a firm foothold in the data center industry. As an interface standard, NVMe resolves many of SATA's shortcomings and enables enterprises to eliminate bottlenecks and accelerate applications. Most importantly, it provides more CPU headroom to support more users and apps.
The latest iteration of NVMe has introduced virtualization enhancements. It defines how NVMe drives are used in a shared storage environment in which primary and secondary storage controllers exist. Besides that, NVMe is already being applied in RAID 0 setups for specific use cases where fast throughput is vital.
For Intel Optane, the future looks even brighter. According to benchmarks, it performs much like traditional RAM, making it remarkably fast. Servers can benefit the most from Intel's breakthrough in storage memory. It can be used for caching, fast logging, or extending your pool of DRAM. Intel Optane offers high quality of service with fewer random moments of underperformance, making it an excellent option for critical, latency-sensitive workloads.
For example, Intel Optane is excellent for heavy workloads such as machine learning. It can be used in a shared memory pool with DRAM at the application or OS level, thus providing more memory at bargain prices. For data center servers, it comes in U.2 and PCIe NVMe form factors.
Summary: NVMe vs. SATA Storage
The excellent performance of NVMe PCIe solid-state drives combined with the cost reduction we have seen during the past two years has fueled the rise in NVMe usage in the enterprise setting.
Even though traditional SATA and SAS solid-state drives still hold a large portion of the market, we can expect them to be pushed aside by NVMe and Intel Optane as prices go down. Flash storage is also becoming more durable, so it would not come as a surprise if NVMe takes over a portion of the market that typically belongs to mechanical hard drives.
NVMe and Intel Optane will remain the winners in the long run. Data centers are quick to adapt, and many already offer this emerging technology at fair prices. One of the cheapest tickets to the world of Intel Optane is the affordable Intel Xeon E-2186G platform for data centers.