What is CXL, and why should you care?

By Keith Thuerk | SCIFI Future | 24 Aug 2022

CXL (Compute Express Link) allows compute and memory to be shared among components and devices, potentially leading to more efficient use of data-center resources. CXL is an open standard for high-speed CPU-to-device and CPU-to-memory connections, designed for high-performance data centers.

Where is CXL supported? CXL is supported by virtually every major hardware vendor and is built on top of PCI Express (PCIe) to provide coherent memory access between a CPU and a device (such as a hardware accelerator), or between a CPU and memory.

Who is backing CXL? All the Information Technology (IT) titans, including Cisco Systems, Dell EMC, Google, IBM, HPE, Intel, Microsoft, and Samsung. As you can see, it's a diverse group, all with the common goal of making this technology work and seeing it adopted.

What will need to change to support CXL? For software, very little, as CXL operates at the system level; on the hardware side, servers need CXL-capable CPUs and PCIe 5.0 slots.

What are the benefits of CXL? Low-latency cache and memory transactions via the following two constructs:

  • Memory pooling: think about how much memory databases (regardless of kind) consume. Wouldn't it be nice if they could handle spiking workloads by drawing from a pool of available resources? CXL allows this to take place.
  • Compute pooling: AI and ML require lots of compute resources, and compute pooling can take advantage of free CPU, FPGA, and GPU capacity as well as network ports.
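The memory-pooling idea above can be sketched as a toy model. This is purely illustrative — `MemoryPool`, `borrow`, and `release` are invented names, and real CXL pooling is orchestrated by hardware and fabric managers, not application code:

```python
class MemoryPool:
    """Toy model of a CXL-style disaggregated memory pool.

    Hosts borrow capacity for spiking workloads and return it when the
    spike passes. Illustrative only: real CXL pooling happens below the
    OS, not in application code like this.
    """

    def __init__(self, total_gb):
        self.total_gb = total_gb
        self.allocations = {}  # host name -> GB currently borrowed

    def borrow(self, host, gb):
        if gb > self.available():
            raise MemoryError(f"pool exhausted: only {self.available()} GB free")
        self.allocations[host] = self.allocations.get(host, 0) + gb
        return gb

    def release(self, host):
        # Return all capacity this host had borrowed to the pool.
        return self.allocations.pop(host, 0)

    def available(self):
        return self.total_gb - sum(self.allocations.values())


pool = MemoryPool(total_gb=1024)
pool.borrow("db-server-1", 256)    # database spikes, borrows from the pool
pool.borrow("ml-trainer-7", 512)   # training job grabs a big chunk
print(pool.available())            # 256 GB still free
pool.release("db-server-1")        # spike over, capacity goes back
print(pool.available())            # 512 GB free again
```

The point of the sketch: no single server has to be provisioned for its worst-case spike, because spare capacity is shared.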

What is CXL displacing? Storage Class Memory (SCM), co-developed by Micron Technology and Intel. SCM was developed to challenge DRAM, which is ultrafast, volatile, and expensive. SCM was designed for super-fast access in the range of 10-20 microseconds, which turns out to be still slightly slower than DRAM. Another major benefit of SCM is that this persistent memory did not suffer from wearing out the way SSD/flash drives do (flash combats this with wear-leveling technology). Late in '21 Micron walked away from the technology, and just in the past few weeks Intel walked away from SCM as well, writing off a $359 million investment in it (ouch; I would have loved just 1% of that write-off). With both parties abandoning SCM due to little enterprise IT adoption, it made sense to pivot to a more adoptable solution.

CXL Protocols

  • CXL.io: An enhanced version of the PCIe 5.0 protocol, used for initialization, device discovery, and connection to the device.
  • CXL.cache: This protocol defines interactions between a host and a device, allowing attached CXL devices to efficiently cache host memory with extremely low latency using a request-and-response approach.
  • CXL.mem: This provides a host processor with access to the memory of an attached device, covering both volatile and persistent memory architectures. CXL.mem is the big one, starting with CXL 1.1: if a server needs more RAM, a CXL memory module in an empty PCIe 5.0 slot can provide it. Yes, there is slightly lower performance and a little added latency, but that is a small trade-off to get more memory into a server without having to buy it. Of course, you do have to buy the CXL module.
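To make the CXL.mem latency trade-off concrete, here is a back-of-envelope comparison. The numbers are round illustrative figures (CXL-attached memory is often described as adding roughly one cross-socket NUMA hop of latency), not measurements of any specific part:

```python
# Back-of-envelope latency comparison -- illustrative numbers only;
# real figures depend on the CPU, the CXL device, and the workload.
local_dram_ns = 100   # ballpark local DRAM load-to-use latency
cxl_hop_ns = 100      # rough extra latency for a CXL.mem access,
                      # often likened to one cross-socket NUMA hop

cxl_mem_ns = local_dram_ns + cxl_hop_ns
overhead = cxl_hop_ns / local_dram_ns
print(f"CXL.mem access: ~{cxl_mem_ns} ns ({overhead:.0%} slower than local DRAM)")
```

Roughly double the latency of local DRAM sounds bad, but it is still orders of magnitude faster than paging to even the fastest SSD, which is why it works as overflow capacity.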


Key integrations - OpenCAPI to be folded into CXL


  • V3 - released Aug 2, '22
    • 64 GT/s (gigatransfers per second) with no added latency over V2
    • Full backward compatibility with CXL 2.0, CXL 1.1, and CXL 1.0
  • V2 - adds memory pooling and support for CXL switching, through which multiple CXL 2.0-connected host processors can use distributed shared memory and persistent (storage-class) memory.
    • Each host retains its own directly connected DRAM plus the ability to access external DRAM across the CXL 2.0 link
  • V1.1 - the version currently in use
  • V1 - released March '19; enables server CPUs to access shared memory on accelerator devices with a cache-coherent protocol.
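As a quick sanity check on what 64 GT/s means, the raw bandwidth of a x16 CXL 3.0 link works out as follows. The PCIe 6.0 PHY carries roughly one bit per transfer per lane, and this ignores FLIT/FEC framing overhead, so real throughput is somewhat lower:

```python
# Raw link bandwidth for a CXL 3.0 x16 link at 64 GT/s (PCIe 6.0 PHY).
# Ignores FLIT/FEC framing overhead, so actual throughput is a bit lower.
gt_per_s = 64                  # gigatransfers per second, per lane
lanes = 16
gbits = gt_per_s * lanes       # Gb/s across the whole link
gb_per_s = gbits / 8           # GB/s per direction
print(f"~{gb_per_s:.0f} GB/s per direction")  # ~128 GB/s
```

That is double the raw bandwidth of a PCIe 5.0 / CXL 2.0 x16 link (32 GT/s), which is where the "no added latency over V2" claim becomes impressive.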


CXL standards bodies to watch and follow - currently I have found only the CXL Consortium, and you can also learn quite a bit from following the Hot Chips 34 conference talks.

Summary - CXL is going to become a standard feature in every new server, and perhaps every storage array, in the next few years as the concept of fabrics takes off and brings higher levels of resource efficiency to data centers. Are you excited to see this tech take off?
