Address Space Isolation for Linux

ASI is a proposed technique for preventing exploitation of a large set of CPU vulnerabilities. This site is specifically about the Linux kernel feature, an implementation of which has been developed at Google and deployed on certain servers.

This site serves to hold info and resources that are useful for the effort to get this feature into the upstream kernel.

ASI in a Nutshell

If you prefer to watch a video, the recording of the LSF/MM/BPF session from 2024 provides a basic introduction to ASI.

If you prefer a more detailed exegesis, see the resources section. More recent iterations include documentation patches that might be illustrative.

The basic idea of ASI is to introduce a new kernel address space that doesn’t contain any user data (userspace process memory or KVM guest memory). This new address space is called the nonsensitive address space; other than having user data unmapped it’s exactly the same as the normal kernel address space, which is called the sensitive address space.

Note: Earlier ASI resources used the terms “restricted” and “unrestricted” to describe the nonsensitive and sensitive address spaces respectively.

As much as possible, the kernel runs in the nonsensitive address space. Since speculative execution can’t access unmapped data, all user data is fully protected from transient execution attacks during this time. When the kernel does need to access user data, the access triggers a page fault. In the page fault handler, the kernel switches to the sensitive address space and returns; the faulting access is then retried, succeeds, and execution continues as normal. This fault-driven approach means ASI is transparent to most of the kernel; only very low-level code needs to be aware of the two separate address spaces.
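The fault-driven flow can be modelled in a few lines of userspace C. This is purely an illustrative toy, not the kernel implementation: the names (`current_space`, `handle_fault`, `kernel_access`) and the boolean page-table model are assumptions made up for this sketch.

```c
#include <assert.h>
#include <stdbool.h>

/* Which address space this toy "CPU" is currently running in. */
enum asi_space { ASI_NONSENSITIVE, ASI_SENSITIVE };

static enum asi_space current_space = ASI_NONSENSITIVE;

/* Toy page-table check: user data is only mapped in the sensitive
 * address space; everything else is mapped in both. */
static bool mapped(bool is_user_data)
{
	return !is_user_data || current_space == ASI_SENSITIVE;
}

/* Toy page fault handler: an access to unmapped user data switches to
 * the sensitive address space. (This is also where real ASI gets its
 * hook point for mitigation actions.) */
static void handle_fault(void)
{
	current_space = ASI_SENSITIVE;
}

/* A memory access as the rest of the kernel sees it: if it faults, the
 * handler switches address spaces and the access is transparently
 * retried. Callers never need to know two address spaces exist. */
static int kernel_access(bool is_user_data, int value)
{
	if (!mapped(is_user_data))
		handle_fault();
	assert(mapped(is_user_data)); /* the retried access now succeeds */
	return value;
}
```

The point of the sketch is the last function: the retry logic lives entirely in the fault path, which is why the approach is transparent to callers.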

ASI in a nutshell

Transitions between sensitive and nonsensitive states provide new hook-points for instantaneous mitigation actions such as:

ASI high-level flow

By restricting those actions to the instants where truly needed, ASI amortises their cost and thus enables robust mitigations that would otherwise be too expensive.
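The amortisation argument can be made concrete with a back-of-the-envelope model. All numbers and names below are made up for illustration; the only claim being modelled is that mitigation cost scales with how often the mitigation runs.

```c
#include <assert.h>

/* Hypothetical cost of one mitigation action (e.g. flushing
 * microarchitectural state), in arbitrary cycle units. */
enum { FLUSH_COST = 2000 };

/* Without ASI: a paranoid mitigation runs on every potentially
 * dangerous event (e.g. every kernel entry). */
static long cost_per_event(long events)
{
	return events * FLUSH_COST;
}

/* With ASI: the mitigation runs only on transitions into the sensitive
 * address space, which are much rarer, since most kernel code never
 * touches user data. */
static long cost_per_transition(long transitions)
{
	return transitions * FLUSH_COST;
}
```

If, say, only 1 in 100 kernel entries actually needs user data, the same mitigation becomes roughly 100x cheaper in aggregate, which is the sense in which ASI amortises the cost.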

ASI avoids mitigation costs

When it’s working well, ASI is faster than Linux’s existing mitigations, while being much more general and flexible. For example, while Google saw CPU overheads on the order of 5% when evaluating the upstream mitigations for SRSO, ASI’s overheads almost always stay below 1% on whatever metric is being measured.

This website will be updated with more detailed performance information as time goes on. Please contact Brendan Jackman via the address you’ll find on LKML if you have specific questions or workloads you’re interested in evaluating.

Status (Oct 2025)

ASI has been deployed in certain Google environments. No code is upstream yet. Various prototypes and RFCs have been shared over the years; see the resources section for details.

Efforts to merge the code upstream began in earnest in Sept 2025.

Resources

Presentations

The next presentation will be at LPC 2025 at the x86 microconference.

Most recent first:

Code & LKML discussions

The most up-to-date ASI code is the asi/next branch on Brendan Jackman’s GitHub repository. This is currently (Oct 2025) in a very messy state; it will be updated in the coming weeks, and more info will be added to this site.

Most recent first: