ASI is a proposed technique for preventing exploitation of a large set of CPU vulnerabilities. This site is specifically about the Linux kernel feature, an implementation of which has been developed at Google and deployed on certain servers.
This site serves to hold info and resources that are useful for the effort to get this feature into the upstream kernel.
ASI in a Nutshell
If you prefer to watch a video, the recording of the LSF/MM/BPF session from 2024 provides a basic introduction to ASI.
If you prefer a more detailed exegesis, see the resources section. More recent iterations include documentation patches that might be illustrative.
The basic idea of ASI is to introduce a new kernel address space that doesn’t contain any user data (userspace process memory or KVM guest memory). This new address space is called the nonsensitive address space; other than having user data unmapped it’s exactly the same as the normal kernel address space, which is called the sensitive address space.
Note: Earlier ASI resources used the terms "restricted" and "unrestricted" to describe the nonsensitive and sensitive address spaces respectively.
As much as possible, the kernel runs in the nonsensitive address space. Since speculative execution can’t access unmapped data, all data is fully protected from transient execution attacks during this time. When the kernel does need to access user data, this access triggers a page fault. In the page fault handler, the kernel switches to the sensitive address space and then continues, so that the faulting memory access is retried and succeeds and execution continues. This fault-driven approach means ASI is transparent to most of the kernel; only very low-level code needs to be aware of the two separate address spaces.
Transitions between sensitive and nonsensitive states provide new hook points for instantaneous mitigation actions such as:
- Flushing microarchitectural data buffers to block side-channels,
- Flushing control-flow buffers (such as branch predictors) to block mistraining,
- Instantaneously pausing (“stunning”) hyperthread siblings to prevent concurrency-based attacks.
By restricting those actions to the instants where truly needed, ASI amortises their cost and thus enables robust mitigations that would otherwise be too expensive.
When it’s working well, ASI is faster than Linux’s existing mitigations, while being much more general and flexible. For example, while Google saw CPU overheads on the order of 5% when evaluating upstream mitigations for SRSO, ASI’s overheads almost always stay below 1% of whatever endpoint is being measured.
This website will be updated with more detailed performance information as time goes on. Please contact Brendan Jackman via the address you’ll find on LKML if you have specific questions or workloads you’re interested in evaluating.
Status (Oct 2025)
ASI has been deployed in certain Google environments. No code is upstream yet. Various prototypes and RFCs have been shared over the years; see the resources section for details.
Attempts to merge code upstream began in earnest in Sept 2025.
Resources
Presentations
The next presentation will be at LPC 2025 at the x86 microconference.
Most recent first:
- Slides from Linux MM Alignment Session October 2025.
- Slides & LWN coverage from LSF/MM/BPF 2025.
- Slides & recording from LPC 2024.
- Recording & LWN coverage from LSF/MM/BPF 2024. This session included a basic conceptual intro to ASI.
Code & LKML discussions
The most up-to-date ASI code is the asi/next branch on Brendan Jackman's GitHub repository. This is currently (Oct 2025) in a very messy state; it will be updated in the coming weeks, with more info added to this site.
Most recent first:
Sept 2025:
[PATCH 00/21] mm: ASI direct map management
This is the first [PATCH] posting, i.e. the first code that's been presented as more than a prototype or proof-of-concept. It was an attempt to introduce basic page table management without the actual address-space-switching logic. Feedback from Dave Hansen suggests this is the wrong approach to getting ASI merged:

> Just to be clear: we don't merge code that doesn't do anything functional. The bar for inclusion is that it has to do something practical and useful for end users. It can't be purely infrastructure or preparatory.
Aug 2025:
[Discuss] First steps for ASI (ASI is fast again)
This introduces a proof-of-concept for how to solve performance issues with the page cache. It attempts to generate consensus on whether the kernel wants ASI, at least in principle.
Discussion centers on implementation of that solution. Lorenzo Stoakes suggests “we should just get going with some iterative series”.
Jan 2025:
[PATCH RFC v2 00/29] Address Space Isolation (ASI)
A general proof-of-concept patchset introducing a minimal implementation of ASI. Compared to previous iterations, the key addition is support for protection against attacks from native processes (not only from VM guests).
July 2024:
[PATCH 00/26] Address Space Isolation (ASI) 2024
Compared to previous patchsets, this is just a simplification, attempting to make technical discussion more practical by shrinking the scope.
There is some discussion of implementation details.
Feb 2022:
[RFC PATCH 00/47] Address Space Isolation for KVM
This is Google's first public posting of ASI. (Note: this was not the first implementation of the feature; see the references of that post for more history.)
Other interesting links
- Mike Rapoport's unmapped_alloc() branch. This endeavour overlapped somewhat with ASI in that it added support for allocating pages that are absent from the direct map.
- Sep 2025: [PATCH v7 00/12] Direct Map Removal Support for guest_memfd, by Patrick Roy. This is a feature to allow KVM guest memory that is allocated via guest_memfd to be completely removed from the direct map. It solves a set of problems that overlaps with those solved by ASI:
  - It immediately prevents a large class of CPU exploits, or at least requires them to be re-engineered and made more complicated.
  - It also appears to help prevent exploits of software bugs, which ASI certainly does not (ASI is completely transparent to architectural execution).
  - However, it requires coordination across the entire platform stack to be useful: it only protects KVM guest memory, and only with a compatible hypervisor stack.