OpenJDK Project Loom
Last updated: November 26, 2025
1. Overview
In this tutorial, we’ll take a quick look at Project Loom. In essence, the primary goal of Project Loom is to explore, incubate, and deliver Java VM features and APIs built on top of them for the purpose of supporting easy-to-use, high-throughput lightweight concurrency and new programming models on the Java platform.
2. Project Loom
Project Loom is an attempt by the OpenJDK community to introduce a lightweight concurrency construct to Java. At the outset, Project Loom envisages introducing lightweight concurrency pervasively via new features, APIs, and optimizations across the whole JDK. The prototypes for Loom have introduced changes in the JVM and the Java library. The main feature of Project Loom is virtual threads, and it has already been implemented.
Although there is no scheduled release for a JDK version that completely implements Loom yet, we can access the Project Loom Early-Access Builds.
Before we discuss the various concepts of Loom, let’s discuss the current concurrency model in Java.
3. Java’s Concurrency Model
Presently, Thread represents the core abstraction of concurrency in Java. This abstraction and other concurrent APIs make it easy to write concurrent applications. To elaborate, we use Thread to create platform threads that are typically mapped 1:1 to operating system kernel threads. The operating system allocates a large stack and other resources to platform threads; however, these resources are limited. Nevertheless, we use platform threads for executing all types of tasks.
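To make the current model concrete, here's a minimal sketch of the classic platform-thread API described above (the class name is illustrative):

```java
public class PlatformThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        // A platform thread is typically mapped 1:1 to an OS kernel thread,
        // which allocates a large stack and other resources for it
        Thread worker = new Thread(() ->
            System.out.println("running on " + Thread.currentThread().getName()));
        worker.start();
        worker.join(); // wait for the kernel thread to finish
    }
}
```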
However, since Java uses the OS kernel threads for the implementation, it fails to meet today’s concurrency requirements. There are two major problems in particular:
- Threads cannot match the scale of the domain’s unit of concurrency. For example, applications usually allow up to millions of transactions, users, or sessions. However, the number of threads supported by the kernel is much less. Thus, a Thread for every user, transaction, or session is often not feasible.
- Most concurrent applications need some synchronization between threads for every request. Due to this, an expensive context switch happens between OS threads.
A possible solution to such problems is the use of asynchronous concurrent APIs. Common examples are CompletableFuture and RxJava. Provided that such APIs don’t block the kernel thread, it gives an application a finer-grained concurrency construct on top of Java threads.
On the other hand, such APIs are harder to debug and integrate with legacy APIs. Thus, there is a need for a lightweight concurrency construct that is independent of kernel threads.
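As a small sketch of the asynchronous style mentioned above, CompletableFuture composes stages as callbacks instead of blocking a thread between them (the class name is illustrative):

```java
import java.util.concurrent.CompletableFuture;

public class AsyncPipeline {
    public static void main(String[] args) {
        // Stages are composed as callbacks; no thread blocks between them
        CompletableFuture<String> result = CompletableFuture
            .supplyAsync(() -> "request")          // runs on a common-pool thread
            .thenApply(s -> s + " -> processed");  // callback instead of a blocking wait
        System.out.println(result.join());         // blocks only at the very end
    }
}
```

The callback style is exactly what makes such pipelines harder to debug: stack traces no longer reflect the logical flow of the program.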
4. Tasks and Schedulers
Any implementation of a thread, either lightweight or heavyweight, depends on two constructs:
- Task (also known as a continuation) – A sequence of instructions that can suspend itself for some blocking operation
- Scheduler – For assigning the continuation to the CPU and reassigning the CPU from a paused continuation
Presently, Java relies on OS implementations for both the continuation and the scheduler.
Now, in order to suspend a continuation, the entire call stack must be stored, and on resumption, restored. Since the OS implementation of continuations includes the native call stack along with Java's call stack, it results in a heavy footprint.
A bigger problem, though, is the use of an OS scheduler. Since the scheduler runs in kernel mode, it doesn't differentiate between threads and treats every CPU request in the same manner.
This type of scheduling is not optimal for Java applications in particular.
For example, consider an application thread that performs some action on the requests and then passes on the data to another thread for further processing. Here, it would be better to schedule both these threads on the same CPU. However, since the scheduler is agnostic to the thread requesting the CPU, this is impossible to guarantee.
Project Loom proposes to solve this through user-mode threads which rely on Java runtime implementation of continuations and schedulers instead of the OS implementation.
5. Virtual Threads
OpenJDK 21 introduced virtual threads, along with a provision to create them in the existing API (Thread and ThreadFactory).
5.1. How Are Virtual Threads Different?
Platform threads and virtual threads differ primarily in that virtual threads are user-mode threads, but there are other differences as well:
- Scheduling – Virtual threads are scheduled by the Java runtime rather than the operating system
- User-mode – Virtual threads wrap any task in an internal user-mode continuation. This allows the task to be suspended and resumed in Java runtime instead of the kernel
- Naming – Virtual threads don't have a thread name by default; however, we can set one
- Thread Priority – Virtual threads have a fixed thread priority that we can’t change
- Daemon Threads – Virtual threads are daemon threads; therefore, they don’t prevent the shutdown sequence
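The naming, priority, and daemon properties above can be observed directly on an unstarted virtual thread (a small sketch requiring JDK 21+; the class name is illustrative):

```java
public class VirtualThreadProperties {
    public static void main(String[] args) {
        Thread vt = Thread.ofVirtual().unstarted(() -> {});
        System.out.println("name: '" + vt.getName() + "'");  // empty by default
        System.out.println("priority: " + vt.getPriority()); // fixed at NORM_PRIORITY (5)
        System.out.println("daemon: " + vt.isDaemon());      // always true
    }
}
```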
5.2. What Are the Pros/Cons of Virtual Threads?
Virtual threads have their pros and cons:
| Pros | Cons |
|---|---|
| Virtual threads are lightweight. | As lightweight threads, they are not suitable for CPU-bound tasks. |
| Virtual threads can be created by the user. | Many virtual threads share the same operating system thread. When a virtual thread blocks inside a synchronized method or statement, it is pinned to its carrier, so the underlying platform thread blocks as well. |
| We can readily create virtual threads when we need them. | Project Loom developers have to modify every API in the JDK that uses threads so that it can be seamlessly used with virtual threads. |
| Virtual threads typically require few resources. As an example, a single JVM can support millions of virtual threads. | Thread-local variables would require a lot more memory if each of a million virtual threads had its copy of thread-local variables. |
5.3. When to Use Virtual Threads?
We can use virtual threads when we want to execute tasks that spend most of their time blocked. We use lightweight, user-mode virtual threads instead of platform threads for tasks that are mostly waiting for I/O operations to complete.
However, we shouldn’t use virtual threads for long-running CPU-intensive operations.
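For such mostly-blocking workloads, the JDK provides an executor that starts one virtual thread per task (a minimal sketch on JDK 21+; the sleep stands in for a blocking I/O call, and the class name is illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MostlyBlockingTasks {
    public static void main(String[] args) {
        // One virtual thread per task; far cheaper than a platform-thread pool
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                executor.submit(() -> {
                    Thread.sleep(100); // stands in for a blocking I/O call
                    return null;
                });
            }
        } // close() waits for all submitted tasks to finish
    }
}
```

With platform threads, 10,000 concurrent sleeping tasks would be prohibitively expensive; with virtual threads, they all block cheaply in user mode.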
5.4. How to Create Virtual Threads?
We have two main options for creating virtual threads. The Thread class adds a new static method, ofVirtual(), that returns a builder for creating a virtual Thread or a ThreadFactory that creates virtual threads.
Accordingly, we can start a virtual thread to run a task:
Thread thread = Thread.ofVirtual().start(task); // task is a Runnable
Alternatively, we can use the equivalent form to create a virtual thread to execute a task and schedule it to run:
Thread thread = Thread.startVirtualThread(task);
Furthermore, we can use a ThreadFactory that creates virtual threads:
ThreadFactory factory = Thread.ofVirtual().factory();
Thread thread = factory.newThread(task);
We can use the isVirtual() method to find if a thread is virtual:
boolean isThreadVirtual = thread.isVirtual();
A thread is virtual if this method returns true.
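Putting the options together, here's a runnable sketch on JDK 21+ (the class name is illustrative):

```java
import java.util.concurrent.ThreadFactory;

public class CreateVirtualThreads {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () ->
            System.out.println("virtual? " + Thread.currentThread().isVirtual());

        Thread t1 = Thread.ofVirtual().start(task);  // builder, started immediately
        Thread t2 = Thread.startVirtualThread(task); // shorthand for the above
        ThreadFactory factory = Thread.ofVirtual().factory();
        Thread t3 = factory.newThread(task);         // created unstarted
        t3.start();

        t1.join();
        t2.join();
        t3.join();
    }
}
```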
5.5. How Are Virtual Threads Implemented?
Virtual threads are implemented using a small set of underlying platform threads called carrier threads. When a virtual thread blocks, for example on an I/O operation, it is unmounted from its carrier so the carrier can run another virtual thread. However, the code running in a virtual thread is not aware of the underlying platform thread. Accordingly, the currentThread() method returns the Thread object for the virtual thread and not the underlying platform thread.
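We can observe this from inside a virtual thread (a small sketch on JDK 21+; the class name is illustrative):

```java
public class CurrentThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.ofVirtual().start(() ->
            // currentThread() reports the virtual thread itself
            // (its string form also names the carrier it currently runs on),
            // not the carrier's own Thread object
            System.out.println(Thread.currentThread()));
        vt.join();
    }
}
```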
Let’s go through some other optimizations for lightweight concurrency.
6. Delimited Continuations
A continuation (or co-routine) is a sequence of instructions that executes sequentially and that can yield and be resumed by the caller at a later stage.
Every continuation has an entry point and a yield point. The yield point is where it was suspended. Whenever the caller resumes the continuation, the control returns to the last yield point.
It’s important to realize that this suspend/resume now occurs in the language runtime instead of the OS. Therefore, it prevents the expensive context switch between kernel threads.
Delimited continuations were added to support virtual threads, so they don't need to be exposed as a public API. Let's discuss a delimited continuation, which is essentially a sequential sub-program with an entry point, using a pseudo-code example. We can create a continuation in main() with one() as its entry point.
Subsequently, we can invoke the continuation, which passes control to the entry point. one() may call other sub-routines, for example, two(). Execution is suspended in two(), which passes control outside of the continuation, and the first invocation of the continuation in main() returns.
Let’s invoke the continuation in main() to resume, which passes control to the last suspension point. All of this happens within the same execution context:
one() {
  ...
  two()
  ...
}

two() {
  ...
  suspend // suspension point
  ...     // resume point
}

main() {
  c = continuation(one) // create continuation
  c.continue()          // invoke continuation
  c.continue()          // invoke again to resume from the suspension point
}
For stackful continuations, such as the one we discussed, the JVM needs to capture, store, and resume call stacks independently of kernel threads. Adding this ability to manipulate call stacks to the JVM, called unwind-and-invoke (UAI), is a goal of the project. UAI allows unwinding the stack to some point and then invoking a method with given arguments.
7. ForkJoinPool & Custom Schedulers Support in Virtual Threads
Earlier, we discussed the shortcomings of the OS scheduler in scheduling relatable threads on the same CPU.
Although it's a goal for Project Loom to allow pluggable schedulers with virtual threads, a ForkJoinPool in asynchronous mode is used as the default scheduler. OpenJDK 19 added several enhancements to the ForkJoinPool class, including setParallelism(int size) for setting the target parallelism, thus controlling the future creation, use, and termination of worker threads.
ForkJoinPool uses a work-stealing algorithm: every worker thread maintains its own task deque and executes tasks from it. An idle thread doesn't block waiting for a task; instead, it steals one from the tail of another thread's deque.
The only difference in asynchronous mode is that each worker processes its own queue in FIFO order, which suits event-style tasks that are never joined.
ForkJoinPool adds a task scheduled by another running task to the local queue, so it's likely to execute on the same CPU.
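The asynchronous mode described above corresponds to the asyncMode flag of the ForkJoinPool constructor (a minimal sketch; the class name is illustrative):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;

public class AsyncModePool {
    public static void main(String[] args) throws InterruptedException {
        // asyncMode = true: each worker processes its local queue in FIFO order,
        // which suits event-style tasks that are never joined
        ForkJoinPool pool = new ForkJoinPool(
            Runtime.getRuntime().availableProcessors(),
            ForkJoinPool.defaultForkJoinWorkerThreadFactory,
            null,  // no custom uncaught-exception handler
            true); // asyncMode
        pool.execute(() ->
            System.out.println("task on " + Thread.currentThread().getName()));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```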
8. Structured Concurrency
The OpenJDK has introduced structured concurrency as a preview feature within the purview of Project Loom. The objective of structured concurrency is to treat groups of related tasks running in different threads as a single unit of work with a single scope. This streamlines error handling and cancellation, and thus improves reliability and observability.
For this purpose, it introduces the preview API java.util.concurrent.StructuredTaskScope, which splits a task into multiple concurrent subtasks. Further, the main task must wait for the subtasks to complete. Using the fork() method, we can start new threads to run the subtasks, and using the join() method, we can wait for them to finish. This API is designed to be used within a try-with-resources statement, for example:
Callable<String> task1 = ...
Callable<String> task2 = ...
try (var scope = new StructuredTaskScope<String>()) {
    StructuredTaskScope.Subtask<String> subtask1 = scope.fork(task1); // fork a thread for the first subtask
    StructuredTaskScope.Subtask<String> subtask2 = scope.fork(task2); // fork a thread for the second subtask
    scope.join(); // wait for the subtasks to finish
    // process the results of the subtasks
}
Afterward, we can process the results of the subtasks.
9. Conclusion
In this article, we discussed the problems in Java’s current concurrency model and the changes proposed by Project Loom.
In doing so, we discussed how lightweight virtual threads introduced in OpenJDK 21 provide an alternative to Java using kernel threads.