Project Loom is an OpenJDK project, led by Oracle, that adds lightweight threads, or “virtual threads,” to Java. The goal of Project Loom is to make concurrent code easier to write by letting plain, blocking-style code scale without resorting to complex asynchronous abstractions such as callbacks, reactive pipelines, or coroutines.
Key Features
Virtual Threads:
The main innovation of Loom. Virtual threads are lightweight threads implemented at the JVM level that let you run thousands or even millions of concurrent tasks.
Support for parallel calls:
Thanks to virtual threads, you can write straightforward blocking code (for example, synchronous sleep, wait, or I/O calls) and still run many such tasks concurrently.
Support since JDK 19:
Virtual threads were included as a preview feature in JDK 19 and JDK 20, and became a standard (final) feature in JDK 21.
Benefits
Simplified asynchronous code:
Code written with virtual threads can be linear and blocking, and does not require asynchronous APIs or complex constructs.
Reduced overhead:
There is no need to create real OS threads for each task, which reduces overhead and allows the JVM to run hundreds of thousands of virtual threads on a limited number of OS threads.
Scalability:
Virtual threads make applications that handle a large number of concurrent tasks (e.g., high-load servers) scalable, as the sketch after this list illustrates.
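As a rough illustration of the benefits above, here is a minimal sketch (assuming JDK 21+; the class name is illustrative, not from the original text) that submits 10,000 blocking tasks to a virtual-thread-per-task executor. Each task sleeps for one second, yet the whole batch finishes in roughly one second of wall-clock time, because sleeping virtual threads do not hold on to OS threads.

import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadScalabilityDemo {
    public static void main(String[] args) {
        Instant start = Instant.now();
        // Submit 10,000 blocking tasks; each sleeps for one second.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i ->
                executor.submit(() -> {
                    Thread.sleep(Duration.ofSeconds(1));
                    return i;
                })
            );
        } // close() waits for all submitted tasks to finish
        // Typically prints an elapsed time just over 1000 ms,
        // because the sleeping virtual threads release their carrier (OS) threads.
        System.out.println("Elapsed: " + Duration.between(start, Instant.now()).toMillis() + " ms");
    }
}

For comparison, running the same workload on a fixed pool of, say, 200 platform threads would take roughly 50 seconds (10,000 tasks / 200 threads × 1 second), which is the kind of overhead the text refers to.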
Disadvantages
JDK and JVM dependency:
Virtual threads require JDK 19 or later (JDK 21+ for the final feature) and run only on the JVM, which limits their use where older Java versions or non-JVM platforms must be supported.
Limited support and stability:
Project Loom is still in development and requires further improvements and optimization.
Lack of reactive library functionality:
Loom does not offer built-in reactive programming tools, such as data transformations and operators, that libraries like RxJava have.
Working principle
Creating a virtual thread:
Virtual threads are created in the same way as regular Java threads (Thread), but they are much lighter and do not take up a significant amount of system resources.
When a virtual thread is created, the JVM does not permanently allocate an OS thread to it. Instead, it runs as a task that can be mounted on a carrier thread, suspended, and resumed without continuously occupying an OS thread.
Thread.startVirtualThread(() -> {
    // code running in a virtual thread
});
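For completeness, a runnable sketch (assuming JDK 21+; the class and thread names are illustrative) showing the two common ways to create virtual threads — Thread.startVirtualThread and the Thread.ofVirtual() builder:

public class CreateVirtualThreads {
    public static void main(String[] args) throws InterruptedException {
        // Unstructured start, analogous to new Thread(...).start()
        Thread t1 = Thread.startVirtualThread(() ->
                System.out.println("Hello from " + Thread.currentThread()));

        // Builder API: lets you name the thread before starting it
        Thread t2 = Thread.ofVirtual()
                .name("my-virtual-thread")
                .start(() -> System.out.println("Hello from " + Thread.currentThread()));

        t1.join();
        t2.join();
    }
}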
JVM Thread Scheduler:
Virtual threads are managed by a special scheduler at the JVM level, which distributes their execution to real OS threads (carrier threads).
The scheduler operates similarly to an OS scheduler, but inside the JVM, which allows more flexible management of suspending and resuming virtual threads. By default this scheduler is a dedicated ForkJoinPool whose size roughly matches the number of CPU cores, as the sketch below shows.
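The following sketch (JDK 21+; names are illustrative) makes the carrier threads visible. The toString() of a running virtual thread typically ends with the carrier it is currently mounted on (for example ...@ForkJoinPool-1-worker-3); this string format is an implementation detail rather than a stable API, but it is handy for demonstrations. Collecting that suffix across 1,000 tasks shows that only a handful of carriers serve them all.

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class CarrierThreadDemo {
    public static void main(String[] args) {
        // Collect the names of the carrier (OS) threads that actually run our tasks.
        Set<String> carriers = ConcurrentHashMap.newKeySet();
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 1_000).forEach(i ->
                executor.submit(() -> {
                    // For a mounted virtual thread, toString() usually ends with "@<carrier name>".
                    String s = Thread.currentThread().toString();
                    carriers.add(s.substring(s.indexOf('@') + 1));
                })
            );
        }
        // Usually only about as many carriers as CPU cores, despite 1,000 tasks.
        System.out.println("Distinct carrier threads observed: " + carriers.size());
    }
}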
Blocking operations and suspension:
When performing a blocking operation, such as I/O or sleep, the virtual thread is automatically suspended by the JVM and releases its carrier thread.
This involves a lightweight context switch in which the virtual thread unmounts from the carrier thread. The process is significantly faster and less resource-intensive than switching between OS threads.
When the blocking operation completes, the virtual thread is queued again to run on one of the carrier threads.
Virtual thread resumption:
When the blocking operation completes, the virtual thread is rescheduled and picked up by a carrier thread; the sketch below makes this visible.
This allows the JVM to run hundreds of thousands of virtual threads on a limited number of OS threads, providing scalability.
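A small sketch (JDK 21+; names are illustrative) of suspension and resumption: the virtual thread prints the current thread before and after a blocking sleep. Because it unmounts during the sleep, it may be resumed on a different carrier thread, or on the same one; the scheduler decides.

public class SuspendResumeDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.startVirtualThread(() -> {
            // The carrier thread appears after '@' in the toString of a mounted virtual thread.
            System.out.println("Before sleep: " + Thread.currentThread());
            try {
                Thread.sleep(100); // blocking call: the virtual thread unmounts from its carrier
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            // After resuming, the virtual thread may be mounted on a different carrier.
            System.out.println("After sleep:  " + Thread.currentThread());
        });
        vt.join();
    }
}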
Stack Management:
One of the main goals of the Project Loom implementation was to create lightweight virtual threads with support for suspension and resumption, which required modifications to the stack.
When a virtual thread is suspended, the JVM saves the call stack of that thread, allowing the thread to be resumed later as if it had never been suspended.
Efficient use of resources:
Since virtual threads are not tied one-to-one to OS threads, the JVM can run thousands of virtual threads using just a few carrier threads.
This allows for efficient use of CPU time and reduced overhead, since the JVM does not have to make a system call to create a new OS thread for every task.
Technical Components and Aspects
Virtual Threads:
A key component of Project Loom, virtual threads are created and managed by the JVM, not the OS, and can be suspended and resumed as needed, freeing up resources for other tasks.
Carrier Threads:
These are the OS threads that the JVM uses to execute virtual threads. The JVM manages a pool of carrier threads on which virtual threads are mounted while they run.
Advanced stack management:
Loom uses stack copying on suspend, allowing the current execution state of a thread to be saved and restored on resume.
Optimized I/O management:
Loom is built into the JVM so that blocking calls such as reading from files or network requests are handled automatically by pausing virtual threads until the I/O is complete.
Compatibility with existing code:
Virtual threads appear as regular Thread instances to the rest of the Java program, allowing Loom to be used with existing code without modification (see the sketch after this list).
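A brief sketch (JDK 21+; names are illustrative) of this compatibility: a virtual thread is an ordinary java.lang.Thread instance that can be handed to existing APIs unchanged, and Thread.isVirtual() tells the two kinds apart.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PlainThreadCompatibility {
    // An ordinary method that only knows about java.lang.Thread.
    static void report(Thread t) {
        System.out.println("'" + t.getName() + "' is virtual: " + t.isVirtual());
    }

    public static void main(String[] args) throws Exception {
        Thread vt = Thread.ofVirtual().name("vt-demo").unstarted(() -> {});
        report(vt);                      // virtual threads are regular Thread instances
        report(Thread.currentThread());  // the main platform thread

        // Existing ExecutorService-based code works unchanged
        // (note: unnamed virtual threads from this executor have an empty name):
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            pool.submit(() -> report(Thread.currentThread())).get();
        }
    }
}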
Compatibility
Project Loom is supported from JDK 19 onwards. Virtual threads were introduced as a preview feature in JDK 19, refined in JDK 20, and finalized as a standard feature in JDK 21 (JEP 444). Project Loom is developed under OpenJDK, so support is generally limited to OpenJDK-based JVMs.
JDK 8 – JDK 18:
On these versions, Loom virtual threads are not supported and cannot be enabled. For these versions of Java, you have to use alternative approaches to asynchronous programming, such as platform threads, CompletableFuture, RxJava, Kotlin coroutines, and others (a CompletableFuture sketch follows this list).
Third-party JVM implementations based on JDK 8-18:
Third-party builds such as Amazon Corretto, GraalVM, or Zulu that are based on JDK 8–18 likewise lack Project Loom virtual thread support.
JDK 19 and above:
Virtual threads are available starting with this version, but in JDK 19 and 20 only as a preview feature that must be explicitly activated with the --enable-preview flag.
JDK 20 and 21:
JDK 20 shipped a second preview of virtual threads, and JDK 21 made them a standard feature; subsequent releases continue to add stability and performance improvements.
OpenJDK JVM:
Project Loom is developed within OpenJDK, so OpenJDK-based distributions (e.g. Zulu, Eclipse Temurin — formerly AdoptOpenJDK — and others) also support virtual threads if they are built from JDK 19+.
Non-OpenJDK JVMs:
If the JVM is not based on OpenJDK (such as IBM J9 or other proprietary implementations), Project Loom support is not guaranteed, even if the version matches JDK 19 or higher. Such JVMs will likely not include Loom, as the project is specific to OpenJDK and depends on its internal implementation.
GraalVM:
OpenJDK 19+-based GraalVM supports Project Loom if an OpenJDK-compatible configuration is used. However, if it is a specialized version of GraalVM that is not fully compatible with OpenJDK (e.g. Native Image), support may be limited or absent.
Proprietary builds that are not compatible with OpenJDK:
Some proprietary JVM builds may not include all the features available in OpenJDK 19+, so Loom support will also be missing.
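For projects stuck on JDK 8–18, the CompletableFuture alternative mentioned above might look like the following minimal sketch (class and method names are illustrative); here the concurrency comes from composing futures on the common ForkJoinPool, not from Loom.

import java.util.concurrent.CompletableFuture;

public class CompletableFutureAlternative {
    public static void main(String[] args) {
        // Two "slow" calls executed asynchronously on the common ForkJoinPool.
        CompletableFuture<Integer> a = CompletableFuture.supplyAsync(() -> slowValue(10));
        CompletableFuture<Integer> b = CompletableFuture.supplyAsync(() -> slowValue(20));

        // Combine both results; only the final join() blocks the caller.
        int sum = a.thenCombine(b, Integer::sum).join();
        System.out.println("Result: " + sum); // Result: 30
    }

    private static int slowValue(int value) {
        try {
            Thread.sleep(1000); // simulates blocking I/O
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return value;
    }
}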
Differences from Kotlin Coroutines
Execution model:
Project Loom:
Loom’s virtual threads are implemented at the JVM level. They act as lightweight threads, and the JVM manages their scheduling. This brings Loom closer to the traditional blocking execution model, but without the overhead of creating a full thread for each task.
Kotlin Coroutines:
operate at the library and compiler level and use a suspend/resume mechanism to express asynchronous code. Suspension points are marked with the suspend keyword, and the Kotlin compiler transforms suspend functions into state machines, so suspension is handled by the compiler and the kotlinx.coroutines runtime rather than by the JVM.
Support and integration:
Project Loom:
This is a JVM-native solution, meaning that any library or code that uses blocking calls benefits from virtual threads without modification (one known caveat: blocking inside a synchronized block or a native call can pin the carrier thread on older JDK releases).
Kotlin Coroutines:
work exclusively in Kotlin and require writing asynchronous code using suspend functions and other coroutine constructs.
Semantics and blocking:
Project Loom:
You can use “blocking” code with virtual threads without a significant performance penalty.
Kotlin Coroutines:
code must be written with asynchrony in mind, using suspend and await, which sometimes requires restructuring the usual synchronous algorithms.
Performance:
Project Loom:
has an advantage on the JVM because it creates and manages virtual threads directly, which minimizes overhead compared to OS threads.
Kotlin Coroutines:
are also very efficient, since they need few resources to manage suspension and switching between tasks, but they are a compiler-and-library feature rather than a JVM feature and therefore require Kotlin.
Examples and comparison with Kotlin Coroutines
A simple example of running multiple tasks in parallel:
Project Loom:
import java.util.concurrent.Executors;

public class Main {
    public static void main(String[] args) {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            executor.submit(() -> {
                // Unnamed virtual threads have an empty name, so the full Thread is printed,
                // e.g. Task 1 on VirtualThread[#21]/runnable@ForkJoinPool-1-worker-1
                System.out.println("Task 1 on " + Thread.currentThread());
            });
            executor.submit(() -> {
                System.out.println("Task 2 on " + Thread.currentThread());
            });
            // Expected output: Main task on main (output order may vary)
            System.out.println("Main task on " + Thread.currentThread().getName());
        }
    }
}
Kotlin Coroutines:
import kotlinx.coroutines.*

fun main() = runBlocking {
    launch {
        // Runs on the runBlocking dispatcher, so this prints: Task 1 on main
        println("Task 1 on ${Thread.currentThread().name}")
    }
    launch {
        // Expected output: Task 2 on main
        println("Task 2 on ${Thread.currentThread().name}")
    }
    // Expected output: Main task on main (printed before the launched coroutines run)
    println("Main task on ${Thread.currentThread().name}")
}
Asynchronous data processing with waiting for results:
Project Loom:
import java.util.concurrent.Executors;

public class Main {
    public static void main(String[] args) throws Exception {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            var result1 = executor.submit(() -> getData1());
            var result2 = executor.submit(() -> getData2());
            // Expected output: Result: 30
            System.out.println("Result: " + (result1.get() + result2.get()));
        }
    }

    public static int getData1() throws InterruptedException {
        Thread.sleep(1000);
        return 10;
    }

    public static int getData2() throws InterruptedException {
        Thread.sleep(1000);
        return 20;
    }
}
Kotlin Coroutines:
import kotlinx.coroutines.*

fun main() = runBlocking {
    val result1 = async { getData1() }
    val result2 = async { getData2() }
    // Expected output: Result: 30
    println("Result: ${result1.await() + result2.await()}")
}

suspend fun getData1(): Int {
    delay(1000)
    return 10
}

suspend fun getData2(): Int {
    delay(1000)
    return 20
}
Using delay in Kotlin and the equivalent in Java with virtual threads:
Project Loom:
public class Main {
    public static void main(String[] args) throws InterruptedException {
        var thread = Thread.startVirtualThread(() -> {
            // Expected output: Start
            System.out.println("Start");
            try {
                Thread.sleep(1000); // works like delay, suspending only the virtual thread
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            // Expected output: End after 1 second
            System.out.println("End after 1 second");
        });
        thread.join();
    }
}
Kotlin Coroutines:
import kotlinx.coroutines.*

fun main() = runBlocking {
    // Expected output: Start
    println("Start")
    delay(1000)
    // Expected output: End after 1 second
    println("End after 1 second")
}
Parallel execution of tasks with waiting for the result:
Project Loom:
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.Executors;

public class Main {
    public static void main(String[] args) throws Exception {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            var tasks = List.of(
                (Callable<Integer>) () -> getData(1),
                (Callable<Integer>) () -> getData(2),
                (Callable<Integer>) () -> getData(3)
            );
            var results = executor.invokeAll(tasks)
                .stream()
                .map(future -> {
                    try {
                        return future.get();
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                })
                .toList();
            // Expected output: Results: [10, 20, 30]
            System.out.println("Results: " + results);
        }
    }

    public static int getData(int id) throws InterruptedException {
        Thread.sleep(500);
        return id * 10;
    }
}
Kotlin Coroutines:
import kotlinx.coroutines.*

fun main() = runBlocking {
    val results = listOf(
        async { getData(1) },
        async { getData(2) },
        async { getData(3) }
    ).awaitAll()
    // Expected output: Results: [10, 20, 30]
    println("Results: $results")
}

suspend fun getData(id: Int): Int {
    delay(500)
    return id * 10
}
Using channels to exchange data:
Project Loom:
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class Main {
    public static void main(String[] args) throws InterruptedException {
        var queue = new LinkedBlockingQueue<Integer>();
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            executor.submit(() -> {
                try {
                    for (int i = 1; i <= 5; i++) {
                        queue.put(i * i);
                    }
                    // Special value signalling completion
                    queue.put(-1);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            int value;
            while ((value = queue.take()) != -1) {
                // Expected output: 1, 4, 9, 16, 25 (each number on a new line)
                System.out.println(value);
            }
        }
    }
}
Kotlin Coroutines:
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel

fun main() = runBlocking {
    val channel = Channel<Int>()
    launch {
        for (x in 1..5) channel.send(x * x)
        channel.close()
    }
    // Expected output: 1, 4, 9, 16, 25 (each number on a new line)
    for (y in channel) println(y)
}
For variety, here are a few examples written in Kotlin that use Project Loom directly, to show that virtual threads are not limited to Java code.
Making Multiple HTTP Requests:
Suppose you have multiple APIs that you want to access in parallel. Here’s how you can do it using virtual threads.
import java.net.HttpURLConnection
import java.net.URL
import java.util.concurrent.Executors

fun main() {
    val urls = listOf(
        "https://jsonplaceholder.typicode.com/posts/1",
        "https://jsonplaceholder.typicode.com/posts/2",
        "https://jsonplaceholder.typicode.com/posts/3"
    )
    val executor = Executors.newVirtualThreadPerTaskExecutor()
    try {
        val tasks = urls.map { url ->
            executor.submit { fetchUrl(url) }
        }
        // Wait for all tasks to complete
        tasks.forEach { it.get() }
    } finally {
        executor.close()
    }
}

fun fetchUrl(urlString: String) {
    val url = URL(urlString)
    val connection = url.openConnection() as HttpURLConnection
    connection.requestMethod = "GET"
    val responseCode = connection.responseCode
    println("Response Code for $urlString: $responseCode")
    connection.inputStream.use { inputStream ->
        val response = inputStream.bufferedReader().readText()
        // Print the first 100 characters
        println("Response Body for $urlString: ${response.take(100)}")
    }
}
Parallel execution of calculations:
In this example, we will execute several calculations in parallel.
import java.util.concurrent.Executors

fun main() {
    val executor = Executors.newVirtualThreadPerTaskExecutor()
    try {
        val tasks = List(5) { i ->
            executor.submit {
                val result = heavyComputation(i)
                println("Result of computation $i: $result on thread: ${Thread.currentThread().name}")
            }
        }
        // Wait for all tasks to complete
        tasks.forEach { it.get() }
    } finally {
        executor.close()
    }
}

fun heavyComputation(n: Int): Int {
    // Emulation of heavy calculations
    Thread.sleep(1000)
    return n * n
}
Working with I/O Streams:
This example demonstrates how virtual threads can simplify blocking file I/O.
import java.io.File
import java.util.concurrent.Executors

fun main() {
    val fileNames = listOf("file1.txt", "file2.txt", "file3.txt")
    val executor = Executors.newVirtualThreadPerTaskExecutor()
    try {
        val tasks = fileNames.map { fileName ->
            executor.submit { readFile(fileName) }
        }
        // Wait for all tasks to complete
        tasks.forEach { it.get() }
    } finally {
        executor.close()
    }
}

fun readFile(fileName: String) {
    try {
        val content = File(fileName).readText()
        // Print the first 50 characters
        println("Content of $fileName: ${content.take(50)}")
    } catch (e: Exception) {
        println("Failed to read $fileName: ${e.message}")
    }
}
Scalable Web Server on Virtual Threads:
Here we create a simple HTTP server that uses virtual threads to handle each request. This allows it to easily scale and handle a large number of requests simultaneously.
import java.net.ServerSocket
import java.util.concurrent.Executors

fun main() {
    val port = 8080
    val serverSocket = ServerSocket(port)
    println("Server started on port $port")

    val executor = Executors.newVirtualThreadPerTaskExecutor()
    try {
        while (true) {
            val clientSocket = serverSocket.accept()
            executor.submit {
                clientSocket.use {
                    val request = it.getInputStream().bufferedReader().readLine()
                    println("Received request: $request on thread: ${Thread.currentThread().name}")

                    val response = """
                        HTTP/1.1 200 OK
                        Content-Type: text/plain

                        Hello from virtual threads!
                    """.trimIndent()
                    it.getOutputStream().apply {
                        write(response.toByteArray())
                        flush()
                    }
                    println("Response sent on thread: ${Thread.currentThread().name}")
                }
            }
        }
    } finally {
        executor.close()
    }
}
Project Loom integration with Spring
To integrate Project Loom with Spring, you can use virtual threads to handle requests, making asynchronous programming simpler and more efficient than traditional threads.
Configuration for Tomcat:
We can configure Tomcat to use virtual threads by specifying a custom thread factory.
Configuring TomcatServletWebServerFactory with a virtual-thread-backed executor allows each request to be processed on a virtual thread, reducing resource consumption under high load.
import org.apache.catalina.core.StandardThreadExecutor;
import org.apache.coyote.http11.AbstractHttp11Protocol;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.servlet.server.ServletWebServerFactory;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class TomcatLoomApplication {

    @Bean
    public ServletWebServerFactory servletContainer() {
        TomcatServletWebServerFactory factory = new TomcatServletWebServerFactory();
        factory.addConnectorCustomizers(connector -> {
            AbstractHttp11Protocol<?> protocol = (AbstractHttp11Protocol<?>) connector.getProtocolHandler();
            StandardThreadExecutor executor = new StandardThreadExecutor();
            executor.setNamePrefix("loom-thread-");
            executor.setMaxThreads(Integer.MAX_VALUE);
            executor.setMinSpareThreads(10);
            executor.setThreadPriority(Thread.NORM_PRIORITY);
            // Set up a virtual thread factory
            executor.setThreadFactory(Thread.ofVirtual().factory());
            protocol.setExecutor(executor);
        });
        return factory;
    }

    public static void main(String[] args) {
        SpringApplication.run(TomcatLoomApplication.class, args);
    }
}
Configuration for Jetty:
To set up virtual threads in Jetty, we create a ThreadPool that uses virtual threads.
Using QueuedThreadPool with a virtual thread factory allows Jetty to scale and create threads as needed without the overhead of system threads.
import org.eclipse.jetty.util.thread.QueuedThreadPool;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.embedded.jetty.JettyServletWebServerFactory;
import org.springframework.boot.web.servlet.server.ServletWebServerFactory;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class JettyLoomApplication {

    @Bean
    public ServletWebServerFactory servletContainer() {
        // Max threads, min threads, idle timeout (ms)
        QueuedThreadPool threadPool = new QueuedThreadPool(Integer.MAX_VALUE, 10, 60000);
        threadPool.setName("loom-thread-");
        // Set up a virtual thread factory
        threadPool.setThreadFactory(Thread.ofVirtual().factory());

        JettyServletWebServerFactory factory = new JettyServletWebServerFactory();
        factory.setThreadPool(threadPool);
        return factory;
    }

    public static void main(String[] args) {
        SpringApplication.run(JettyLoomApplication.class, args);
    }
}
Configuration for Undertow:
To work with virtual threads in Undertow, we use a custom Worker with virtual threads.
In Undertow, an XnioWorker is created, configured to use virtual threads. This is suitable for asynchronous request processing and the non-blocking architecture of Undertow.
import io.undertow.UndertowOptions;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.embedded.undertow.UndertowServletWebServerFactory;
import org.springframework.boot.web.servlet.server.ServletWebServerFactory;
import org.springframework.context.annotation.Bean;
import org.xnio.OptionMap;
import org.xnio.Options;
import org.xnio.Xnio;
import org.xnio.XnioWorker;

import java.io.IOException;
import java.io.UncheckedIOException;

@SpringBootApplication
public class UndertowLoomApplication {

    @Bean
    public ServletWebServerFactory servletContainer() {
        XnioWorker worker = createVirtualThreadWorker();
        UndertowServletWebServerFactory factory = new UndertowServletWebServerFactory();
        factory.addBuilderCustomizers(builder -> {
            builder.setWorker(worker);
            builder.setServerOption(UndertowOptions.ENABLE_HTTP2, true);
        });
        return factory;
    }

    private XnioWorker createVirtualThreadWorker() {
        Xnio xnio = Xnio.getInstance();
        try {
            return xnio.createWorker(OptionMap.builder()
                    .set(Options.THREAD_DAEMON, true)
                    .set(Options.WORKER_IO_THREADS, 4)
                    .set(Options.WORKER_TASK_CORE_THREADS, 10)
                    // Effectively unlimited number of task threads
                    .set(Options.WORKER_TASK_MAX_THREADS, Integer.MAX_VALUE)
                    .set(Options.THREAD_FACTORY, Thread.ofVirtual().factory())
                    .getMap());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        SpringApplication.run(UndertowLoomApplication.class, args);
    }
}
Configuring the embedded Spring Boot server to use virtual threads (Project Loom) cannot, in the Spring Boot versions targeted here, be fully accomplished through application.properties or application.yml alone. Using virtual threads requires supplying a custom ThreadFactory and changing the thread pool setup, which the built-in Spring Boot properties do not expose.
However, there are alternative ways to configure virtual threads without writing the configuration inline in a @Bean. For example, you can create your own @Configuration for more centralized control; and as Spring Boot and the application servers evolve, simpler options appear (Spring Boot 3.2 and later, for instance, add a spring.threads.virtual.enabled property that covers the common case).
Property-based custom configuration approach:
You can create a custom configuration so that when you include a specific property in application.properties, Spring automatically applies the virtual thread settings.
Add a custom property to application.properties:
server.use-virtual-threads=true
Let’s create a configuration class that checks the value of the property and applies the settings for the selected server (for example, Tomcat).
In this example, the server.use-virtual-threads=true setting enables the configuration with virtual threads if this property is set in application.properties. This allows you to switch the use of virtual threads through the configuration without changing the application code.
import org.apache.catalina.core.StandardThreadExecutor;
import org.apache.coyote.http11.AbstractHttp11Protocol;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.servlet.server.ServletWebServerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class VirtualThreadConfig {

    @Bean
    @ConditionalOnProperty(name = "server.use-virtual-threads", havingValue = "true")
    public ServletWebServerFactory servletContainer() {
        TomcatServletWebServerFactory factory = new TomcatServletWebServerFactory();
        factory.addConnectorCustomizers(connector -> {
            AbstractHttp11Protocol<?> protocol = (AbstractHttp11Protocol<?>) connector.getProtocolHandler();
            StandardThreadExecutor executor = new StandardThreadExecutor();
            executor.setNamePrefix("loom-thread-");
            executor.setMaxThreads(Integer.MAX_VALUE);
            executor.setMinSpareThreads(10);
            executor.setThreadPriority(Thread.NORM_PRIORITY);
            // Set up a virtual thread factory
            executor.setThreadFactory(Thread.ofVirtual().factory());
            protocol.setExecutor(executor);
        });
        return factory;
    }
}
Using CommandLineRunner for additional flexibility:
Another way is to create a CommandLineRunner or ApplicationRunner that checks the property and applies the virtual thread settings if enabled.
To use this approach, set the -Dserver.use-virtual-threads=true system property when starting the Spring Boot application. Keep in mind that runners execute after the embedded server has already started, so customizers added this late may not affect the connector that is already running; the @Configuration approach above is the more reliable option.
import org.apache.catalina.core.StandardThreadExecutor;
import org.apache.coyote.http11.AbstractHttp11Protocol;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.stereotype.Component;

@Component
public class VirtualThreadCommandLineRunner implements CommandLineRunner {

    private final TomcatServletWebServerFactory factory;

    public VirtualThreadCommandLineRunner(TomcatServletWebServerFactory factory) {
        this.factory = factory;
    }

    @Override
    public void run(String... args) {
        boolean useVirtualThreads =
                Boolean.parseBoolean(System.getProperty("server.use-virtual-threads", "false"));
        if (useVirtualThreads) {
            factory.addConnectorCustomizers(connector -> {
                AbstractHttp11Protocol<?> protocol = (AbstractHttp11Protocol<?>) connector.getProtocolHandler();
                StandardThreadExecutor executor = new StandardThreadExecutor();
                executor.setNamePrefix("loom-thread-");
                executor.setMaxThreads(Integer.MAX_VALUE);
                executor.setMinSpareThreads(10);
                executor.setThreadPriority(Thread.NORM_PRIORITY);
                // Set up a virtual thread factory
                executor.setThreadFactory(Thread.ofVirtual().factory());
                protocol.setExecutor(executor);
            });
        }
    }
}
To be continued (more examples and a performance comparison are planned).