Project Loom is an OpenJDK project, led by Oracle, that adds lightweight threads, or "virtual threads," to Java. Its goal is to make concurrent code easier to write by letting you keep a simple, blocking style of programming while still scaling, instead of resorting to callbacks, reactive pipelines, or coroutines.
Main Features
Virtual Threads:
The main innovation of Loom. Virtual threads are lightweight threads implemented at the JVM level that let you run thousands or even millions of concurrent tasks.
Support for blocking calls:
Thanks to virtual threads, you can write ordinary blocking code (for example, synchronous sleep, wait, or I/O calls) and still run very large numbers of such tasks concurrently (see the sketch after this list).
Available since JDK 19:
Virtual threads were included as a preview feature starting with JDK 19 and became a standard feature of the JDK in version 21 (JEP 444).
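As a minimal sketch of these features (assuming JDK 21+ and nothing beyond the standard library), the following program starts a virtual thread that runs ordinary blocking code in a linear style:

import java.time.Duration;

public class HelloVirtual {
    public static void main(String[] args) throws InterruptedException {
        // Thread.ofVirtual() returns a builder for virtual threads.
        Thread vt = Thread.ofVirtual().name("hello-virtual").start(() -> {
            try {
                // A plain blocking call; the carrier thread is released while we sleep.
                Thread.sleep(Duration.ofMillis(500));
                System.out.println("Done on " + Thread.currentThread());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        vt.join();
    }
}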
Advantages
Simplified asynchronous code:
Code written with virtual threads can be linear and blocking, and does not require asynchronous APIs or complex constructs.
Reduced overhead:
There is no need to create real OS threads for each task, which reduces overhead and allows the JVM to run hundreds of thousands of virtual threads on a limited number of OS threads.
Scalability:
Virtual threads let applications that handle a large number of concurrent tasks (for example, heavily loaded servers) scale.
Disadvantages
JVM dependency:
Loom only runs on the JVM, which limits its use if compatibility with another platform or a Java version below JDK 19 is required.
Limited support and stability:
At the time of writing, parts of Project Loom (for example, structured concurrency) were still in preview and continue to evolve, so further improvements and optimization are expected.
Lack of reactive library functionality:
Loom does not offer built-in tools for reactive programming, such as data transformations and operators, that are found in libraries like RxJava.
Operating principle
Creating a virtual thread:
Virtual threads are created in the same way as regular Java threads (Thread), but they are much lighter and do not take up a significant amount of system resources.
When a virtual thread is created, the JVM does not allocate a dedicated OS thread to it. Instead, it is scheduled as a task that can be mounted on an OS thread, unmounted when it blocks, and mounted again later, without occupying an OS thread the whole time.
Thread.startVirtualThread(() -> {
    // code running in a virtual thread
});
JVM Thread Scheduler:
Virtual threads are managed by a special scheduler at the JVM level, which distributes their execution to real OS threads (carrier threads).
The scheduler operates similarly to OS schedulers, but within the JVM, allowing for more flexible control over the suspension and resumption of virtual threads.
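A small illustration of the scheduler at work (the printed format is a JVM implementation detail, not a guaranteed API): the default toString() of a virtual thread typically names the carrier thread it is currently mounted on, so printing Thread.currentThread() from several virtual threads shows a handful of ForkJoinPool carrier threads serving all of them.

import java.util.concurrent.Executors;

public class CarrierDemo {
    public static void main(String[] args) {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10; i++) {
                executor.submit(() -> {
                    // Typically prints something like
                    // VirtualThread[#31]/runnable@ForkJoinPool-1-worker-2
                    // (the exact format is an implementation detail)
                    System.out.println(Thread.currentThread());
                });
            }
        } // close() waits for the submitted tasks to finish
    }
}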
Blocking operations and suspension:
When a virtual thread performs a blocking operation such as I/O or sleep, the JVM automatically suspends it and releases its carrier thread.
This involves a lightweight context switch in which the virtual thread is unmounted from the carrier thread. The switch is significantly faster and less resource-intensive than switching OS threads.
When the blocking operation completes, the virtual thread is requeued to run on one of the carrier threads.
Resuming a virtual thread:
Once a suspended virtual thread becomes runnable again, the scheduler mounts it on an available carrier thread and execution continues from where it left off.
This allows the JVM to run hundreds of thousands of virtual threads with a limited number of OS threads, providing scalability.
Stack management:
One of the main challenges in implementing Project Loom was creating lightweight virtual threads with suspend and resume support, which required modifications to the stack.
When a virtual thread is suspended, the JVM saves the call stack of that thread, allowing the thread to be resumed later as if it had never been suspended.
Efficient use of resources:
Because virtual threads are not tied to OS threads, the JVM can run thousands of virtual threads using just a few carrier threads.
This allows for efficient use of CPU time and reduced overhead since the JVM does not make system calls to create new OS threads.
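To illustrate the scale, here is a sketch along the lines of the canonical example from the virtual-threads JEP: 100,000 tasks, each blocking for one second, are submitted to a virtual-thread-per-task executor. Because the sleeping threads are unmounted from their carriers, the whole run typically finishes in little more than one second, which a fixed pool of OS threads could not achieve.

import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class ManyThreads {
    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 100_000).forEach(i ->
                executor.submit(() -> {
                    // Each task blocks for one second; the carrier thread is freed meanwhile.
                    Thread.sleep(Duration.ofSeconds(1));
                    return i;
                }));
        } // the executor waits for all submitted tasks before closing
        System.out.println("Done in " + (System.currentTimeMillis() - start) + " ms");
    }
}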
Technical components and aspects
Virtual Threads:
This is a key component of Project Loom. Virtual threads are created and managed by the JVM, not the OS, and can be suspended and resumed as needed, freeing up resources for other tasks.
Carrier Threads:
These are the OS threads the JVM uses to execute virtual threads. The JVM manages a pool of carrier threads onto which runnable virtual threads are mounted.
Advanced stack management:
Loom uses stack copying on suspend, which allows the current execution state of a thread to be saved and restored upon resumption.
Optimized I/O management:
Loom is built into the JVM in such a way that blocking calls such as reading from files or network requests are handled automatically by pausing virtual threads until the I/O is complete.
Compatibility with existing code:
Virtual threads appear as regular Threads to the rest of the Java program, allowing Loom to be used with existing code without modification.
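A short sketch of what this compatibility means in practice: a virtual thread is an ordinary java.lang.Thread, so existing APIs such as join() work unchanged, and Thread.isVirtual() is the only new query needed to tell the two kinds apart.

public class CompatibilityDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () ->
            System.out.println(Thread.currentThread().getName()
                + " -> virtual? " + Thread.currentThread().isVirtual());

        Thread platform = Thread.ofPlatform().name("platform-1").start(task);
        Thread virtual = Thread.ofVirtual().name("virtual-1").start(task);

        // Both are plain java.lang.Thread instances, so join() works unchanged.
        platform.join();
        virtual.join();
    }
}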
Compatibility
Project Loom is supported from JDK 19 onwards. Virtual threads were introduced there as a preview feature and have gained stability and improvements with each subsequent release, becoming a standard feature in JDK 21. Project Loom is developed within OpenJDK, so support for it is only available on OpenJDK-based JVMs.
JDK 8 – JDK 18:
On these versions, Loom's virtual threads are not supported and cannot be enabled. For these versions of Java, you will have to use alternative approaches for asynchronous programming, such as standard threads, CompletableFuture, RxJava, Kotlin coroutines, and others.
Third-party JVM implementations based on JDK 8-18:
Even in third-party distributions (e.g. Amazon Corretto, GraalVM, Zulu), if the build is based on JDK 8-18, Project Loom virtual-thread support will likewise be missing (a run-time fallback sketch follows this compatibility list).
JDK 19 and above:
Loom is available for use starting with this version, but only as a preview feature: virtual threads must be activated with the --enable-preview flag at both compile time and run time.
JDK 20 and 21:
These versions continued to refine Project Loom; virtual threads remained a preview feature in JDK 20 and became a standard feature in JDK 21 (JEP 444), with further stability and performance improvements.
JVM from OpenJDK:
Project Loom is developed under OpenJDK, so all JVM implementations based on OpenJDK (e.g. Zulu, AdoptOpenJDK, etc.) also support virtual threads if they use JDK 19+.
Non-OpenJDK JVM:
If the JVM is not based on OpenJDK (e.g. Eclipse OpenJ9, formerly IBM J9, or other proprietary implementations), support for Project Loom is not guaranteed even if the version string matches JDK 19 or higher. Such JVMs will most likely not include Loom, since the project is specific to OpenJDK and depends on its internal implementation.
GraalVM:
GraalVM based on OpenJDK 19+ supports Project Loom if an OpenJDK-compatible configuration is used. However, if it is a specialized version of GraalVM that is not fully compatible with OpenJDK (e.g. Native Image), support may be limited or absent.
Proprietary builds that are incompatible with OpenJDK:
Some custom JVM builds may not include all the features available in OpenJDK 19+, so Loom support will also be missing.
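When a single artifact must run both on Loom-capable runtimes and on older ones, the capability can be probed at run time. The following is only a hedged sketch (the reflective lookup and the cached-pool fallback are illustrative choices, not an official recipe; on JDK 19-20 virtual threads are a preview feature, so behavior may differ):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class ExecutorFactory {

    // Returns a virtual-thread-per-task executor when the runtime supports it,
    // otherwise falls back to a conventional pool (illustrative fallback choice).
    public static ExecutorService newTaskExecutor() {
        try {
            return (ExecutorService) Executors.class
                    .getMethod("newVirtualThreadPerTaskExecutor")
                    .invoke(null);
        } catch (ReflectiveOperationException e) {
            // JDK 8-18 or a JVM without Loom: the method does not exist.
            return Executors.newCachedThreadPool();
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService executor = newTaskExecutor();
        executor.submit(() -> System.out.println("Running on " + Thread.currentThread())).get();
        executor.shutdown();
    }
}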
Differences from Kotlin Coroutines
Execution model:
Project Loom:
Loom's virtual threads are implemented at the JVM level. They function as lightweight threads, and the JVM manages their scheduling. This brings Loom closer to the traditional blocking execution model, but without the overhead of creating a full thread for each task.
Kotlin Coroutines:
Kotlin coroutines operate at the library and compiler level and use a suspend/resume mechanism to express asynchronous code. Suspension points are marked with the suspend keyword, and the compiler transforms suspend functions into state machines that can be suspended and resumed.
Support and integration:
Project Loom:
This is a JVM-native solution, meaning that most libraries and code that use blocking calls run on virtual threads without modification.
Kotlin Coroutines:
work exclusively in Kotlin and require writing asynchronous code using suspend functions and other coroutine constructs.
Semantics and blocking:
Project Loom:
It is possible to use "blocking" code with virtual threads without significant performance overhead.
Kotlin Coroutines:
The code must be written with asynchrony in mind, using suspend and await, which sometimes requires restructuring the usual synchronous algorithms.
Performance:
Project Loom:
has an advantage on the JVM because it creates and manages virtual threads directly, which minimizes overhead compared to OS threads.
Kotlin Coroutines:
are very performant, requiring far fewer resources than OS threads for managing asynchrony and task switching, but they operate above the JVM rather than inside it and require Kotlin and the kotlinx.coroutines library.
Examples and comparison with Kotlin Coroutines
A simple example of running multiple tasks in parallel:
Project Loom:
import java.util.concurrent.Executors;

public class Main {
    public static void main(String[] args) {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            executor.submit(() -> {
                // Virtual threads created by this executor are unnamed, so getName()
                // typically returns an empty string; output order is not guaranteed
                System.out.println("Task 1 on " + Thread.currentThread().getName());
            });
            executor.submit(() -> {
                System.out.println("Task 2 on " + Thread.currentThread().getName());
            });
            // Expected output: Main task on main
            System.out.println("Main task on " + Thread.currentThread().getName());
        }
    }
}
Kotlin Coroutines:
import kotlinx.coroutines.*

fun main() = runBlocking {
    launch {
        // Typically prints: Task 1 on main
        // (launch without a dispatcher inherits runBlocking's main thread)
        println("Task 1 on ${Thread.currentThread().name}")
    }
    launch {
        // Typically prints: Task 2 on main
        println("Task 2 on ${Thread.currentThread().name}")
    }
    // Prints first: Main task on main (the launched coroutines run only after this point)
    println("Main task on ${Thread.currentThread().name}")
}
Asynchronous data processing with waiting for results:
Project Loom:
import java.util.concurrent.Executors;

public class Main {
    public static void main(String[] args) throws Exception {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            var result1 = executor.submit(() -> getData1());
            var result2 = executor.submit(() -> getData2());
            // Expected output: Result: 30
            System.out.println("Result: " + (result1.get() + result2.get()));
        }
    }

    public static int getData1() throws InterruptedException {
        Thread.sleep(1000);
        return 10;
    }

    public static int getData2() throws InterruptedException {
        Thread.sleep(1000);
        return 20;
    }
}
Kotlin Coroutines:
import kotlinx.coroutines.*

fun main() = runBlocking {
    val result1 = async { getData1() }
    val result2 = async { getData2() }
    // Expected output: Result: 30
    println("Result: ${result1.await() + result2.await()}")
}

suspend fun getData1(): Int {
    delay(1000)
    return 10
}

suspend fun getData2(): Int {
    delay(1000)
    return 20
}
Using delay in Kotlin and its equivalent in Java with virtual threads:
Project Loom:
public class Main {
    public static void main(String[] args) throws InterruptedException {
        var thread = Thread.startVirtualThread(() -> {
            // Expected output: Start
            System.out.println("Start");
            try {
                Thread.sleep(1000); // Works like delay
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            // Expected output: End after 1 second
            System.out.println("End after 1 second");
        });
        thread.join();
    }
}
Kotlin Coroutines:
import kotlinx.coroutines.*

fun main() = runBlocking {
    // Expected output: Start
    println("Start")
    delay(1000)
    // Expected output: End after 1 second
    println("End after 1 second")
}
Parallel execution of tasks with waiting for the result:
Project Loom:
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.Executors;

public class Main {
    public static void main(String[] args) throws Exception {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            var tasks = List.of(
                (Callable<Integer>) () -> getData(1),
                (Callable<Integer>) () -> getData(2),
                (Callable<Integer>) () -> getData(3)
            );
            var results = executor.invokeAll(tasks)
                .stream()
                .map(future -> {
                    try {
                        return future.get();
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                })
                .toList();
            // Expected output: Results: [10, 20, 30]
            System.out.println("Results: " + results);
        }
    }

    public static int getData(int id) throws InterruptedException {
        Thread.sleep(500);
        return id * 10;
    }
}
Kotlin Coroutines:
import kotlinx.coroutines.*

fun main() = runBlocking {
    val results = listOf(
        async { getData(1) },
        async { getData(2) },
        async { getData(3) }
    ).awaitAll()
    // Expected output: Results: [10, 20, 30]
    println("Results: $results")
}

suspend fun getData(id: Int): Int {
    delay(500)
    return id * 10
}
Using channels to exchange data:
Project Loom:
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class Main {
    public static void main(String[] args) throws InterruptedException {
        var queue = new LinkedBlockingQueue<Integer>();
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            executor.submit(() -> {
                try {
                    for (int i = 1; i <= 5; i++) {
                        queue.put(i * i);
                    }
                    // Special value signalling completion
                    queue.put(-1);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            int value;
            while ((value = queue.take()) != -1) {
                // Expected output: 1, 4, 9, 16, 25 (each number on a new line)
                System.out.println(value);
            }
        }
    }
}
Kotlin Coroutines:
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel

fun main() = runBlocking {
    val channel = Channel<Int>()
    launch {
        for (x in 1..5) channel.send(x * x)
        channel.close()
    }
    // Expected output: 1, 4, 9, 16, 25 (each number on a new line)
    for (y in channel)
        println(y)
}
For variety, here are a few examples written in Kotlin that use Project Loom directly, to show that this combination works as well.
Making multiple HTTP requests:
Suppose you have multiple APIs that you want to access in parallel. Here's how you can do that using virtual threads.
import java.net.HttpURLConnection
import java.net.URL
import java.util.concurrent.Executors

fun main() {
    val urls = listOf(
        "https://jsonplaceholder.typicode.com/posts/1",
        "https://jsonplaceholder.typicode.com/posts/2",
        "https://jsonplaceholder.typicode.com/posts/3"
    )
    val executor = Executors.newVirtualThreadPerTaskExecutor()
    try {
        val tasks = urls.map { url ->
            executor.submit { fetchUrl(url) }
        }
        // Wait for all tasks to complete
        tasks.forEach { it.get() }
    } finally {
        executor.close()
    }
}

fun fetchUrl(urlString: String) {
    val url = URL(urlString)
    val connection = url.openConnection() as HttpURLConnection
    connection.requestMethod = "GET"
    val responseCode = connection.responseCode
    println("Response Code for $urlString: $responseCode")
    connection.inputStream.use { inputStream ->
        val response = inputStream.bufferedReader().readText()
        // Print the first 100 characters
        println("Response Body for $urlString: ${response.take(100)}")
    }
}
Parallel execution of calculations:
In this example we will perform several calculations in parallel.
import java.util.concurrent.Executors

fun main() {
    val executor = Executors.newVirtualThreadPerTaskExecutor()
    try {
        val tasks = List(5) { i ->
            executor.submit {
                val result = heavyComputation(i)
                println("Result of computation $i: $result on thread: ${Thread.currentThread().name}")
            }
        }
        // Wait for all tasks to complete
        tasks.forEach { it.get() }
    } finally {
        executor.close()
    }
}

fun heavyComputation(n: Int): Int {
    // Emulation of heavy calculations
    Thread.sleep(1000)
    return n * n
}
Working with input/output streams:
This example demonstrates how virtual threads can help simplify working with I/O streams.
import java.io.File
import java.util.concurrent.Executors

fun main() {
    val fileNames = listOf("file1.txt", "file2.txt", "file3.txt")
    val executor = Executors.newVirtualThreadPerTaskExecutor()
    try {
        val tasks = fileNames.map { fileName ->
            executor.submit { readFile(fileName) }
        }
        // Wait for all tasks to complete
        tasks.forEach { it.get() }
    } finally {
        executor.close()
    }
}

fun readFile(fileName: String) {
    try {
        val content = File(fileName).readText()
        // Print the first 50 characters
        println("Content of $fileName: ${content.take(50)}")
    } catch (e: Exception) {
        println("Failed to read $fileName: ${e.message}")
    }
}
Scalable web server on virtual threads:
Here we create a simple HTTP server that uses virtual threads to handle each request. This allows it to easily scale and handle a large number of requests simultaneously.
import java.net.ServerSocket
import java.util.concurrent.Executors

fun main() {
    val port = 8080
    val serverSocket = ServerSocket(port)
    println("Server started on port $port")
    val executor = Executors.newVirtualThreadPerTaskExecutor()
    try {
        while (true) {
            val clientSocket = serverSocket.accept()
            executor.submit {
                clientSocket.use {
                    val request = it.getInputStream().bufferedReader().readLine()
                    println("Received request: $request on thread: ${Thread.currentThread().name}")
                    val response = """
                        HTTP/1.1 200 OK
                        Content-Type: text/plain

                        Hello from virtual threads!
                    """.trimIndent()
                    it.getOutputStream().apply {
                        write(response.toByteArray())
                        flush()
                    }
                    println("Response sent on thread: ${Thread.currentThread().name}")
                }
            }
        }
    } finally {
        executor.close()
    }
}
Integrating Project Loom with Spring
Project Loom's integration with Spring allows virtual threads to be used for request handling, making concurrent request processing simpler and more resource-efficient than with traditional thread pools.
Configuration for Tomcat:
We can configure Tomcat to use virtual threads by specifying our own thread factory.
Configuring TomcatServletWebServerFactory with a virtual thread pool allows each request to be processed in a virtual thread, saving resources during high-load work.
import org.apache.catalina.core.StandardThreadExecutor;
import org.apache.coyote.http11.AbstractHttp11Protocol;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.servlet.server.ServletWebServerFactory;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class TomcatLoomApplication {

    @Bean
    public ServletWebServerFactory servletContainer() {
        TomcatServletWebServerFactory factory = new TomcatServletWebServerFactory();
        factory.addConnectorCustomizers(connector -> {
            AbstractHttp11Protocol<?> protocol =
                (AbstractHttp11Protocol<?>) connector.getProtocolHandler();
            StandardThreadExecutor executor = new StandardThreadExecutor();
            executor.setNamePrefix("loom-thread-");
            executor.setMaxThreads(Integer.MAX_VALUE);
            executor.setMinSpareThreads(10);
            executor.setThreadPriority(Thread.NORM_PRIORITY);
            // Setting up a virtual thread factory
            executor.setThreadFactory(Thread.ofVirtual().factory());
            protocol.setExecutor(executor);
        });
        return factory;
    }

    public static void main(String[] args) {
        SpringApplication.run(TomcatLoomApplication.class, args);
    }
}
Configuration for Jetty:
To set up virtual threads in Jetty, we create a ThreadPool that uses virtual threads.
Using QueuedThreadPool with a virtual thread factory allows Jetty to scale and create threads as needed without the overhead of system threads.
import org.eclipse.jetty.util.thread.QueuedThreadPool;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.embedded.jetty.JettyServletWebServerFactory;
import org.springframework.boot.web.servlet.server.ServletWebServerFactory;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class JettyLoomApplication {

    @Bean
    public ServletWebServerFactory servletContainer() {
        QueuedThreadPool threadPool = new QueuedThreadPool(
            // Max threads, min threads, idle timeout (ms)
            Integer.MAX_VALUE, 10, 60000
        );
        threadPool.setName("loom-thread-");
        // Setting up a virtual thread factory
        threadPool.setThreadFactory(Thread.ofVirtual().factory());
        JettyServletWebServerFactory factory = new JettyServletWebServerFactory();
        factory.setThreadPool(threadPool);
        return factory;
    }

    public static void main(String[] args) {
        SpringApplication.run(JettyLoomApplication.class, args);
    }
}
Configuration for Undertow:
To work with virtual threads in Undertow we use a custom Worker with virtual threads.
Undertow creates an XnioWorker that is configured to use virtual threads. This is suitable for asynchronous request processing and Undertow's non-blocking architecture.
import io.undertow.UndertowOptions;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.embedded.undertow.UndertowServletWebServerFactory;
import org.springframework.boot.web.servlet.server.ServletWebServerFactory;
import org.springframework.context.annotation.Bean;
import org.xnio.OptionMap;
import org.xnio.Options;
import org.xnio.Xnio;
import org.xnio.XnioWorker;

import java.io.IOException;

@SpringBootApplication
public class UndertowLoomApplication {

    @Bean
    public ServletWebServerFactory servletContainer() {
        XnioWorker worker = createVirtualThreadWorker();
        UndertowServletWebServerFactory factory = new UndertowServletWebServerFactory();
        factory.addBuilderCustomizers(builder -> {
            builder.setWorker(worker);
            builder.setServerOption(UndertowOptions.ENABLE_HTTP2, true);
        });
        return factory;
    }

    private XnioWorker createVirtualThreadWorker() {
        Xnio xnio = Xnio.getInstance();
        try {
            return xnio.createWorker(OptionMap.builder()
                .set(Options.THREAD_DAEMON, true)
                .set(Options.WORKER_IO_THREADS, 4)
                .set(Options.WORKER_TASK_CORE_THREADS, 10)
                // Effectively unlimited number of task threads
                .set(Options.WORKER_TASK_MAX_THREADS, Integer.MAX_VALUE)
                .set(Options.THREAD_FACTORY, Thread.ofVirtual().factory())
                .getMap()
            );
        } catch (IOException e) {
            throw new IllegalStateException("Failed to create XNIO worker", e);
        }
    }

    public static void main(String[] args) {
        SpringApplication.run(UndertowLoomApplication.class, args);
    }
}
Configuring the embedded Spring Boot server to use virtual threads (Project Loom) could not, at the time this was written, be fully accomplished with application.properties or application.yml alone, because using virtual threads requires supplying a custom ThreadFactory and changing the thread pool setup, which the built-in Spring Boot properties did not yet cover. (Spring Boot 3.2 later added the spring.threads.virtual.enabled property, which switches the embedded server and several other components to virtual threads.)
For earlier versions there are still ways to configure virtual threads without repeating the configuration code for every server. For example, you can create your own @Configuration for more centralized control, as shown below.
Custom configuration approach based on property:
You can create a custom configuration so that when you enable a specific property in application.properties, Spring automatically applies the virtual thread settings.
Let's add a custom property to application.properties:
server.use-virtual-threads=true
Let's create a configuration class that checks the property value and applies the settings for the selected server (for example, Tomcat).
In this example, the virtual-thread configuration is applied only when server.use-virtual-threads=true is set in application.properties. This lets you toggle virtual threads via configuration without changing the application code.
import org.apache.catalina.core.StandardThreadExecutor;
import org.apache.coyote.http11.AbstractHttp11Protocol;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.servlet.server.ServletWebServerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class VirtualThreadConfig {

    @Value("${server.use-virtual-threads:false}")
    private boolean useVirtualThreads;

    @Bean
    @ConditionalOnProperty(name = "server.use-virtual-threads", havingValue = "true")
    public ServletWebServerFactory servletContainer() {
        TomcatServletWebServerFactory factory = new TomcatServletWebServerFactory();
        factory.addConnectorCustomizers(connector -> {
            AbstractHttp11Protocol<?> protocol =
                (AbstractHttp11Protocol<?>) connector.getProtocolHandler();
            StandardThreadExecutor executor = new StandardThreadExecutor();
            executor.setNamePrefix("loom-thread-");
            executor.setMaxThreads(Integer.MAX_VALUE);
            executor.setMinSpareThreads(10);
            executor.setThreadPriority(Thread.NORM_PRIORITY);
            // Setting up a virtual thread factory
            executor.setThreadFactory(Thread.ofVirtual().factory());
            protocol.setExecutor(executor);
        });
        return factory;
    }
}
Using CommandLineRunner for additional flexibility:
Another way is to create a CommandLineRunner or ApplicationRunner that checks the properties and applies virtual threads if enabled. This method allows for flexible configuration of the server at runtime.
To enable this approach, you can set the -Dserver.use-virtual-threads=true property when starting a Spring Boot application.
import org.apache.catalina.core.StandardThreadExecutor;
import org.apache.coyote.http11.AbstractHttp11Protocol;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.stereotype.Component;

@Component
public class VirtualThreadCommandLineRunner implements CommandLineRunner {

    private final TomcatServletWebServerFactory factory;

    public VirtualThreadCommandLineRunner(TomcatServletWebServerFactory factory) {
        this.factory = factory;
    }

    @Override
    public void run(String... args) {
        boolean useVirtualThreads =
            Boolean.parseBoolean(System.getProperty("server.use-virtual-threads", "false"));
        if (useVirtualThreads) {
            factory.addConnectorCustomizers(connector -> {
                AbstractHttp11Protocol<?> protocol =
                    (AbstractHttp11Protocol<?>) connector.getProtocolHandler();
                StandardThreadExecutor executor = new StandardThreadExecutor();
                executor.setNamePrefix("loom-thread-");
                executor.setMaxThreads(Integer.MAX_VALUE);
                executor.setMinSpareThreads(10);
                executor.setThreadPriority(Thread.NORM_PRIORITY);
                // Setting up a virtual thread factory
                executor.setThreadFactory(Thread.ofVirtual().factory());
                protocol.setExecutor(executor);
            });
        }
    }
}
To be continued (planning to add more examples and performance comparison)