Great stuff is Loom-ing on the Horizon for Java

A short look at Project Loom

Almost two years ago now, I was at Oracle Code One in San Francisco. It was there that I first heard about Project Loom, a project that would introduce a new way to do concurrency in Java. At that point, Loom was just a concept, a promise to make concurrency in Java easier.

Fast forward to today and Loom has changed quite a bit since that initial presentation. Time to catch up with Project Loom.

Old choices sometimes hurt in Java

Ever since Java 1.0 we’ve had the Thread class. This class is a representation of an OS resource, namely an OS thread. An application can only run a couple of thousand OS threads at a time. This wasn’t much of a problem 25 years ago. It is, however, a huge problem now, when some applications need to handle tens of thousands of requests (or more) at the same time.

By linking what can be done concurrently in Java to OS threads, Java has severely limited its own scalability. To add to the problem: threads are costly to create. Just spinning one up for a single run and disposing of it again isn’t viable. This is why we started using thread pools, groups of reusable threads that we keep alive in our applications.

The real problems begin when we have to block a thread. This happens whenever we do an API call, connect to a database or read from a file. At those moments, our thread isn’t doing anything. It is just “waiting” for something. As threads are costly AND limited, this becomes a serious bottleneck if we want to get the most out of our hardware.
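The bottleneck is easy to see in a small sketch. The pool size and sleep duration below are made-up numbers for illustration, with the sleep standing in for a blocking database or API call: with only two platform threads, four blocking tasks need two “rounds” to finish, because a blocked thread can’t pick up other work.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingBottleneck {
    public static void main(String[] args) throws Exception {
        // A pool of only 2 OS-backed threads
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CountDownLatch done = new CountDownLatch(4);
        long start = System.nanoTime();
        for (int i = 0; i < 4; i++) {
            pool.submit(() -> {
                try {
                    Thread.sleep(100); // simulate a blocking I/O call
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                done.countDown();
            });
        }
        done.await();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // 4 blocking tasks on 2 threads take ~200 ms, not ~100 ms
        System.out.println("elapsed ≈ " + elapsedMs + " ms");
        pool.shutdown();
    }
}
```

Double the pool size and the same work finishes in half the time, which is exactly why we keep throwing more (expensive) threads at the problem.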

Loom to the rescue

So what is Project Loom going to bring us?

Backwards compatibility

When Loom was first introduced, we were also introduced to Fibers, a new concept dubbed “Threads revisited”. This sparked a bit of controversy in the Java community, as worries grew about having to replace Thread with Fiber everywhere, which would require a lot of code to be changed. As the project went on, the Java team realized that if it looked like a thread and worked like a thread, then maybe it should just be a Thread!

That’s what they did. Instead of creating a totally new concept, Loom introduces us to Virtual Threads. Unlike the current Thread implementation, virtual threads are no longer a one-to-one mapping to OS threads. Instead, virtual threads are managed and scheduled by the JVM. This means we no longer have to worry about their cost the way we did in the past.

Here are two basic examples of how to run something on a virtual thread, using the API from the current early-access builds:

    // One-shot: start a virtual thread directly
    Thread.startVirtualThread(() -> {
        // Do something in a Virtual Thread
    });

    // Builder style: configure the thread first, then start it
    Thread.builder().virtual().task(() -> {
        // Do something in a Virtual Thread
    }).start();

If you think that code looks awfully familiar, you are absolutely correct. By keeping the current name and class, we can effectively keep all our code and our tools the same! While some libraries will need to change to make use of virtual threads, the impact of that change is extremely limited. If you are using an ExecutorService (which you should be doing anyway), chances are all that needs to change is the implementation of the Executor!

    // Classic fixed thread pool
    ExecutorService executor = Executors.newFixedThreadPool(10);

    // Executor that starts a new virtual thread for every submitted task
    ExecutorService executor = Executors.newUnboundedVirtualThreadExecutor();

So hardly any code change is required to switch to Virtual Threads! That’s the kind of backwards compatibility we’ve grown to love (and expect) from our favorite programming language!

Unlimited (virtual) threads :o

In case you are wondering, you did read that previous code example correctly: “newUnboundedVirtualThreadExecutor”. Unbounded means “a lot!”, but how does that translate to OS threads? Well, to reiterate: unlike the current Thread implementation, virtual threads are not mapped one-to-one to OS threads. The JVM does the heavy lifting of making sure our virtual threads eventually run on actual OS threads.

So, if virtual threads are a concept in the JVM… how many can we run at the same time? According to the “State of Loom” post by Ron Pressler (May 2020): Millions!


Under high load, we will see the true power of Loom.


Because these virtual threads are cheap to create, there is no real need for pooling anymore. With Loom, we are entering a world where we will be creating a (virtual) thread for a single action and then just disposing it again. In fact, the UnboundedVirtualThreadExecutor above does exactly that! Whenever you use it, you get a shiny new virtual thread which will be destroyed once it’s done with its task.
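A minimal sketch of that create-per-action style, using Thread.startVirtualThread. The thread count and sleep duration are arbitrary numbers chosen for illustration, and running this needs a Loom-enabled JDK:

```java
import java.util.concurrent.CountDownLatch;

public class ManyVirtualThreads {
    public static void main(String[] args) throws Exception {
        int count = 10_000; // far beyond what a pool of OS threads could hold
        CountDownLatch done = new CountDownLatch(count);
        long start = System.nanoTime();
        for (int i = 0; i < count; i++) {
            // One fresh virtual thread per action; no pool, no reuse
            Thread.startVirtualThread(() -> {
                try {
                    Thread.sleep(10); // each one "blocks", cheaply
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                done.countDown();
            });
        }
        done.await();
        System.out.printf("%d virtual threads finished in %d ms%n",
                count, (System.nanoTime() - start) / 1_000_000);
    }
}
```

Try the same loop with `new Thread(...)` instead and you’ll quickly see why pooling was ever necessary in the first place.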

Blocking code is no longer an issue

In recent years we saw the rise of frameworks that claim to have bypassed the restrictions of the Java concurrency model. These “non-blocking” frameworks have seen quite some success, Vert.x being the one I personally fell in love with. The main idea behind them is often to write code in such a way that only a limited number of threads can be blocked at any time, leaving the other threads continuously busy. Because those threads never block, overall performance is greater.

Those benefits, like everything in life, came at a cost. Quite often those frameworks required a different programming style, something a lot of programmers (including myself) struggled with. Whenever I do reactive programming, I feel like I’m no longer programming Java. It has always felt alienating to me.

With Project Loom we might no longer need reactive, non-blocking programming as it is known today.

In plain terms reactive programming is about non-blocking applications that are asynchronous and event-driven and require a small number of threads to scale.
- docs.spring.io

Because we can create MILLIONS of virtual threads, each of which can be used for a single action, the cost of blocking a virtual thread is close to zero! With that cost gone… imagine the possibilities! We can write synchronous, blocking code without the usual loss of performance or scalability.
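Here is a sketch of what that plain blocking style looks like on a virtual thread. The fetchGreeting method is a hypothetical stand-in for a blocking database or HTTP call, and this requires a Loom-enabled JDK:

```java
import java.util.concurrent.atomic.AtomicReference;

public class BlockingStyle {
    // Hypothetical blocking "I/O" call, for illustration only
    static String fetchGreeting() throws InterruptedException {
        Thread.sleep(50); // stands in for a database or HTTP round-trip
        return "hello";
    }

    public static void main(String[] args) throws Exception {
        AtomicReference<String> result = new AtomicReference<>();
        Thread t = Thread.startVirtualThread(() -> {
            try {
                // Plain sequential, blocking code — no callbacks,
                // no reactive operators, no special return types
                result.set(fetchGreeting().toUpperCase());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        t.join();
        System.out.println(result.get());
    }
}
```

Compare that to the equivalent reactive pipeline of callbacks and operators: the logic reads top to bottom again, and blocking mid-flow is fine because only the cheap virtual thread waits.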

Conclusion

When Project Loom is completed and finds its permanent spot in the JDK, it’ll change how we look at building high-performance Java applications forever. Removing the cost of blocking threads while remaining backwards compatible is a masterpiece of engineering by the project team.

I have been trying out the Early Access build and was just baffled by how intuitive the API feels. This is my new #1 of amazing things to come for Java!

In short: “Project Loom allows Java to once again become the Java we fell in love with: boring but highly performant <3.”

Addendum:

Try it yourself! Download the Early Access Build.

