Smart Locking in Java: From Spin to Virtual Threads

The Java Virtual Machine (JVM) uses several smart techniques to make multithreaded programs run faster and more efficiently. These include spin locks and adaptive spinning, lock elimination, lock coarsening, biased locking, lightweight and heavyweight locks, and, since JDK 21, virtual threads.

1. Spin Locks & Adaptive Spinning

When a thread fails to acquire a lock, instead of immediately suspending, the JVM allows the thread to perform a busy-wait loop (i.e., spinning) for a short period. This approach avoids the high overhead of context switching when the lock is expected to be released quickly.

Adaptive Spinning

Adaptive spinning dynamically adjusts the spin duration based on:

  - whether recent spin attempts on the same lock succeeded, and
  - whether the thread currently holding the lock is still running.

If a thread recently acquired the lock quickly via spinning, the JVM may increase the spin duration for subsequent attempts; after repeated failures, it may skip spinning and block immediately.

Best For: Short critical sections, low contention.
Not Suitable For: Long lock hold times — CPU cycles will be wasted.

Since JDK 9, the JVM's spinning heuristics have also taken the number of CPU cores and thread scheduling into account, and JDK 9 added Thread.onSpinWait() (JEP 285), a hint that lets spin loops cooperate better with the CPU. Adaptive spinning is enabled by default.
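The busy-wait idea can be sketched with a user-level spin lock built on AtomicBoolean. This is an illustration only, not the JVM's internal (adaptive) implementation; the class names SpinLock and SpinDemo are ours:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Simplified user-level spin lock. The JVM's internal spinning is adaptive
// and tied to the object header; this fixed-strategy version only shows
// the busy-wait loop itself.
class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        // Busy-wait (spin) until the CAS from false -> true succeeds.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // JDK 9+ hint that we are in a spin loop
        }
    }

    void unlock() {
        locked.set(false);
    }
}

public class SpinDemo {
    static int run() throws InterruptedException {
        SpinLock lock = new SpinLock();
        int[] counter = {0}; // protected by the spin lock
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                lock.lock();
                try {
                    counter[0]++; // short critical section: ideal for spinning
                } finally {
                    lock.unlock();
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return counter[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // 20000: no increments are lost
    }
}
```

Because the critical section is a single increment, a contending thread almost never spins for long, which is exactly the case where spinning beats suspending.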

2. Lock Elimination

Lock elimination is based on escape analysis, where the JVM analyzes whether an object is accessible only within the current thread. If the object does not escape the thread’s scope, synchronization is deemed unnecessary and is removed automatically.

Example
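The classic illustration uses StringBuffer, whose methods are synchronized; a minimal sketch (class and method names are ours):

```java
public class LockElision {
    // StringBuffer's append() and toString() are synchronized, but sb never
    // escapes this method, so escape analysis lets the JIT elide those locks
    // (scalar replacement may even avoid allocating the buffer at all).
    static String concat(String a, String b) {
        StringBuffer sb = new StringBuffer(); // thread-local, never escapes
        sb.append(a);
        sb.append(b);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(concat("Hello, ", "world")); // prints "Hello, world"
    }
}
```

No other thread can ever observe sb, so removing its locks cannot change program behavior.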

Escape analysis and scalar replacement in the JIT compiler have continued to improve in recent releases (including JDK 17), making lock elimination more effective, especially in hot, JIT-compiled code.

3. Lock Coarsening

Instead of repeatedly acquiring and releasing a lock in a tight loop, the JVM may extend the lock scope to cover the entire sequence of operations — a technique known as lock coarsening.

Before Optimization:

for (int i = 0; i < 100; i++) {
    synchronized (buffer) {
        buffer.append(i);
    }
}

After Coarsening:

synchronized (buffer) {
    for (int i = 0; i < 100; i++) {
        buffer.append(i);
    }
}

Lock coarsening is handled automatically by the JIT compiler, which dynamically determines the optimal locking strategy at runtime.

4. Biased Locking (Disabled in JDK 15, Removed in JDK 18)

Biased locking is a performance optimization for situations where a lock is almost always used by the same thread. In this case, the JVM “biases” the lock toward that thread by recording its ID in the object’s Mark Word. As long as the same thread keeps using the lock, no actual synchronization is needed — which makes it very fast.

What happens when another thread tries to use the lock?

The bias must be revoked. Revocation requires waiting for a global safepoint so the JVM can safely inspect the owning thread's stack; after revocation, the lock is upgraded to a lightweight (and, under sustained contention, heavyweight) lock.

Limitations

  - Bias revocation is expensive: it requires a safepoint and stalls application threads.
  - Locks that are regularly contended by multiple threads gain nothing from biasing and still pay the revocation cost.

Note: JEP 374 deprecated and disabled biased locking by default in JDK 15, citing its maintenance complexity and diminishing benefit on modern workloads; the implementation was removed entirely in JDK 18.

5. Lightweight Locks

Lightweight locks are designed to reduce the overhead of heavyweight locks in situations with mild contention.

Lock Acquisition

  1. A thread attempts to CAS the Mark Word of the object to point to its own lock record.
  2. If successful → lock acquired.
  3. If unsuccessful, but the Mark Word already points to a lock record on the current thread's stack → reentrant acquisition.
  4. If CAS fails and another thread holds the lock → spin wait → may escalate to heavyweight lock.

Unlocking

To unlock, the thread CASes the displaced Mark Word (saved in its lock record) back into the object header. If that CAS fails, the lock has been inflated in the meantime, and the thread must release the heavyweight lock and wake any waiting threads.

On modern JDKs (17+), lightweight locks remain a reliable and efficient option in moderate-concurrency scenarios.
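The acquisition steps above can be mirrored in a simplified sketch. The real JVM CASes the object's Mark Word to point at a stack lock record; here we simply CAS in the owning Thread (class names ThinLock and ThinLockDemo are ours, and we spin forever instead of inflating):

```java
import java.util.concurrent.atomic.AtomicReference;

// Simplified sketch of the lightweight-lock protocol: CAS in an owner,
// support reentrancy, spin on contention.
class ThinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();
    private int holdCount = 0; // only touched while holding the lock

    void lock() {
        Thread me = Thread.currentThread();
        if (owner.get() == me) {              // step 3: reentrant acquisition
            holdCount++;
            return;
        }
        // Steps 1 and 4: CAS, then spin on failure. A real JVM would
        // eventually inflate to a heavyweight lock instead of spinning forever.
        while (!owner.compareAndSet(null, me)) {
            Thread.onSpinWait();
        }
        holdCount = 1;                        // step 2: lock acquired
    }

    void unlock() {
        if (owner.get() != Thread.currentThread()) {
            throw new IllegalMonitorStateException();
        }
        if (--holdCount == 0) {
            owner.set(null); // analogous to restoring the displaced Mark Word
        }
    }
}

public class ThinLockDemo {
    static String demo() {
        ThinLock lock = new ThinLock();
        lock.lock();
        lock.lock();   // reentrant: succeeds without a second CAS
        lock.unlock();
        lock.unlock();
        return "ok";
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "ok"
    }
}
```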

6. Heavyweight Locks

These are traditional OS-level mutexes (monitors) used when lock contention is high. The JVM falls back to this mechanism when:

  - spinning and lightweight locking fail because the lock is genuinely contended, or
  - a thread calls Object.wait()/notify(), which requires a full monitor.

Drawback: high overhead due to context switching and kernel involvement.

Heavyweight locks are stable under high contention, but avoid them in performance-critical paths unless necessary.
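Since wait()/notify() must park and wake threads through the OS, using them forces inflation to a heavyweight monitor. A minimal signalling example (class name MonitorDemo is ours):

```java
// Object.wait()/notify() require a full monitor, so this lock is inflated
// to a heavyweight one.
public class MonitorDemo {
    private static final Object monitor = new Object();
    private static boolean ready = false;

    static boolean run() throws InterruptedException {
        final boolean[] sawSignal = {false};
        Thread waiter = new Thread(() -> {
            synchronized (monitor) {
                while (!ready) { // guard against spurious wakeups
                    try {
                        monitor.wait(); // parks the thread via the OS
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                sawSignal[0] = true;
            }
        });
        waiter.start();
        synchronized (monitor) {
            ready = true;
            monitor.notify(); // wakes the waiter through the scheduler
        }
        waiter.join();
        return sawSignal[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run() ? "signalled" : "missed");
    }
}
```

The ready flag makes the code correct regardless of whether notify() runs before or after the waiter reaches wait().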

7. Virtual Threads & Structured Concurrency

With JDK 21, Virtual Threads (Project Loom) and Structured Concurrency represent a major shift in concurrent programming.

Virtual Threads

Virtual threads (final in JDK 21, JEP 444) are lightweight threads scheduled by the JVM rather than the OS. Millions of them can run on a small pool of carrier threads, and blocking a virtual thread is cheap because the JVM unmounts it instead of parking an OS thread. One caveat in JDK 21: blocking inside a synchronized block pins the carrier thread, so prefer java.util.concurrent locks such as ReentrantLock in virtual-thread code.

Structured Concurrency

Structured concurrency (a preview API in JDK 21, JEP 453) treats a group of related tasks as a single unit of work: subtasks forked in a StructuredTaskScope are joined, cancelled, and error-handled together, which makes concurrent code easier to reason about.

Recommendation: Use virtual threads combined with the high-level concurrency utilities in java.util.concurrent where possible.
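A minimal sketch of that recommendation, assuming JDK 21+ (the class name VirtualDemo is ours):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Requires JDK 21+. Each submitted task gets its own virtual thread, so the
// blocking sleep() is cheap: the JVM unmounts the virtual thread instead of
// tying up an OS thread.
public class VirtualDemo {
    static int run() {
        AtomicInteger done = new AtomicInteger();
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 100; i++) {
                exec.submit(() -> {
                    Thread.sleep(10); // blocking is fine on a virtual thread
                    return done.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return done.get();
    }

    public static void main(String[] args) {
        System.out.println(run()); // 100
    }
}
```

Note the try-with-resources on the executor: ExecutorService has been AutoCloseable since JDK 19, and close() awaits completion, which gives a simple structured shape even without StructuredTaskScope.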