Abstract: The LinkedBlockingDeque has an interesting feature in addAll(): it first builds up a chain of linked nodes and then adds them all to the tail in one go. However, a bug allowed us to overflow the element count, resulting in a negative size. In this newsletter we look at this bug, and also how it was fixed in Java 26.
A hearty welcome to our 328th edition of The Java(tm) Specialists' Newsletter, sent from the beautiful Island of Crete. One of my daily highlights is going for a run, and if I am home, a quick jump into the Cretan sea, summer and winter. We have a great "parea" - beach buddies - ranging in age from 40 to 75. They are all retired, living off pensions and passive income. I usually exchange a few words and then head off to do my work. What I find interesting in talking to my friends is that they face similar troubles to everyone else. There is no escape from problems in this world. Thus if our entire dream is to one day be retired and then be happy - it is better to find happiness now, because it ain't coming just because we can laze on the beach all morning. I am very grateful for my Java work, consulting, training, speaking at conferences and writing this newsletter. Thank you for reading it :-) I'm also thankful for my family and friends, who make this life worth living.
In October I am speaking at JAX London and JDD Krakow. Please come say "hi" - I don't bite :-) My talk was also accepted in Switzerland, but we could not make it work. Hopefully I will get to Voxxed Zürich in March for all my friends "in der Schwiiz". I'm doing a virtual talk on October 25th for our friends over at GCEasy / yCrash. Hope to see you there.
javaspecialists.teachable.com: Please visit our new self-study course catalog to see how you can upskill your Java knowledge.
A few months ago, I recorded a detailed walkthrough of the linked blocking collections. They are fascinating classes, especially the LinkedBlockingQueue and how it manages to do locking with a put and take lock, allowing concurrent put() and take() calls. Very clever indeed. I also studied the LinkedBlockingDeque in excruciating detail, and found and fixed five issues. In this newsletter we will look at one bug that we could exploit to make a LinkedBlockingDeque with a negative size(). This bug is fixed in Java 26.
Here is how it works. We create an empty LinkedBlockingDeque. We then call addAll() to pass in an extremely large collection with more than Integer.MAX_VALUE elements. Inside addAll(c), they copy collection c into a private chain of Nodes. They then acquire the lock, and check the condition count + n <= capacity. If it is true, they append the chain to the tail of the queue. This works great, as long as the collection c does not overflow the counter n. In Java 25, this is an int, thus it is possible to overflow to a negative number. The condition then becomes true, allowing us to append the entire chain to the tail of the deque, even though we have exceeded the capacity of the deque. Let's try it out and we will then look at the fix.
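To see why the check passes, here is a minimal sketch of the overflow in isolation. The variable names count, n and capacity mirror the ones described above, but this is my own illustration, not the JDK code itself:

```java
public class OverflowCheck {
    public static void main(String[] args) {
        int capacity = Integer.MAX_VALUE; // default capacity of LinkedBlockingDeque
        int count = 0;                    // the deque starts empty
        int n = Integer.MAX_VALUE;        // elements counted so far
        n++;                              // one more element: wraps to Integer.MIN_VALUE
        System.out.println("n = " + n);   // n = -2147483648
        // the guard that should reject the chain now passes
        System.out.println("count + n <= capacity? " + (count + n <= capacity)); // true
    }
}
```

With n negative, count + n is also negative, so the capacity check happily waves the entire oversized chain through.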
Our first task is to make a very large collection with more elements than Integer.MAX_VALUE. We could use a ConcurrentLinkedQueue, since that specifically allows that many elements. However, that would also use a huge amount of memory, and my poor little laptop only has 96 GB of RAM. OK, it's a monster of a laptop, decked out to the hilt, even though it is a few years old now. It would not be able to contain such a large ConcurrentLinkedQueue and a LinkedBlockingDeque at the same time. Thus, instead, we create a fake collection that only pretends to have a ginormous number of items, but in reality does not contain any. Meet our ExtremelyHumongouslyLargeCollection:
```java
import java.util.*;

class ExtremelyHumongouslyLargeCollection
        extends AbstractCollection<Integer> {
    private final long size;

    public ExtremelyHumongouslyLargeCollection(long size) {
        this.size = size;
    }

    public Iterator<Integer> iterator() {
        return new Iterator<>() {
            private long pos = 0;

            public boolean hasNext() {
                return pos < size;
            }

            public Integer next() {
                if (pos >= size) throw new NoSuchElementException();
                pos++;
                if (pos % 100_000_000 == 0)
                    IO.println("pos = " + pos);
                return 42;
            }
        };
    }

    public int size() {
        return (int) Math.min(size, Integer.MAX_VALUE);
    }
}
```
Next we create a LinkedBlockingDeque and pass this artificial collection to addAll():
```java
import java.util.concurrent.*;

public class VeryVeryVeryVeryVeryLargeLBD {
    static void main() {
        long time = System.nanoTime();
        try {
            IO.println(Runtime.getRuntime().maxMemory());
            IO.println(new ExtremelyHumongouslyLargeCollection(10));
            var lbd = new LinkedBlockingDeque<Integer>();
            lbd.addAll(new ExtremelyHumongouslyLargeCollection(
                    1L << 31));
            IO.println("lbd.size() = " + lbd.size());
        } finally {
            time = System.nanoTime() - time;
            System.out.printf("time = %dms%n", (time / 1_000_000));
        }
    }
}
```
We should run this with -Xmx90g, thus giving the JVM 90 GB to play with. Firing this up on Java 25, after a fairly long wait of 85 seconds, we see the following:
```
96636764160
[42, 42, 42, 42, 42, 42, 42, 42, 42, 42]
pos = 100000000
pos = 200000000
pos = 300000000
*snip*
pos = 2100000000
lbd.size() = -2147483648
time = 84638ms
```
Here is the JavaDoc for Collection.size():
```java
/**
 * Returns the number of elements in this collection. If this
 * collection contains more than {@code Integer.MAX_VALUE}
 * elements, returns {@code Integer.MAX_VALUE}.
 *
 * @return the number of elements in this collection
 */
```
There are not many collections that can contain more than Integer.MAX_VALUE items. We already said that ConcurrentLinkedQueue can. In addition, we can also add more items to ConcurrentHashMap, ConcurrentLinkedDeque and the ConcurrentSkipList set and map. All of these implement the size() method correctly, returning Integer.MAX_VALUE when they contain more elements.
Other collections like LinkedList and the old HashSet and HashMap incorrectly return negative values for size() when we overflow the count. The array based structures like ArrayList cannot create such large arrays, so we will never see a negative size().
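The contrast between the two behaviors is easy to sketch. This is my own illustration, not JDK code: a size() that honors the Collection contract saturates at Integer.MAX_VALUE, whereas a naive int counter silently wraps negative:

```java
public class SizeContrast {
    public static void main(String[] args) {
        long actualElements = 1L << 31; // 2_147_483_648 elements

        // correct: saturate, as Collection.size()'s JavaDoc requires
        int correct = (int) Math.min(actualElements, Integer.MAX_VALUE);

        // incorrect: a plain int count wraps around
        int naive = (int) actualElements;

        System.out.println("correct = " + correct); // 2147483647
        System.out.println("naive   = " + naive);   // -2147483648
    }
}
```

The Math.min() idiom is exactly what our ExtremelyHumongouslyLargeCollection uses in its size() method above.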
Back to our experiment. We saw that it ran in about 85 seconds. If we turn on GC logging with -verbose:gc, we can see that a large amount of time was spent clearing out the young space. In our case, we actually don't need such a large old space, and so we can make a small modification by allocating a massive 85 GB young size: java -Xmx90g -XX:NewSize=85g -verbose:gc. We now see the output as follows after just 12 seconds:
```
96636764160
[42, 42, 42, 42, 42, 42, 42, 42, 42, 42]
pos = 100000000
pos = 200000000
*snip*
pos = 2100000000
lbd.size() = -2147483648
time = 12240ms
```
That is an incredible difference and goes to show what a large proportion of our time was spent in GC before. However, please don't use these settings in production, unless you are very sure that you won't end up with much worse performance.
Next we run the same code with Java 26, where I made one small change. For the variable n inside the addAll() method, instead of an int, I modified it to use a long. I also had to add a cast in the right place. That's it, and now the condition count + n <= capacity becomes false when we go beyond Integer.MAX_VALUE.
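Sketched in isolation, the effect of widening n looks something like this. Again, this is my own illustration of the idea, not the actual JDK patch:

```java
public class FixedCheck {
    public static void main(String[] args) {
        int capacity = Integer.MAX_VALUE;
        int count = 0;
        long n = 1L << 31; // 2^31 elements counted, no overflow in a long

        // Java 25 style: int arithmetic wraps, the check incorrectly passes
        boolean intCheck = count + (int) n <= capacity;

        // Java 26 style: the addition happens in long arithmetic
        boolean longCheck = (long) count + n <= capacity;

        System.out.println("int check passes?  " + intCheck);  // true
        System.out.println("long check passes? " + longCheck); // false
    }
}
```

Since n can never legitimately exceed 2^63 - 1, the long arithmetic gives the correct answer and the oversized chain is rejected.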
Here is the output running with Java 26, and with the NewSize set to 85 GB. We see that once the size would exceed the capacity, we fall back to super.addAll(). This will throw away our linked nodes and add them individually. After reaching the capacity, we see an IllegalStateException.
```
96636764160
[42, 42, 42, 42, 42, 42, 42, 42, 42, 42]
pos = 100000000
pos = 200000000
*snip*
pos = 2100000000
[14.772s][info][gc] GC(0) Pause Young (Normal) (G1 Evacuation Pause) 82721M->801M(87104M) 639.310ms
pos = 100000000
pos = 200000000
*snip*
pos = 2100000000
time = 57738ms
Exception in thread "main" IllegalStateException: Deque full
    at java.base/java.util.concurrent.LinkedBlockingDeque.addLast
```
Note that to clear away 82 GB of objects takes just 639ms. It is quick to clear away objects, but if we had to move them all to the old space or the survivor spaces, that would be far more expensive.
Kind regards
Heinz
We are always happy to receive comments from our readers. Feel free to send me a comment via email or discuss the newsletter in our JavaSpecialists Slack Channel (Get an invite here)
We deliver relevant courses, by top Java developers to produce more resourceful and efficient programmers within their organisations.