
204: Book Review: The Well-Grounded Java Developer

Author: Dr. Heinz M. Kabutz
Date: 2012-08-06
Java Version: 7
Category: Book Review

Abstract: Ben Evans and Martijn Verburg explain to us in their new book what it takes to be a well-grounded Java developer. The book contains a section on the new Java 7 features and also vital techniques that we use for producing robust and performant systems.


Welcome to the 204th issue of The Java(tm) Specialists' Newsletter sent to you from a warm Crete. We spoke to a French art dealer in the sea at Kalathas today and asked him why he came to Crete on holiday. We have a lot of Italians, French and Russians here this year, plus of course thousands of Scandinavians. He told us that countries like Croatia had become overrun with visitors and also quite expensive for holidays. In comparison to other popular destinations, Crete offered excellent value for money. It is true. The prices in restaurants have not changed much since 2006. I can get a delicious freshly squeezed orange juice in Chorafakia for just 2 Euro at Pantelis' cafeteria. And nothing beats the cooking of Irene's next door. The only group of holidaymakers missing this year are my fellow Germans. Scaredy-cats ;-)

A few weeks ago, we went to apply for a Greek ID card for my 14-year-old son. When we came to Greece, my name was converted into the Greek alphabet as XAINZ KAMPOYTZ. Greek does not have an "H" sound, so they used "X", which is pronounced as "CH". The "U" sound is made up of the diphthong "OY". Unfortunately, someone had the bright idea of automatically reverse-engineering Latin names from the Greek ones. So the computer was fired up somewhere in Athens and converted me to CHAINZ KAMPOUTZ. Knowing how much trouble incorrect names can cause, I asked them to fix it. This turned out to be rather difficult for them. After all, who can argue with a computer translation? At one point, the policewoman tried to convince me that their system was correct and that I had it wrong. Gosh, 40 years of being called Heinz (like ketchup) Kabutz and only now I find out that it was wrong all the time? Must let my mom know!


Book Review: The Well-Grounded Java Developer [ISBN 1617290068]

Ben Evans and Martijn Verburg are both well known Java experts who consult in the financial industry in London. They have many years of experience as well-grounded Java developers. Together they wrote The Well-Grounded Java Developer: Vital techniques of Java 7 and polyglot programming [ISBN 1617290068].

They kindly asked me to write the foreword, which you can read on Manning's website. Hope you enjoy it. It is a bit different. And yes, the fact that I wrote the foreword is an endorsement of the book. I certainly would not have agreed if I did not like the book. I had the great privilege of getting a sneak peek at the book and also of meeting Ben and Marty in person when they came to Crete in April.

Cost of Change in Java

In the book they explain why it is so expensive to add new features to the JVM. Adding new library extensions such as fork/join or syntactic sugar like switch-on-string is relatively easy, but a new JVM instruction such as invokedynamic is very costly to add. This is why we have seen so few changes to Java's fundamental infrastructure since Java 1.0. I had always wondered why change flows so slowly in the Java environment. This is all described in chapter 1 of the book, which you can download as a sample chapter.
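As a quick illustration (my example, not one from the book): switch-on-string needed no new bytecode instructions, because the compiler simply desugars it into hashCode() and equals() comparisons:

public class SwitchOnString {
  public static void main(String[] args) {
    String day = "SATURDAY";
    // the compiler translates this into hashCode() and equals() checks
    switch (day) {
      case "SATURDAY":
      case "SUNDAY":
        System.out.println("weekend");
        break;
      default:
        System.out.println("weekday");
    }
  }
}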

Binary Literals

One of the new features in Java 7 is binary literals. We can now write numbers as 0b101010101. Unfortunately, you are also allowed to write long binary numbers with a lower case L, such as: 0b111110111011101101101l. This is quite confusing to readers of the code, as they can easily mistake the lower case L at the end of the number for a one. It is much clearer to write 0b111110111011101101101L. I would have welcomed a decision not to allow the lower case L suffix for binary numbers, but they probably wanted to stay consistent with the other primitive number representations.
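For example (my illustration), both suffixes produce exactly the same value, but only the upper case L is readable at a glance:

public class BinaryLiterals {
  public static void main(String[] args) {
    int answer = 0b101010; // binary literal for 42
    // the trailing lower case "l" is easily mistaken for a final 1
    long confusing = 0b111110111011101101101l;
    // the upper case "L" is unambiguous
    long clear = 0b111110111011101101101L;
    System.out.println(answer);             // 42
    System.out.println(confusing == clear); // true - same value
  }
}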

Better Exceptions

In previous versions of Java, if we caught the general "Exception" and then re-threw that, we needed to declare that our method throws "Exception" as well:

public void foo() throws Exception {
  try {
    doSomethingWhichMightThrowIOException();
    doSomethingElseWhichMightThrowSQLException();
  } catch (Exception e) {
    // do something with e ...
    throw e;
  }
}

In Java 7, the compiler is clever enough to figure out that only the checked exceptions need to be declared. Thus we can write:

public void foo() throws IOException, SQLException {
  try {
    doSomethingWhichMightThrowIOException();
    doSomethingElseWhichMightThrowSQLException();
  } catch (Exception e) {
    // do something with e ...
    throw e;
  }
}

In the book, Ben and Martijn recommend that you mark the caught Exception as "final". The compiler does not insist on this, so it is just a convention that they use to signal their intention of rethrowing exactly the exception that was caught. In my opinion, this is not necessary, since a lot of code marks exceptions as final purely as a matter of style anyway, so the keyword does not reliably carry that signal.
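Applied to the example above, their convention would look like this:

public void foo() throws IOException, SQLException {
  try {
    doSomethingWhichMightThrowIOException();
    doSomethingElseWhichMightThrowSQLException();
  } catch (final Exception e) {
    // "final" signals that we rethrow exactly the exception we caught
    throw e;
  }
}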

Try-With-Resources

They make an important point about try-with-resources: each object that we want to have automatically closed must be declared as its own resource in the try section. For example, this would not be correct:

try (
  ObjectInputStream in = new ObjectInputStream(
    new BufferedInputStream(
      new FileInputStream("someFile.bin")));
) {
  // use the ObjectInputStream
}

If the FileInputStream construction succeeds (because the file exists) but the ObjectInputStream construction fails (because the file is corrupt) or the BufferedInputStream fails (because of an OutOfMemoryError), then the FileInputStream will not be closed automatically.

The correct way to write the code is like this:

try (
  FileInputStream fis = new FileInputStream("someFile.bin");
  BufferedInputStream bis = new BufferedInputStream(fis);
  ObjectInputStream in = new ObjectInputStream(bis);
) {
  // use the ObjectInputStream
}

Now, if any part of the construction fails, the previously declared and constructed objects will be automatically closed.

The "javap" Tool

The book contains a nice discussion of the javap tool and how we can use it to analyse what is going on in our code. I have mentioned the javap tool in 15 newsletters already (042, 064, 066, 068, 069, 083, 091, 105, 109, 115, 129, 136, 137, 147 and 174). As you can imagine, it is a technique that I often employ to understand what the byte code looks like. However, I do not recall seeing it written about in any Java book to date, at least not at the level that Ben and Martijn did. Jack Shirazi mentioned javap very briefly in his excellent book, Java Performance Tuning. Warning: even though Shirazi's book is fantastic, it is quite dated. As always with clever performance tricks, you need to measure whether the trick works for you. Some parts of his book, such as his methodologies for tuning performance, are still very relevant today.
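In case you have not used it before, javap ships with the JDK. A typical invocation (my example, not the book's) disassembles a compiled class:

javac SomeClass.java
javap -c -p SomeClass

The -c flag prints the disassembled bytecode and -p includes private members; add -v if you also want to see the constant pool and method metadata.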

Bottleneck on Caches

One of the most surprising classes was their CacheTester. I have seen a number of benchmarks that try to show how fast Fork/Join is by iterating over a very large array in parallel. For example, the code might try to find the largest int inside the array.

Usually the benchmark bottlenecks on memory, thus incorrectly suggesting that fork/join does not give any performance gains. In the CacheTester, Ben and Marty show how iterating over the array one element at a time is not much slower than looking at every 16th element. Here is their CacheTester:

public class CacheTester {
  private final int ARR_SIZE = 1 * 1024 * 1024;
  private final int[] arr = new int[ARR_SIZE];
  private void doLoop2() {
    for (int i=0; i<arr.length; i++) arr[i]++;
  }
  private void doLoop1() {
    for (int i=0; i<arr.length; i += 16) arr[i]++;
  }
  private void run() {
    for (int i=0; i<10000; i++) {
      doLoop1();
      doLoop2();
    }
    for (int i=0; i<100; i++) {
      long t0 = System.nanoTime();
      doLoop1();
      long t1 = System.nanoTime();
      doLoop2();
      long t2 = System.nanoTime();
      long el = t1 - t0;
      long el2 = t2 - t1;
      System.out.println("Loop1: "+ el +" nanos ; Loop2: "+ el2);
    }
  }
  public static void main(String[] args) {
    CacheTester ct = new CacheTester();
    ct.run();
  }
}

I ran their code on my 8-core server and got the following results in microseconds:

        Average   Variance
Loop1   239       12
Loop2   549       48

We can thus see that even though Loop2 reads 16x as many array elements as Loop1, it is only 2.3 times slower (549 / 239 ≈ 2.3). The reason is the cache: 16 ints of 4 bytes each fill a typical 64-byte cache line, so both loops pull every cache line of the array into the cache; Loop2 merely does more work per line.

Even though the results are good in that the variance is not too high, they could be better if we changed a couple of things. First off, measuring time at nanosecond granularity invites slight aberrations in the system to influence our variance. In my CacheTester, I repeat each iteration 1000 times, thus getting the results in milliseconds. Secondly, I usually try to produce output that I can copy and paste directly into a spreadsheet. Comma-separated values work nicely. Thirdly, the number 10000 in the CacheTester is significant. Typically, after you have called a method 10000 times, the HotSpot compiler starts profiling and optimizing the code. However, it may be a while before the optimized code is available. Thus we sleep for a second after the 10000 warm-up calls so that we immediately measure the fastest times:

public class CacheTester {
  private final int ARR_SIZE = 1 * 1024 * 1024;
  private final int[] arr = new int[ARR_SIZE];
  private static final int REPEATS = 1000;

  private void doLoop2() {
    for (int i = 0; i < arr.length; i++) arr[i]++;
  }

  private void doLoop1() {
    for (int i = 0; i < arr.length; i += 16) arr[i]++;
  }

  private void run() throws InterruptedException {
    for (int i = 0; i < 10000; i++) {
      doLoop1();
      doLoop2();
    }
    Thread.sleep(1000); // allow the hotspot compiler to work
    System.out.println("Loop1,Loop2");
    for (int i = 0; i < 100; i++) {
      long t0 = System.currentTimeMillis();
      for (int j = 0; j < REPEATS; j++) doLoop1();
      long t1 = System.currentTimeMillis();
      for (int j = 0; j < REPEATS; j++) doLoop2();
      long t2 = System.currentTimeMillis();
      long el = t1 - t0;
      long el2 = t2 - t1;
      System.out.println(el + "," + el2);
    }
  }

  public static void main(String[] args)
      throws InterruptedException {
    CacheTester ct = new CacheTester();
    ct.run();
  }
}

Here are the results of my CacheTester, which show almost no variance at all:

        Average   Variance
Loop1   238       0.3
Loop2   546       1.8

When I ran the code on my MacBook Pro with an Intel Core 2 Duo, I had the following results with my benchmark:

        Average   Variance
Loop1   168       17
Loop2   580       37

You can see that the variance was again quite high, because my laptop had too many other programs running on it. On my MacBook Pro hardware, iterating through every element in the array was 3.4 times slower.

Concurrency

Another sample chapter you can download is the one on concurrency. Both Ben and Martijn are certified to present my concurrency course. Ben has a lot of experience in the subject, which led to many interesting discussions when they came here in April.

Just one minor gripe. In the book they use Math.random() * 10 to calculate a random delay. Since Java 7, we should rather use ThreadLocalRandom.current().nextInt(10). This has several benefits. First off, ThreadLocalRandom keeps a Random instance per thread, so that we do not have any contention on the random seed. Secondly, the random distribution is fairer with the nextInt(10) method call. The fairness is a minor point, but the contention is not. Math.random() shares a single Random instance, whose seed is updated with compare-and-swap. Thus if a lot of threads call it at the same time, they will need to repeat the expensive seed calculation until their compare-and-swap succeeds.
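A minimal sketch of the difference (my example):

import java.util.concurrent.ThreadLocalRandom;

public class RandomDelay {
  public static void main(String[] args) {
    // shared Random instance underneath; all threads contend
    // on one atomically updated seed
    int delay1 = (int) (Math.random() * 10);

    // one Random instance per thread; no contention on the seed
    int delay2 = ThreadLocalRandom.current().nextInt(10);

    System.out.println(delay1 + " " + delay2);
  }
}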

The book is filled with many other interesting tidbits, and is definitely on my "recommended reading" list for the Java specialist.

Kind regards

Heinz

P.S. Hot off the press: Martin Thompson just published an article on the cost of memory access, showing different approaches to traversing the elements. It is closely related to the CacheTester in Evans and Verburg's book.
