
044 Book Review: Object-Oriented Implementation of Numerical Methods

Author: Dr. Heinz M. Kabutz
Date: 2002-04-01
Java Version: 1.3
Category: Book Review
 

Abstract: In our first book review, we look at an interesting book that talks about implementing numerical methods in Java. Although not primarily a Java book, it gives us some insight as to the performance of Java versus other languages like C or Smalltalk.

 

Welcome to the 44th edition of The Java(tm) Specialists' Newsletter, sent to 3101 (!) Java experts in over 77 countries. A special welcome to our first subscriber from Morocco!

I'm still in Germany, enjoying the technology (we won't mention the weather again, OK?), the people, the food, the fantastic beer. Not enjoying the shops that don't take credit cards. In South Africa I once bought a cabbage costing about US$ 0.13 on my credit card (long story: I thought I had some money in my wallet, but it was completely empty).

Administrative Note: Before I get into this week's book review, there has been a slight change in the way this newsletter is going to be funded. I have purchased the rights to use an idea by Vince Sabio, now comfortably retired author of HumourNet, in order to enrich myself. He has patented a concept called an "Unsubscription Fee" for newsletters such as this. It's very simple, really: though subscriptions are free, unsubs cost US$5.00 to US$7.00, depending on your geographical location. The charge automatically appears on your credit card (when you unsubscribe) as "Maximum Solutions (South Africa)". (Please remember this when you get your bill.) This idea will also enable me to measure the percentage of intellectual proletariat (TM - Vince Sabio) on my distribution list.


Book Review: Object-Oriented Implementation of Numerical Methods

As promised, this week I am going to look at an interesting book that I think most of you will enjoy. The author is a subscriber to this newsletter, but that's no surprise; most of the "who's who" of Java authors are on this list :-) My purpose with book reviews is to tell you about interesting and different books that I think you would appreciate.

In the days when I attended school, we did not have computer studies, so our universities could not make that a prerequisite for Computer Science. The effect was that a wide variety of talent arrived at our hallowed halls of learning. This presented a problem for my old Computer Science Department: if they made CompSci too difficult, then those who had never seen a computer would fail; if they made it too easy, the hackers would get full marks, get bored, not go to class, etc. They therefore made the rule that you had to pass Mathematics II before taking CompSci III. The hardest three years for many a hacker were Mathematics II.

We learnt a whole lot of things in Mathematics, much of which I never fully understood or appreciated. I think that playing with computers was just so much more interesting than looking at a blank piece of paper. I find it much more interesting to figure out a computer program than to read mathematical proofs.

I wish I had had a copy of Dr. Didier Besset's (PhD in Physics from the University of Geneva) book Object-Oriented Implementation of Numerical Methods [ISBN 1558606793] in those days! It marries numerical methods and programming in a very interesting way. Just look at the algorithms it covers, implemented in both Java and Smalltalk:

  • Interpolations: Lagrange, Newton, Neville, Bulirsch-Stoer and Cubic Spline
  • Zero of Function: Bisection Algorithm, Newton's Method, Roots of Polynomials
  • Integration of Functions: Trapeze, Simpson, Romberg
  • Series: Infinite Series, Continued Fractions, Incomplete Gamma Function, Incomplete Beta Function
  • Linear Algebra: Vectors, Matrices, and all that you might dream of for linear algebra
  • Elements of Statistics: Moments, Histograms, Random number generators, probability distributions
  • Statistical Analysis: Fisher-Snedecor, many others
  • Optimization: Hill-Climbing, Powell's Algorithm, Genetic Algorithm
  • Data Mining: Data Server, Covariance, Mahalanobis Distance, Cluster Analysis

It cuts, it slices, it dices! If you can't get excited by all those algorithms, then you're in the wrong profession ;-)
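
To give you a taste of the simplest of them, here is a minimal sketch of a bisection zero-finder. This is my own illustration of the idea, not Dr. Besset's code; the book's version is built on a reusable iteration framework with proper convergence criteria.

public class Bisection {
  interface Function {
    double value(double x);
  }

  /**
   * @return an x in [a,b] where f crosses zero, found by halving
   * the bracketing interval until it is smaller than precision
   */
  static double zero(Function f, double a, double b,
      double precision) {
    if (f.value(a) * f.value(b) > 0)
      throw new IllegalArgumentException("root not bracketed");
    while (Math.abs(b - a) > precision) {
      double m = (a + b) / 2;
      // keep the half of the interval that still brackets the root
      if (f.value(a) * f.value(m) <= 0) {
        b = m;
      } else {
        a = m;
      }
    }
    return (a + b) / 2;
  }

  public static void main(String[] args) {
    // find the root of x^2 - 2 between 1 and 2, i.e. sqrt(2)
    Function f = new Function() {
      public double value(double x) {
        return x * x - 2;
      }
    };
    System.out.println(zero(f, 1, 2, 1e-10));
  }
}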

Dr. Besset sets the scene in the book by pulling out some performance statistics. We all know Java is slow ... right? Have a look at these stats from the book:

Operation                             Units   C     Smalltalk   Java
Polynomial 10th degree                msec.   1.1   27.7        9.0
Neville Interpolation (20 points)     msec.   0.9   11.0        0.8
LUP matrix inversion (100 x 100)      sec.    3.9   22.9        1.0

The C measurements are based on published algorithms, so Dr. Besset didn't just add a whole lot of wait statements to the C code. Dr. Besset says: "I want to emphasize here that all the code in this book is real code that I have used personally in real applications." Wow, that's certainly better than most books nowadays :-)
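
If you want a feel for such measurements yourself, here is a minimal sketch (mine, not the book's benchmark harness) that times repeated evaluations of a 10th-degree polynomial. It uses Horner's method with made-up coefficients; the sum is printed so the JIT cannot eliminate the loop as dead code.

public class PolynomialTiming {
  // evaluate the polynomial at x using Horner's method
  static double horner(double[] coeffs, double x) {
    double result = 0;
    for (int i = coeffs.length - 1; i >= 0; i--) {
      result = result * x + coeffs[i];
    }
    return result;
  }

  public static void main(String[] args) {
    double[] coeffs = new double[11]; // 10th degree
    for (int i = 0; i < coeffs.length; i++) {
      coeffs[i] = i + 1;
    }
    int n = 1000000;
    long start = System.currentTimeMillis();
    double sum = 0;
    for (int i = 0; i < n; i++) {
      sum += horner(coeffs, 0.5);
    }
    long elapsed = System.currentTimeMillis() - start;
    System.out.println(n + " evaluations took " + elapsed + " ms");
    System.out.println("(checksum: " + sum + ")");
  }
}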

Comparing Doubles

Besides all the interesting algorithms, which are shown with mathematical explanations and well-written Java code, Dr. Besset also tackles issues such as the problems that occur when you compare floating-point numbers. Here's an extract written by him for the Smalltalk Chronicles (edited by myself):

Dr. Didier Besset: "One classical caveat with floating-point numbers is checking the equality between two floating-point numbers. Now and then a bloke complains on some newsgroup that Smalltalk does not compute right with floating-point numbers. In the end it turns out that he was computing a result with method A, the same result with method B, and, to check the results, was evaluating the expression 'resultA == resultB'. The fact that this expression evaluates to false has nothing to do with Smalltalk. It is a fundamental problem with floating-point numbers [HK: also in Java].

"A floating-point number is only an approximation of a mathematical real number. A small introductory article like this one is too short to explain things in depth, but I would like to quickly recall a few principles.

"Floating-point numbers are used to keep the relative error constant. This is valid of course for a given number. As soon as numbers are combined together one must follow the propagation of rounding errors. Because the relative error is kept constant, nothing serious happens with multiplications and divisions. The error on additions and subtractions, however, can become prohibitively large, up to the point of generating something utterly wrong. To illustrate this point, try running the following Java program:

public class DoubleTest {
  public static void main(String[] args) {
    System.out.println(2.71828182845905 - 2.71828182845904);
    System.out.println(2.71828182845905 - 2.71828182845904 +
      0.00000000000001);
    System.out.println(2.71828182845905 == (2.71828182845904 +
      0.00000000000001));
  }
}

[HK: The answer is the following, a free unsubscription credit to anyone who guessed it... ;-]

9.769962616701378E-15
1.976996261670138E-14
false

"Mark the difference in the last digits! The result you will get is 100% wrong.

"Unless you are a very good and courageous mathematician, I would not recommend you to attempt to predict error propagation. The easiest and surest thing to do is to measure error propagation experimentally.

"After coding an algorithm, you can predict roughly where the infinities or the nearly zero cases are located. I am not speaking only about the result. All steps of the algorithm must be checked against the occurrences of infinities. In these areas, try to evaluate a few results by changing the values by a very small amount (10-12 or so for standard IEEE double format). In general the difference between the results will be one or two order of magnitudes larger than the original variation. If you observe something much larger, the algorithm used is not made for computers and must be adapted. I give several examples of such modifications in my book. Other examples can be found in Numerical Recipe for C."

How Should You Compare Doubles?

Shortly after I started sending out my newsletter, a friend of mine mentioned to me that he was surprised that Java programmers did not know how to compare doubles. If you just use "==" as in our example above, you will get incorrect results. Dr. Besset also has a section about that in his book. It is my understanding that in Java the precision of doubles and floats is defined by the IEEE 754 floating-point format, so there should not be differences between physical architectures, since in Java we are running on a virtual machine. Please run these examples and tell me if your results vary. The results were identical on Wintel, on an AIX box running Java 1.3 on AIX version 4.3.3.0 (IBM 2 processor 4 cpu model 7044-270), and on a Solaris box (SunOS Mars2 5.6 Generic_105181-29 sun4u sparc SUNW,Ultra-5_10).
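
As an aside, strict bit-for-bit reproducibility across architectures is only guaranteed if you use the strictfp modifier, available since Java 1.2. Without it, a JVM is permitted to use an extended exponent range for intermediate results, which can change answers close to overflow or underflow. A minimal sketch, not from the book:

public strictfp class StrictTest {
  public static void main(String[] args) {
    // under strictfp, this expression must produce bit-identical
    // results on every architecture
    System.out.println(2.71828182845905 - 2.71828182845904);
  }
}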

Dr. Besset presents the following algorithm for comparing double precision numbers (reproduced with permission):

/**
 * This class determines the parameters of the floating point
 * representation
 * HK: I have refactored it somewhat to make it thread-safe and
 * to make it easier to understand and to fit into my newsletter.
 * The algorithms are the same as in the book.
 *
 * @author Didier H. Besset
 */
public final class DhbMath {
  /** Radix used by floating-point numbers. */
  private static final int radix = computeRadix();
  /** Largest positive value which, when added to 1.0, yields 1.0. */
  private static final double machinePrecision =
    computeMachinePrecision();
  /** Typical meaningful precision for numerical calculations. */
  private static final double defaultNumericalPrecision =
    Math.sqrt(machinePrecision);

  private static int computeRadix() {
    int radix = 0;
    double a = 1.0d;
    double tmp1, tmp2;
    // HK: double a until adding 1.0 to it is lost to rounding;
    // note the loop must continue while no precision is lost
    // (with the condition inverted to "!= 0.0d", as it was
    // originally printed, the loop would stop at a == 2)
    do {
      a += a;
      tmp1 = a + 1.0d;
      tmp2 = tmp1 - a;
    } while (tmp2 - 1.0d == 0.0d);
    // the smallest increment b that still registers on a
    // reveals the radix
    double b = 1.0d;
    while (radix == 0) {
      b += b;
      tmp1 = a + b;
      radix = (int)(tmp1 - a);
    }
    return radix;
  }

  private static double computeMachinePrecision() {
    double floatingRadix = getRadix();
    double inverseRadix = 1.0d / floatingRadix;
    double machinePrecision = 1.0d;
    double tmp = 1.0d + machinePrecision;
    while (tmp - 1.0d != 0.0d) {
      machinePrecision *= inverseRadix;
      tmp = 1.0d + machinePrecision;
    }
    return machinePrecision;
  }

  public static int getRadix() {
    return radix;
  }

  public static double getMachinePrecision() {
    return machinePrecision;
  }

  public static double defaultNumericalPrecision() {
    return defaultNumericalPrecision;
  }

  /**
   * @return true if the relative difference between a and b is
   * less than the default numerical precision
   */
  public static boolean equals(double a, double b) {
    return equals(a, b, defaultNumericalPrecision());
  }

  /**
   * @return true if the relative difference between a and b is
   * less than precision
   */
  public static boolean equals(double a, double b, double precision) {
    double norm = Math.max(Math.abs(a), Math.abs(b));
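    // if both numbers are smaller than the precision itself,
    // treat them as equal; otherwise compare their relative
    // difference against the precision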
    return norm < precision || Math.abs(a - b) < precision * norm;
  }
}

The book has details as to why the algorithms work the way they do. Here is how you would use the equals() method:

public class BetterDoubleTest {
  public static void main(String[] args) {
    System.out.println("Floating-point machine parameters");
    System.out.println("---------------------------------");
    System.out.println("radix = " + DhbMath.getRadix());
    System.out.println("Machine precision = " +
      DhbMath.getMachinePrecision());
    System.out.println("Default numerical precision = " +
      DhbMath.defaultNumericalPrecision());
    System.out.println(DhbMath.equals(
      2.71828182845905,
      (2.71828182845904 + 0.00000000000001)));
    System.out.println(DhbMath.equals(
      2.71828182845905, 2.71828182845904));
    System.out.println(DhbMath.equals(
      2.718281828454, 2.718281828455));
    System.out.println(DhbMath.equals(
      2.7182814, 2.7182815));
  }
}

On the machines that I ran this test on, the output was:

Floating-point machine parameters
---------------------------------
radix = 2
Machine precision = 1.1102230246251565E-16
Default numerical precision = 1.0536712127723509E-8
true
true
true
false

That's it for this week, I hope you will consider these issues when next you want to compare doubles. And if you like interesting books, do yourself a favour and read Didier Besset's book. No, there's not really an unsubscription fee. Look at the date. Yes, I will get a referral fee if you purchase the book via that link.

Heinz

 
