Java Microbenchmarks are Evil


I tried to write a benchmark comparing returning objects vs. throwing exceptions, but the Java virtual machine is very hard to benchmark because of the optimizations it performs. See this old Q+A for more information, and optimizations have probably improved since then.

I wanted to compare my numbers to the .NET numbers Andrew posted in my comments, but those are probably skewed by optimizations too.

For example, I wrote two functions:

private static Exception returnException()
{
    return new Exception();
}

private static void throwsException() throws Exception
{
    throw new Exception();
}

and called each a million times in a loop. For the first, I assigned the result to an Exception variable inside the loop. The second I wrapped in a try/catch block inside the loop and caught the exception. When I timed both, the results came out around the same (I ran each about 10 times and recorded the high and low):

returnException: 3755-3766 ms
throwsException: 4005-4046 ms
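A minimal sketch of the timing harness described above (the loop counts are from the post, but the class name and timing details are my reconstruction, not the original code):

```java
// Reconstruction of the benchmark: time a million returns vs. a million throws.
public class ExceptionBench {
    private static Exception returnException() {
        return new Exception();
    }

    private static void throwsException() throws Exception {
        throw new Exception();
    }

    public static void main(String[] args) {
        final int iterations = 1_000_000;

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            Exception e = returnException(); // result assigned, then never used
        }
        long returnMs = (System.nanoTime() - start) / 1_000_000;

        start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            try {
                throwsException();
            } catch (Exception e) {
                // swallow: we only care about the cost of the throw
            }
        }
        long throwMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println("returnException: " + returnMs + " ms");
        System.out.println("throwsException: " + throwMs + " ms");
    }
}
```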

So this can make it look like there is no performance difference between throwing an exception and returning a value. Nope, not so fast. The JIT compiler is optimizing the returnException() call. Because it's such a small function, it gets inlined into the loop itself, removing the overhead of a function call (pushing and popping the call stack). The function that throws an exception is likely inlined as well.
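You don't have to guess about inlining: HotSpot can report its decisions through diagnostic flags. A small sketch (the flags are real HotSpot options, but the output format varies by JVM version, and the class name here is mine):

```java
// Run with:
//   java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining InlineDemo
// to watch HotSpot report which calls it inlines once the loop gets hot.
public class InlineDemo {
    private static Exception returnException() {
        return new Exception();
    }

    public static void main(String[] args) {
        Exception last = null;
        for (int i = 0; i < 1_000_000; i++) {
            last = returnException(); // tiny and hot: a prime inlining candidate
        }
        System.out.println(last != null);
    }
}
```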

The compiler is apparently also smart enough not to generate code for variables that aren't used, like my Exception variable holding the result of the method call (which has since been optimized out). But that made me wonder: what's taking so long, then? All that should remain is an empty loop. I compared returning new Exception() to returning a plain boolean true, and the Exception version was about 1000 times slower. Apparently the allocation with new can't be optimized away, even though the variable is never used.
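One way to keep dead-code elimination out of the picture is to make every result observable, for example by folding it into a value the program eventually prints. This is a hand-rolled version of what benchmark harnesses call a "blackhole"; the names below are mine:

```java
// Consume each result so the JIT cannot prove the call and allocation are dead.
public class SinkDemo {
    private static long sink = 0;

    private static Exception returnException() {
        return new Exception();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1_000_000; i++) {
            // hashCode() makes each allocated object contribute to live state.
            sink += returnException().hashCode();
        }
        // Printing the accumulator makes every iteration observable.
        System.out.println(sink != 0);
    }
}
```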

The whole point of testing this was to show that a thrown exception has to propagate back up the call stack (unwinding frames as it goes) to find the matching catch block. That is what makes throwing expensive compared to calling functions, which only push and pop a few values on and off the stack to return and don't have to manage any try/catch logic. If the compiler optimizes that work away, you can't compare the two fairly.
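There is also a construction cost the loop mixes in with the unwinding cost: creating an Exception captures the entire stack trace via fillInStackTrace(). A known trick (my addition here, not from the original test) is to override that method, so the timing isolates the throw/unwind itself:

```java
public class StacklessDemo {
    // An exception that skips the expensive stack-trace capture,
    // so timing it measures mostly the cost of throw/catch itself.
    static class StacklessException extends Exception {
        @Override
        public synchronized Throwable fillInStackTrace() {
            return this; // skip walking the stack
        }
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) {
            try {
                throw new StacklessException();
            } catch (StacklessException e) {
                // catch block is one frame up: the cheapest possible unwind
            }
        }
        System.out.println((System.nanoTime() - start) / 1_000_000 + " ms");
    }
}
```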

So beware of microbenchmarks, which is exactly what the five-year-old Q+A linked above warned. The only way to fairly test the speed of code produced by an optimizing compiler is to measure it in a larger, real product. I wonder if anyone has done a return-Object-versus-throw-Exception comparison at that scale.