Book Review and Interview: Java Performance, by Charlie Hunt and Binu John

Read this book if you want to learn about the commands used for OS monitoring, understand the different components of the JVM, and monitor and tune the JVM to improve its performance. The book has 12 chapters, and not all of them need to be read in one go. The chapters covering Web Services performance rely on SOAP-based services and may not be relevant to some readers. GlassFish is used as the application server for the examples in the Java EE monitoring chapters.




Feb 01, 18 min read. By Charles Humble. The book provides information on performance monitoring, profiling, tuning HotSpot, and Java EE application performance tuning (the latter section was written by Binu John).

It is suitable as a guide for developers new to the topic who want to get a better understanding of what is going on under the hood, and also provides excellent reference material for performance specialists. Whilst there are a number of other books that cover similar ground, this is the first new one on the topic for some time, and as such is able to cover some of the state-of-the-art Java performance tuning techniques that weren't available when, for example, Steve Wilson et al wrote "Java Platform Performance".

As well as reflecting changes in the underlying technology and the available optimisation techniques, the book reflects changes in common software engineering practices: for example, advocating the inclusion of performance testing as part of the continuous build cycle, and involving project stakeholders in setting performance tuning metrics early. The book is particularly good on low-level details, with the myriad of different HotSpot command line options well represented, and an excellent, step-by-step guide to JVM tuning.
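
To give a flavour of the territory that tuning guide covers, the sketch below shows the kind of HotSpot command line one ends up with after working through such a process. It is illustrative only: the heap sizes, the collector choice and the application jar (myapp.jar) are placeholders, not recommendations from the book.

    # Illustrative only; sizes and collector choice are placeholders, myapp.jar is hypothetical.
    java -Xms2g -Xmx2g -Xmn512m \
         -XX:+UseParallelOldGC \
         -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log \
         -jar myapp.jar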

The two chapters covering profiling (one focused on using Oracle's Solaris Studio Performance Analyzer and the NetBeans Profiler, and the other on resolving common issues) are also first-rate, and I was particularly pleased to find an appendix with source code examples for common, but hard to resolve, problems such as lock contention, resizing Java collections, and increasing parallelism. In addition, the section on writing effective benchmarks, and the sort of issues that a smart JIT compiler can cause, is very strong, showing you how to take a look at the assembly code generated by the JIT compiler to make sure your microbenchmark is doing what you think it is doing.
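
As a small illustration of the kind of trap that section warns about (this sketch is mine, not one of the book's examples): if a benchmark never uses the result it computes, the JIT compiler is free to remove the work entirely, and the timing becomes meaningless. Keeping the result live is the usual defence.

    // Minimal sketch of a microbenchmark that defends against dead-code elimination
    // by consuming and printing the result. Class and method names are made up.
    public class NaiveBenchmark {

        public static void main(String[] args) {
            // Warm-up phase so the method is JIT-compiled before the timed run.
            for (int i = 0; i < 20000; i++) {
                sumTo(1000);
            }

            long start = System.nanoTime();
            long result = 0;
            for (int i = 0; i < 1000000; i++) {
                result += sumTo(1000);   // the result is kept live on purpose
            }
            long elapsed = System.nanoTime() - start;

            // Printing the accumulated result prevents the JIT from discarding the work.
            System.out.println("result=" + result + " elapsed(ns)=" + elapsed);
        }

        private static long sumTo(int n) {
            long sum = 0;
            for (int i = 1; i <= n; i++) {
                sum += i;
            }
            return sum;
        }
    }

Checking the generated code with -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly (which needs the hsdis disassembler plugin installed) is the way to confirm the measured loop has not been optimised away.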

As well as detailed information on the common top-down (i.e. application-level) approach to tuning, the book also covers lower-level concerns. For example, it talks about how to reduce TLB misses (perhaps the most common form of CPU cache miss) and how to reduce involuntary context switches. For reducing other types of CPU stalls, there is a side-bar in Chapter 5, Java Application Profiling, that talks about the -h option for the Oracle Solaris Studio Performance Analyzer, which can be used to incorporate hardware counter information, such as CPU cache misses, in the profile.
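
For readers unfamiliar with the TLB point: one widely used way to reduce TLB misses for large Java heaps is to back the heap with large pages. The option below is a standard HotSpot flag rather than something specific to this book, and the operating system has to be configured to make large pages available first; the heap sizes and jar name are placeholders.

    # Requests large pages for the heap (OS-level large/huge page support must be enabled first).
    java -XX:+UseLargePages -Xms4g -Xmx4g -jar myapp.jar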

The same Chapter 5 side-bar also talks about how alternative implementations can affect how and when a Java object field is accessed, which can in turn reduce CPU cache misses. It also states, though, that this type of performance tuning activity is generally recommended only for compute-bound applications, and the book itself, not unreasonably, stops short of providing full details on this approach.
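
The side-bar's exact example isn't reproduced here, but the general idea can be sketched as follows (hypothetical code of mine): how and where a field is read inside a hot loop affects the loads the generated code has to perform, so hoisting a frequently read field into a local variable is one of the implementation alternatives worth comparing.

    // Illustrative only: two implementations that differ solely in how the fields are accessed.
    class Histogram {
        private final int[] buckets = new int[256];
        private int mask = 255;

        // Re-reads the 'mask' and 'buckets' fields on every iteration.
        void addAllNaive(byte[] data) {
            for (int i = 0; i < data.length; i++) {
                buckets[data[i] & mask]++;
            }
        }

        // Hoists the fields into locals before the loop; the JIT can often do this itself,
        // but it cannot always prove it safe, so the two versions may not perform identically.
        void addAll(byte[] data) {
            final int m = mask;
            final int[] b = buckets;
            for (int i = 0; i < data.length; i++) {
                b[data[i] & m]++;
            }
        }
    }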

While it may be considered by many to be 'fun' to tune an operating system, we advocate not tuning an operating system unless you have evidence, via the data you've collected, that suggests you need to.

It should perhaps be noted that whilst the book provides a lot of useful, generally applicable information, it does focus a great deal on Oracle's tools and hardware.

The intention in this section is to demonstrate that the common approach of running a subset of a production load as a means to evaluate a system's performance has flaws, but that perhaps doesn't come across as clearly as it should. Elsewhere, the two profilers featured are the admittedly excellent Oracle Solaris Studio Performance Analyzer tool and the NetBeans profiler.

This may be partially due to space constraints (it is already quite a lengthy book, even before the appendices), but it would have been nice to see an alternative, such as Intel's VTune, given more coverage. I would also have liked to have seen some discussion of alternative garbage collectors, such as Oracle's JRockit, IBM's Balanced GC (which, to be fair, may have appeared too late for the book's production schedule) or Azul's C4, which don't get a mention.

Similarly, the Java EE chapters use GlassFish as the application server for their examples. Of course, much of the advice applies equally well to other containers, but it would have been good to see more reference made to them in the text.

One other nitpick: the lack of colour plates within the text is occasionally problematic. For example, the use of monochrome screenshots throughout reduces their usefulness, notably in Chapter 2 (Operating System Performance Monitoring), where the lack of colour makes reading the graphs more difficult than it should be. Despite these criticisms, however, the book is an excellent, if Oracle-centric, guide to the subject. It doesn't provide a recipe for solving every problem, but it does provide enough information for developers who aren't performance specialists, and others involved in application performance tuning work, to solve the majority of commonly encountered performance problems.

Given that the book advocates a slightly different approach from many others on the topic - recommending a use-case-centric, rather than code-centric, approach to tuning, and also suggesting that software professionals think about performance tuning much earlier in the development cycle than is commonly done - InfoQ spoke to Charlie Hunt and Binu John to find out more about their thinking.

In addition, we recognized that there was a lot of interest in knowing more about the Java HotSpot VM internals, along with some structure around how to go about tuning it.

Also, we noticed there were folks who were using combinations of Java HotSpot VM command line options that just didn't make sense.
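
An example of the sort of combination being referred to (my illustration, not taken from the interview): asking for two different garbage collectors on the same command line, which HotSpot rejects as a conflicting collector combination.

    # Doesn't make sense: the throughput and CMS collectors cannot both be selected.
    java -XX:+UseParallelGC -XX:+UseConcMarkSweepGC -jar myapp.jar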

And, mostly, we wanted to offer what we have learned over the years doing Java performance. Our hope is that readers will find at least one thing in the book that helps them.

In our opinion, the methodology can be adapted to agile approaches and, in general, to most software development approaches. The "take-away" we'd like readers to come away with is that the performance of their application is something that should be considered throughout the entire software development process. In particular, the expected performance of the application is something that should be well understood as early as possible. If, early in the software development process, there are concerns about whether that performance can be met, then you have the luxury of mitigating that risk by conducting some experiments to identify whether those risks are "real", along with whether you may need to make some alternative decisions, which may require a major shift in the choice of software architecture, design or implementation.

The key here is the ability to "catch" the performance issue as early as possible, regardless of the software development methodology. The first and foremost thing to have in place is a clear understanding of exactly what it is you are trying to accomplish.

Without having a clear understanding of that, you may still learn some things, but you are at risk of not accomplishing what you really wanted to accomplish. Having a clear understanding of what you want to accomplish will help identify what you need in the way of hardware, software and other environmental needs.

The larger the variability, the more difficult it is to identify whether you're really observing improvements or regressions in your performance tuning efforts, or just random run-to-run noise in the experiments. How much variability is acceptable depends on how much of an improvement you are looking for. By the way, there are times when it's useful to understand why a given environment, setup, etc. introduces wide variability between test runs, especially when you're looking for, or trying to identify, small percentage improvements in performance.

If you spend some time investigating statistical methods and their equations, you can understand the reasoning behind the "variability" discussion here. This topic is also touched upon in the book sections from Chapter 8, "Design of Experiments and Use of Statistical Methods".
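
As a concrete illustration of the statistics being referred to (a minimal sketch, not code from the book): computing the mean and sample standard deviation of repeated run times shows at a glance whether the run-to-run noise is small enough for the improvement you are hunting for to be visible.

    // Minimal sketch; the run times below are made-up numbers.
    public class RunStats {
        public static void main(String[] args) {
            double[] runsMs = {1520, 1498, 1545, 1510, 1532};   // elapsed times of repeated runs

            double sum = 0;
            for (double r : runsMs) sum += r;
            double mean = sum / runsMs.length;

            double sumSq = 0;
            for (double r : runsMs) sumSq += (r - mean) * (r - mean);
            double stdDev = Math.sqrt(sumSq / (runsMs.length - 1));   // sample standard deviation

            System.out.printf("mean = %.1f ms, std dev = %.1f ms (%.1f%% of mean)%n",
                    mean, stdDev, 100.0 * stdDev / mean);
        }
    }

If the improvement you hope to detect is of the same order as, or smaller than, the standard deviation, you will need more runs or a less noisy environment before the comparison means anything.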

Another important thing, if your testing environment deviates from production, is to understand the differences and, more importantly, whether those differences will impact or impede what you are trying to accomplish in your performance tuning.

It depends on what you want to learn from the performance test and it also depends on how the environment deviates from the production environment. If you have a good understanding of what you want to learn from your performance tests and you don't have the luxury of an exact replica of a production environment, you need to know how your testing environment deviates from the production environment.

If you can justify that the environment deviations will not impact what you want to learn, then you can use an environment that's not an exact replica. Ideally, to ensure the highest probability of success with your performance goals, it is important to ensure that the test machines use the same CPU architecture as the production machines.

Also, it is important to account for differences in architecture between different CPU families from the same manufacturer, e.g. Intel Xeon vs. Intel Itanium (more on chip differences below). However, keep in mind that you need to be able to convince yourself, and your stakeholders, that the differences between your testing environment and the production environment do not introduce performance differences. That can be a difficult task. It would be wise to document any assumptions you are making and to be able to show that those assumptions don't introduce differences.

Again, that can be a difficult task. There are two points we should also make here. One, what we're trying to describe here is the notion of "designing an experiment" around the questions you want to have answered, or what you want to learn. Then, identifying a test environment that can satisfy the "design of experiment" without introducing bias or variability that puts into jeopardy what you want to learn.

The second point is, coming back to the chip differences, that one of the reasons for the latter part of Chapter One's sections on "Choosing the Right Platform and Evaluating a System", "Choosing the Right CPU Architecture" and "Evaluating a System's Performance" is so readers understand that the commonly used, traditional approach of evaluating the performance of a system, where you take a subset of a production load, put it on a new system, and run it at much less than its full capacity, is a practice that has flaws when evaluating a system using UltraSPARC T-series CPUs.

The reason this assumption and approach is flawed is the difference in CPU architecture. The motivation for including these sections is so that folks who are evaluating systems understand that this traditional approach has flaws, and why it is flawed.

In addition, our hope is that readers will also question whether any other differences between their testing environment and production may introduce some unexpected or unforeseen flaws. Another topic that is applicable is scalability.

Testing on different hardware will most often not show scalability issues if the test hardware has fewer virtual processors than the production hardware. This can be illustrated with some of the example programs used in Chapter 6, "Java Application Profiling Tips and Tricks". If you run some of those example programs on hardware with a small number of virtual processors, you may well not observe the same performance issues as you would see on a test system that has a large number of virtual processors.

This is also pointed out in Appendix B, where the source code listings for those example programs appear. So, if what you want to learn from performance testing is related to how well a Java application scales, having a testing environment that replicates the production environment as closely as possible is important.
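
The effect can be illustrated with a sketch along the same lines as, though not copied from, the book's examples: every thread below contends for a single lock, so the program may look fine on a laptop with a handful of virtual processors and degrade badly on a large production server.

    // Illustrative only: heavy contention on one lock, scaled to the machine's virtual processors.
    import java.util.concurrent.CountDownLatch;

    public class ContentionDemo {
        private static final Object LOCK = new Object();
        private static long counter = 0;

        public static void main(String[] args) throws InterruptedException {
            int threads = Runtime.getRuntime().availableProcessors();
            System.out.println("virtual processors: " + threads);

            final int iterationsPerThread = 5000000;
            final CountDownLatch done = new CountDownLatch(threads);

            long start = System.nanoTime();
            for (int t = 0; t < threads; t++) {
                new Thread(new Runnable() {
                    public void run() {
                        for (int i = 0; i < iterationsPerThread; i++) {
                            synchronized (LOCK) {   // every increment fights for the same lock
                                counter++;
                            }
                        }
                        done.countDown();
                    }
                }).start();
            }
            done.await();
            long elapsedMs = (System.nanoTime() - start) / 1000000;
            System.out.println("counter=" + counter + " elapsed=" + elapsedMs + " ms");
        }
    }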

It's a very quick summary, but fairly accurate. To be a little more specific, we advocate first identifying whether you need to profile the application. It's through monitoring the application via JVM monitoring tools, looking at GC logs, looking at application logs, and capturing operating system statistics that you will observe the symptoms or clues that point you towards the next step in finding a resolution to your performance issue.

It's worth mentioning that one thing we have noticed with folks who are investigating a performance issue is that they tend to gravitate towards what they know best. For example, a Java developer will tend to go and look at the code immediately, and some will profile it immediately; a systems administrator will look at operating system data, attempt to tune the operating system, or tell the Java developers that their application is behaving badly and report what is being observed at the operating system level; and a person with JVM knowledge will tend to want to tune the JVM first.

I think that's kind of understandable since we all have our "comfort zones". However, we should let the data that's collected drive the performance tuning effort. It's the data that will lead you to the performance resolution the quickest.

What we advocate is using monitoring tools at the operating system, JVM and application level, and then analyzing that data to formulate a hypothesis as to the next step. Sometimes it may be application profiling, sometimes JVM tuning, and sometimes operating system tuning, though our experience is that it's rarely operating system tuning.
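
For readers who want concrete starting points, the commands below are typical of the operating system and JVM level monitoring being described; exact tool availability varies by platform, and the process id, log file name and trailing application arguments are placeholders.

    vmstat 5                      # OS level: CPU utilisation, run queue, context switches
    mpstat 5                      # OS level: per-processor statistics (Solaris/Linux)
    jstat -gcutil <pid> 1000      # JVM level: heap occupancy and GC time, sampled every second

    # GC logging switched on for the application's JVM:
    java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log ...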

If we assume that we have sufficient evidence to suggest the next step is application profiling, then we advocate first stepping back to the call path level, which oftentimes maps to a use case, and asking yourself what it is that the application is really trying to do. Most modern profilers offer a "hottest call path" view. You will almost always be able to realize a greater improvement by changing to, for example, a better algorithm in the "hottest call path" than you will by improving the performance of the "hottest method".

If, however, you only need a small improvement to meet your performance goals, then looking at the hottest method and improving it will likely offer a quicker means to your end goal than the "hottest call path" approach. So the reason we advocate the "hottest call path" approach is we are assuming you're looking for peak performance.
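
A small, hypothetical example of the distinction (not taken from the book): a profiler's hottest-method view might point at the contains() call below and invite micro-tuning, while the hottest-call-path view makes it obvious that the data structure feeding the lookup is the thing to change.

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class LookupExample {
        // The "hottest method" view: contains() on a large list is a linear scan per call.
        static boolean isBlockedSlow(List<String> blockedIds, String id) {
            return blockedIds.contains(id);
        }

        // The "hottest call path" view: feed the call path a hash-based set instead,
        // making each lookup constant time on average.
        static boolean isBlockedFast(Set<String> blockedIds, String id) {
            return blockedIds.contains(id);
        }

        public static void main(String[] args) {
            List<String> list = new ArrayList<String>();
            for (int i = 0; i < 100000; i++) {
                list.add("user-" + i);
            }
            Set<String> set = new HashSet<String>(list);

            System.out.println(isBlockedSlow(list, "user-99999"));
            System.out.println(isBlockedFast(set, "user-99999"));
        }
    }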

We think most folks would agree that stepping back and looking at alternative algorithms or data structures offers the potential for a bigger performance improvement than making changes to the implementation of a method or several methods. There are several reasons. It follows from the well-understood idiom that "the earlier a bug is found in the software development lifecycle, the less costly it is to fix it", and we consider a performance issue a bug.

Additionally, thinking and talking about performance early on, at requirements gathering time, helps set expectations for both the folks who are building the application and the folks who will be using it. It can also potentially be incorporated as part of an acceptance test plan with the users of the application. As for hiring performance specialists: first, we wouldn't suggest hiring them unless you have a need for the expertise.

The reason for recommending that performance testing also be included as part of unit and other functional testing is to catch performance issues as soon as they are introduced. The earlier you can catch performance issues, the less time and money you'll spend finding and fixing them. The need for hiring performance specialists should come out of not being able to find a performance issue, or perhaps a need for advice on how to go about making performance testing part of unit and functional testing.

Again, the motivation here is to minimize the amount of time and effort in tracking down when a performance issue is introduced. Ideally, you'd like to catch performance issues before they ever get integrated into a project's source code repository.
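
As a sketch of what that can look like in practice (hypothetical test and class names; the 200 ms budget is a placeholder that would come from the agreed performance requirements), a coarse timing assertion can live alongside ordinary JUnit tests so the build flags a regression as soon as it is introduced.

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class OrderServicePerfTest {

        // Trivial stand-in for the real class under test; purely illustrative.
        static class OrderService {
            void createOrder(String sku, int quantity) {
                // real work would happen here
            }
        }

        // JUnit 4 fails the test if it runs longer than the stated budget.
        @Test(timeout = 200)
        public void createOrderStaysWithinBudget() {
            new OrderService().createOrder("sku-123", 2);
        }

        // Measuring explicitly gives a clearer failure message than a bare timeout.
        @Test
        public void createOrderElapsedTime() {
            long start = System.nanoTime();
            new OrderService().createOrder("sku-123", 2);
            long elapsedMs = (System.nanoTime() - start) / 1000000;
            assertTrue("createOrder took " + elapsedMs + " ms", elapsedMs < 200);
        }
    }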


