PurrPackage starts with Cobertura, but adds a few unique features that make it much more useful for measuring the quality of unit test coverage.
Configuring PurrPackage is almost as easy as configuring Cobertura. If you have Cobertura set up in your build, you probably need only change a few lines in your build file. See the deployment guide for more information and examples.
Also like Cobertura, PurrPackage is open source software, licensed under the GPL.
No measure of code coverage can ever prove that your unit tests are complete, but tools like Cobertura, Emma, and Clover can reveal parts of the code that are definitely not exercised by a set of tests, which is useful. When used with a suite of unit tests, however, these tools overstate the coverage: code may have been run only incidentally during some other test. Ideally, these tools would produce a report showing what code in a code base is covered while running its corresponding unit test, not just any test. In practice, though, this appears to be much more trouble to set up than it is worth.
PurrPackage represents a happy compromise: it approximates this ideal, but requires only the most trivial effort. PurrPackage measures coverage on a per-package basis; that is, it shows when code is covered by a test in the same Java package as that code. (Naturally, we are assuming that the project follows the usual convention of placing unit tests in the same package as the target code.)
For example, consider that while it is natural to factor out code for reuse into core packages, it is far less natural to factor out the unit tests for that code at the same time. Typical code coverage tools reveal little in this situation, since the tests all still run and the code still shows as covered. PurrPackage, however, can show when code has moved to another package while its unit tests have not.
To see how it does this, open this sample report in a new window. Click "Mixed Frameworks" to drill down to that file, and note the colorful columns to the left of the source code, by the line numbers. Here, as is conventional, green means the line was fully covered, yellow means some but not all branches were covered, and red means no part of the line was covered. PurrPackage, however, gives two views of this: the block on the left indicates whether the line was covered at all, while the other block (the background of the line number) shows whether it was covered by a test in the same package. You can see immediately that the first two methods are presumably well unit tested, that part of the third method and all of the next two methods are not unit tested, and that the last method was not tested at all.
The Methods display in the lower left corner shows the total number of elements (lines and branches) in each method, how many were missed by all tests, and how many were missed by tests in the same package. (On a larger file, by the way, clicking the links will scroll the source code display to show that method.) Between the "Help & Settings" popup and the tooltips, you can figure out the details of the summary data in the Source Files list, and of the list of packages that appears when you click the "Packages" button in the upper left corner.
PurrPackage lets you describe what level of coverage merits particular attention, and its reports can help you focus on those areas. For this example, we will use a very simple and strict policy: we expect 100% code coverage by tests in the same package.
Open the sample report in a new window, if you have not already done so. Then click "Help & Settings," and choose to "Activate" the policy features and to make certain items Visible. You should now see a column in the Source Files list that indicates whether the file is "Ok?" with respect to this policy; as you can see, the "IdealExample" file satisfies it, and the other classes do not. Similarly, the "Packages" display has a "Fails" column, indicating (with "Ch.") that the package contains a source file that did not satisfy the policy.
To refine the results even further, click "Settings" and choose "Make items that satisfy the coverage policy: Hidden." Now the lists include only the source files and packages that do not satisfy the policy.
This example is rather trivial, since the policy is simple and the code base is so small. To get a better idea of the possibilities, open this larger sample report, which is based on a version of the Apache Commons Math project. Click the "Data: OK" button to see this report's coverage policy, along with comments indicating its intent. Policies are sets of rules written in a JavaScript-based DSL, which makes it easy to combine general rules with a list of specific exceptions, along the lines of the sketch below.
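To give a feel for this, here is a rough sketch of what such a policy might look like. The property names and package patterns below are assumptions made purely for illustration; they are not PurrPackage's actual DSL, which you can see by opening the policy in the sample report itself.

    // Hypothetical sketch of a coverage policy expressed in JavaScript.
    // The property names and package patterns are illustrative assumptions,
    // not PurrPackage's real DSL.
    var policy = {
        // General rule: every line and branch must be covered by a test
        // in the same package as the code.
        defaultRule: { scope: "same-package", lines: 100, branches: 100 },

        // Exceptions: specific packages held to a lower standard,
        // each with a reason recorded alongside the rule.
        exceptions: [
            { packages: "com.example.generated.*", lines: 0,
              reason: "Generated code; excluded from the policy." },
            { packages: "com.example.experimental.*", lines: 60,
              reason: "Prototype code scheduled for replacement." }
        ]
    };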
Try making the items that satisfy the policy alternately visible and hidden to see how a policy can help you pick out specific packages and files where testing work may be worthwhile. At its best, the policy can let the report function like an automated task list.
The reports you have just seen are based on reading coverage data in JSON format and rendering it with JavaScript and DHTML. In particular, PurrPackage offers a complete separation of data and presentation, so producing a report in a different format is just a matter of pointing static HTML and JavaScript files at the coverage data. A custom report, by the way, can use either, both, or neither of the per-package coverage and coverage policy features discussed above.
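As a rough illustration, a custom report could be as small as the sketch below: a static page's script reads the JSON data and renders a per-package summary table. The file name (coverage.json) and the shape of the data are assumptions made for this example, not PurrPackage's documented format.

    // Minimal sketch of a custom report, assuming a page with an element
    // whose id is "report" and a coverage.json file served alongside it.
    // Assumed data shape (illustrative only):
    //   { "packages": [ { "name": "com.example.core", "lines": 120,
    //                     "missed": 8, "missedSamePackage": 30 }, ... ] }
    fetch("coverage.json")
        .then(function (response) { return response.json(); })
        .then(function (data) {
            var rows = data.packages.map(function (pkg) {
                return "<tr><td>" + pkg.name + "</td>" +
                       "<td>" + (pkg.lines - pkg.missed) + "/" + pkg.lines + "</td>" +
                       "<td>" + (pkg.lines - pkg.missedSamePackage) + "/" + pkg.lines + "</td></tr>";
            });
            document.getElementById("report").innerHTML =
                "<table><tr><th>Package</th><th>Covered lines</th>" +
                "<th>Covered by same-package tests</th></tr>" +
                rows.join("") + "</table>";
        });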