John Ferguson Smart's Blog
Learn how to code smarter at the Java Power Tools Bootcamp!

Posted by johnsmart on March 12, 2008 at 02:49 PM | Permalink | Comments (4)

The Java Power Tools book is coming out real soon. In conjunction with this event, I will be giving some special training sessions called the Java Power Tools Bootcamps from May 2008 onwards. The first courses will be in Wellington, San Francisco and London, with other cities in New Zealand, Australia, the USA and Europe planned for later in the year.

The Java Power Tools Bootcamp is an intense 4-day hands-on workshop covering some of the best open source tools for Java development on the market. This one-of-a-kind course takes you on an in-depth guided tour of some of the best open source Java tools around, showing how you can use them individually and together to write code better, faster and smarter. We cover virtually all aspects of software development, from build scripts, SCM best practices, unit and integration testing and code quality through to, of course, Continuous Integration. And we place a particular emphasis on how to make the tools work together. Some of the principal topics covered include:
This is a hands-on course that will give you techniques that you can take away and immediately apply in your daily development work. Places are strictly limited, so book quickly! You can view the schedule and book online here, or contact me directly. Come along! It should be a lot of fun!

Cool ways to use Hudson - voice control

Posted by johnsmart on March 12, 2008 at 01:29 PM | Permalink | Comments (0)

Paul Duvall, from Stelligent, has been experimenting with using voice commands to control a build server. A neat idea! The basic idea is to use Jott, a service that converts your voice messages into email messages. He runs fetchmail on Cygwin to download the mail messages, though, as he remarks, on a *nix build server, native mail tools would do the job with less mucking around. He then uses an Ant script to read and parse the downloaded messages and kick off a build if an appropriate message is received. Finally, Hudson runs this process at regular intervals. Hudson is good at monitoring external tasks, so I could see this working well.

Another idea would be to configure the Ant script to accept a command-line parameter indicating the text you are looking for (eg "Build QA"). The script would return 1 or 0 depending on whether it found the specified text in the latest messages. Of course, under Unix, you could do that easily with an ordinary shell script as well, but it would be less portable.

In Hudson, you can define "Post-build actions": things that need to be done once the build finishes. These actions can include running other build jobs. So you could define one build job, as described above, to monitor your Jott mail messages and trigger another build job if any corresponding build orders are found. This other build job would run the actual build. This approach would let you decouple your real build process from the way it is launched. It's an interesting idea, in any case.
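Out of curiosity, here is roughly what that message-scanning step might look like written in Java rather than Ant - a minimal sketch, assuming fetchmail has already downloaded the mail text to a file. The class and method names are mine, not Paul's:

```java
// Hypothetical sketch of the message-scanning step: look for a build
// command (eg "Build QA") in the downloaded mail text, and use the
// process exit code to tell the CI job whether to kick off a build.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class BuildCommandScanner {

    // Case-insensitive check for the build command in one line of mail text.
    public static boolean containsCommand(String line, String command) {
        return line.toLowerCase().contains(command.toLowerCase());
    }

    public static void main(String[] args) throws IOException {
        String mailFile = args[0];                 // eg a file written by fetchmail
        String command = (args.length > 1) ? args[1] : "Build QA";
        BufferedReader in = new BufferedReader(new FileReader(mailFile));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                if (containsCommand(line, command)) {
                    System.exit(0);                // build order found - trigger the build
                }
            }
        } finally {
            in.close();
        }
        System.exit(1);                            // no build order - do nothing
    }
}
```

The downstream build job would then be triggered (or not) based on the exit code, just as with the shell-script variant.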
Now what I'm wondering is: when the build fails, does Hudson answer with "I'm sorry Dave, I'm afraid I can't do that"?

Using Hudson environment variables to identify your builds

Posted by johnsmart on March 09, 2008 at 08:21 PM | Permalink | Comments (0)

So your CI server now automatically deploys your application to an integration server. You've even configured it so that you can manually deploy to the QA server using the same process. Great! But wouldn't it be nice to know exactly what build you are looking at at any point in time? Well, Hudson lets you do just that. When Hudson runs a build, it passes in a few useful environment variables that you can use in your build script. This is a great way to inject information about the build into your deployable application. For example, each Hudson build has a unique number, which you can reference in your build scripts using something like "${BUILD_NUMBER}". This is the list of variables (taken from an obscure corner of the Hudson documentation :-)):
Sweet, you say. But how can you inject this into your deployable application? Easy. Just use these variables as you would any other properties, and let your imagination do the rest! In a Maven project, for example, you can use the maven-war-plugin to inject data into the MANIFEST.MF file as follows (note that the <manifest> element goes inside <archive>):

<project>
  ...
  <build>
    ...
    <plugins>
      <plugin>
        <artifactId>maven-war-plugin</artifactId>
        <configuration>
          <archive>
            <manifest>
              <addDefaultImplementationEntries>true</addDefaultImplementationEntries>
            </manifest>
            <manifestEntries>
              <Specification-Title>${project.name}</Specification-Title>
              <Specification-Version>${project.version}</Specification-Version>
              <Implementation-Version>${BUILD_TAG}</Implementation-Version>
            </manifestEntries>
          </archive>
        </configuration>
      </plugin>
      ...
    </plugins>
  </build>
  ...
</project>

Now, the WAR file generated by Hudson will contain the build number:

Manifest-Version: 1.0

It is then a simple matter to extract this field from the MANIFEST.MF file of your deployed application at runtime. The java.util.jar.Manifest class comes in handy here. I use this to display the build version discreetly at the bottom of the screen so that testers can identify exactly what version they are dealing with.

I'll be giving a 4-day Java Power Tools Bootcamp in San Francisco, from May 12 to May 15, just after JavaOne. So, if you're in SF at this time, come along!

Continuous Integration build strategies - stage your builds!

Posted by johnsmart on March 04, 2008 at 03:18 PM | Permalink | Comments (1)

So you've got hundreds of tests, but they take ages to run. You have a Continuous Integration server, but it takes an hour to tell anyone when there's a failure. What can you do? This is where staged builds can come in handy. I basically distinguish fast unit tests from slower integration tests. TestNG test groups are very cool for this, but you can also use simple naming conventions.
For example, your unit tests run everything called *Test except classes called *IntegrationTest, whereas your integration tests only run *IntegrationTest. Unit tests should be snappy, and give feedback within minutes. Integration tests should do the job they are intended to do, but time is not really of the essence.

Strictly speaking, unit tests are tests written in isolation, with no interaction with other components. I tend to be a bit more pragmatic here, so some interaction is allowed. However, you do need to be careful loading Spring contexts or Hibernate configurations, as this can be slow. If it slows down your fast unit tests, put it with the integration tests. Heavyweight tests, such as performance, integration, or GUI tests, will take longer to run, so these should always go in the integration-test category. Actually, I often distinguish several types of integration test. Here's how I often do it:
Each task is a separate build on the build server (say, Hudson). The first kicks off whenever the source code repository is updated. The others kick off once the fast unit tests have succeeded.

I'll be giving a 4-day Java Power Tools Bootcamp in San Francisco, from May 12 to May 15, just after JavaOne. So, if you're in SF at this time, come along!

Don't miss CommunityOne and the Java Power Tools Bootcamp in San Francisco in May this year

Posted by johnsmart on February 29, 2008 at 03:00 AM | Permalink | Comments (0)

For anyone who's interested, I'll be giving a session at CommunityOne in May entitled "Open source tools to optimize your development process". It should be fun! CommunityOne is a free event on the Monday before JavaOne starts, all about open source projects and tools. While I'm in the groove, I'll also be giving a special public Java Power Tools Bootcamp training session the following week. The Java Power Tools Bootcamp is a course on open source Java development tools, based on on-site training that I've been giving to clients. Public registration is now open. So, if you want to get ahead of the queue, you can register here.

Java Power Tools podcast on JavaWorld

Posted by johnsmart on February 28, 2008 at 12:36 PM | Permalink | Comments (0)

A little while back I had a ball of a time doing an interview with my good mate Andy Glover about the upcoming Java Power Tools book. It was a fun, off-the-hip, and largely improvised talk, and my thanks go once again to Andy for organizing it. Anyway, the podcast has finally been published, so anyone interested can have a listen here.
A bird's-eye survey of the world of Continuous Integration Tools in 2008

Posted by johnsmart on February 25, 2008 at 04:50 PM | Permalink | Comments (6)

About a year ago, I launched a poll to learn what Continuous Integration servers people were using. The results were interesting... The original CI tool (if you don't count ye olde cron job), CruiseControl, came in first with a whopping 35%. Hudson and Continuum were neck-and-neck, with 14% for Hudson and 13% for Continuum. IntelliJ's TeamCity performed well for a commercial product, with a score of around 9%. But times change, and technologies evolve. A year is a long time in the Java world. So let's take a look at the lay of the land today. And, while you're at it, why not vote in the new 2008 Continuous Integration Poll?

I haven't used CruiseControl for a wee while, so I can't vouch for its latest features. In my experience, it's powerful and flexible, but a right pain in the nether regions to configure and maintain. The reporting and web interfaces are pretty so-so as well. But, still, lots of people out there are using it.

Hudson, the new kid on the block, is constantly evolving, adding feature after feature. The latest releases have added role-based security, plus an ever-increasing list of very cool plugins. And the user interface is still as groovy as ever! Of all the open source CI tools, Hudson is what I generally use, given the choice.

Continuum 1.1 has arrived at last, with some cool new features like project groups and role-based security. Unfortunately, though, the online documentation still leaves a lot to be desired.

On the commercial front, JetBrains is now offering a free "Professional Edition" of TeamCity, the Continuous Integration tool from the makers of IntelliJ. This edition is designed for small organisations with a limited number of user accounts, and no support for complex user authentication schemes such as project-based roles and LDAP.
Indeed, there are a number of commercial Continuous Build servers as well. There are plenty of good open source Continuous Build tools out there, so why might you choose a commercial CI tool over an open source one? Let's play devil's advocate for a moment, and consider the options.

If your shop works with both Java and Microsoft technologies, you might appreciate TeamCity's support for both Java and .NET builds. TeamCity also has a feature called a "personal build", which is basically a sandbox on the build server where code is automatically compiled and tested before being committed to version control. This is basically equivalent to the developer compiling and running the full set of unit tests before each commit, but with the advantage of being automated. Parabuild seems to propose something similar with their "unbreakable builds". Both TeamCity and Parabuild provide innovative features around distributed builds, though Hudson also provides some basic support for this. Atlassian also do a nice Continuous Build server, called Bamboo, which, unsurprisingly, integrates very smoothly with JIRA.

Documentation might be another factor in your decision. Documentation for a commercial product is often of higher quality than the open source equivalent. Continuum is a flagrant example of this. The Hudson online documentation is better, though it still has a rather wiki-ish feel to it. The TeamCity documentation is excellent.

However, introducing Continuous Integration into an organisation is as much, if not more, about changing mindsets than it is about choosing a particular tool. And there is little risk of vendor lock-in, as it is fairly easy to replace one CI server with another. So, if you're new to CI, you might just want to download an easy-to-use open source CI tool (Hudson comes to mind) and start out with that. Once you get familiar with what you can do with a CI server, you can always rethink your requirements later on.
To get a better idea of what tools people are using, I've published a new Continuous Integration Poll - come along and vote for your favorite tool!

Behavior Driven Development - putting testing into perspective

Posted by johnsmart on February 19, 2008 at 12:26 AM | Permalink | Comments (0)

The ultimate aim of writing software is to produce a product that satisfies the end user and the project sponsor (sometimes they are the same, sometimes they are different). How can we make sure testing helps us attain these goals in a cost-efficient manner?

To satisfy the end user (the person who ends up relying on your software to make his or her work easier), you need to provide the optimal feature set. The main challenge here is that the optimal feature set is not always what the users ask for, nor is it always what the BA comes up with at the start. So you need to keep on your toes, and be able to change direction quickly as the users discover what they really need. But that's the realm of Agile Development, and not really what I wanted to discuss here...

To satisfy the project sponsor (the person who has to fork out the cash), you need to satisfy your users, but you also need to write your application as efficiently as possible. Efficiency means writing code quickly, but it also means avoiding having to come back later on to fix silly mistakes. For example, I cn wriet ths tezt REASLDY QUIFKLY but if I don't keep an eye on the quality, the end user (you, the reader, in this case) will suffer. Your code needs to be reliable (not too many bugs) and maintainable (easy enough to understand so that the poor bastard who comes after you can work on the code with minimum hair loss).

So what has all this got to do with testing? Writing a test takes time and effort, so, ideally, you need to balance the cost of writing a test against the cost of _not_ writing the test. Does the test you are writing directly contribute to delivering a feature for the user?
Will it lower long-term costs by making your code more flexible and reliable? If your tests are to contribute positively to the global outcome of your project, you need to think about this, and design your tests so that they will provide the most benefit for the project as a whole.

It is fairly well established that, in all but the most trivial of applications, unit testing will help to make your code more reliable. The cost of writing unit tests is the time it takes to write (and maintain) them. The cost of not writing them is the time it takes to fix the bugs that they would have caught. Techniques such as Test-Driven Development (TDD) help here by incorporating testing as a first-class part of the design process. When you code using Test-Driven Development, you begin by writing unit tests to exercise your code, and then write the code to make the tests pass. Writing the unit test helps (in fact, forces) you to think about the optimal design of your class _from the point of view of the rest of the application_. This is a subtle but significant shift in how you write your code.

However, when it comes to testing, developers are often at a loss as to what exactly should be tested. In addition, they tend to focus on the low-level mechanics of their unit tests, rather than on the behaviour they are trying to verify. Behavior-Driven Development, or BDD, can provide some interesting strategies here. If you're not familiar with BDD, Andy Glover has written an excellent introduction here. Behavior-Driven Development takes Test-Driven Development (TDD) a step further. It is actually more a crystallization of good TDD-based practices than a revolutionary new way of thought. Indeed, you may well be doing it already without realizing it. Using BDD, your tests help define how the system is supposed to behave as a whole. Developers are encouraged to ask "What is the next most important thing the system doesn't yet do?" (see https://behaviour-driven.org/PowerfulQuestions).
This, in turn, leads to meaningful unit tests, with meaningful (albeit verbose) names, such as the shouldReturnZeroForIncomeBelowMinimumThreshold example discussed below.
Using a basic TDD approach, you might simply test directly against the $5000 value, as shown here:

public class TaxCalculatorTest {

    TaxCalculator taxCalculator = new TaxCalculator();

    @Test
    public void testTaxCalculation() {
        double taxDueOnLowIncome = taxCalculator.calculateTax(4999);
        assertThat(taxDueOnLowIncome, is(0.0));
        double taxDueOnHighIncome = taxCalculator.calculateTax(5000);
        assertThat(taxDueOnHighIncome, greaterThan(0.0));
    }
}

However, at a more abstract level, this gets us thinking - where does this value come from? A configuration file? A database? A web service? In any case, this is the sort of thing that can change at the whim of a politician, so it's probably not a good idea to hard-code it. So we should add a property to our TaxCalculator class to handle this parameter. While we're at it, we rename the test to better reflect what behaviour we are trying to model. So, instead of talking about "testTaxCalculation" (where the emphasis is on what we are testing), we would use a name like "shouldReturnZeroForIncomeBelowMinimumThreshold". The use of the word "should" is deliberate - we are describing how the class should behave. Note how our intentions suddenly become clearer. The test might now look like this:

public class TaxCalculatorTest {

    TaxCalculator taxCalculator = new TaxCalculator();

    @Test
    public void shouldReturnZeroForIncomeBelowMinimumThreshold() {
        taxCalculator.setMinimumThreshold(5000);
        double taxDueOnLowIncome = taxCalculator.calculateTax(4999);
        assertThat(taxDueOnLowIncome, is(0.0));
        double taxDueOnHighIncome = taxCalculator.calculateTax(5000);
        assertThat(taxDueOnHighIncome, greaterThan(0.0));
    }
}

These examples are a little contrived, and obviously incomplete, but the idea is there. Tests written this way express your intent much more clearly than tests with names like "testCalculation1", "testCalculation2" and so on. In addition to the usual JUnit and TestNG, there are also some frameworks, such as JBehave, which make BDD even more natural.
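For reference, a minimal TaxCalculator that would satisfy the behaviour described above might look like this - just a sketch, with an assumed flat rate above the threshold, since the actual calculation rules aren't the point here:

```java
// Minimal sketch of a TaxCalculator matching the tests above.
// The flat 20% rate is an assumption for illustration; the key point
// is that the minimum threshold is an injectable property rather than
// a hard-coded magic number.
public class TaxCalculator {

    private double minimumThreshold = 5000;
    private double rate = 0.20; // assumed flat rate, for the sketch only

    public void setMinimumThreshold(double minimumThreshold) {
        this.minimumThreshold = minimumThreshold;
    }

    public double calculateTax(double income) {
        if (income < minimumThreshold) {
            return 0.0; // below the threshold, no tax is due
        }
        return income * rate;
    }
}
```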
And, with tests like this, tools like TestDox (https://agiledox.sourceforge.net/) can be used to extract documentation describing the _intent_ of the classes. For example, for the test class above, TestDox would generate something like the following:

TaxCalculator
- should return zero for income below minimum threshold
...

Java Power Tools: where it's at

Posted by johnsmart on February 13, 2008 at 05:29 PM | Permalink | Comments (2)

It's been a while since I've given any updates on the status of the Java Power Tools book. So, here goes. The actual writing is done. Over the last couple of months, Java Power Tools has been proofread and typeset, getting it ready to go to print. The estimated release date is mid-March. I've seen and reviewed the individual chapters in their final form - now I'm just waiting for the final cut.

The book has ended up much bigger than I expected. Current estimates are around 880 pages - I don't know exactly, as I wrote the book in DocBook (I used XMLMind to do the actual writing). My original estimate was around 475 pages. So underestimating work is not just something that happens in the software industry! I basically managed to more-or-less stick to the original schedule through sheer stubbornness, hard work and lots of late nights.

I'm planning to give a talk in Wellington sometime in March - more details to come. I'm also busy preparing some training material based on some of the topics covered by the book, for courses to be given later on in the year. Again, more details later.

Reflections on SCM branching strategies

Posted by johnsmart on February 13, 2008 at 12:37 PM | Permalink | Comments (2)

Traditionally, in both CVS and Subversion, if you want to merge some changes from a branch back into the trunk, you need to specify the changes you want to apply. As in: "I want to merge the changes made between revision 157 and revision 189 on branch B back into the trunk."
In Subversion 1.5 (which isn't out yet), you just say "Merge the changes from branch B back into the trunk". Subversion will figure out what has already been merged, and only apply the new stuff. This is very cool, and makes merging between branches a much less daunting prospect. I discussed this in some detail in https://www.javaworld.com/javaworld/jw-01-2008/jw-01-svnmerging.html.

Anyway, this got me thinking about branching strategies. From what I've seen, there are two common strategies.

In the first approach, your development work goes in the main trunk. Branches are for releases, or possibly for isolated chunks of work. You create a new branch whenever you release a version into a new environment (UAT, production, whatever). Bug fixes made in the release branches can be merged back into the development trunk as required. This strategy has the great merit of simplicity.

In the second approach, your trunk contains the production-ready code. Branches are for milestone releases. This means you can have separate teams working on different releases, and theoretically work more efficiently in parallel, but this sounds a bit complicated to manage to me.

There are other approaches as well, though. One interesting one is to adopt a much more agile strategy. Create a new branch for each new user story/feature/whatever, then merge it back into the trunk when it's ready to be integrated. You would also need branches for production releases, I suppose. This strategy would let individuals or small groups work on specific features or user stories, and merge them into the main trunk when they are ready. One interesting question is how you would set up Continuous Integration with this approach. For example, you might have separate CI jobs hooked into each user story branch, as well as another job running against the main branch.
Unit testing your Spring-MVC applications

Posted by johnsmart on February 03, 2008 at 07:15 PM | Permalink | Comments (2)

Spring-MVC might use the old MVC model rather than the more recent component-based approaches. It doesn't come with lots of AJAX-based components. It doesn't come with its own arcane tag library to learn - you have to content yourself with JSP/JSTL, Velocity, or FreeMarker. However, it is still a powerful, flexible, and fairly popular choice as far as web frameworks go. And one of the great things about this framework is how testable it is.

In Spring-MVC, any custom validators (for field and form validation) and property editors (for converting text fields to specific Java types) are dead easy to test - you can just test them as if they were isolated POJOs. For example, here is how you could test a simple property editor, called FrequencyEditor, that converts a string into an enum value (called RepaymentFrequency):

public class FrequencyEditorTest {

    FrequencyEditor editor;

    @Before
    public void init() {
        editor = new FrequencyEditor();
    }

    @Test
    public void testSetValidValue() {
        editor.setAsText("MONTHLY");
        RepaymentFrequency value = (RepaymentFrequency) editor.getValue();
        assertEquals(RepaymentFrequency.MONTHLY, value);
    }

    @Test
    public void testGetValidValue() {
        editor.setValue(RepaymentFrequency.MONTHLY);
        assertEquals("MONTHLY", editor.getAsText());
    }
    ...

Pretty simple, eh? It gets better. Spring-MVC also comes with a full set of mock objects that you can use (with a bit of practice) to test your controllers to your heart's content. For example, you can use classes like MockHttpServletRequest and MockHttpServletResponse to simulate your HTTP request and response objects. This is also made easier by the fact that controllers can be instantiated as normal Java classes. For example, imagine you are testing a controller class for a page that updates a client details record.
You could do this very simply as follows (note that the service objects need to be injected before the controller is invoked):

public class UpdateClientTest {

    @Test
    public void testUpdateClient() throws Exception {
        //
        // Prepare your request
        //
        MockHttpServletRequest request = new MockHttpServletRequest();
        MockHttpServletResponse response = new MockHttpServletResponse();
        request.setMethod("POST");
        request.setParameter("id", "100");
        request.setParameter("firstName", "Jane");
        request.setParameter("lastName", "Doe");

        //
        // Set up the controller, injecting any service objects you need
        //
        UpdateClientController controller = new UpdateClientController();
        controller.setClientService(clientService);
        ...

        //
        // Invoke the controller
        //
        ModelAndView mav = controller.handleRequest(request, response);

        //
        // Inspect the results
        //
        assert mav != null;
        assertEquals("displayClient", mav.getViewName());
        Client client = (Client) mav.getModel().get("client");
        assertEquals("Jane", client.getFirstName());
        assertEquals("Doe", client.getLastName());
        ...
    }
    ...

The implications of this are that you can do very thorough testing on all the Java layers of your application, from the controller layer down. And you can choose the depth of your tests. If you just want fast, lightweight unit tests, you might mock out the service classes (using something like JMock, for example). Or you can instantiate the Spring container and obtain the service objects from there, which potentially lets you test the whole application (this is more integration testing than unit testing). It can get hairy when there are lots of parameters, but, on the other hand, maybe this is when the testing becomes really useful.

All this is also true for Spring Portlet MVC, which has a similar set of mock objects. In fact, it is probably even more useful for portlet testing, since automated portlet user interface testing is very hard to script (the URLs tend to be dynamic and unpredictable). That's the sort of thing that is totally lacking in JSF, for example. In Struts, you have something similar, but less powerful, in the form of StrutsTestCase.
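As a footnote on mocking out the service classes: if you don't want to bring in a mocking library, a hand-rolled stub works fine too. The ClientService interface and Client class here are assumptions about the application's shape, purely for illustration:

```java
// A hand-rolled stub for a hypothetical ClientService dependency.
// The controller under test only sees the interface, so the test can
// dictate exactly what the "service layer" returns, with no Spring
// container and no database involved.
public class ClientServiceStubExample {

    // Assumed domain class - adapt to your real application.
    public static class Client {
        private final long id;
        private final String firstName;

        public Client(long id, String firstName) {
            this.id = id;
            this.firstName = firstName;
        }

        public long getId() { return id; }
        public String getFirstName() { return firstName; }
    }

    // Assumed service interface that the controller depends on.
    public interface ClientService {
        Client findById(long id);
    }

    // The stub returns a canned Client for any id.
    public static class StubClientService implements ClientService {
        public Client findById(long id) {
            return new Client(id, "Jane");
        }
    }
}
```

The stub is then injected into the controller via its service setter before invoking it, so the test exercises only the web layer.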
Merging and branching in Subversion 1.5

Posted by johnsmart on January 29, 2008 at 01:23 PM | Permalink | Comments (0)

The forthcoming version of Subversion (version 1.5) promises a few niceties, but the best of the lot will be the long-awaited merge tracking feature. For example, one cool trick is that you can now repeatedly merge changes from a branch into the main trunk (or another branch) simply by specifying the branch where the changes are coming from. In the current version, you need to figure out exactly what change sets you want to merge back into the trunk, which can be a real pain. There is also a nice little Eclipse plugin which makes handling your merges from Eclipse a whole lot easier. For more details, check out Merging and branching in Subversion 1.5 on JavaWorld.