The same tests work fine on the developer machine where we've run them (in a slightly different configuration). The "builder's" attempts to diagnose the root cause of the failures have been unsuccessful.
I've scheduled a Monday brainstorming session with a larger group of people. We'll discuss what we know about the failures, what we know about matching successes, and then decide on a course of action.
A colleague asked why I was the one convening the meeting, instead of someone on the affected team. His logic was that several people on the development team should have been intensely interested in the root cause of the failure and should have "run it to ground".
I think there are several reasons why individual developers are not chasing this problem:
- The problem crosses boundaries (only visible from an installed version, only visible when other components installed, only visible from the continuous integration machine, etc.)
- There is no clear correlation between the first failure and a commit from a developer
- Installation-related problems are more difficult and painful to diagnose. Setup and teardown time for the test is significantly longer than for most of our other automated tests
- Diagnosing this failure will reduce the energy available to work on other things, like new backlog items and new tests
I'm expressing my priorities by scheduling the meeting and bringing a group of people together to work on the problem. Since I'm a manager, my priorities have a little more weight, and I'm "throwing that weight around" a little for this case.
Eventually I'm confident others in the organization will recognize my focus on keeping the continuous integration servers "clean". Until then, I'll continue working with people to persuade them.