Tuesday, December 22, 2009

Google Alerts - Monitoring Your Online Reputation

Google Alerts delivers periodic search results to my e-mail inbox. I've been impressed by how it helps me discover new bloggers who might be worth reading, and helps me manage my own reputation.

I have several topical search terms that I use to watch for interesting new authors and sites. I currently have alerts set for "Association for Software Testing", "exploratory testing", and Debian Linux testing. I'll eventually add search terms for SharePoint 2010 and evidence-based management.

I also have search terms which search for specific authors I've found interesting in the past. The search results on those authors have led to other interesting writing on topics I follow.

I even search for my own name. I realize that is more than just a little vain, but it provides one way to monitor how others might perceive me. I use an exclusion term or two to avoid the writing of the journalist who shares my name and writes at a newspaper in Nevada. Someday I may add an exclusion term for the person who shares my name in Georgia.
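
For anyone setting up something similar, the exclusion syntax is just Google's minus operator in the alert query. A hypothetical example (the terms are illustrative, not my actual alert):

    "Mark Waite" -Nevada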

Wednesday, December 9, 2009

Simplifying is Hard Work

One of our QA managers asked a very simple question yesterday that took me through a series of twists and turns trying to find a reasonable answer. It reminded me that seeking and finding solutions to problems is hard work, and finding a workable solution can be exhilarating.

The question was "How can a tester know if a specific fix is included in this build?"

The question hides contextual information like "source code is stored in git", "bug reports are stored in a customized Siebel database", "Siebel interface or API changes are not feasible", "testers are typically a long distance (geographically) from developers", "builds are accessible from a web page of hyperlinks" and "the recent builds page also has links to gitweb".

Challenges hidden in the question:


  • No API connection between git and Siebel means we need to rely on people as the carriers of information, the "information transport"
  • Time and space separation between developers and testers complicates communication, yet complicated forms or processes are more likely to be bypassed than simple ones
  • Complicated user interfaces are more expensive to develop than simple ones, and complicated user interfaces tend to get less interest from users
  • Persuading people to enter accurate data into forms, fields, or even free form text is difficult. It can be made much easier if they see immediate benefit from that entry, if the format is simple, and if the results of that data entry help them

My proposal to my colleagues (after an embarrassing amount of thought) was:


  • When a developer fixes a bug, they paste the SHA1 hash of the fix into the bug report. The SHA1 hash is a unique identifier of that commit in git, so recording it in the bug report provides a "link" between the bug reporting system and git
  • When a build runs, it records the short form of the "git log" for that build (the list of checkins which were in that build) with the identifiers (SHA1 hashes) of each checkin and the first line of the checkin comment
  • When a tester verifies a bug, they first check that the log file which came with the build contains the SHA1 hash from the bug report. If it does, then they perform the verification. If it doesn't, they don't waste time performing that verification (a sketch of these steps in commands follows)
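
For the record, here is a minimal command-line sketch of those three steps. The tag name, file name, and hash are illustrative stand-ins, not our actual build scripts:

    # Build step: record the full SHA1 and first comment line of each
    # checkin included in this build (PREV_BUILD_TAG is hypothetical)
    git log --pretty=format:"%H %s" PREV_BUILD_TAG..HEAD > build-changes.log

    # Developer step: capture the SHA1 of the fix to paste into the bug report
    git log -1 --pretty=format:%H

    # Tester step: confirm the fix is in this build before spending time on it
    # (abc123 stands in for the hash copied from the bug report)
    grep abc123 build-changes.log && echo "fix is in this build"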

During the thought process before arriving at that idea, I wandered down several blind alleys, thinking of one complicated solution or another. For example, the early wrong or overly complicated ideas and questions included:


  • Why do they need this data? Don't the testers "trust" developers? The question is not about trust; it is about communicating intent, and preventing the wasted time of testing a specific fix, reporting it is not fixed, then being told "Oh, it wasn't in that build"
  • How can we ask git to tell us which changes are in a build when there is no API connection between the build and the source repository? I envisioned complicated sequences trying to map a build date or time or version number or branch back to the git repository to generate the list of changes
  • How do we connect the build (performed at some independent time) with the bug report (submitted at a different time on a different system) and the source master (yet another time and system)? Programs, links, pages, all came to mind as ways to make that connection, before I realized that the unique identifier of the commit is one of the few data items which might travel reliably between all those systems

Then again, looking at the idea now in the "cold light of a new day", maybe I was just too tired to think clearly.

Saturday, November 28, 2009

Small Barriers == No Socialization

I was again reminded today that very small barriers can kill a conversation, or cause a contributor to stop contributing. I wanted to post a follow-up comment to David Chadwick's exploratory testing blog posting. I was on a different computer from the one I originally used to post the first comments.

Unfortunately, the IBM developerworks site is unfamiliar enough that I don't remember my password. I remember that it had a more restrictive password policy than other sites, so I was unable to apply my common password rules to their site. That made the site "unique" in a negative sense, something I had to remember outside my normal patterns of memorization.

The key point: I had a few minutes to contribute something to a conversation. I was not willing to spend more than a minute or two, including the time to log in, post the comment, and verify the comment was posted. I abandoned the comment because:

  • Restrictive password rules were outside my typical rule set
  • Time was limited, and I was unwilling to spend more time trying to log in again
  • Prior experience with the site made me distrust its use of my time

The relationship between a contributor and the forum to which they are contributing seems to be very fragile, at least for me. The newer I am to a forum, the more fragile the relationship. The less value I perceive from the forum, the more fragile the relationship. Even seemingly insignificant hurdles may be enough to stop a contribution and disengage a contributor.

Thursday, November 26, 2009

Do Not Discard My Data

I just wasted 40 minutes trying to post a comment to an IBM developerworks blog posting from David Chadwick.

That wasted 40 minutes was a flawed attempt to describe the simple changes I think could be made to David's list of things which he believes exploratory testing is "not". I believe that with relatively simple wording changes, I could modify each of his "not" descriptions to instead be descriptions of high-value, useful exploratory tests. The wording changes are so small that I believe they hint that David may not yet understand exploratory testing and how to apply it. However, this posting is not about exploratory testing; it is about a site that lost my data, twice.

My first comment was lost after 20 minutes of writing, thinking, and editing. The comment was lost because I clicked the "Add Comment" link, entered my comment, then pressed the "submit" button. The page which was returned politely informed me that my comment was rejected, and listed several possible reasons for the rejection. Unfortunately, IT DISCARDED MY DATA.

Don't discard my data! It frustrates me as a user and makes me unwilling to return to the site. If I must be logged in to submit a comment, force the login before accepting my input.

I registered on developerworks, logged in, and clicked the "add comment" link again. I added a short dummy comment to confirm that I was now able to add comments; the comment posted successfully.

I wrote a new response (I assume it was a little better than the first comment, since second drafts are commonly better than first drafts). After about 20 minutes of working on that response, I clicked the "submit" link. My comment was rejected again, with the same list of possible reasons for the rejection. Unfortunately, IT DISCARDED MY DATA AGAIN.

Don't discard my data! It makes me feel stupid, and then I need to remind myself that I'm not stupid, the software which should be working for me is instead making me do the work.

I assume the second failure was due either to my inserting URLs into the text, or to the length of the text I was trying to post. I don't know which it is, and I'm frustrated enough with the developerworks site not to care which it is.

I gave up on trying to post a useful comment. I left a short note that my comments had been rejected twice, and if the author wanted my comments, he would need to send me e-mail.

Thursday, November 12, 2009

Systematic Source Formatting

When my team switched to Extreme Programming in March 2003 (XP by 3/03), we decided to try all the practices and all the recommendations. There were plenty of bumps and bruises as we learned what worked for us and what didn't work for us. We learned that the original XP descriptions were too light on testing for our business needs, and we learned that integration tests worked better for us than pure unit tests with mock objects. Those can be topics for another time.

One of the surprisingly effective results of that adoption of all the practices and all the recommendations was using automated source code formatting to assure all our code looked the same. We had a diverse team of people developing the code, some in the U.S., some in Germany. That diverse team of people had different opinions and attitudes about the "one true way" to format source code. Those different attitudes and opinions led to annoying little changes from one file version to another as the files changed hands from one person to another.

Since XP espouses "no code ownership" and allows anyone to be anywhere in the code, those formatting differences tended to worsen as people moved through the source tree.

We took a "top down edict" approach to implementing "no code ownership". I decided (as the manager) that we would use a program to format all our Java source code as part of each compile. The concept was that developers would no longer need to think about formatting their code, a program would do it for them. We inserted the open source version of the "jalopy" source code formatter into our standard build process and then added an additional step to systematically format the code once a day with the same process, in case someone forgot to format before they committed changes to the master.

The initial adoption of the change caused some angst, since the chosen format (Sun Java code formatting guidelines) disagreed with the "one true way" which some of the team expected. We pushed our way through that initial hurdle and found over the course of years that automated source formatting freed us from a number of problems.

Positive results of our switch to automated source code formatting:

  • Code looks the same no matter who wrote it
  • Changes in the source master are "real changes", not white space shuffling
  • Developers don't need to think about source code format, they can insert code however they want and the formatter will fix it to meet the standard

Negative results of our switch to automated source code formatting:

  • A "diff wall" was created at the point where we first ran the automated source code formatting tool. The automated formatting created a single massive change to the source master to convert from the old ad-hoc format to the new consistent format. I worried at the time that the "diff wall" was going to be a real problem, but it never appeared as a problem in our work
  • Command and control style management is frequently a bad pattern and should be reserved for rare occasions (our major process change from waterfall to Extreme Programming was done as a manager dictate from above, and this was "hidden" in that process change)

There are newer tools to support automatic source code formatting (astyle seems to be popular) and I hope to persuade my new organization to adopt automated source code formatting. I'm not in a position to be the dictator in the new organization, so changes like that are more difficult to create.
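
To make the idea concrete, here is a hypothetical astyle invocation; the path and style choice are illustrative, and the flags are worth checking against the astyle documentation for your version:

    # Format all Java sources under src/ in place with astyle's attached-brace
    # "java" style; --suffix=none skips creating .orig backup files
    astyle --style=java --suffix=none --recursive "src/*.java"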

Sunday, November 8, 2009

Deliberate Practice - Mary Poppendieck Talk

As a manager at PTC, I need to write a performance appraisal for each of my direct reports. I will also receive a performance appraisal from my manager to help me identify strengths and weaknesses and set goals for the coming year. While preparing my own self-evaluation, I remembered reading about the concept of "deliberate practice". I'd already been trying to apply my understanding of the idea, but decided to listen to an original source before including more about it in my self appraisal.

Mary Poppendieck's Agile 2009 deliberate practice talk describes deliberate practice as practice which is intentionally focused on improving performance. I want to improve my performance. Mary highlights four key components that are required for a person to be using deliberate practice:

  • Mentor - a high skills expert to review, critique, and highlight flaws

  • Challenge - tasks that require greater skill than we currently possess

  • Feedback - review and analysis of results used to improve future attempts

  • Dedication - hard work, time and energy applied diligently


I don't have a mentor for the things I'm trying to improve! At least, I haven't identified someone as my "coach" and had them agree to review, critique, and highlight flaws in my performance. Flaw number 1 (and the most glaring flaw) in my recent improvement attempts.

Many of my tasks and assignments are the same assignments I've had before: coordinating, planning, discussing, and interacting with others. However, I always find those tasks challenging because I don't feel like a naturally social person. Negotiating, discussing, persuading, and debating do not come naturally to me. There is room to improve in the "challenge" area, although I don't see it mattering as much as the absence of a coach in my improvement efforts.

Most of my activities have feedback built into the activity in one form or another. Meetings which start on time, stay on task, complete their objectives, and end on time leave me feeling refreshed, invigorated, and useful. Meetings which start late, wander aimlessly, don't have objectives, or extend beyond their scheduled end leave me frustrated and edgy. That's a form of feedback. However, there are other feedback forms (like team retrospectives) which I've not been using faithfully lately, and need to start using again.

Dedication may actually need some "negative attention", since lately I've been spending too many hours at work and not enough hours with my wife and children. Mary's talk notes that expert performers in other fields (like music) have discovered that they cannot apply more than 3 hours of deliberate practice at a time because it is too tiring. They stop, change tasks, take naps, or otherwise refresh themselves rather than continuing, and risking developing bad habits by practicing poorly.

With the identified gaps in my efforts to improve, now it is time to choose a mentor and start hearing the performance critiques, then acting on them. Finding a mentor seems like a difficult task for a manager. I want a mentor who is

  • Regularly and naturally exposed to my performances, without requiring that I present them a summary of what I did. Swimming coaches do not ask the swimmer to describe the most recent swim, they watch the swimmer and then tell the swimmer directly and openly what could have been improved in that swim. My mentor needs to be someone who "watches my performances" on a regular basis as part of their normal work

  • Able to spend time and energy critiquing my performances with focus on improving them

  • Credible as an expert. My request for coaching is an act of trust and that extension of trust requires that I believe in the skills of the person providing the coaching. I may disagree with their perspectives, challenge their ideas, and still want their coaching

  • Interested in my success. Without interest in my success, I doubt the coach can be trusted to provide excellent feedback


I'm sure there are other things I need in a mentor as well, but that list already worries me. Those who are regularly involved in my work tend to be my direct reports, my peers, and my manager. My direct reports aren't very interested in coaching their manager (other than possibly to smile with him about his many failings). My manager is already a mentor by being my manager. My peers and I frequently disagree on methods and techniques and so I'm not sure peers are the best source of mentors either.

All this musing might also be less useful if, instead of a mentor, I need to become a "buccaneer scholar" as suggested by James Bach. James suggests that I should take responsibility for my own education and for my own learning. That may make seeking a single perfect mentor a waste of effort; rather, I could accept that there are mentors all around me, and those mentors can provide useful information at times, and information to be ignored at other times.

Another alternative is to consider Bob Sutton's questioning of the value of annual performance appraisals. So many ideas to consider, so many things to learn (the act of writing this has already taken me on a different path than I expected when I began...).

Thursday, November 5, 2009

Was Our Switch to Git a Mistake?

My team is part of a larger, multi-team, multi-site organization in our company. The corporate choice for software configuration management is ClearCase. Unfortunately, my team is "remote", and "small". ClearCase does not handle remote teams well. I could find more polite ways to say it, but that is the simple result of our experiment. ClearCase was costing us far too much time due to its poor performance over a wide area network.

We solved the ClearCase problem by creating a bidirectional bridge between ClearCase and Subversion. My team's interface to the source master became Subversion. We saw great performance improvements, and were seeing the ClearCase updates within a few minutes of their arrival on the ClearCase master. The bridge was conceptually very simple, and it worked very, very well for our needs. Life was good.

We were then faced with a new, even more challenging problem. The new development work needed to be spread even more widely than the previous development work, with a larger number of teams involved and more dispersed geography. The new work would create multiple products, each with their own release cycle and their own development lifetime.

At the time we were making this transition, I'd been experimenting with the git version control system. Git is version control software created by Linus Torvalds when he needed to switch from BitKeeper for Linux kernel development. It runs very, very well on Linux. It is significantly faster than Subversion, which (in our environment) is significantly faster than ClearCase. Performance looked like a big winner.

Another challenge of the new environment was branch-related. The new teams thought their development model would likely be "branch intensive". My prior experiences with branches have been with CVS and Perforce, where branches are globally visible and merging between branches is a hassle. I hate branches. However, considering that the new world would be "branch intensive" and Subversion is generally not perceived favorably for branch management, we didn't want to use Subversion in a "branchy" environment.

With those two needs, distributed teams and branch intensive environment, we skipped Subversion and went looking. My recent git experience (and recent Mercurial experience) biased me in favor of a distributed version control system. A key opinion leader in the company had also been using git in a subteam of a very large project, pushing their results back to ClearCase. They reported positive results. My experience had also been positive while I was experimenting with taking a work project off on a "tangent". Git worked well for me, sitting on my underpowered Linux box doing my personal "skunk works" project.

We chose git as the team source control system.

Unfortunately, I had failed to detect my own biases, and the biases of the other early adopters of git. Those biases were very different from the biases of my co-workers.

I'm a command line fan. I'm old enough that my first high school experience programming computers was with the newly installed terminals to the school district mainframe (thanks Davis School District and Layton High for spending the money, the time, and the pain to install those machines!), then I moved to a University that required I submit programs on punch cards (makes me sound old). Before I left the University, they had upgraded to dumb terminals communicating with a DEC minicomputer.

As a command line fan, I found the git "user interface" perfectly comfortable and very similar to CVS, Subversion, and Perforce. There were a few surprises while I tried to understand distributed version control, but those surprises were related to version control concepts, not the specifics of git.

Unfortunately, many in my team and in other teams are not command line fans. They are accustomed to productivity accelerators like graphical user interfaces, integrated development environments, and mouse clicking to perform work much faster. The transition to git has been painful for them. In addition to my transition experience (centralized vs. distributed, new commands, new concepts), they've also had to deal with transitions from robust GUI tools (TortoiseSVN, Perforce Windows client, etc.) to weak and brittle GUI tools (GitSVN, gitk, git gui, etc.).

The challenge has been made worse by our decision as a management team to isolate teams on branches. Two of the managers in the team come from a large scale development organization (5-10x larger than our current organization) and they are accustomed to requiring branches as a way to isolate one team from the potentially breaking changes made by another team. The price of that branch isolation is that we now are required to perform more frequent merges of work, with the resulting complexity and frustrations which come from merging with conflicts. It gets worse when the files to be merged are coming from the Visual Studio IDE, and the meaning of the contents of the files is not always clear.

I think the branch configuration decision has done more damage than the choice of git, but that is probably biased (again) by my command line centric mindset. Unfortunately, we're far enough into the project that we aren't willing to switch SCM systems. We'll remain with git for at least the duration of this project, glad to have a source master, glad to have it connected to our continuous integration servers, and glad to not have the awful performance of remote ClearCase.

In all fairness to git, I still remember the growing pains when we switched from CVS to Perforce. I whined mightily at paying hundreds of dollars per developer for our corporate standard SCM system. Then I whined mightily at the tool changes and use model changes forced upon us by Perforce's way of thinking. After 6 months or a year, I discovered that I had changed my way of thinking, and was now very comfortable using Perforce, getting value from its way of branching, and being very grateful that it was so fast.

Maybe 6-12 months from now I'll say the same things about git. Maybe it is a part of "climbing the learning curve", and unfair to judge our experience this early. Or maybe not...

I still don't know what we should have chosen instead of git, since it is not clear to me that there were any better alternatives for my team at that time. The company was not willing to purchase another SCM system, since they were already paying for ClearCase. That excluded all the purchased SCM systems (Perforce, Microsoft Team System, Accurev, BitKeeper, etc.). The teams were known to be widely distributed, so that pushed us towards distributed SCM. The benchmark comparisons suggested that Git was faster than Mercurial in many operations, and the Bazaar people were still not settled on their final "on disc" format. Subversion was not well perceived for handling "branchy" development, and CVS was worse than Subversion.

The Linux kernel handles massive amounts of change (averaging 2-4 changes per hour continuously for the last 4 years) from many, many developers. It scales well for that widely distributed, branch intensive team, yet we're struggling with it. Of course, Linux kernel developers are even more likely to be command line biased than I am, and scaling the tool is not the issue getting in our way; it is more our choice to be "branchy" and the user interface weaknesses in git.

So many things to learn, so little time...

Thursday, October 29, 2009

Dangerous Interruptions

Bob Sutton's blog post on reducing medical interruptions reminds me of Sunday mornings when I take my mother-in-law from her nursing home to church. I frequently interrupt or disrupt the nursing staff with my out of sequence, unpredictable arrival, and with my desire to get "Nana" to church on time.

Nana's trip from the nursing home to church starts from her room at the nursing home. I arrive at her room between 10:20 AM and 10:40 AM (depending on how late I arrive from home). Sunday is the only day we do this, so I tend to disrupt all sorts of people at the nursing home, including the nursing staff.

My mother-in-law is a diabetic, and church runs over the noon hour when she would normally receive her medications. The nurse would normally check her blood sugars right before lunch, then based on the results of that blood sugar test, she would select the proper dose of insulin, draw that dose, and administer the dose to Nana. Each of those steps has a potential for error, and each of those steps needs careful thought and attention to detail by the nurse.

Because I arrive as much as 90 minutes prior to lunch, and Nana will be gone for the three hours of church, the nurse is required to interrupt her current medication process, test Nana, medicate Nana, and then return to her previous task. The nurses are always very kind about handling the interruption, and they provide great care. I worry that my interruption may cause them to make unnecessary mistakes...

The study which Bob Sutton references was performed in the UCSF hospital system in San Francisco and is described in a San Francisco Chronicle article. The study was an attempt to reduce the frequency of medication errors at hospitals. They used both low-tech solutions and high tech solutions to reduce medication errors by nurses.

The low tech solutions described in the article focused on reducing nursing interruptions when administering medication. The article describes "do not interrupt" sashes and vests, closing blinds to prevent distractions, and other relatively simple techniques to reduce interruptions during the crucial activity of administering medication. The article noted that the nursing teams were encouraged to develop their own solutions, within their own working environment (own your process). It appears from the study that nurses administer medications (a detailed technical task) less accurately when they are interrupted than when they are undisturbed. It also appears that nurses allowed to explore improvement techniques tend to improve.

The study may not directly apply to my software development team, but I think there are several lessons I should take from the study. They are lessons others have noted, but the article serves as a good reminder.


  • Interrupting technical work (pair programming, software design, software testing, etc.) increases the chances for error. I need to interrupt my people less
  • Allowing and encouraging people to improve their own processes, their own ways of working is likely to generate improvement. I need to find ways to acknowledge my mistakes openly, learn from those mistakes, and encourage others to do the same. A software bug is a late manifestation of a mistake, mistakes will happen in human endeavors, and we want to learn from those mistakes, not hide them until later
  • Fear of failure tends to hide those failures, particularly in organizations with a culture of fear. Sutton's posting notes that hospitals which acknowledge and seek to reduce their drug administration errors tend to report 10x more drug administration errors than units with a more punitive attitude towards errors. The failures will still occur, but they will be discovered later, and likely be discovered with more damage done, or higher costs incurred from the failure. It would be a gross mistake to declare the nursing unit which reports 10x the drug administration errors as a failed unit without further investigation. If the clinical results of the unit are better (fewer deaths, fewer injuries, lower costs, etc.), then the larger number is actually highlighting their good practice of learning, rather than the bad practice of medication errors. Don't worry about bug counts, worry about what bugs can tell us about how to be better
  • "Best practices" at one location were not necessarily "best" at another location, although sharing practice based experiences seems like a good way to learn from mistakes and thus make fewer mistakes in the future.

Friday, October 23, 2009

Learning and Responding

A mistake was made today. Code was merged from one branch to another branch, and the destination branch was broken in its intended use. The break was detected late in the day for the team that caused it, and they had already mostly left for the day. The break highlighted all sorts of weaknesses in how I was handling things, including:

  • Why didn't I make it clear to everyone both the purpose and the target configuration of each branch? Poor communication: I had not made it clear what the purpose and expected configuration were for each branch, and they assumed that since they could see the branch building in one case, that was sufficient
  • Why didn't the person who performed the merge detect the broken build on the continuous integration server? Unclear information sources: We had configured 3 different continuous integration servers because we needed three different configurations. Unfortunately, I then "muddied the waters" by having one of the branches made compatible with all three configurations, and actively visible on all three. When the developer performed the merge, they saw that it was "green" on the screen they were watching, and thought they were done. It had gone "red" on the other two servers, and those two were the most important to my team
  • Why wasn't the team which performed the harmful merge able to repair their damage? Unavailable spare configuration: They had no machine available to them which matched the problem configurations and could be used for diagnosis and development. Their machines were all configured for their needs, and the break was in an area needed by other teams
  • Why did it take half a day to recover from the damage? Inexperience with our tools: We recently switched from Perforce to Subversion to Git, and the transition has left us less skilled in dealing with the complexity of this type of failure.

All told, the damage cost my team less than a day to recover, and because we're using a distributed version control system, they were able to continue their work locally, but they were not able to push to the central repository.


Moral of the story: Communicate clearly, listen carefully, and be willing to change as better ideas arrive

Saturday, October 10, 2009

Ask More Questions - Get More Answers

I had been tolerating a nagging problem with my web browser on multiple machines and now have a solution, because I had the presence of mind to finally ask a question.

Firefox is my preferred web browser because it includes the Gmarks plugin. The Gmarks plugin brings my Google bookmarks into the Firefox menu. That makes portable bookmark management easier (all my bookmarks are stored at Google, visible from any web browser on the internet).

I also prefer the Foxit PDF reader instead of the Adobe reader. It feels faster, cleaner, and seems less likely to be attacked by malware (smaller installed base, newer code base).

Unfortunately, Firefox would report "OCX failed to load" when I tried to open a PDF file. I had found all sorts of strange alternatives for opening PDF files in Firefox. For example, sometimes I would download the PDF file, then open it in Foxit from the local machine. Other times I would copy and paste the URL from Firefox to Internet Explorer, then use Internet Explorer to open the PDF file.

All those strange alternatives (workarounds, fixes, etc.) have now stopped. I was weary of the alternatives, so I used Google to search for the error message. In classic Google search fashion, the first page had a perfect match for my needs: a post which described my problem and an easy solution to it.

The moral: Ask questions sooner, don't be afraid of questions or their answers. I'll need to think more about why I didn't ask the question sooner...

Thursday, September 24, 2009

Feedback Junkie

I was reminded again this morning that I am a "feedback junkie". My team is on a new project and the backlog items are starting to appear as small working pieces of code. In addition, we're connecting with other groups in the company and connecting them to our continuous integration server, showing them how it works, showing them why it helps, and hoping that together we can work faster because we have better feedback systems.

We've already exceeded my 5 minute threshold on the feedback system. My arbitrarily chosen rule of thumb is that we need to know a checkin is "good" or "bad" within 5 minutes of that checkin. I know that is not always possible. I know there are many tests and sets of tests which will take more than 5 minutes. Still, that 5 minute barrier is the goal, and we'll keep splitting, partitioning, and refining to keep the feedback system within those 5 minutes.

Now, we continue fomenting revolution, getting other people just as addicted to feedback as I am...

Monday, August 24, 2009

Experimenting with Windows Live Writer

I heard about Windows Live Writer in the context of Windows 7.  It proclaimed itself as able to help me do a better job posting to my blog.  Thus far I’ve struggled to post multiple pictures and to have the formatting I want.


The idea is that I should be able to easily embed multiple pictures, wrap text around the pictures, and generally manage the blog more directly with their editing tool rather than using the HTML editor provided by blogspot.com.

We’ll see…


Thursday, August 13, 2009

Python WIN32 extensions on Windows 7

I started using Windows 7 a few days ago and needed the Python interpreter. I also needed the WIN32 extensions for Python.

The Python installation worked just fine: I downloaded Python 2.6.2 and it installed with the expected Microsoft UAC prompt confirming that I truly intended to install.

The WIN32 installation prompted for UAC as well, but then failed with an obscure error message. However, I was able to successfully install if I opened a command processor window with "Run as Administrator", and then ran the pywin32 installer from there. With that "magic", the installer succeeded and had the happy message "The pywin32 extensions were successfully installed".
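
In case it helps someone else, the sequence that worked was roughly the following; the installer file name is illustrative, use whichever pywin32 build matches your Python version:

    REM From a command prompt started with "Run as Administrator"
    REM (the installer name below is illustrative)
    cd %USERPROFILE%\Downloads
    pywin32-214.win32-py2.6.exe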

Thursday, July 16, 2009

CAST 2009 - Red, Yellow, Green Facilitation

An amazing conference has ended its formal sessions. The last 3 days have been filled with insights about software testing, management, and measurement with a group of serious thinkers and practitioners of software testing.

I was fascinated by the facilitation technique used to support the high amount of conversation that occurs at CAST. The facilitation technique allowed a group of engaged listeners to discuss, debate, question, and interact with the presenter in a fluid but surprisingly orderly fashion.

Imagine the types of problems which are likely to occur when a presenter brings a controversial topic to a group of highly engaged, forward thinking professionals. The audience (software testers) consider it their professional role to challenge the status quo, see things differently, and understand those differences more deeply. With that type of audience, a typical presentation would rapidly lose focus as many of the testers challenge, question, and discuss their insights.

The CAST 2009 facilitators used "K-cards" and some agreed principles to assure that the presentations and resulting conversations are a good balance between allowing the presenter to complete their ideas and allowing the audience to interact with the presenter. K-cards are a trio of colored 3x5 cards with a unique number assigned to each member of the audience.

The red card is the "Burning Issue" card. An audience member raises their red card to interrupt the presenter or any current discussion. The red card is used to raise points of order, to flag blocking problems (like poor facility acoustics), or to allow a meaningful interruption of the presentation with key information. The red card can be confiscated by the facilitator if the facilitator feels it is being misused or abused.

The yellow card is the "On Stack" (or current topic) card. It signals that you have a question or comment related to the current thread of discussion.

The green card is the "New Stack" (or new topic) card. It signals that you have a question or comment which is not related to the current discussion. The facilitator keeps a list of numbers on a sheet of paper which reminds the facilitator the expected order in which new topics will be addressed.

The interaction during the sessions was orderly, insightful, and well managed. Ideas were presented, disagreed upon, discussed, and then new ideas were managed as well. The general format of a presentation allowed the first half of the allotted time to be dedicated to the presenter, while audience members would listen and if necessary, would raise a red card to flag a point of order or question which justified interrupting the speaker. The facilitator during my first few sessions was quite patient with my tendency to interrupt and did not take my red card.

Once the presentation was complete, or presentation time expired, the session switched to "open session". The facilitator called the number of the first "new topic" card they had seen during the presentation. The audience member whose number was called (I was number 15 throughout the sessions) asked their question or made their comment and the presenter responded, with some "back and forth" dialog between questioner and presenter.

If someone else had an "on topic" comment or question, they would raise their yellow card. Throughout the session the facilitator is noting the order of appearance and resolution of green ("new topic") and yellow ("on topic") cards on a notepad. "On topic" comments and questions take priority over new topics, and burning issues take priority over same thread topics.

With that simple mnemonic device, a skilled facilitator, an engaged audience, and a presenter ready to engage in dialog about their topic, the conference moved forward very well.

Paul Holland, the lead facilitator of CAST 2009, noted that there are some other subtle techniques which the facilitator can use to further improve the meeting. For example, if there are especially strong or high expertise individuals in the room, it is OK (and useful) for the facilitator to place those individuals at the "bottom" of the "on topic" stack, even if that is not the order in which they raised their card. By placing the experts at the bottom of the on topic stack, it allows the chance for others to present the question or observation which the expert would have presented, and involves other less expert people in the discussions more effectively. I believe some of the facilitators even chose consciously to place experts at the bottom of their "new topic" stack so the less expert would be involved in the conversation.

That stacking system worked well in the session I attended with experts. There were cases where the expert would be called upon and would call "pass" because their idea or comment had already been covered in the discussion.

The system is called "K-cards", named after Paul Holland's wife Karen. Before K-cards, Paul facilitated by having people learn three hand signs to signal the same meaning as the 3 colored K-cards. One of the attendees complained that the hand signs were too complicated. Paul was complaining to Karen in mock outrage that someone would not be able to learn 3 simple hand gestures. Karen suggested, "Why not use different colored cards?" They made the switch, and the cards are now named "K-cards".

Paul noted that the Los Altos Workshop on Software Testing (LAWST) pioneered the original format which has been improved with K-cards.

I will attempt to use K-cards in sessions where I facilitate a discussion with a large group, and I may discuss the idea with others. We have a user experience workshop coming soon, and that workshop seems like an interesting place to try this technique as a way to manage the many opinions, discussions, and conversations which will naturally arise.

Thanks to Paul Holland and to the rest of the CAST 2009 facilitation team for showing how effectively a simple device can encourage interesting, effective, actively progressing conversation!

Tuesday, May 12, 2009

Another Mistake - Inconsistent Systems

The build broke again. It seems to have been the same root cause as the last break. The build machine has something different about its configuration which caused it to reject code which was allowed on a developer's clean installation of Visual Studio 2008 and the Microsoft .NET framework.

The moral of this story: Don't ignore the first failure, since the root cause of the failure won't "just go away" until it is understood and repaired.

Thursday, May 7, 2009

End of day anti-pattern

I made a foolish mistake today. I should have known better... Now it is time to resolve to make new and different mistakes tomorrow, instead of making this same mistake yet again.

Wait For Green

Two of the programs used for our installation only make sense when run as an administrator. The Windows Vista and Windows Server 2008 user account control (UAC) code does not allow administrator privileged users to run programs as an administrator unless they explicitly select "Run as Administrator", or if the program has been marked with a manifest which tells the operating system to always run this program as an administrator.
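
For reference, the manifest mechanism amounts to a requestedExecutionLevel declaration embedded in the executable. A rough sketch of doing it by hand with the mt.exe tool (file names are illustrative; Visual Studio does the equivalent through project properties):

    REM app.manifest declares, inside its trustInfo element:
    REM   <requestedExecutionLevel level="requireAdministrator" uiAccess="false" />
    REM Embed it as the executable's manifest resource (resource id #1)
    mt.exe -manifest app.manifest -outputresource:MyInstaller.exe;#1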

Those two programs were using the default manifest, and one of the two was adding itself to the "Start Programs" menu. When a Windows Server 2008 user clicked the program with user account control enabled, it crashed painfully and offered to send a report of the problem to Microsoft. We didn't want that.

Visual Studio has an easy way to add manifests to applications, and it seemed simple enough that even a manager could do it...

I made the change, compiled the code, reviewed the code with a real programmer, and let it sit for a few days. This evening, just before leaving for home, I submitted it to the source master and left the building for the swim team dinner.

Don't Do That!

That was a mistake. The compilation failed on the production build machine, even though it had worked flawlessly on my machine. Apparently the production build machine has a different version of one of the compiler components in its path, and that different version did not recognize the options inserted by my version of the Visual Studio IDE. Ugh.

Check the Continuous Integration Server After You Commit!

I've reverted my change, but not before keeping someone else up late into the night because I was not paying attention to the results of my code change.

(There's another thing which is "have fast continuous integration", but we're still working on that...)

Saturday, April 25, 2009

"Benefit" that doesn't provide benefit

The U.S. government has allowed "pre-tax" medical reimbursement for several years now. The system allows my employer to take money from my pay before they compute and deduct the personal income tax, social security tax, etc. Those funds are placed in an "account" from which I may request reimbursement for qualifying medical expenses.

The medical reimbursement account has the effect of making some of my health care payments "tax free", stretching my health care funds further by the amount of my tax rate. That is a nice benefit, and I've used it for many years.

As an added benefit, my previous employer and my current employer both offer a Mastercard which can be used to pay eligible medical expenses directly, instead of paying them "out of pocket" and then requesting separate reimbursement.

That Mastercard seems like an ideal solution. It reduces my paperwork and it could reduce my costs by not requiring that I mail evidence for the reimbursement. It could reduce the costs for the benefits provider since they would not have to process the reimbursement evidence either.

But No, There's More (or Less)

Unfortunately, it doesn't work that way. It appears that almost every time I use the "Benny card" (the Mastercard that pays from the reimbursement account), the provider is required by the government to gather proof of the validity of the expense.

The sequence I wanted was:

  1. I pay a medical expense with the Benny card
  2. The provider pays the expense and deducts the expense from my account


The sequence I get is:

  1. I pay a medical expense with the Benny card
  2. The provider pays the expense and deducts the expense from my account
  3. The provider requests proof of the validity of the expense
  4. I find the receipt (by this point, several weeks old), copy and mail the copy to the provider
  5. The provider processes the receipt and decides it is valid (or not)
  6. If not valid, the provider rejects our claim and requests repayment of the money they had paid

The actual sequence makes using the Benny card worse than the old reimbursement process, not better!

I'm not clear on the root cause of the problem, but some of the alternatives to this sequence might be:

  • Stop requiring validity checks of expenses, accept some fraud as cheaper than the alternative
  • Declare all expenses from certain providers as "valid" (doctors, pharmacies, etc.)
  • Stop pre-tax medical reimbursements and either find another way to provide comparable benefit, or admit that the benefit is not valuable enough for the expense it creates


My moral: Be careful of unintended consequences. I doubt our elected representatives or the people who planned the pre-tax medical reimbursement system would be pleased that I have decided never to use the Benny card again, because its use is more onerous than the old reimbursement system.

Saturday, April 18, 2009

Personal Continuous Integration

I've been experimenting for some months with distributed version control systems (Mercurial and Git) on various personal projects.

The most interesting of the personal projects was a branch from an active code base that I wanted to extend in a slightly different direction. Because my changes are going in a slightly different direction, they were not likely to be included in the original code base for a very long time. That meant I was going to be "on a branch" from that original code base for a very long time.

Past experiences with CVS and Perforce suggested that being on a branch for a very long time could be a problem. Perforce was not as bad as CVS, but still it was difficult to maintain a separate personal branch and "keep it healthy". Inevitably I would make changes that were harmful to the main code, and not realize I had harmed it because I was not getting all the feedback that is available to developers on the main code.

The main code developers have a continuous integration setup which runs automated tests in many different configurations and summarizes the results to a central location. The results are easy to browse and easy to watch as they evolve.

Since I'm not on the main code, I don't have that nice infrastructure to support my private branch. With distributed version control, I have all the power of a version control system (incremental checkin, revert to a previous point in time, branching, merging, etc.). Why can't I have all the power of a continuous integration server for myself as well?

Why Not Use My Own Machine?

I'm sure others have already realized this, but with the ease of installing, configuring, and using the Hudson continuous integration server, I can run my own continuous integration server which compiles and tests code from my personal version control, before it is ever pushed to any other person (or system) in the organization.

Setup Idea
  1. Install Sun JDK (needed by Hudson)
  2. Install Hudson continuous integration server
  3. Run the Hudson continuous integration server
  4. Install distributed version control (Mercurial or Git or ...)
  5. Install Hudson plugin for selected distributed version control
  6. Clone the development repository from the company central location
  7. Create a new Hudson job which monitors the local repository, checks out the source from the local repository when something changes, compiles it, tests it, and reports its results
  8. Start making local changes, checking them in, and enjoying the benefits of continuous integration test runs while developing the code, without the danger of checking into a central repository before the code is "done done" (a command-line sketch follows)
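
A minimal sketch of the core of those steps on a developer machine, assuming Hudson's standalone WAR and a Git repository (the URL and port are illustrative):

    # Run Hudson with its built-in servlet container (requires Java on the PATH)
    java -jar hudson.war --httpPort=8080 &

    # Clone the company repository so there is a fully local copy to build
    git clone ssh://scm.example.com/project.git

    # Then, in the Hudson web UI (http://localhost:8080), create a job that
    # polls the local clone, builds on each local checkin, and reports results
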
I think those setup steps could be completed in less than a day, and that day would repay itself within the first 4 weeks of work as developers became more confident in their changes before they shared them with others. I don't yet know if the idea will work for "real" developers, since I'm the manager trying to provide them the tools and environment to be successful, rather than a full time developer. We'll see if the idea actually shortens the feedback loop, and if it is viewed positively by the developers.

What Will They Value?

I think developers will gain
  1. Rapid feedback on their changes
  2. Background work (compiling and testing while they work on other steps)
  3. Isolation of their changes from others until they are "done done"
  4. Visibility to the impact of changes from others in their test environment (integrating changes from others onto their branch is a "trigger" event, just as checking in their own changes is a trigger event)

CAST 2009 - Early Bird Discount Extended

The 2009 Conference of the Association for Software Testing is being held in Colorado Springs, CO this year. I've registered and am preparing to attend Dr. Cem Kaner's first day tutorial on metrics and how they can serve our stakeholders.

The early bird discount has been extended to May 1. Now is a good time to persuade your manager that a conference with Jonathan Koomey, Jerry Weinberg, James Bach, Cem Kaner, Michael Bolton, Mike Dwyer, and Scott Barber is a great investment for your business.

Wednesday, April 8, 2009

Five Whys and Four Fingers Pointing Back At Me...

A bug report at work traveled a somewhat strange path as we tried to deduce the root cause of the problem. That strange path reminded me that very frequently when I follow the "Ask Five Whys" heuristic, I discover that there are things which I can change which will improve the situation.

In this particular case, a bug was found, fixed, and then reopened because it was apparently not fixed. There were then several e-mail exchanges between various people as they tried to deduce why the bug was still not fixed. The submitter was confident that the bug was still in the software, so it could not have been fixed. The fixer was confident the change had been submitted, so it must be a problem somewhere else. Others in the conversation wondered if there were additional complications which had not been considered. All of those ideas (and more) could have been correct.

In this specific case, a series of simple gaps were enough to mislead us all.

  1. A translation mistake was discovered in late March
  2. The bug report was assigned to the wrong person, but e-mail exchanges alerted the translation team that the bug existed and needed to be fixed
  3. In early April the corrected translation was added to the source master
  4. Just before the corrected translation was added, a new build was generated as part of our once a week schedule of builds
  5. The submitter tested the fix with the build generated just before the fix was added
  6. The e-mail discussion was then started trying to understand why the bug was not fixed

When I started asking "Five Whys", I thought it was obvious where the problem originated, and even how to fix it. The bug had been sent to the wrong person, and then when the bug was fixed the bug report was not updated to show which build included the fix.

However, as I stared at the problem further, I realized there was a more significant problem than I had seen initially, and that more significant problem has caused other issues as well.

Why did the fixer need to waste time guessing which build would include the fix? Couldn't a system tell the submitter when their bug fix was in a build? For example, most bug fixes will reference the bug number in their submit message. Why not pass that information automatically to the submitter, or to the bug report, so the fixer does not have to think about the number or name of the next build?
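
As a rough illustration of the kind of automation I mean, using git as an example and assuming commit messages mention bug numbers in a form like "Bug 12345" (the tag name and message convention are hypothetical):

    # List the bug numbers mentioned in the checkins included in this build;
    # a post-build script could append these to the matching bug reports
    git log --pretty=format:%s PREV_BUILD_TAG..HEAD | grep -oE "Bug [0-9]+" | sort -u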

That would have helped, but it appeared that a bigger problem was that the tester did not have easy access to the list of changes which had been made in the build being tested. That list of changes was difficult to find, and difficult to read (I don't find CruiseControl output especially friendly) and probably not known to the submitter at all.

Make Available Information Reachable - Reduce Guessing

When information is not readily available to our very smart people, they will apply skill and judgment and make the best assumptions they can with the information they have. Making that information more readily available will allow them to do their jobs better and reduce wasted work.

The root problem seemed to be someone else's issue, until my thought processes came back to highlight that it was really my problem. I'm the manager, and ultimately it is my fault. Sometimes it becomes obvious more quickly, other times it takes a little more time...

I admit as well that making information easily reachable by those who need it, when they need it, is only part of the answer. That would have helped the tester, but it did not help the submitter send the bug to the right person, nor did it help the fixer insert the right data in the bug report. There are so many "why" questions to ask, and so many ways to make small improvements that might help a little.

CAST 2009 - Koomey, Kaner, Weinberg, and Bach

Attend CAST

I've done it. I spent the personal funds to attend the 2009 Conference of the Association for Software Testing. Economic times are hard, conferences are difficult to justify to the company, but this conference looks too good to miss.

I've registered for Dr. Cem Kaner's tutorial on metrics and qualitative measurements. Considering the years I've struggled with quantitative measurement in the team, I'd love to hear a different approach.

I've learned plenty from the Association for Software Testing Black Box Software Testing courses ("Foundations" and "Bug Advocacy") and am looking forward to the next course ("Domain Testing"). I'm looking forward to this conference!

(I'm also excited that I'll be able to hang out in Colorado Springs, CO for a few evenings with my oldest daughter. She's an electrical engineer at a Colorado Springs company and has agreed to find a place where I can crash for the night.)

Saturday, February 14, 2009

Multi-stake Youth Dance - What Worked, What Didn't

Coleen and I are the co-chairs of the youth activities committee for our stake, a group of 10 LDS congregations in our area. That means we are responsible for a series of events for the youth age 12 to 18.

Multi-Stake Dance

As part of that assignment, we hosted a dance last night for the youth ages 14-18 from Loveland, Longmont, and Greeley. Other youth were welcome to attend (including the 17 who came from Fort Collins, and any who may have come from Laramie or Cheyenne).

The dance was an interesting exercise to plan, host, prepare, present, and repair. In the spirit of "relentless improvement", here are my observations about the dance.

What Worked Great
  • The Eiffel tower that Matthew Pond created was amazing. He constructed an 8 foot tall Eiffel tower framework from 2x4 lumber. Dionne Lee covered it with butcher paper and drew squares on the paper to give the feel of the Eiffel tower. In an unexpected twist, many of the youth at the dance signed their names to the tower. Good conversation piece and a good centerpiece.

  • The chocolate fountain was a big hit and was surprisingly tidy. Coleen provided platters of rice krispie treats, bananas, apples, and angel food cake squares which the kids could dip into the fountain, then eat as a chocolate coated treat

  • Priority Five, an a cappella group from Berthoud, performed 4 songs, all with the intent that the songs were dance numbers, not performance pieces. The youth didn't seem to dance much, but we encouraged them to dance, and they were thoroughly impressed by a live performance from a first class a cappella group.

  • A carabiner through an eye bolt is a great way to string lights in the gym, and makes it easy to remove them when the dance is finished

What Worked
  • Tourist posters from member photos of the destination (we used "France" as our theme, and used pictures from Brittany West's trip to Paris). Printing the pictures in poster size was easy (upload them to the Sams Photo Club site, order the pictures online, and pick them up), and they looked good

  • Use the decorations as prizes. We gave away the posters at the end of the night, some to committee members, some to raffle winners. The youth who received them seemed happy to get them, and we had a few less things to take home after the dance was done.

  • Good musical selection and disc jockey work from Christian Dunn. His sound system was excellent, his fees reasonable, and his music selection seemed to fit the youth very well.

What Didn't Work As Well
  • Postcards as a mixer activity. We distributed postcards to the youth as they arrived and invited them to have their dance partners sign the postcard. Once the postcard was signed with 7 names, they received a raffle ticket for the end of dance drawing. Some of the youth enjoyed the activity, and the 5 raffle winners who were drawn all claimed their poster prizes, but it didn't seem to generate much enthusiasm among the larger group of youth.

  • Not enough setup and teardown help from the committee, in large measure because the dance was on a Friday night. Their school events and athletic activities compete for their Friday nights even more than they compete for their Saturday nights. I think teardown was also affected by the late hour. The dance finished at 10:00 PM, and we didn't leave the building until almost 11:00 PM.

What Didn't Work At All
  • Friday night is the wrong night for an event that requires significant preparation time. Coleen was on the run all day gathering food and supplies for the dance, I was on the run after leaving work early just before 4:00 PM, and still we had youth and adult chaperones arriving at the dance before we were ready.

    Saturday is the night for big events, not Friday

  • Don't forget the little details (need a checklist of little details to avoid forgetting them in the future)

    • Without keys to the church, setup will not happen

    • Buy much food early (as much as you can), since the day of the dance is filled with plenty of other things

    • Bring tools to the setup in case something needs repair or rework

    • Include more people in setup and teardown

    • Use an agenda during the chaperone orientation meeting so the meeting runs smoothly

Thursday, January 8, 2009

Down, but not Out

Christmas brought a very nice gift, pedals and cycling shoes (with cleats). I mounted them on the bike the day after Christmas and took them for a ride. They were nice.

While riding home from work today, I had to wait while a car turned ahead of me. While waiting, I fell over...

When I fell, I was still about 7 miles from home, so I had plenty of time to work through the bruising and battering that had just happened to my knee. It always seems so comical when someone falls over on their bicycle in TV shows, but the comedy was lost on me when I was picking myself up from the ground after the "tip over incident".

Live, learn, and keep riding...