
I dream of green

After spending a month trying to get my first round of large changes into the test harness system, I learned a lot of little things along the way. First off, I will admit I am not a rock star developer who can whip out perfect code without testing it (although I did do a hot patch to Talos that worked) or look at code and see what is wrong at a glance. Knowing that, I have found I can be successful by testing my code and following up on it to make sure it works and doesn't cause problems.

This patch literally rewrote 90% of the code in all of the test harnesses. Most of that was just moving functions inside of classes, but you can imagine the chance of failure. I started writing this patch in October, and after a few review cycles it was ready to land (check in, for you non-Mozillians). A month later we finally got it in… so why the delay?

Bitrot: This usually is not that big of a problem. Every patch runs into other people editing code in the same areas, but in this case with hg I was unable to fix the rejects just by editing the code that had conflicts. Because my patches modified such large blocks of code, a 3-line conflicting patch caused me a whole day of headaches, and I found it easier to back out those specific changes (hg revert -r xyz [filename]), apply my patch, and re-add those changes on top of it. Not the most ideal thing, but it works.
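
As a very rough sketch of that back-out-and-reapply dance (the revision id, file name, and patch name below are placeholders, and the exact steps will vary with how you manage your patches):

```python
import subprocess

# Placeholders: the revision before the conflicting change landed, the
# file it touched, and my own large patch.
CONFLICT_PARENT = "xyz"
CONFLICTED_FILE = "build/automation.py.in"
MY_PATCH = "harness-refactor.patch"

# Back the small conflicting change out of the file...
subprocess.check_call(["hg", "revert", "-r", CONFLICT_PARENT, CONFLICTED_FILE])
# ...apply my large patch now that it no longer conflicts...
subprocess.check_call(["hg", "import", "--no-commit", MY_PATCH])
# ...then re-add the backed-out change by hand and regenerate the diff.
```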

Reading the patch: I found that many times I would accidentally include a change in a patch that I didn't realize was there. So I would submit it for review and get odd questions back, or it would bounce back to me. I strongly encourage you, after you do your diff or qdiff, to read over the patch like a reviewer would before submitting it. If it is an update to an earlier patch, I diff it against the last version.
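
For that last step, diffing a new version of a patch against the previous one, something as small as this works (the patch file names are just examples):

```python
import difflib

# Example file names for the previous and current versions of the patch.
with open("harness-refactor-v1.patch") as f:
    old = f.readlines()
with open("harness-refactor-v2.patch") as f:
    new = f.readlines()

# A unified diff of the two patch files shows only what changed between
# versions, which is the part that needs another careful read.
for line in difflib.unified_diff(old, new, fromfile="v1", tofile="v2"):
    print(line, end="")
```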

TryServer: This is a great tool that Mozilla has set up where you can submit a patch and it will do a build and test pass on all platforms. So while I might develop my code on Linux or Windows, I don't have to build and run all the tests on my Mac, as TryServer will do it for me. I didn't realize this, but TryServer does builds and tests differently than Tinderbox (at least for mozilla-central). For example, there are no debug builds (or debug tests), no test packaging (tests are run out of the objdir), and no leak tests. So it isn't perfect, but it helps if you are not changing platform-specific code.

Random Orange: This is the term at Mozilla for tests that are known to fail randomly, causing the test run to turn orange instead of green. So while watching the Tinderbox Push Log you start seeing failures in your tests and need to investigate them. It is easy to search Bugzilla for the name of the test that failed, but in one case we didn't find it in Bugzilla and backed out the patch because of a debug leak test failure (which was a known random failure) when in fact the patch would have been fine.

Tinderbox Logs: While investigating failures (either random or caused by my patch), we need to look at the logs generated by the build and tests to see what happened. There are brief and full logs available, but I found that the brief logs were not that useful in telling me how the build was done or what command was executed. So it's the full logs… and all 30MB+ of data in one Firefox window = crash. I found Safari better at loading these large logs, but most useful was doing a wget and then searching through the log (it has a .gz extension, but it is really .html) for 'BuildStep ended' to find the right area to focus on.
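
As a small sketch of that last trick (the log URL is a placeholder for the full-log link you would otherwise wget), this prints every 'BuildStep ended' marker with its line number so you can jump straight to the step you care about:

```python
import urllib.request

# Placeholder: paste the full-log link here.  Despite the .gz extension
# the file comes down as plain HTML.
LOG_URL = "http://example.com/full-build-log.gz"

with urllib.request.urlopen(LOG_URL) as response:
    log = response.read().decode("utf-8", errors="replace")

# Print each 'BuildStep ended' marker with its line number instead of
# scrolling through 30MB of log by hand.
for lineno, line in enumerate(log.splitlines(), start=1):
    if "BuildStep ended" in line:
        print(lineno, line.strip())
```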

Although I never ran into these issues with the dozens of other patches I have landed, that must have been pure luck. Now, whenever I am ready to submit a patch, I plan to:

  1. Build on one OS in Opt and Debug
  2. For both builds, run leaktests.py, mochitest, mochitest-chrome, browser-chrome, mochitest-a11y, mochitest-ipcplugins, reftest, crashtest, and jsreftest (see the sketch after this list)
  3. Submit to tryserver and get all green (builds, talos, unittests)
  4. Review patch for mistakes, comments, extra code
  5. After review, check for bitrot; if there is bitrot, repeat
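
For step 2, I find it easier to drive all of those suites from one small script than to type each command by hand. Here is a minimal sketch of that idea; the commands in the table are placeholders, since the exact invocation depends on how your build and objdir are set up:

```python
import subprocess

# Placeholder commands: substitute whatever invocation your objdir uses
# for each suite (make targets, runtests.py calls, etc.).
SUITES = {
    "mochitest": ["make", "-C", "objdir", "mochitest-plain"],
    "reftest": ["make", "-C", "objdir", "reftest"],
    "crashtest": ["make", "-C", "objdir", "crashtest"],
}

def run_all(suites):
    """Run each suite in turn and report which ones failed."""
    failed = []
    for name, command in suites.items():
        print("running", name)
        if subprocess.call(command) != 0:
            failed.append(name)
    return failed

if __name__ == "__main__":
    failures = run_all(SUITES)
    print("failed suites:", ", ".join(failures) if failures else "none")
```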


first round of new test harness code has landed

Back in October, I started working on code to run the unittests on Windows CE and Windows Mobile. This is an ongoing project, but I am finally starting to get the ball rolling in the right direction.

Today I checked in my first patch (actually a set of two, and it didn't get backed out this time), which converts the bulk of the Python test harness code to be object oriented instead of raw scripts.
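
To give a flavor of what "object oriented instead of raw scripts" means here (this is an illustrative sketch, not the actual landed code), the idea is that setup, running, and cleanup live on a class that a remote harness can subclass and override:

```python
# Illustrative sketch only -- not the actual landed code.  The point of the
# refactor is that the top-level script logic moves onto a class, so a
# remote (Windows Mobile) harness can override just the pieces that differ
# instead of copying a whole script.
class TestHarness(object):
    def __init__(self, app_path):
        self.app_path = app_path

    def setup(self):
        """Prepare the profile and environment for a test run."""
        print("setting up profile for", self.app_path)

    def run_tests(self):
        """Launch the application and run the suite."""
        print("running tests against", self.app_path)

    def cleanup(self):
        """Tear down anything setup() created."""
        print("cleaning up")


class RemoteTestHarness(TestHarness):
    def run_tests(self):
        # On a device we would push files and launch remotely instead.
        print("running tests on a remote device")


if __name__ == "__main__":
    harness = RemoteTestHarness("fennec")
    harness.setup()
    harness.run_tests()
    harness.cleanup()
```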

This is roughly the halfway point in the code that needs to get checked into mozilla-central in order for us to run test automation on a Windows Mobile phone. Big thanks to Ted for reviewing all my patches and to Clint for helping me test and do the actual check-in.

NOTE: I originally wrote this on Jan 7th, and it finally made it in today :)
