This post deviates from my slew of Mozilla automation related posts, but feel free to read along. After moving to a high-rise condo building a year ago, I started taking the stairs (to the 39th floor) instead of the elevator a few times per week. As time went on this became enjoyable, and I could make it to the top without collapsing on the floor with shaky legs, gasping for air, on the verge of needing life support (just ask my wife about my earlier climbs).
Time to step it up (pardon the pun). I became involved in some of the online groups and noticed a lot of other people getting into the sport. It seems like the last couple of years have seen an explosion of participants, elite competitors (the ones who actually run up the stairs), and events in cities all over the world. Earlier this month I took the plunge and signed up for my first race. It is small in comparison to the Sears Tower or the CN Tower, but you have to start somewhere.
Wish me luck in 4 weeks, and consider climbing the stairs next time you are waiting for the elevator; it really is fun.
My previous posts on the status of winmo automation outlined a series of patches to land. I am proud to say all of those have been reviewed (thanks to everybody) and have landed (thanks, ctalbert, for checking these in), and with the help of this buildbot shim script they are running very well!
So some highlights of what works:
- Mochitest: runs great on the phone, and we use the --total-chunks and --this-chunk options so we don't run out of memory. Right now I am testing it with --total-chunks=20, but I suspect we can go a bit lower. As another note, the overhead to restart the phone, install the build, load the browser, and start the tests is 7.5 minutes on my HTC Touch Pro.
- Reftest/Crashtest/JSreftest: all of these run great. We need to run smaller manifest files, though, as after about 800 files on my device the tests grind to a halt, executing maybe 1 test every few minutes. Luckily these harnesses run from manifest files, so we can easily split the run into a few smaller manifests and have a working solution.
- Xpcshell: this is pretty straightforward. I don’t see any problems running this end to end as the harness by design only runs one case at a time. As a note, this is the only test harness that copies over the test files to the phone.
- shim script: this turns on the webserver on your local IP so we can access it from the phone, and handles a bunch of other setup, monitoring, and cleanup tasks. It would be nice to move the webserver functionality into the remote harness scripts in the near future so developers can easily run from a build tree.
- sutagent: this is actually the backbone of these tests. This tool runs on the phone and has come a long way over the last few months. This agent is a product of blassey and bmoss. The next steps here are to get the code checked into one of our source trees.
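To make the chunking concrete, here is a minimal sketch of a driver that loops over the mochitest chunks described above. Only the --total-chunks/--this-chunk flags come from the post; the `runtests.py` entry point and the wrapper function names are my own placeholders.

```python
import subprocess

TOTAL_CHUNKS = 20  # the setting I am currently testing with; may go lower


def chunk_command(this_chunk, total_chunks=TOTAL_CHUNKS):
    """Build the harness command line for one chunk (1-based)."""
    return [
        "python", "runtests.py",  # hypothetical harness entry point
        "--total-chunks=%d" % total_chunks,
        "--this-chunk=%d" % this_chunk,
    ]


def run_all_chunks():
    """Run each chunk in sequence so no single run exhausts phone memory."""
    for chunk in range(1, TOTAL_CHUNKS + 1):
        subprocess.call(chunk_command(chunk))


if __name__ == "__main__":
    run_all_chunks()
```

The point of the loop is that each harness invocation only ever loads 1/20th of the tests, which is what keeps the phone from running out of memory.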
There are a few things we want to clean up, but overall we are at a great milestone on this project and ready to start rolling this out.
Just a quick post to wish everybody a happy Pi Day. If you want to celebrate, I suggest buying a t-shirt with pi on it, eating some form of pie, or better yet calculating or reciting digits of pi.
As we get closer to having unittests running on Windows Mobile, I am starting to wonder how long it will take to complete a test run. Just like Maemo with the N810s, we will be running the tests in parallel, but can we see full build + test + results in < 8 hours for a nightly?
For desktop builds, we already do an enormous amount of building and testing. Matching that on Windows Mobile would mean running tests for about 512 checkins/month (just under 1 per hour), and double that if we wanted to include the try server.
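As a sanity check on that rate, a back-of-the-envelope calculation (assuming a 30-day month):

```python
# 512 checkins/month spread evenly over a 30-day month:
checkins_per_month = 512
hours_per_month = 30 * 24              # 720 hours
rate_per_hour = checkins_per_month / float(hours_per_month)
# rate_per_hour is roughly 0.71, i.e. just under one checkin per hour
```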
What I propose doing is running a small set of unittests on each checkin, such that we get simple coverage on each build and it runs fast enough so we have results in the same time window as our other test data comes in for desktop tests. Right now when you check into m-c or 1.9.2 tests are only run on desktop builds, this would run tests on windows mobile as well.
The big question is what tests to run. Here are a couple ideas:
Run a small smoketest for each test suite (reftest, mochitest, xpcshell) which covers the different areas, but not in as much depth. Save the full test run for the nightlies.
Iterate through a series of chunks (say 30 chunks for mochitest; desktop builds split it 5 ways) and for each checkin run just a single chunk. Over the course of 2 days we will have completed a full cycle of little chunks.
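The rotating-chunk idea can be sketched in a few lines. The function name and the idea of keying off a running checkin counter are my own illustration, not anything in the harness:

```python
def chunk_for_checkin(checkin_number, total_chunks=30):
    """Map a monotonically increasing checkin count to a 1-based chunk.

    At ~17 checkins/day (512/month), 30 chunks cycle in roughly 2 days,
    which is the regression window this scheme trades for per-checkin
    feedback.
    """
    return (checkin_number % total_chunks) + 1
```

For example, checkin 0 runs chunk 1, checkin 29 runs chunk 30, and checkin 30 wraps back around to chunk 1.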
Personally I like the second option best, but really I am trying to think outside the box for ways to reduce regression windows while getting feedback on a per-checkin basis. What do you think?
A couple weeks ago, I posted an initial status of the winmo automation project. Here is an update of where we are.
- Modify reftest.jar to support http url for manifest and test files – up for review
- Refactor runreftests.py – one issue while trying to land, need to land this in the next couple days
- Add remotereftests.py – still a WIP, but could be at review stage
I also have instructions for how to setup and run this.
Xpcshell: this requires 2 patches
- Refactor runxpcshelltests.py to support subclass for winmo – submitted for final review
- Add remotexpcshelltests.py – up for initial review
I have written some instructions on how to run xpcshell tests on winmo if you are interested.
Only a few patches have landed since the last update, but a lot of reviews have happened. Great progress has been made to resolve some of the unknown issues with running xpcshell and reftest remotely and everything is truly up for review. I expect next week at this time to have a much shorter list of patches.