
smoketest for firefox android on panda boards

Last September the panda boards were deemed ready to run tests.  The next steps were to integrate them into buildbot and make them 100% automated.  This task turned into a much larger project, and the end result was developing a smoketest, which yielded a cleaner integration point with the automation.

The core of the Android automation to date has been the NVidia Tegra 250 developer kit.  This has been running quite successfully with a 3-4% total failure rate (product, test harness, tests, infrastructure, hardware).  Our goal for testing on Android 4.0 was to test on the panda boards, which also have a NEON chipset.  Essentially this should have been just like adding more tegra boards to our automation, and for the most part that was true.

The main problems we faced came about when dealing with installing, rebooting, and overall management of the device.  For our tegras, this is all handled by a set of Python code called sut_tools.  These sut_tools handle all the device management, and with a few modifications we were able to reuse them for the panda boards.

While the tests and harnesses ran fine on a panda board at my desk, getting them to work smoothly with the sut_tools and the buildbot scripts proved to be quite a challenge.  After about 10 weeks of solid work and many bugs fixed in the Android kernel, system libraries, Firefox, and of course our harnesses, we were able to get this going fairly reliably, with a <10% total failure rate when we first turned these tests on in late December.

In order to prove this was working, we developed a smoketest which would run on the production foopies (the hosts that control the panda boards, 12 boards per foopy) and production panda boards.  In fact, this ended up being a way to diagnose boards, vet script changes, and help debug overall test failures.  The original smoketest was going to be ‘run some tests on a given panda board for 24 hours’.  The resulting smoketest instead reuses the exact tools we use in automation for cleanup, verification, installation, and uninstalling the product from the device under test.  We also run a set of production mochitests, so we mimic a real job being pushed with about 98% accuracy.
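
Concretely, one pass boils down to the same steps automation performs against a board.  The sketch below is illustrative only: the script names, the SUT_NAME device-selection convention, and the apk path are assumptions, not necessarily the exact sut_tools entry points.

  • $ export SUT_NAME=panda-0100                  # hypothetical board name
  • $ python sut_tools/cleanup.py                 # clean stale state off the board
  • $ python sut_tools/verify.py                  # verify the board boots and responds
  • $ python sut_tools/installApp.py fennec.apk   # install the build under test
  • (run a set of production mochitests, then uninstall the product)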

To run these, it is pretty easy; a single pass is just one invocation of smoketest.py (the line below assumes the target panda is picked up from the environment, as in the sketch above):
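
  • $ python smoketest.py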

While this sounds straightforward, there is a bit more required in order to test a new panda board, or, as we normally do, a chassis of new panda boards.  As it stands now, I run an instance of smoketest.py in a different terminal window for every panda I am interested in testing (a backgrounded loop, sketched below, works too).  Usually this is 6-8 at a time, but this can easily be done for 1 or 12 without concern.
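
If you would rather not juggle terminal windows, something like the following launches several boards at once; the panda names and the SUT_NAME environment variable are, again, assumptions for illustration:

  • $ for panda in panda-0101 panda-0102 panda-0103; do SUT_NAME=$panda python smoketest.py > $panda.log 2>&1 & done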

I usually run this in a loop of 100:

  • $ for i in {0..99}; do python smoketest.py; done

Then I grep the logs, looking for failure messages, or more specifically counting how many success messages I have.  If I have a >95% success rate across all my logs, that is a good sign that things are ready to roll.
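
For example (the literal SUCCESS and FAILURE strings here are assumptions; substitute whatever smoketest.py actually logs):

  • $ grep -c "SUCCESS" *.log   # count passing runs per log
  • $ grep -l "FAILURE" *.log   # list logs containing at least one failure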

In the future, it would be nice to give smoketest.py a better reporting and looping system.  There is also the need to get us to a 99% success rate running a controlled smoketest.  One thing that would make this easier would be a tool that launches smoketest runs on a given set of machines, reports back information, and queries the log files for easier parsing and status.
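
Until that tool exists, even a small wrapper that tallies results per board gets part of the way there; a minimal sketch, reusing the hypothetical SUCCESS/FAILURE strings from above:

  • $ for log in *.log; do echo "$log: $(grep -c SUCCESS "$log") pass, $(grep -c FAILURE "$log") fail"; done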

 
