Most of my posts are about getting the unit tests to run on Fennec, but there has not been much communication about how we are tracking failures and getting the tests to green (zero failures). The simple explanation: up until now, there was no plan.
Last December, I went through every failure and documented what I thought the problem was. I created a little web tool to view the differences and track my bugs. Of course, that tool was static and a real pain to update with new bugs and tests.
Now it is August; many new failures are occurring and the old failures are still not fixed. I am going to outline an approach to get us to ZERO failures by the end of the year. To be successful, we need to reduce the variables as much as possible. This means we will run Fennec on desktop Linux builds in Tinderbox per checkin instead of on Maemo! This sets us up for getting green Tinderboxes in that environment first, versus on a device (I suspect we will be 90%+ passing when run on a device).
Actions to take:
- Start with XPCShell tests (then do Crashtest, Reftest, Mochitest, and Chrome, one at a time), and for each failure do the next steps
- Reproduce failure (twice)
- Reduce testcase (if possible)
- File a bug or update an existing one, and add the bug # to a master tracking bug
- When done with a specific test harness (XPCShell in this case), meet with the devs to prioritize bugs and get everybody on the same page
This sounds simple but could take a long time. The benefit of tackling the smaller test harnesses first is that we can see progress (a list of bugs, green runs) faster and start keeping those harnesses green.
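The per-failure loop above can be sketched in a few lines of Python. This is purely illustrative: the function names and the `run_test` callable interface are hypothetical, not from any real harness.

```python
# Hypothetical sketch of the triage loop: confirm each failure reproduces
# twice before filing, and separate out flaky tests. `run_test` is any
# callable that runs one test and returns True on pass.

def is_reproducible_failure(run_test, attempts=2):
    """True only if the test fails on every attempt."""
    return all(not run_test() for _ in range(attempts))

def triage(test_names, run_test, known_bugs):
    """Split failures into reproducible (file/update a bug) and flaky."""
    reproducible, flaky = [], []
    for name in test_names:
        if run_test(name):
            continue  # passed on the first run, nothing to do
        if is_reproducible_failure(lambda: run_test(name)):
            reproducible.append((name, known_bugs.get(name, "needs a new bug")))
        else:
            flaky.append(name)
    return reproducible, flaky
```

Each reproducible failure comes back paired with the bug number it maps to, or a marker that a bug still needs to be filed and added to the master tracking bug.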
What this does not do:
- Help us track new failures
- Get green tinderboxes on Maemo and WinMo
- Resolve remote web server related issues
- Fix issues when running tests one at a time
Stay tuned for an update when we get our first batch of bugs filed for XPCShell.
After having good success two weeks ago running mochitests on my Windows Mobile device using a remote web server, I have cleaned up a lot of my code and made this a better process. When I last posted, I had this list of remaining action items; I have appended my status to each:
* Sort out python script to generate mochitesttestingprofile and get it on the device- bug 512319
* Fix profile and tests to remove localhost/127.0.0.1 dependencies- bug 512319
* Fix tests to remove calls to local files (an example I found)- about 100 test files fail
* Test on a release build of Fennec with desktop tests.tar- more details below
* Verify tools like certutil.exe, ssltunnel.exe, etc. do not cause any problems- no progress
* Write tools in the python script to look for a test that doesn’t exit and clean up zombie processes- fixed with maemkit
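The zombie-process cleanup in that last item boils down to a watchdog around each test run. Here is a minimal sketch of the idea; the timeout value is a placeholder, and this is not the actual maemkit code:

```python
import subprocess

def run_with_timeout(cmd, timeout_seconds=300):
    """Run one test command; kill it if it has not exited in time.

    Returns (timed_out, returncode) so the caller can log the hang and
    move on to the next test instead of waiting forever.
    """
    proc = subprocess.Popen(cmd)
    try:
        proc.wait(timeout=timeout_seconds)
        return False, proc.returncode
    except subprocess.TimeoutExpired:
        proc.kill()  # clean up the zombie so the next test can start
        proc.wait()
        return True, None
```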
I have yet to update maemkit officially, but that is in the works. I mentioned there is a quirk with running a release build against the desktop tests.tar: you have to make sure you have the right build and binaries for the right platform. I know this sounds easy, but in order for me to run tests on Windows Mobile, I need a Windows Mobile binary of Fennec and a desktop test package.
Let me outline the necessary steps to elaborate on this:
1) Build WinMo build and install on device (I usually take the .zip file, unzip, and copy to \tests\ so that I can run \tests\fennec\fennec)
2) Build a Windows desktop build (with my two patches from bugs 508664 and 512319), run 'make package-tests', and untar the result into something like c:\tests (so you have c:\tests\bin, c:\tests\mochitest, etc.).
3) Using the build from step #2, run 'make package' and unzip the package into c:\tests so you have c:\tests\firefox\firefox.exe.
4) Copy c:\tests\bin\* to c:\tests\firefox so xpcshell.exe ends up in the correct directory
5) Generate the mochitest profile (note: the IP address is the ActiveSync IP):
python runtests.py --appname=firefox.exe --remote-webserver=192.168.55.100:8888 --setup-only
6) Create the profile directory on the device and copy mochitesttestingprofile to it:
c:\tools\pput.exe -r c:\tests\mochitest\mochitesttestingprofile\* \tests\mochitesttestingprofile\
7) Edit httpd.js and server-locations.js in c:\tests\mochitest, changing localhost and 127.0.0.1 to 192.168.55.100
8) Launch web server (from the c:\tests directory):
firefox\xpcshell.exe -g firefox -v 170 -f mochitest\httpd.js -f mochitest\server.js
9) Launch Fennec on the remote device:
c:\tools\prun.exe -w \tests\fennec\fennec.exe --environ:NO_EM_RESTART=1 -no-remote -profile \tests\mochitesttestingprofile\ http://192.168.55.100:8888/tests/toolkit/components/passwordmgr/test/test_xhr.html?logFile=%5ctests%5cmochi.log
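Step 7 above is a hand edit; a throwaway helper like this could do the same substitution. The file-rewriting helper itself is illustrative, and the default IP is the ActiveSync address from these steps:

```python
def point_at_remote_server(text, remote_ip="192.168.55.100"):
    """Replace localhost references so the tests hit the remote web server."""
    for local in ("localhost", "127.0.0.1"):
        text = text.replace(local, remote_ip)
    return text

def rewrite_file(path, remote_ip="192.168.55.100"):
    # e.g. rewrite_file(r"c:\tests\mochitest\server-locations.js")
    with open(path) as f:
        text = f.read()
    with open(path, "w") as f:
        f.write(point_at_remote_server(text, remote_ip))
```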
That is the basic run. When I have maemkit updated, step 9 would become:
python maemkit-chunked.py --testtype=mochitest
I can automate a lot of these steps if I assume we are running over ActiveSync and make maemkit a bit smarter about the setup.
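For example, the profile copy to the device is easy to script from the tethered host. This sketch just builds and runs the pput command from the steps above; the tool and directory paths are assumptions about this particular setup:

```python
import subprocess

# Assumed locations from the steps above; adjust for your machine.
PPUT = r"c:\tools\pput.exe"

def pput_profile_command(local_profile=r"c:\tests\mochitest\mochitesttestingprofile",
                         device_profile=r"\tests\mochitesttestingprofile"):
    # -r copies recursively; the trailing \* pushes the directory contents
    return [PPUT, "-r", local_profile + r"\*", device_profile + "\\"]

def push_profile():
    subprocess.check_call(pput_profile_command())
```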
Previously I had mentioned getting Windows Mobile mochitests up and running. Since then they have been running mostly full time. I calculated that it would take about 50 hours to run the full test suite, assuming no errors caused the browser to hang instead of exiting. Unfortunately, I ran into a block of 234 test files, of which over 175 fail to execute (no log is generated and the browser doesn't terminate within 5 minutes).
These are all dom-level1-core/test_hc_* tests. In trying to debug the problem (by editing a testcase), I found that I could pinpoint an issue, but then the test wouldn't pass twice in a row. Further tinkering showed that about 1 out of 4 runs (with or without my changes) passed. Looking at the log files generated during my initial automation pass over the dom-level1-core directory, I see the same statistic (1 out of 4 passing) for the fully automated run. Time for a debug build to figure out what is going on.
With a debug build installed, every test I ran manually passed. The same test that would not pass two times in a row passed 6 times in a row. So much for a debug build helping me out. Next I thought this might be related to running against a remote web server one file at a time, but a series of tests on desktop Linux Fennec ran end to end without hanging.
This is where I am stuck. My infrastructure is not the problem, as the tests obviously run; they just only run *reliably* on debug builds. This all points to some kind of timing issue with WinMo Fennec. Any tips for figuring out what the problem is?
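The 1-out-of-4 pass rate above can be measured mechanically: rerun the same test N times and count. A sketch of that measurement, where `run_test` is any callable that runs the test once and returns True on pass:

```python
def pass_rate(run_test, runs=20):
    """Run the same test `runs` times and return the fraction that passed."""
    passes = sum(1 for _ in range(runs) if run_test())
    return passes / float(runs)
```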
On Wednesday I figured out how to get a standalone mochitest to write its output to a log file. This was required because in Windows Mobile builds of Fennec we could not get the iframe to load a source file with all the magic mochitest bits. That is something we need to fix, but for getting tests running on Windows Mobile, all roads pointed toward running each test file by itself against a remote web server instead of localhost. Now I can just launch Fennec on my phone and run a test:
\tests\fennec\fennec.exe --environ:NO_EM_RESTART=1 -no-remote -profile \tests\mochitest\mochitesttestingprofile\ http://192.168.55.100:8888/tests/content/canvas/test/test_2d.composite.canvas.source-over.html?logFile=%5ctests%5cmochitest%5c314.log
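The %5c escapes in that URL are just percent-encoded backslashes for the device-side log path. A small helper can build the URL instead of hand-encoding it (the helper itself is illustrative):

```python
from urllib.parse import quote

def mochitest_url(server, test_path, log_file):
    """Build a single-test URL with the device log path percent-encoded."""
    # quote(..., safe="") turns each backslash in the device path into %5C
    return "http://%s/tests/%s?logFile=%s" % (
        server, test_path, quote(log_file, safe=""))
```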
I let a python script run all night long and came back to find only 113 tests had run. It seems the device locked up on me again. So all day Thursday I kept working on it, finding hang after hang. I decided that running python on the device might not be the best approach due to the extra memory usage. I worked for a few hours on blassey's remote test harness, but could not get it to detect when the process exited (it really had 2 zombie threads). I then found some xda tools (prun, pget, pdel) which let me launch my Fennec command line from my tethered host machine. What I do here:
prun -w \tests\fennec\fennec.exe ...
pget [logfile] [locallog]
Now I am not worried about space on the device for the tests or the log files.
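Putting those two tools together, the per-test loop on the tethered host looks roughly like this. The tool paths, device paths, and the test-list format are assumptions about this particular setup:

```python
import subprocess

# Assumed locations from the commands above.
PRUN = r"c:\tools\prun.exe"
PGET = r"c:\tools\pget.exe"
FENNEC = r"\tests\fennec\fennec.exe"
PROFILE = r"\tests\mochitest\mochitesttestingprofile" + "\\"

def commands_for(test_url, device_log, local_log):
    """Build the prun (launch + wait) and pget (fetch log) command lines."""
    run = [PRUN, "-w", FENNEC,  # -w makes prun wait for fennec to exit
           "--environ:NO_EM_RESTART=1", "-no-remote",
           "-profile", PROFILE, test_url]
    fetch = [PGET, device_log, local_log]
    return run, fetch

def run_all(tests):
    # tests is an iterable of (test_url, device_log, local_log) tuples
    for test_url, device_log, local_log in tests:
        run, fetch = commands_for(test_url, device_log, local_log)
        subprocess.call(run)    # launch the test and wait for exit
        subprocess.call(fetch)  # pull the log back to the host
```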
Here is what we need to make this a solid package running on Tinderbox:
- Sort out python script to generate mochitesttestingprofile and get it on the device
- Fix profile and tests to remove localhost/127.0.0.1 dependencies
- Fix tests to remove calls to local files (an example I found)
- Test on a release build of Fennec with desktop tests.tar
- Verify tools like certutil.exe, ssltunnel.exe, etc. do not cause any problems
- Write tools in the python script to look for a test that doesn’t exit and clean up zombie processes