Tag Archives: testdev

joel’s rage – dom tests on windows mobile

Previously I had mentioned getting windows mobile mochitests up and running. Since then they have been running mostly full time. I calculated that it would take about 50 hours to run the full test suite, assuming there were no errors that caused the browser to not exit automatically. Unfortunately I ran into a block of 234 test files, over 175 of which fail to execute (no log generated, and the browser doesn't terminate within 5 minutes).

These are all dom-level1-core/test_hc_* tests. In trying to debug the problem (by editing a testcase), I found that I could pinpoint an issue, but then it wouldn't pass twice in a row. Further tinkering showed that a test would pass about 1 out of every 4 times I ran it (with or without changing it). Looking at the log files I generated during my initial automation pass on the dom-level1-core directory, I see the same statistic (1 in 4 passing) for the fully automated run. Time for a debug build to figure out what is going on.
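The 1-in-4 pass rate is the kind of thing worth quantifying before digging deeper. A minimal sketch of a re-run counter, assuming the test command exits 0 on a pass and that a hang gets caught by the timeout (300 s, matching the 5-minute cutoff above):

```python
import subprocess

def measure_pass_rate(cmd, runs=20, timeout=300):
    """Re-run a single test repeatedly and return the fraction that pass.

    Assumes the command exits 0 on a pass; a hang (no exit within
    `timeout` seconds) counts as a failure.
    """
    passes = 0
    for _ in range(runs):
        try:
            result = subprocess.run(cmd, timeout=timeout)
            if result.returncode == 0:
                passes += 1
        except subprocess.TimeoutExpired:
            pass  # hung run counts as a failure
    return passes / runs
```

Running this against one of the flaky test_hc_* files would confirm whether the ~25% rate is stable or drifts with load.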

With a debug build installed, every time I ran a test manually it would pass. The same test that would not pass two times in a row passed 6 times in a row. So much for a debug build helping me out. Next I thought that this might be related to running on a remote web server and one file at a time. I ran a series of tests on desktop Linux Fennec and they all ran end to end without hanging.

This is where I get stuck. My infrastructure is not the problem as the tests obviously run. The problem is they only run *reliably* on debug builds. This all points to some kind of timing issue with WinMo Fennec. Any tips for how to figure out what the problem is?


Filed under testdev

mochitests are starting to run on windows mobile

On Wednesday I figured out how to get a standalone mochitest to write its output to a log file. This was required since in Windows Mobile builds of Fennec we could not get the iframe to load a source file with all the magic mochitest bits. That is something we need to fix, but for getting tests running on Windows Mobile, all roads pointed toward running each test file by itself against a remote web server instead of localhost. Now I can just launch fennec on my phone and run a test:

\tests\fennec\fennec.exe --environ:NO_EM_RESTART=1 -no-remote -profile \tests\mochitest\mochitesttestingprofile\

I let a python script run all night long and came back to find only 113 tests had run. It seems the device locked up on me again. So all day Thursday I kept working on it, finding hang after hang. I decided that running python on the device might not be the best approach due to the extra memory usage. I worked for a few hours on blassey‘s remote test harness, but could not get it to detect when the process exited (it really had 2 zombie threads). I found some xda tools (prun, pget, pdel) which let me launch my fennec command line from my tethered host machine. Here is what I do:

pdel [logfile]
prun -w \tests\fennec\fennec.exe ...
pget [logfile] [locallog]

Now I am not worried about space on the device with the tests or the log files.
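The three-step pdel/prun/pget dance is easy to wrap in a small host-side driver. A sketch only: the paths, the log filename, and the exact prun argument handling are all assumptions for illustration:

```python
import subprocess

# Hypothetical host-side driver around the xda tools: clear the old log
# on the device, run fennec under prun (-w waits for exit), then pull
# the log back to the host.
DEVICE_LOG = r"\tests\mochitest\test.log"
FENNEC_CMD = (r"\tests\fennec\fennec.exe --environ:NO_EM_RESTART=1 "
              r"-no-remote -profile \tests\mochitest\mochitesttestingprofile")

def commands_for_test(test_name, local_log_dir="logs"):
    """Build the three tethered-tool invocations for one test file."""
    local_log = "%s/%s.log" % (local_log_dir, test_name)
    return [["pdel", DEVICE_LOG],
            ["prun", "-w", FENNEC_CMD],
            ["pget", DEVICE_LOG, local_log]]

def run_one_test(test_name, runner=subprocess.call):
    # `runner` is injectable so the sequence can be tested without a device.
    for cmd in commands_for_test(test_name):
        runner(cmd)
```

Looping this over the list of test files gives the overnight run without keeping anything but fennec and the current log on the device.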

Here is what we need to make this a solid package running on tinderbox:

  • Sort out python script to generate mochitesttestingprofile and get it on the device
  • Fix profile and tests to remove localhost/ dependencies
  • Fix tests to remove calls to local files (an example I found)
  • Test on a release build of Fennec with desktop tests.tar
  • Verify tools like certutil.exe, ssltunnel.exe, etc.. do not cause any problems
  • Write tools in the python script to look for a test that doesn’t exit and clean up zombie processes
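For the last item, a watchdog around each test process could handle tests that never exit. A sketch using modern Python's subprocess timeout (the real harness would also need to clean up anything fennec leaves behind on the device):

```python
import subprocess

def run_with_watchdog(cmd, timeout=300):
    """Launch a test process; kill it if it outlives the timeout.

    Returns (exit_code, hung). A hung test is killed and then reaped
    so no zombie process is left behind and the harness can move on.
    """
    proc = subprocess.Popen(cmd)
    try:
        return proc.wait(timeout=timeout), False
    except subprocess.TimeoutExpired:
        proc.kill()
        proc.wait()  # reap the killed process so it doesn't linger
        return None, True
```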


Filed under testdev

considering new approach to mochitest on mobile

Windows Mobile is rocking the boat for us, and we are getting more and more creative. My latest challenge is getting mochitests up and running, and this has proved very difficult.

Last week I set up the xpcshell httpd.js web server remotely and resolved my pending xpcshell test. This week I moved on to mochitest. As a sanity check I ran a desktop Fennec mochitest run against the remote httpd.js web server, and it worked (with a lot of new failures/timeouts due to proxy and other issues I didn’t investigate).

Since my experience with xpcshell and reftest on Windows Mobile has resulted in more crashes and smaller chunks, I am considering running each test file in the mochitest harness by itself. This requires some retrofitting of the mochikit integration as there is no logging available for a single test file. It will also require more time to run as the overhead of starting the browser is expensive.

Any thoughts on this? Should I just find a more powerful phone to run these tests on? Is this something that we could find other uses for within Mozilla?


Filed under testdev

windows mobile keeps hanging on me

No technical advice or progress in this post, just a rant.

In my work to get the unittests running on Windows Mobile, I have found that my device locks up quite frequently, requiring a reseat of the battery. Fair enough, but after about 20 of these it really eats into my day.

So here is what I see. During startup of fennec.exe, we load a lot of .dll files. Here is a full clip from the vs debugger window:

CertVerify: \tests\fennec\xulrunner\nspr4.dll trust = 2
CertVerify: \tests\fennec\xulrunner\plc4.dll trust = 2
CertVerify: \tests\fennec\xulrunner\plds4.dll trust = 2
CertVerify: \tests\fennec\xulrunner\sqlite3.dll trust = 2
CertVerify: \tests\fennec\xulrunner\nssutil3.dll trust = 2
CertVerify: \tests\fennec\xulrunner\softokn3.dll trust = 2
CertVerify: \tests\fennec\xulrunner\nss3.dll trust = 2
CertVerify: \tests\fennec\xulrunner\ssl3.dll trust = 2
CertVerify: \tests\fennec\xulrunner\smime3.dll trust = 2
CertVerify: \tests\fennec\xulrunner\js3250.dll trust = 2
CertVerify: \tests\fennec\xulrunner\xul.dll trust = 2
CertVerify: \tests\fennec\xulrunner\xpcom.dll trust = 2
CertVerify: \tests\fennec\xulrunner\components\xpcomsmp.dll trust = 2
Undefined Instruction: Thread=89bc4000 Proc=80458b10 'fennec.exe'
AKY=02000001 PC=795866e8(js3250.dll+0x000c66e8) RA=79562f6c(js3250.dll+0x000a2f6c) BVA=045fc000 FSR=00000805
Undefined Instruction: Thread=89bc4000 Proc=80458b10 'fennec.exe'
AKY=02000001 PC=795866e0(js3250.dll+0x000c66e0) RA=79562e98(js3250.dll+0x000a2e98) BVA=35010160 FSR=00000005
Undefined Instruction: Thread=89bc4000 Proc=80458b10 'fennec.exe'
AKY=02000001 PC=795866f0(js3250.dll+0x000c66f0) RA=79562f14(js3250.dll+0x000a2f14) BVA=35010160 FSR=00000005
CertVerify: \tests\fennec\xulrunner\components\tdynamic.dll trust = 2
[K] Time[2009/07/29 10:52:00]

We have 3 “expected” undefined instructions used to determine the chipset; then we keep loading .dll files and eventually the program itself.

As you can see, the last line is “[K] Time[2009/07/29 10:52:00]”. This message alone does not mean the device is hung; during a successful run I see it quite often while actually running tests or using the browser. Yet the majority of the time I see that message, the device is hung. What makes this interesting is that the hang occurs at different places during the load (and sometimes runtime) cycle.

All workarounds are welcome :)


Filed under general, testdev

running httpd.js on a remote host

This exercise came about as an option for running unittests on windows mobile. We ran into an issue where the http server was running but not returning web pages. Vlad had mentioned running the web server on a host machine to be respectful of the memory on the device. In the past other members of the QA team have tried this with no luck. With the help of :waldo, I was able to get tests running while accessing a remote webserver.

For a quick background, all unittests rely on httpd.js running via xpcshell to serve web pages. For example, xpcshell update tests (test_0030_general.js) utilize httpd.js to download a .mar file, and mochitest uses httpd.js for every test it runs. It is a little tricky to get this running against a remote server, as there are a few things to edit.

With those edits in place, fire up xpcshell to run as a server:

# set any needed env variables, e.g. on Linux:
# export LD_LIBRARY_PATH=$firefox_dir/xulrunner
cd $firefox_dir/xulrunner
./xpcshell -f ../../mochitest/httpd.js -f ../../mochitest/server.js

On the remote machine launch your test and be happy!

To really utilize this for the Fennec unittests on Windows Mobile, we will have to make many more changes to the tests beyond the ones mentioned above. Making it fit into automation would be difficult, as IP addresses can change and we might need to set the server address dynamically. Alternatively, running one host against many clients does have some attraction.
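For the dynamic-address problem, one option is to have the harness discover the host's IP at setup time and write it into the profile. A sketch only: the pref name below is purely a placeholder, as the real edits would touch whatever proxy/server prefs the mochitest profile actually uses:

```python
import socket

def host_ip():
    """Best-effort LAN address of the host: connect a UDP socket toward
    an external address (no packets are actually sent) and read back
    the local endpoint the OS chose."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    finally:
        s.close()

def write_server_pref(profile_dir, ip, port=8888):
    """Append a pref pointing the profile at the remote web server.

    "network.test.server" is a hypothetical pref name for illustration.
    """
    with open(profile_dir + "/user.js", "a") as f:
        f.write('user_pref("network.test.server", "http://%s:%d/");\n'
                % (ip, port))
```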


Filed under testdev

how we are tracking unittest failures for fennec

Currently Fennec has thousands of failures when running the full set of unittests. As it stands, when tinderbox runs these we just set the “pass” criterion as total failures <= acceptable # of failures. As you can imagine, this leaves a lot of room for improvement.

Enter the LogCompare tool which happyhans and I have been working on with help from mikeal for the couchdb backend. What we do is take the tinderbox log file, parse it and upload it to a database! This way we get a list of all the tests that were run and if they passed or failed. Now we can compare test by test what is fixed, a known failure or a new failure. What is even better is that we are running Mochitests in parallel on 4 different machines and LogCompare can tell us if the tests on machine1 pass or fail without necessarily waiting for the other tests to complete. Another bonus is we can track a specific test over time to look for random orange data.

The concept is simple, here are some of the details and caveats:

  • We track tests by test filename, not by directory or test suite
  • A single filename can have many tests (mochitest), so there is no clean way to track each specific test
  • If a test fails, future tests (sometimes in the same file, folder, or suite) are skipped
  • Parsing the log file is a nasty task with many corner cases
  • To match test names up correctly, we need to strip out full paths and view only the relative path/filename
  • We need to handle when new tests are added or existing ones removed
  • We need a baseline from Firefox for the full list of tests and counts

The goal here is to keep it simple while bringing the total failure count of the unittests on Fennec to zero!
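The pass/fail comparison at the heart of this can be sketched as a small diff over two parsed runs. This is only a sketch; the real LogCompare stores runs in CouchDB and has to handle the caveats above:

```python
def diff_runs(old, new):
    """Classify a new run against a previous one.

    Each run is a dict mapping test filename -> "PASS" or "FAIL".
    Returns (fixed, known_failures, new_failures).
    """
    fixed = sorted(t for t, r in new.items()
                   if r == "PASS" and old.get(t) == "FAIL")
    known = sorted(t for t, r in new.items()
                   if r == "FAIL" and old.get(t) == "FAIL")
    # New failures include tests that used to pass and brand-new tests.
    fresh = sorted(t for t, r in new.items()
                   if r == "FAIL" and old.get(t) != "FAIL")
    return fixed, known, fresh
```

Because each run is keyed by filename, a partial run from one of the four parallel machines can be diffed as soon as its log is parsed, without waiting for the others.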


Filed under qa, testdev

how we are running xpcshell unittests on windows mobile

Here is another scrap of information on how our progress is coming on getting unittests to run on Windows Mobile. A lot has changed in our approach and tools, which has allowed us to make measurable progress toward seeing these run end to end. Here are some of the things we have done.

Last month I discussed launching a unittest in WinCE; taking that further, blassey has compiled python25 with the Windows Mobile compilers and added the pipelib code to allow stdout to be redirected to a file. This has been a huge step forward and resolves a lot of our problems.

I found out that runxpcshelltests.py had gone through a major overhaul and my previous work was not compatible. While working with the new code, I ran into a problem where defining a variable with the -e parameter to xpcshell was not working. To work around this I have changed:
xpcshell.exe -e 'const _HEAD_FILES = ["/head.js", "/head.js"];'

to:

xpcshell.exe -e 'var [_HEAD_FILES] = [["/head.js", "/head.js"]];'

This new method is ugly but works. We suspect this is related to how subprocess.Popen handles the command line to execute (it appears to require 2 args: app, argv). Regardless, with a series of additional hacks to runxpcshelltests.py, we can launch xpcshell.exe and get results in a log file.
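To illustrate the subprocess.Popen suspicion: when the command is passed as a list, each element reaches the child as exactly one argv entry, so a -e payload full of spaces and quotes survives untouched. A quick sketch, with the Python interpreter standing in for xpcshell.exe:

```python
import subprocess
import sys

# The -e payload contains spaces and quotes; passing the argv as a list
# means each element is delivered as exactly one argument, untokenized.
head_js = 'const _HEAD_FILES = ["/head.js", "/head.js"];'

# sys.executable stands in for xpcshell.exe; the child echoes argv[1].
out = subprocess.check_output(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", head_js])
assert out.decode().strip() == head_js  # arrives intact, quotes and all
```

If the harness instead joins everything into one string before spawning, the quoting of the const declaration is what gets mangled.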

Lastly, there is another issue: the lack of support for cwd on Windows Mobile causes some of our test_update/unit/* tests to fail. There is a workaround in bug 458950 where we can support cwd when it is passed on the command line:
xpcshell.exe --environ:CWD=path

Of course we need this code for xpcshell.exe (as noted in bug 503137), not just fennec.exe or xulrunner.exe.

With all the changes above, we can launch our python script (including cli args) via the visual studio debugger or a command line version on our device that is tethered via USB cable and activesync.

Unfortunately this is not complete yet. We have to clean up the python code and turn it into a patch; that hinges on finding better fixes for the -e parameters. Also, while running on my HTC Touch Pro, the device hangs a lot for various reasons (requiring reseating the battery). Stabilizing this could require a tool change, as well as a different way to run the tests.


Filed under testdev