Tag Archives: testdev

mochikit.jar changes are in mozilla-central

Last night we landed the final patches to make mochikit.jar a reality.  This started out as a bug to package the mochikit harness + chrome tests into a single .jar file so that, on a remote system, we could copy it to the application directory and run the tests locally.  It ended up being much more than that; let me explain some of the changes that have taken place.

why change all of this?

In order to test remotely (on mobile devices such as Windows Mobile and Android), where there is no option to run tools and Python scripts, we need to give the browser everything it needs and launch it remotely.  For tests that are not accessible over the network, the solution is to run them from local files.

what is mochikit.jar?

Mochikit.jar is an extension that is installed in the profile and contains all the core files that mochitest (plain, chrome, browser-chrome, a11y) needs to run in a browser.  It doesn’t contain any of the external tools, such as ssltunnel and the Python scripts that set up a webserver.  When you do a build, you will see a new directory in $(objdir)/_tests/testing/mochitest called mochijar; feel free to poke around in there.  As a standalone application, all chrome://mochikit/content calls will use this extension, not a pointer to the local file system.  The original intention of mochikit.jar was to include the test data, but we found that to create an extension using jar.mn we needed a concrete list of files, and that was not reasonable to do for our test files.  So we created tests.jar.
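
For the curious, the chrome registration inside that extension boils down to a manifest line roughly like this (illustrative, not the exact line from the tree):

  content mochikit jar:mochikit.jar!/content/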

what is tests.jar?

tests.jar contains the actual test data for browser-chrome, chrome, and a11y tests.  These are all tests that are not straightforward to access remotely over http, so we run them locally out of a .jar file.  tests.jar is only created when you do a ‘make package-tests’ and ends up in the root of the mochitest directory.  If the harness finds this file, it copies it to the profile and generates a .manifest file for the .jar file; otherwise it generates a plain .manifest file that points to the filesystem.  Finally, we dynamically register tests.manifest from the profile.  Now all your tests will live under chrome://mochitests/content instead of chrome://mochikit/content.
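
To make that concrete, the generated tests.manifest ends up being roughly one of these two forms (paths illustrative):

  # packaged case: tests.jar was found and copied into the profile
  content mochitests jar:tests.jar!/content/

  # local case: plain manifest pointing at the filesystem
  content mochitests file:///path/to/objdir/_tests/testing/mochitest/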

What else changed?

A lot of tests had to change to work with this because we had hardcoded chrome://mochikit/content references in our test code and data.  It is fine to have that for references to the harness and core utilities, but references to local pieces of test data were hardcoded when they didn’t need to be.  A few tests required more difficult work, where we had to extract files to a temp folder in the profile and reference them with a file path.

what do I need to do when writing new tests?

Please don’t cut and paste code and then change it to reference a data, utility, or other URI that has chrome://mochikit/content/ in it.  If you need to access a file by its full URI or as a file path, here are some tips:

* a mochitest-chrome test that needs to reference a file in the same directory or subdir:
let chromeDir = getRootDirectory(window.location.href);

* a browser-chrome test that needs to reference a file in the same directory or subdir:
// NOTE: use gTestPath because window.location.href is not always set in browser-chrome tests
let chromeDir = getRootDirectory(gTestPath);

* extracting files to temp and accessing them

  let rootDir = getRootDirectory(gTestPath);
  // if the tests are packaged in a jar, extract them so we have real files
  let jar = getJar(rootDir);
  if (jar) {
    let tmpdir = extractJarToTmp(jar);
    rootDir = "file://" + tmpdir.path + "/";
  }
  loader.loadSubScript(rootDir + "privacypane_tests.js", this);

Filed under general, testdev

types of data we care about in a manifest

This is a bit controversial (similar to “what OS do you run”), but I want to start outlining what I find to be useful metadata for categorizing tests.

DATA:

Currently with Reftest, we have a sandbox that provides data the manifest files can use as conditional options.  The majority of the sandbox items used are:

  • platform: MOZ_WIDGET_TOOLKIT (cocoa, windows, gtk2, qt)
  • platform: xulrunner.OS, XPCOMABI (if “”, we are on ARM)
  • environ: haveTestPlugin, isDebugBuild, windowsDefaultTheme
  • config: getBoolPref, getIntPref, nativeThemePref (could be a getBoolPref)
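
As a reminder of how this sandbox data is consumed today, a typical reftest.list entry (illustrative, not from the tree) keys off exactly these values:

  skip-if(!haveTestPlugin) fails-if(MOZ_WIDGET_TOOLKIT=="gtk2") == test.html test-ref.html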

This is the type of information that a large portion of our tests care about.  Most of these options are somehow coded into mochitests as well (through getPref calls, checking the OS, or in Makefile conditions).  I would like to figure out how to add this type of data:

  • orange: list of conditions this is random-orange on (but should pass)
  • fails: list of conditions this is expected to fail on
  • e10s: what service is used that causes this to fail under e10s
  • fennec: does this fail when run on Fennec, on which platforms, in what versions
  • remote: does this fail when running tests remotely (required for Android, originally for WinMo)
  • results: number of failures (if orange or fails above) and number of todo
  • runtime: expected time to run this test (important on mobile)
  • product: product name and version
  • future: anything we find valuable in the future!

I can think of many ways to add this into the Reftest format or to create a new format.  Looking at this data a bit further, it really is not adding a lot of new information.  If we assume that all tests are expected to pass in all configurations, any value assigned to a new piece of data indicates that the test fails under that given condition (or list of conditions).  As our supported platforms, configurations, and products grow, we will have a much greater need for this additional metadata.

INTEGRATION:

I would like to express all the data pieces as tags rather than raw conditions (Reftest does them as C-style conditions).  This would allow much greater flexibility, including adding data that doesn’t necessarily turn a test on or off.  For example, let’s say a test is a random-orange for Firefox 1.9.1 (not 1.9.2), fails on Fennec Maemo 1.1 only, is orange when tested remotely on Fennec Android, and is currently broken by e10s.  We could easily add those conditions to a list:

fails-if(OS==Maemo) fails-if(e10s==nsIPrefService) random-if(product=Firefox && xr.version==1.9.1) random-if(os=Android && remote==true) test_bitrot.html

So this is doable (please disregard any misused fails-if, random-if statements) and wouldn’t be too hard to add to a reftest.list-style format for Mochitest (and even Reftest).  Initially I thought it would be really cool to toggle fails-if, random-if, or skip-if statements with a small tweak to the command line.  This would give us the flexibility to turn tests on and off more easily, but I realized it would turn on/off all tests related to the condition.  I think a small adjustment to the format might allow for tags, so we could tweak a run in the future with little work.  One example might look like:

fails(os=Maemo;e10s=nsIPrefService,nsIDOMGeoGeolocation) random(product=Firefox&xr.version=1.9.1;os=Android&remote=true) test_bitrot.html

This example is a minor change (which might not even be needed), but it helps simplify the syntax and keeps the idea of tags in mind.  The backend details would need to change to support a ‘toggle’ of a tag in either scenario.  Maybe we just want to run e10s tests: we can find all tests that have an e10s=nsIPrefService tag inside a fails tag block and run just those specific tests, while maintaining all the other rules (skip on a certain OS or product).
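
To make the tag idea concrete, here is a rough sketch (entirely hypothetical, nothing like this has landed) of how a harness could split a tag block like the fails(...) example above into filterable data; the ‘&’-style compound conditions are left out for brevity:

  function parseTagBlock(block) {
    // e.g. "fails(os=Maemo;e10s=nsIPrefService,nsIDOMGeoGeolocation)"
    let m = block.match(/^(\w+)\((.*)\)$/);
    if (!m)
      return null;
    let tags = {};
    // ';' separates independent conditions, ',' separates values for one key
    m[2].split(";").forEach(function(cond) {
      let pieces = cond.split("=");
      tags[pieces[0]] = pieces[1].split(",");
    });
    return { action: m[1], tags: tags };
  }

A toggle then becomes a lookup: given a tag like e10s=nsIPrefService on the command line, run only the tests whose parsed fails block contains it.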

There are still questions about whether the Reftest format is the right format for Mochitest.  It carries a lot of weight, since it works so well for so many of our tests.


Filed under testdev

tests that require privileged access

I have been working on a project to get mochitests running on a build of Fennec + electrolysis.  In general, you can follow along in bug 567417.

One of the large TODO items in getting the tests to run is actually fixing the tests which use UniversalXPConnect.  My approach was to grep through a mochitest tests/ directory for @mozilla and parse out the service contract IDs.  With a few corner cases handled, this resulted in a full list of the services we utilize from our tests (here is a list sorted by frequency; 76 total services).  Cool, but that didn’t seem useful enough.  Then I took the work I had done for filtering (the json file) and cross-referenced it with my original list of tests that use UniversalXPConnect.
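
The parsing step mostly amounts to pulling contract IDs out of each file; a regex along these lines (my sketch, not the exact script) catches the common cases:

  function findContractIds(fileContents) {
    // match contract IDs such as "@mozilla.org/preferences-service;1"
    return fileContents.match(/@mozilla\.org\/[A-Za-z0-9_.\/-]+;\d+/g) || [];
  }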

Now I have a list of 59 services which should all pass in Fennec (a mozilla-central build from 2 weeks ago on an n900), along with the filename of the first test which utilizes each service!

What else would be useful?


Filed under testdev

accessing privileged content from a normal webpage, request for example

One project I am working on is getting mochitests to run in Fennec + electrolysis.  The reason this is a project at all is that we no longer allow web pages to access privileged information in the browser.  Even in their simplest form, mochitests use file logging and event generation.

The communication channel available between the ‘chrome’ and ‘content’ processes is the messageManager protocol.  There are even great examples of using it in the unit tests for ipc.  Unfortunately, I have not been able to load a normal web page and have my extension, which uses messageManager calls, interact with it.
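
For reference, the basic shape I have been attempting (pieced together from the ipc unit tests; the extension name and frame-script URL are placeholders, and this is not yet working end to end for me) looks roughly like:

  // chrome process: listen for messages and inject a frame script
  let mm = gBrowser.selectedBrowser.messageManager;
  mm.addMessageListener("MyExt:PageTitle", function(msg) {
    dump("content says the title is: " + msg.json.title + "\n");
  });
  mm.loadFrameScript("chrome://myext/content/frame-script.js", true);

  // frame-script.js, running in the content process:
  //   sendAsyncMessage("MyExt:PageTitle", { title: content.document.title });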

I think what would be nice to see is a real end-to-end example of an extension that demonstrates functionality on any given webpage.  This would be helpful to all addon authors, not just me. :)  If I figure this out, I will update with a new blog post.


Filed under testdev

filtering mochitests for remote testing

One major problem we encounter in running all our unittests on Fennec is the large volume of failures and how to manage them.  Currently we have turned off mochitests on the Maemo tinderbox because nobody looked at the results (we still run reftest/crashtest/xpcshell!)

In many of my previous posts, I outlined methods for running tests remotely, and that has proven to be very useful.  In order to test this code and continue developing it (without Windows Mobile or a working Android build yet), I have developed a simple Python test-agent that can run on a Linux box (including an n900).  If you are curious, check it out and watch tests run remotely…it is pretty cool.

So the real problem I need to solve is how to avoid running a given list of tests on a mobile device.  Solving this could get us to green faster and cut the mochitest runtime in half!

In 2008 my solution was Maemkit.  Maemkit is just a small wrapper around the Python test-runner scripts that does some file (renaming) and directory (splitting into smaller chunks) manipulation to allow for more reliable test runs.  This has worked great and still works.  Enter remote testing, though, and we would need to hack up Maemkit quite a bit to accommodate everything.  In addition, a lot of the work Maemkit does is already in the test runners.

Today I have made the filtering a bit more configurable and less obscure.  This is really just a prototype and a toolset to solve a problem for me locally, but the idea is worth sharing.  What I have done is build up a json file (most of this was done automatically with this parsing script) which outlines each test and has some ‘tags’ that I can filter on:

   {
     "fennec-tags" : {"orange": "", "remote": "", "timeout": ""},
     "name" : "/tests/toolkit/content/tests/widgets/test_tooltip.xul"
   },
   {
     "fennec-results" : {"fail": 0, "todo": 0, "pass": 51},
     "name" : "/tests/MochiKit_Unit_Tests/test_MochiKit-Async.html",
     "note" : []
   }

You can see that I can now run or skip tests that are tagged ‘orange’ or ‘timeout’.  Better yet, I can skip tests whose fennec-results match fail > 0 if I want everything to be green.

So I took this a bit further, since I wanted to turn these on or off depending on whether I was running tests or investigating bugs, and I added a filter language that is parsed in mochitest/tests/SimpleTest/setup.js.  I then modified my runtestsremote.py (a subclass of runtests.py) so that when launching mochitest from the command line I can control this like:

#run all tests that match the 'orange' tag
python runtestsremote.py --filter='run:fennec-tags(orange)'

#skip all tests that have the timeout tag
python runtestsremote.py --filter='skip:fennec-tags(timeout)'

#skip all tests that have failures > 0
python runtestsremote.py --filter='skip:fennec-results(fail>0)'
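
Under the hood, the parsing in setup.js is straightforward; here is a simplified sketch (hypothetical names, covering only the bare-tag and numeric-comparison cases) of matching one filter against one entry of the json file:

  function matchesFilter(entry, filter) {
    // e.g. filter = "skip:fennec-results(fail>0)"; the run:/skip: prefix
    // decides what the caller does with a match
    let m = filter.match(/^(run|skip):([\w-]+)\((.*)\)$/);
    if (!m)
      return false;
    let data = entry[m[2]];
    if (!data)
      return false;
    let cmp = m[3].match(/^(\w+)([<>])(\d+)$/);
    if (!cmp)
      return m[3] in data;    // bare tag, e.g. fennec-tags(orange)
    let value = data[cmp[1]]; // numeric, e.g. fennec-results(fail>0)
    return cmp[2] == ">" ? value > +cmp[3] : value < +cmp[3];
  }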

You should now see the power of this filtering; with some more detailed thinking we could have a powerful engine for running exactly what we want.  Of course this can run on regular mochitest too (if you take the code in this patch from runtestsremote.py and add it to runtests.py.in) and, for example, run all the orange tests in a loop.

As a note, I previously mentioned a parsing script.  With some cleanup, you could automatically create this json filter file based on tinderbox runs and fill in the tags to identify scenarios like orange (failing sometimes), timeouts, focus problems, etc.

Happy filtering!


Filed under general, testdev

patches checked in, tests can run on windows mobile!

My previous posts on the status of winmo automation outlined a series of patches to land.  I am proud to say that all of those have been reviewed (thanks to everybody) and have landed (thanks ctalbert for checking these in), and with the help of this buildbot shim script they are running very well!

So some highlights of what works:

  • Mochitest: runs great on the phone, and we use the --total-chunks and --this-chunk options so we don’t run out of memory. Right now I am testing it with --total-chunks=20, but I suspect we can go a bit lower (see the example after this list). As another note, the overhead to restart the phone, install the build, load the browser, and start the tests is 7.5 minutes on my HTC Touch Pro.
  • Reftest/Crashtest/JSreftest: all of these run great. We need to run smaller manifest files, though, as after about 800 files on my device the tests come to a halt and execute maybe 1 test every few minutes. Luckily these run from manifest files, so we can easily create a few smaller manifests and have a working solution.
  • Xpcshell: this is pretty straightforward. I don’t see any problems running this end to end, as the harness by design only runs one test at a time. As a note, this is the only test harness that copies the test files over to the phone.
  • shim script: this turns on the webserver on your local IP so we can access it from the phone, and takes care of a bunch of other setup, monitoring, and cleanup tasks. It would be nice to move the webserver functionality into the remote harness scripts in the near future so developers can easily run from a build tree.
  • sutagent: this is actually the backbone of these tests. This tool runs on the phone and has come a long way over the last few months. The agent is a product of blassey and bmoss. The next step here is to get the code checked into one of our source trees.
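
For reference, the chunking mentioned above is driven entirely from the command line; for example, running the third of twenty chunks looks like this (script name per my earlier posts; other required arguments omitted):

#run chunk 3 of 20 so the phone does not run out of memory
python runtestsremote.py --total-chunks=20 --this-chunk=3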

There are a few things we want to clean up, but overall we are at a great milestone on this project and ready to start rolling this out.


Filed under testdev

updated status of the winmo automation project

A couple of weeks ago, I posted an initial status of the winmo automation project. Here is an update on where we are.

Only a few patches have landed since the last update, but a lot of reviews have happened. Great progress has been made resolving some of the unknown issues with running xpcshell and reftest remotely, and everything is truly up for review. I expect that by this time next week we will have a much shorter list of patches.


Filed under Uncategorized