The lifecycle of a Talos performance regression

The cycle of landing a change to Firefox that affects performance

May 8, 2014 · 9:38 am

Performance Alerts – by the numbers

If you have ever received an automated mail about a performance regression, and then 10 more, you are probably frustrated by the volume of alerts.  Six months ago I started looking at the alerts and filing bugs, and 10 weeks ago a little tool was written to help out.

What have I seen in 10 weeks:

1926 alerts on mozilla.dev.tree-management for Talos, resulting in 58 bugs filed (roughly 1 bug per 33 alerts).


*Keep in mind that many alerts are improvements, and many are duplicates between trees and between pgo/non-pgo builds.


Now for some numbers as we uplift.  How are we doing from one release to another?  Are we regressing or improving?  These are all questions I would like to answer in the coming weeks.

Firefox 30 uplift, m-c -> Aurora:

  • 26 regressions (4 TART, 4 SVG, 3 TS, Paint, and many more)
    • 2 remaining bugs not resolved as we are now on Beta (bug 990183, bug 990194)


Firefox 31 uplift, m-c -> Aurora (tracking bug 990085):


Is this useful information?

Are there questions you would rather I answer with this data?



Performance Bugs – How to stay on top of Talos regressions

Talos is the framework used for desktop Firefox to measure performance for every patch that gets checked in.  Running tests for every checkin on every platform is great, but who looks at the results?

As I mentioned in a previous blog post, I have been looking at the alerts posted to dev.tree-management and taking action on them when necessary.  I will save discussing my alert manager tool for another day.  One great thing about our alert system is that we email the original patch author when we can determine who it is.  Many developers already take note of this and act on their own: I see many patches backed out or discussed with no one other than the developer initiating the action.

So why do we need a Talos alert sheriff?  Mainly because fewer than half of the regressions are acted upon.  There are valid reasons for this (wrong patch identified, noisy data, the regression doesn't seem related to the patch), and of course many regressions are ignored for lack of time.  When I started filing bugs 6 months ago, I incorrectly assumed all of them would be fixed or resolved as WONTFIX for a valid reason.  That happens for most of the bugs, but many regressions get forgotten about.

When we did the uplift of Firefox 30 from mozilla-central to mozilla-aurora, we saw 26 regression alerts come in and 4 improvement alerts.  This prompted us to revisit the process of what we were doing and what could be done better.  Here are some of the new things we will be doing:

  • For all regressions found, attempt to find the original bug and reopen/comment in the bug
  • For regressions where it is not easy to find the original bug, we will open a new bug
  • All bugs that have regression information will be marked as blocking a new tracking bug
  • For each release we will create a new tracking bug for all regressions
  • After an uplift from central->aurora, we will ensure we have all alerts mapped to existing regressions

As this process goes through a cycle or two, we will refine it to ensure less noise for developers and more accuracy in tracking regressions faster.
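The mapping step in that list can be sketched in a few lines.  This is a hedged illustration only: the alert fields and the grouping key are assumptions for the example, not the real alert-manager code.

```python
# Rough sketch (not the real alert-manager tool) of collapsing duplicate
# alerts: the same regression fires once per tree and once per pgo/non-pgo
# build, so alerts are grouped by what actually changed before bugs are filed.
def dedupe_alerts(alerts):
    """Group alerts by (test, platform, percent change), ignoring the tree."""
    unique = {}
    for alert in alerts:
        key = (alert['test'], alert['platform'], alert['change'])
        unique.setdefault(key, []).append(alert)
    return unique

# Illustrative data: the first two alerts are the same regression seen on
# two trees, so only two distinct regressions need bugs.
alerts = [
    {'test': 'tart', 'platform': 'win7', 'change': 4.2, 'tree': 'mozilla-inbound'},
    {'test': 'tart', 'platform': 'win7', 'change': 4.2, 'tree': 'mozilla-central'},
    {'test': 'tsvg', 'platform': 'linux64', 'change': 2.1, 'tree': 'mozilla-central'},
]
```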


status of the winmo automated tests project

I have been posting about this project for a while, so I figured I should give an update. Currently patches are landing and we are starting to get the final set of patches ready for review.

  • Talos: This was the first part of this project and we have checked in 3 of the 4 patches to get Talos TS running. There is 1 patch remaining which I need to upload for review
  • Mochitest: There are 4 patches required for this to work:
    1. Fix tests to not use hardcoded localhost – early review stages
    2. Add CLI options to mochitest for a remote webserver – I need to clean up my patch for review; nearly done
    3. Add devicemanager.py to the source tree – review started, waiting on sutagent.exe to resolve a few minor bugs
    4. Add runtestsremote.py to the source tree – review process started, waiting on other patches

    Good news is all 4 patches are at the review stage

  • Reftest: This requires 4 patches (1 is devicemanager.py from mochitest)
    1. Modify reftest.jar to support http url for manifest and test files – up for review
    2. Refactor runreftests.py – up for review
    3. Add remotereftests.py to source tree – needs work before review, but WIP posted

    Keep in mind that we are still blocked on registering the reftest extension. I also have instructions for how to set up and run this.

  • Xpcshell: this requires 3 patches (1 is devicemanager.py) and is still in WIP stages. There are two pieces we still need to resolve: copying the xpcshell data over to the device and setting up a webserver to serve pages. Here are the two patches to date:
    1. Refactor runxpcshelltests.py to support subclass for winmo – WIP patch posted, close to review stage
    2. Add remotexpcshelltests.py to source tree – WIP patch posted

    I have written some instructions on how to run xpcshell tests on winmo if you are interested.
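As a rough sketch of the first open problem (copying the xpcshell data over), the harness needs to map each local test file to a device-side path before pushing it with devicemanager.py.  The remote root below is a made-up example, not the real device layout:

```python
import posixpath

# Hypothetical device-side test root; the real path depends on device setup.
REMOTE_ROOT = '/tests/xpcshell'

def remote_path(local_path, local_root):
    """Compute where a local test file should land on the device,
    normalizing Windows-style separators along the way."""
    rel = local_path[len(local_root):].replace('\\', '/').lstrip('/')
    return posixpath.join(REMOTE_ROOT, rel)

# e.g. remote_path('/builds/xpcshell/test_a.js', '/builds/xpcshell')
#      -> '/tests/xpcshell/test_a.js'
```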

Stay tuned for updates when we start getting these patches landed and resolving some of our device selection/setup process.


tpan fennecmark – update

A quick update on what it took to get this showing up on the graph server.

As Aki tried to get this going, he kept running into issues. I looked into it and found out that he was using PerfConfigurator.py to generate the config:

python PerfConfigurator.py -v -e /media/mmc1/fennec/fennec -t `hostname` -b mobile-browser --activeTests tzoom --sampleConfig mobile.config --browserWait 60 --noChrome --oldresultsServer graphs-stage.mozilla.org --oldresultsLink /server/bulk.cgi --output local.config

So I do this, run the test, and it spits out a number. The problem is that it was not sending data up to the graph server, as Aki pointed out in the console output (notice there are no links after "RETURN: graph links"):

transmitting test: tzoom: 
		Stopped Sat, 13 Jun 2009 09:58:37
RETURN: graph links


Completed sending results: Stopped Sat, 13 Jun 2009 09:58:37

After poking around I had no luck, so I asked Alice (the talos/graph-server expert). She had me try two things:
1) increase the number of cycles to more than 1 (I used 5 for the trial)
2) edit run_tests.py to include ‘tpan’ and ‘tzoom’ in the list of tests to look for (filed bug 497922)
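In spirit, the second fix amounts to adding the new names to the list of tests talos recognizes.  This is an illustrative sketch only; the variable name and surrounding logic are assumptions, not the actual bug 497922 patch:

```python
# Illustrative sketch: talos silently drops results for tests it does not
# know about, so new test names must be added to its known-test list.
KNOWN_TESTS = ['ts', 'tp', 'tdhtml', 'tsvg']   # made-up subset for illustration

def reportable(results):
    """Keep only the results talos knows how to send to the graph server."""
    return [name for name in results if name in KNOWN_TESTS]

# Before the fix, tpan/tzoom results were dropped:
dropped = reportable(['tpan', 'tzoom'])        # -> []

# The fix, in spirit: teach talos about the new tests.
KNOWN_TESTS += ['tpan', 'tzoom']
kept = reportable(['tpan', 'tzoom'])           # -> ['tpan', 'tzoom']
```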

That did the trick. I got output to the graph server and we are all set. All that remains is to check in the raw code for fennecmark.jar. This will be done in Aki's talos-maemo repository.

tpan – first draft

I got this wrapped up into a patch and last night got it working on a deployed n810!

Here is what it took:

  • creating a shell page that took queryString parameters and controlled fennecmark
  • modifying fennecmark to take control commands
  • modifying fennecmark to report data correctly
  • modifying fennecmark to use a local webpage instead of a page on the internet
  • modifying the talos mobile.config to support fennecmark

The shell page and control commands go hand in hand. I created a very simple .html page which uses a custom DOM event to communicate between unprivileged (content) and privileged (chrome) code:

    var test = "";
    function parseParams() {
      var s = document.location.search.substring(1);
      var params = s.split('&');
      for (var i = 0; i < params.length; ++i) {
        var fields = params[i].split('=');
        switch (fields[0]) {
          case 'test':
            test = fields[1];
            break;
        }
      }
      if (test == "Zoom" || test == "PanDown") {
        // Hand the test name to the privileged extension code via a custom event.
        var element = document.createElement("myExtensionDataElement");
        element.setAttribute("attribute1", test);
        document.documentElement.appendChild(element);

        var evt = document.createEvent("Events");
        evt.initEvent("myExtensionEvent", true, false);
        element.dispatchEvent(evt);
      }
    }
This is a very simple and basic page, but it does the trick. On the fennecmark side, I created a listener which, upon receiving an event, gets the test name to run and kicks off fennecmark inside overlay.js:

var myExtension = {
  myListener: function(evt) {
    // Read the requested test name off the event target and queue the run.
    var test = evt.target.getAttribute("attribute1");
    if (test == "Zoom") { BenchFE.tests = [LagDuringLoad, Zoom]; }
    if (test == "PanDown") { BenchFE.tests = [LagDuringLoad, PanDown]; }

    setTimeout(function() { BenchFE.nextTest(); }, 3000);
  }
};

// Register for the event dispatched by the shell page.
document.addEventListener("myExtensionEvent", myExtension.myListener, false, true);

The next modification to fennecmark fixes the report.js script to conform to the talos reporting standards:

    if (pretty_array(this.panlag) == null) {
      tpan = "__start_report" + median_array(this.zoominlag) + "__end_report";
    } else {
      tpan = "__start_report" + this.pantime + "__end_report";
    }

One quirky thing here: since I only run pan or zoom, I check if pan is null and print zoom; otherwise I just print pan. I suspect as this code nears a checkin state I will make it more flexible.

Lastly, to finish this off, I needed to point fennecmark at a local page, not one on the internet. My initial stab at doing everything had me developing with the standalone pageset, which doesn't live on the production talos boxes. After some back and forth with Aki, I learned what I needed to do and modified pageload.js to do this:

browser.loadURI("http://localhost/page_load_test/pages/www.wikipedia.org/www.wikipedia.org/index.html", null, null, false);

Ok, now I have a toolset to work with. What do we need to do for talos to install and recognize it? I found out that in the .config file there is a section that deals with extensions:

# Extensions to install in test (use "extensions: {}" for none)
# Need quotes around guid because of curly braces
# extensions : 
#     "{12345678-1234-1234-1234-abcd12345678}" : c:\path\to\unzipped\xpi
#     foo@sample.com : c:\path\to\other\unzipped\xpi
#extensions : { bench@taras.glek : /home/mozilla/Desktop/talos/tpan }
extensions : {}

#any directories whose contents need to be installed in the browser before running the tests
# this assumes that the directories themselves already exist in the firefox path
  chrome : page_load_test/chrome
  components : page_load_test/components
  chrome : tpan/chrome

Here, I needed to add a new chrome directory: tpan/chrome. To make this work, I needed to create a .jar file out of fennecmark instead of an unzipped extension in the profile (similar to DOM Inspector). This was a frustrating process until I found Ted's wizard. After running the wizard, copying in my code, tweaking config_build.sh to keep the .jar, and running the build.sh script, I had what I needed.
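The packaging step the wizard automates can be approximated in a few lines of Python.  This is a hedged sketch of jarring up a chrome/ directory, not what build.sh actually does, and the paths in the example are illustrative:

```python
import os
import zipfile

def build_chrome_jar(chrome_dir, jar_path):
    """Zip every file under chrome_dir into jar_path, preserving the
    relative paths that the chrome registration refers to."""
    with zipfile.ZipFile(jar_path, 'w', zipfile.ZIP_DEFLATED) as jar:
        for root, _dirs, files in os.walk(chrome_dir):
            for name in files:
                full = os.path.join(root, name)
                jar.write(full, os.path.relpath(full, chrome_dir))

# e.g. build_chrome_jar('tpan-src/chrome', 'tpan/chrome/fennecmark.jar')
```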

The last step is to add the raw config to mobile.config so fennecmark will run:
tests :

- name: tpan
  shutdown: false
  url: tpan/fennecmark.html?test=PanDown
  resolution: 1
  cycles : 1
  timeout : 60
  win_counters: []
  unix_counters : []
- name: tzoom
  shutdown: false
  url: tpan/fennecmark.html?test=Zoom
  resolution: 1
  cycles : 1
  timeout : 60
  win_counters: []
  unix_counters : []

This leaves me at a good point where we can run fennecmark. Next steps are to solidify reporting, decide where to check in the fennecmark code (not just the .jar), and finally make any adjustments needed to get the installation, running, and reporting well documented and stable.

tpan for fennec – initial overview

My project this week is to add some fennec specific performance measurements to talos. Taras has written a great fennecmark tool that is an extension which measures page load, pan, and zoom.

My work flow is to:

  1. integrate fennecmark into talos as tpan
  2. update tpan to include more comprehensive tests
  3. make tpan simulate slow network latencies

Talos is a great framework that the mozilla crew has developed to measure performance metrics on a regular basis. It is a lightweight toolset and easy to setup/install locally, even for Fennec. I found it very straightforward to set this up and get talos results on my local linux box.

In order to meet the needs of the Fennec project and get this up and running, we are just going to implement step 1 and add the additional steps as projects on the fennec testdev page.
