For the last year, I have been focused on making sure we look at the alerts generated by Talos. For the last six months, I have also looked more carefully at the uplifts we do every six weeks. In fact, we wouldn't generate alerts when we uplifted to Beta because we didn't run enough tests to verify a sustained regression within a given time window.
Let's look at the data, specifically the volume of alerts:
This is a stacked graph: you can interpret it as Firefox 32 having a lot of improvements and Firefox 33 having a lot of regressions. What I find more interesting is how many performance regressions are fixed or introduced when we go from Aurora to Beta. There is minimal data available for Beta. This next image compares alert volume for the same release on Aurora and then on Beta:
One way to interpret the graph above is that we fixed a lot of regressions on Aurora while Firefox 33 was there, but for Firefox 34 we introduced a lot of regressions.
The above is just my interpretation of the data. Here are links to a more fine-grained view of it:
As always, if you have questions, concerns, praise, or other great ideas, feel free to chat via this blog or via IRC (:jmaher).