Anamaria Stoica

My Mozilla Blog


Introducing the Average Time per Builder Report


The Average Time per Builder Report measures the average run time of each builder (e.g. ‘Linux mozilla-central build’, ‘Rev3 Fedora 12 mozilla-central opt test crashtest’) within a branch, computed over a given timeframe. It also calculates the percentage of time the system spends running jobs for each builder and the percentage of successful vs. warnings vs. failed jobs. In addition, all of the information above can be aggregated and filtered by platform (fedora vs. fedora64 vs. leopard vs. linux…), build type (debug vs. opt) and job type (build vs. talos vs. unittest vs. repack).

[Figure: First & last builders sorted by avg. run time (mozilla-central, Oct 1-20)]
[Figure: Time spent per each platform in mozilla-central (Oct 1-20)]

URL & Parameters

The report can be accessed at the following URL:

<hostname>/reports/builders/<branch_name>?(<param>=<value>&)*

where <branch_name> := the branch name (e.g. mozilla-central, try, ...)

Parameters (all optional):

  • format – format of the output; allowed values: html, json, chart; default: html
  • starttime – start time, UNIX timestamp (in seconds); default: endtime minus 24 hours
  • endtime – end time, UNIX timestamp (in seconds); default: starttime plus 24 hours or current time (if starttime is not specified either)
  • tqx – used by the Google Visualization API (automatically appended by the library); relevant only if format=chart; default: '' (empty)
  • platform – comma separated list of platforms; filter and display results only for the listed platforms; allowed values: fedora, fedora64, leopard, linux, linux64, snowleopard, win2k3, win7, win764, xp; default: '' (all)
  • build_type – comma separated list of build types; filter and display results only for the listed build types; allowed values: debug, opt; default: '' (all)
  • job_type – comma separated list of job types; filter and display results only for the listed job types; allowed values: build, repack, talos, unittest; default: '' (all)
  • detail_level – the detail level for the results (builder, job_type, build_type, platform, branch). By default, results are computed per builder; the other detail levels aggregate the results at job type, build type, platform or branch level; allowed values: branch, platform, build_type, job_type, builder; default: builder
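
As a quick illustration, here is a minimal sketch of how such a request could be assembled in Python, using the requests library as the HTTP client; the hostname and parameter values below are placeholders, not part of buildapi itself:

    import time
    import requests

    HOSTNAME = "http://example-buildapi-host"   # placeholder; substitute your buildapi instance

    endtime = int(time.time())                  # UNIX timestamp in seconds
    starttime = endtime - 24 * 60 * 60          # default window: the last 24 hours

    params = {
        "format": "json",                       # html, json or chart
        "starttime": starttime,
        "endtime": endtime,
        "platform": "fedora,fedora64",          # comma separated; omit to get all platforms
        "build_type": "opt",
        "job_type": "unittest",
        "detail_level": "builder",
    }

    resp = requests.get(HOSTNAME + "/reports/builders/mozilla-central", params=params)
    report = resp.json()                        # parsed JSON report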

Features

1. Average Run Time

First and foremost, the report measures the average run time for each builder (detail_level=builder). This way you can see how long individual builds, unittests and talos runs take on average, and compare them.

By setting different filters, it is possible to compare only the builders of a platform, build type or job type of interest. Just to take a couple of examples, it’s very easy to see:

  • which fedora unittest takes the longest (platform=fedora; job_type=unittest; detail_level=builder): ‘Rev3 Fedora 12 mozilla-central debug test mochitests-4/5’ with 0h 59m 45s – see Fedora Unittest Demo

or

  • which platform takes the longest to build (platform=; build_type=debug,opt; job_type=build; detail_level=builder): ‘OS X 10.6.2 mozilla-central nightly’ with 2h 52m 46s – see Platform Builds Demo
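
For reference, the two examples above correspond to requests of the following shape (the hostname is a placeholder):

    <hostname>/reports/builders/mozilla-central?platform=fedora&job_type=unittest&detail_level=builder
    <hostname>/reports/builders/mozilla-central?build_type=debug,opt&job_type=build&detail_level=builder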

For now, the averages are simple arithmetic means, calculated over the number of Build Requests found for each builder within the specified timeframe. The number of Build Requests is displayed in the ‘No. breqs’ column and differs from builder to builder.

As a future improvement, the median could be used instead of the simple mean, or outliers could be removed before computing the mean.
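
A rough sketch of what such an improvement could look like, assuming a builder's run times are available as a plain list of seconds (illustrative only, not the buildapi code):

    from statistics import mean, median

    def average_run_time(run_times, strategy="mean", trim=0.1):
        # run_times: list of run times in seconds for one builder
        # strategy:  "mean"    - simple arithmetic mean (what the report uses today)
        #            "median"  - middle value, robust to outliers
        #            "trimmed" - mean after dropping the top/bottom `trim` fraction
        times = sorted(run_times)
        if strategy == "median":
            return median(times)
        if strategy == "trimmed":
            k = int(len(times) * trim)
            trimmed = times[k:len(times) - k]
            times = trimmed or times            # keep something if the list is very short
        return mean(times)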

2. Percentage of System Run Time

In addition to the average run times, the report measures the percentage of time the system spends doing jobs of a certain type (the ‘PTG Run Time %’ column). This number is computed by summing the run times of all Build Requests of a given builder (or job type, build type or platform, depending on the chosen detail level) and dividing that sum by the total run time of all displayed Build Requests, after all filters (platforms, build types, job types) have been applied.
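
Put differently, once run times have been grouped by the chosen detail level and the filters applied, the percentage can be computed along these lines (a sketch, not the actual buildapi implementation):

    def ptg_run_time(run_times_by_builder):
        # run_times_by_builder: dict mapping builder name -> list of run times (seconds),
        # already restricted to the selected platforms, build types and job types
        totals = {b: sum(times) for b, times in run_times_by_builder.items()}
        grand_total = sum(totals.values())
        # percentage of the total displayed run time spent on each builder
        return {b: 100.0 * t / grand_total for b, t in totals.items()}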

Example: How much time is spent on each Linux builder?

  • Filters: platform=linux; build_type=opt,debug; job_type=build
  • Detail level: builder

As the resulting table shows, when looking only at Linux build builders, the system spends 34.78% of its time doing ‘Android R7 mozilla-central build’ builds, based on 345 Build Requests with an average run time of 33m 21s. The percentage grows with both the number of Build Requests and the average run time.
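
Roughly, those numbers fit together as 345 Build Requests × 2001s (33m 21s) ≈ 690,345s of run time for that builder; dividing that sum by the combined run time of all displayed Linux build Build Requests is what yields the 34.78% figure.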

The example looks at jobs registered between October 1-20, 2010 on mozilla-central. The same example can be accessed on the demo page at Linux Builders Demo.

3. Aggregation

It is possible to aggregate the per-builder results at higher levels by setting detail_level to job_type, build_type, platform or branch.

To make things clearer, let's take an example: How much time is spent on each Snowleopard optimized job type?

  • Filters: platform=snowleopard; build_type=opt; job_type=build,repack,talos,unittest
  • Detail level: job_type

The example looks at jobs registered between October 1-20, 2010 on mozilla-central. See demo page at Snowleopard optimized job types.
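
Conceptually, aggregation just regroups the same Build Requests under a coarser key before the averages and percentages are computed. A minimal sketch, assuming each Build Request is represented as a small dict (a hypothetical shape, for illustration only):

    from collections import defaultdict

    def group_run_times(build_requests, detail_level="builder"):
        # build_requests: iterable of dicts with keys 'builder', 'job_type',
        # 'build_type', 'platform', 'branch' and 'run_time' (seconds)
        grouped = defaultdict(list)
        for breq in build_requests:
            grouped[breq[detail_level]].append(breq["run_time"])
        return grouped

    # e.g. group_run_times(breqs, detail_level="job_type") yields one entry each
    # for build, repack, talos and unittest, ready for averaging.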

4. Filters

There are three types of filters that can be set: platforms, build types and job types. All of them have been used in one or more of the previous examples. For instance, in the ‘How much time is spent on each Snowleopard optimized job type’ example (see 3. Aggregation), the filters are set as follows: platform=snowleopard; build_type=opt; job_type=build,repack,talos,unittest.

5. Percentage of Success vs. Warnings vs. Failure

Another interesting piece of information presented by the report is the percentage of success vs. warnings vs. failure among the registered Build Requests. By sorting the results by these values, you can easily see which tests fail most often, which always fail, and which always pass.

Examples:

  • Most failing builders (note: there are 11 builders with a 100% failure or exception rate – why do they always fail?)
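
As a rough illustration of how those percentages fall out of the raw results, assuming each Build Request carries a final status string (a sketch under that assumption, not the buildapi code):

    from collections import Counter

    def status_breakdown(statuses):
        # statuses: list of result strings for one builder's Build Requests,
        # e.g. ["success", "warnings", "failure", "exception", ...]
        counts = Counter(statuses)
        total = sum(counts.values())
        # percentage of each status; builders with a 100% failure/exception
        # rate become easy to spot by sorting on these values
        return {status: 100.0 * n / total for status, n in counts.items()}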

Demo

Average Time per Builder Report Demo

This is just a demo and works only for mozilla-central, October 1-20, 2010. All links outside the scope of this demo were deliberately disabled. Enjoy!

Note: all table columns are sortable.

Repository

The main module handling the Builders report is buildapi.model.builders.

See Also: Pushes Report, Wait Times Report

Written by Anamaria Stoica

November 10, 2010 at 8:06 am