Performance

Please does away with upfront analysis phases, instead analysing the build graph on the fly. Performance is integral to this approach, so we need a way to track it. Plotted here are graphs of benchmarks we run against each commit. These are used to identify any performance regressions we may introduce, and to verify that optimisation efforts have been successful.

This page is a work in progress and will be improved over time as we implement more benchmarks.

Overall parse performance

This benchmark aims to provide a good indicator of Please's overall parse performance: parsing and executing BUILD files, adding targets to the graph, and resolving dependencies. The test measures the time taken to analyse a large (~300,000 target) synthetic graph. It is primarily a benchmark of asp, the core build graph and the relevant parts of the Go runtime, but touches on everything relating to analysing the build graph.

This benchmark is run five times. Plotted is the mean time across those runs, with the ranges representing the minimum and maximum times.

Peak memory usage

Comparing memory utilisation in Please with Bazel is difficult because the runtimes have very different memory models. Bazel runs on the JVM, so it has a predetermined maximum heap size configurable through JVM options, and the JVM doesn't generally return memory to the OS once it has been allocated, which makes tuning the maximum heap size very important. The Go runtime, on the other hand, is able to resize the heap up and down, limited only by the physical memory available, and more readily frees memory back to the OS over time.

Additionally, Bazel performs a full graph analysis phase before a build, whereas Please rarely analyses the whole build graph. During a build, packages are parsed on demand, and targets are queued for building as soon as they have been parsed. This means that Please generally uses less memory during a build than when querying the whole graph.
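The on-demand approach can be illustrated conceptually. This is a simplified sketch, not Please's actual internals: packages are parsed concurrently, and each target is queued for building as soon as its package has been parsed, with no whole-graph analysis phase up front.

```go
package main

import (
	"fmt"
	"sync"
)

// parseAndQueue is a conceptual sketch of on-demand analysis: each package
// is "parsed" in its own goroutine, and its targets are queued immediately,
// rather than waiting for the whole graph to be analysed first.
func parseAndQueue(pkgs []string) []string {
	queue := make(chan string, len(pkgs))
	var wg sync.WaitGroup
	for _, pkg := range pkgs {
		wg.Add(1)
		go func(pkg string) {
			defer wg.Done()
			// Parse the package, then queue its targets straight away.
			queue <- pkg + ":lib"
		}(pkg)
	}
	wg.Wait()
	close(queue)
	built := []string{}
	for target := range queue {
		built = append(built, target) // a builder would consume these concurrently
	}
	return built
}

func main() {
	targets := parseAndQueue([]string{"//src/core", "//src/parse", "//src/build"})
	fmt.Println(len(targets), "targets queued")
}
```

In the real system the builder consumes the queue while parsing is still in progress, which is why peak memory stays lower than a full upfront analysis would require.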

Regardless, graphed here is the peak resident memory allocated by the OS to the Please process during the same parse performance test above, as measured using time(1). The actual heap size is usually much smaller than this: once Please has stopped interpreting the BUILD files, that memory is eventually returned to the OS.
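The "maximum resident set size" that time(1) reports comes from getrusage(2), which a process can also query for itself. A minimal sketch (Linux/macOS only; the helper name is ours):

```go
package main

import (
	"fmt"
	"syscall"
)

// peakRSSKB returns the peak resident set size of the current process,
// via getrusage(2) -- the same figure /usr/bin/time reports as the
// maximum resident set size. Units are kilobytes on Linux, bytes on macOS.
func peakRSSKB() (int64, error) {
	var ru syscall.Rusage
	if err := syscall.Getrusage(syscall.RUSAGE_SELF, &ru); err != nil {
		return 0, err
	}
	return ru.Maxrss, nil
}

func main() {
	rss, err := peakRSSKB()
	if err != nil {
		panic(err)
	}
	fmt.Printf("peak RSS: %d\n", rss)
}
```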

Provide for performance

At the core of resolving dependencies for targets is the provideFor() method. This method resolves named dependencies, handling the require/provide semantics where necessary. It is called every time a target depends on another, so it is critical to Please's performance.
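The require/provide semantics can be sketched roughly as follows. This is a simplified illustration, not Please's actual implementation: a target may provide alternative targets for named requirements (e.g. a proto rule providing a language-specific library), and a dependent that requires one of those names gets the provided target substituted; otherwise it depends on the target itself.

```go
package main

import "fmt"

// BuildLabel identifies a target; a plain string in this sketch.
type BuildLabel string

// Target is a simplified stand-in for a Please build target that can
// provide alternative targets for named requirements.
type Target struct {
	Label    BuildLabel
	Provides map[string]BuildLabel // e.g. "go" -> "//proto:api_go"
}

// provideFor resolves what t provides for a dependent with the given
// requirements: matched provides are substituted, otherwise the target
// itself is returned unchanged.
func (t *Target) provideFor(requires []string) []BuildLabel {
	var ret []BuildLabel
	for _, r := range requires {
		if label, ok := t.Provides[r]; ok {
			ret = append(ret, label)
		}
	}
	if len(ret) == 0 {
		return []BuildLabel{t.Label}
	}
	return ret
}

func main() {
	proto := &Target{
		Label:    "//proto:api",
		Provides: map[string]BuildLabel{"go": "//proto:api_go", "py": "//proto:api_py"},
	}
	fmt.Println(proto.provideFor([]string{"go"})) // the Go library is substituted
	fmt.Println(proto.provideFor(nil))            // no requirements: the target itself
}
```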

The benchmark is run millions of times per commit, and the average (median) result is plotted here. There are four series on this graph.

There are very few cases where a target will provide more than a couple of targets, so the time complexity of this method as the number of matches increases isn't a big concern.

For more information on the require/provide mechanism, see the docs here.