This post is a quick write-up of a proof of concept for remote Bazel builds of Svelte code.

Remote Builds

For some more context on Svelte and Bazel, you may want to check out my other article.

Remote build execution is an important build performance feature in Bazel. It allows you to outsource application builds to powerful computers running in the cloud. Instead of running on limited local hardware with a handful of cores, you spread the load across hundreds of cores (500+) in the cloud.
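
To make this concrete, here is a minimal sketch of what driving a remote executor looks like from the command line. The endpoint is a placeholder, not a real service, and the flag values are only illustrative:

    # Hypothetical remote execution endpoint; substitute your own service.
    # --jobs controls how many actions Bazel schedules in parallel and should
    # roughly match the capacity of the remote executor pool.
    bazel build //... \
      --remote_executor=grpc://remote.example.com:8980 \
      --jobs=500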

Given enough independent modules in your application, running against 500 cores in parallel can cut build times by an order of magnitude or more. In a sufficiently large application this makes a world of difference!

In addition to remote execution, you also have remote caching. This means developer boxes can pull down modules that were previously built by remote executors. Remote caching is perhaps the most practical feature to add since it can be done cheaply without paying for a large server farm.
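
Enabling just the cache is even simpler. Again a sketch with a placeholder endpoint:

    # Hypothetical cache endpoint; anything speaking Bazel's remote caching API works.
    bazel build //... --remote_cache=grpc://cache.example.com:8980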

An example of this would be a setup where you have your CI server build and cache the results. This works well since CI servers typically build every commit anyway.
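
One way to wire this up, sketched with placeholder endpoints, is to let CI write to the cache while developer machines only read from it:

    # On the CI server: build as usual, but upload the results to the shared cache.
    bazel build //... \
      --remote_cache=grpc://cache.example.com:8980 \
      --remote_upload_local_results=true

    # On developer machines: read from the cache, never write to it.
    bazel build //... \
      --remote_cache=grpc://cache.example.com:8980 \
      --remote_upload_local_results=false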

In my experiment I decided to build a very large Svelte application to measure the impact of building remotely. The application isn’t very interesting since it’s mainly a collection of 2000+ generated components. However, it does the job as a benchmarking application.
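
To give an idea of the setup, generating this kind of benchmark app can be as simple as the sketch below. The paths and the component template are made up for illustration:

    # Stamp out 2000 near-identical Svelte components for benchmarking.
    mkdir -p src/components
    for i in $(seq -w 1 2000); do
      {
        echo '<script>'
        echo "  export let name = \"component ${i}\";"
        echo '</script>'
        echo ''
        echo '<h1>Hello {name}!</h1>'
      } > "src/components/Component${i}.svelte"
    done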

2000 components may seem unrealistic for a single application. In a large monorepo, however, it’s entirely plausible to have 2000 components spread across multiple applications.

Build Metrics

I should point out that the Svelte compiler is super fast. It seems to scale really well even with a very large number of components. The compiler can work on multiple components at once, so as a baseline I ran it against the whole project. In about 17 seconds all 2000+ components were compiled and written to disk. The components are very similar and pretty small, but still, this is very fast given the high number of components.

Next, let’s look at the numbers for the Bazel build.

First I ran a local build to establish a baseline. The first cold build was, as expected, very slow: it took about 2831 seconds (roughly 47 minutes) to build the whole project.

Why is this so slow?

The most likely explanation is that my current Bazel Svelte rules are not well optimized, mainly because a new Svelte compiler instance is started for every file instead of reusing an already running one. Bazel has a concept of persistent workers that may address this, but it would mean wrapping the Svelte compiler in a worker process that stays alive between compilations. In the baseline, by contrast, I compiled all the components in one pass using a single invocation of the Svelte compiler.
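
For reference, once a rule set implements the persistent worker protocol, turning it on is mostly a matter of flags. The Svelte mnemonic below is an assumption about how such a rule might name its actions; my current rules do not support this yet:

    # Hypothetical: only works if the Svelte rules implement Bazel's worker protocol.
    # "Svelte" is an assumed action mnemonic, not something the current rules define.
    bazel build //... \
      --strategy=Svelte=worker \
      --worker_max_instances=4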

Next I created a small build farm consisting of three Mac laptops. Sadly this is all the computing power I could bring to bear in my small apartment in Brooklyn. In a more realistic scenario you would run against much more powerful computers.

In my case I am using BuildFarm for my remote builds. The project’s README makes it pretty straightforward to configure the remote builders.

Given the modest resources of my build farm, I didn’t expect a drastic reduction in build times compared to the baseline. The new build time was 2130 seconds, a reduction of roughly 25%. Since my application consists of many independent modules, a more powerful build farm would cut this number much further.

Next, I went to a different machine and rebuilt the same application. Since the code had already been built and cached remotely, I expected a drastic reduction in build time. Sure enough, the build took only 130 seconds. Essentially, this was an initial build of 2000+ components with a 100% cache hit rate.

Next, let’s make a big code change by updating 200 of the components.

Bazel is famous for incremental builds, so only the affected components should be rebuilt. This worked as expected: building the 200 updated components completed in 350 seconds. Again, this is still subject to the unoptimized Svelte rules, but the build time is proportional to the size of the change.
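
As a sanity check, you can ask Bazel which targets are affected by a given change. The component label below is hypothetical:

    # Which targets in the workspace depend on this (hypothetical) changed component?
    bazel query "rdeps(//..., //src/components:Component0042.svelte)"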

As a final test I pulled the 200 changed components onto a different computer. Think of this as the CI scenario I described earlier: the CI server has already built the changed components, so the developer build should be very fast. The build time when pulling in the delta was only 5.6 seconds.

Earlier, the 17-second baseline from the raw Svelte compiler seemed out of reach, but 5.6 seconds for a cached incremental build is a good start.

Conclusion

The Svelte rules used for this experiment are still experimental and unoptimized. I suspect restarting the Svelte binary per file is where most of the slowness comes from.

Still, I think the experiment provides valuable data points for comparing remote builds vs local builds in a generic context.

As you can see, adding just a remote cache has a big impact on build times. It may be the perfect alternative if you can’t afford a large server farm.

Once the application was built remotely, local build times were drastically reduced. The initial build time may seem scary, but only the servers seeding the cache should be subject to that slowness.

You can download my Svelte branch here if you are interested in trying it yourself.