Turborepo 1.6
Turborepo 1.6 changes the game: you can now use Turborepo in any project.
- Turborepo in non-monorepos: Seeing slow builds on your project? You can now use Turborepo to speed up builds in any codebase with a `package.json`.
- `turbo prune` now supports npm: Pruning your monorepo is now supported in monorepos using `npm`, completing support for all major workspace managers.
- Faster caching: We've improved the way we handle local file writes, meaning a big speed-up of Turborepo's cache.

Update today by running `npm install turbo@latest`.
Any codebase can use Turborepo
Turborepo helps speed up tasks in your codebase. Until now, we'd built Turborepo specifically for monorepos - codebases which contain multiple applications and packages.
Turborepo is fantastic in monorepos because they have so many tasks to handle. Each package and app needs to be built, linted, and tested.
But we got to thinking: lots of codebases that aren't monorepos run plenty of tasks. Most CI/CD processes do a lot of duplicated work that would benefit from a cache.
So we're excited to announce that any codebase can now use Turborepo.
Try it out now by starting from the example, or by adding Turborepo to an existing project:
Add Turborepo to your project
- Install `turbo`:
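One common way to do this, installing `turbo` as a dev dependency with npm (yarn and pnpm work just as well):

```bash
npm install turbo --save-dev
```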
- Add a `turbo.json` file at the base of your new repository:
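A minimal sketch of what that file might look like; the `build` and `lint` task names should match scripts in your `package.json`, and `dist/**` is a placeholder for your build's output:

```json
{
  "$schema": "https://turbo.build/schema.json",
  "pipeline": {
    "build": {
      // "dist/**" is a placeholder; point this at your build output directory
      "outputs": ["dist/**"]
    },
    "lint": {
      "outputs": []
    }
  }
}
```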
- Try running `build` and `lint` with `turbo`:
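Assuming your `package.json` defines `build` and `lint` scripts, that's:

```bash
turbo run build lint
```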
Congratulations - you just ran your first build with `turbo`. You can try:
- Running through the full Quickstart.
- Checking out our updated Core Concepts docs to understand what makes Turborepo special.
When should I use Turborepo?
Turborepo being available for non-monorepos opens up a lot of new use cases. But when is it at its best?
When scripts depend on each other
You should use `turbo` to run your `package.json` scripts. If you've got multiple scripts which all rely on each other, you can express them as Turborepo tasks:
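A sketch of what that could look like in `turbo.json`, assuming your `package.json` has `build`, `lint`, and `test` scripts (output paths are placeholders):

```json
{
  "$schema": "https://turbo.build/schema.json",
  "pipeline": {
    "build": {
      "outputs": ["dist/**"]
    },
    "lint": {
      // run build before linting
      "dependsOn": ["build"]
    },
    "test": {
      // run build before testing
      "dependsOn": ["build"]
    }
  }
}
```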
Then, you can run:
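Using the task names from the sketch above:

```bash
turbo run lint test
```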
Because you've said that `build` should be run before `lint` and `test`, it'll automatically run `build` for you when you run `lint` or `test`.
Not only that, but it'll figure out the optimal schedule for you. Head to our core concepts doc on optimizing for speed.
When you want to run tasks in parallel
Imagine you're running a Next.js app, and also running the Tailwind CLI. You might have two scripts - `dev` and `dev:css`:
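A sketch of what those scripts might look like in `package.json` (the exact Next.js and Tailwind commands are illustrative):

```json
{
  "scripts": {
    "dev": "next dev",
    "dev:css": "tailwindcss -i ./src/input.css -o ./dist/output.css --watch"
  }
}
```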
Without anything being added to your `turbo.json`, you can run:
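Using the two hypothetical scripts above:

```bash
turbo run dev dev:css
```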
Just like tools such as `concurrently`, Turborepo will automatically run the two scripts in parallel.
This is extremely useful for dev mode, but can also be used to speed up tasks on CI - imagine you have multiple scripts to run:
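For instance, with hypothetical `lint`, `test`, and `check-types` scripts:

```bash
turbo run lint test check-types
```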
Turborepo will figure out the fastest possible way to run all your tasks in parallel.
Prune now supported on npm
Over the last several releases, we've been adding support for `turbo prune` on different workspace managers. This has been a challenge - `turbo prune` creates a subset of your monorepo, including pruning the dependencies in your lockfile. This means we've had to implement logic for each workspace manager separately.
We're delighted to announce that `turbo prune` now works for `npm`, completing support for all major package managers. This means that if your monorepo uses `npm`, `yarn`, `yarn 2+`, or `pnpm`, you'll be able to deploy to Docker with ease.
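For example, pruning a hypothetical `web` workspace for a Docker build looks something like this (as of 1.6, the target is passed via `--scope`):

```bash
turbo prune --scope=web --docker
```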
Check out our previous blog on `turbo prune` to learn more.
Performance improvements in the cache
Before 1.6, Turborepo's local cache was a recursive copy of files on the system to another place on disk. This was slow. It meant that for every file that we needed to cache, we'd need to perform six system calls: open, read, and close on the source file; open, write, and close on the destination file.
In 1.6, we've cut that nearly in half. Now, when creating a cache, we create a single `.tar` file (one open), write to it in 1MB chunks (batched writes), and then close it (one close). The halving of system calls also happens on the way back out of the cache.
And we didn't stop there. Over the past month we've invested significantly in our build toolchain to enable CGO, which unlocks usage of best-in-class libraries written in C. This enabled us to adopt Zstandard's `libzstd` for compression, which gives us an algorithmic 3x performance improvement for compression.
After all of these changes, we're regularly seeing performance improvements of more than 2x on local cache creation and more than 3x on remote cache creation. This gets even better the bigger your repository is, or the slower your device is (looking at you, CI). This means we've been able to deliver performance wins precisely to those who needed them the most.