Typescript 7.0 will compile 10x faster
By rewriting the Typescript compiler in Go, the Typescript team will produce a blazing-fast feature-complete compiler by the end of 2025.
On Tuesday, Microsoft announced that they are building a native port of the Typescript compiler. The port is written in Go and will be released as Typescript 7.0.
Developers working in large projects can experience long load and check times, and have to choose between reasonable editor startup time or getting a complete view of their source code. We know developers love when they can rename variables with confidence, find all references to a particular function, easily navigate their codebase, and do all of those things without delay. New experiences powered by AI benefit from large windows of semantic information that need to be available with tighter latency constraints. We also want fast command-line builds to validate that your entire codebase is in good shape.
The new compiler can already handle large codebases, and it is scheduled to be feature complete by the end of 2025. The team published benchmarks for compiling real codebases; the largest is Visual Studio Code’s, at roughly 1.5 million lines of Typescript. Compilation time for the entire codebase dropped 10.4x, from 77.8 seconds to 7.5 seconds.
During the announcement, the team made themselves available to answer questions on Reddit and Hacker News. The top question on Reddit was the one on everyone’s mind: “Why not $my_favorite_language?”¹
Portability (i.e. the ability to make a new codebase that is algorithmically similar to the current one) was always a key constraint here as we thought about how to do this. We tried tons of approaches to get to a representation that would have made that port approach tractable in Rust, but all of them either had unacceptable trade-offs (perf, ergonomics, etc.) or devolved in to "write your own GC"-style strategies. Some of them came close, but often required dropping into lots of unsafe code, and there just didn't seem to be many combinations of primitives in Rust that allow for an ergonomic port of JavaScript code (which is pretty unsurprising when phrased that way - most languages don't prioritize making it easy to port from JavaScript/TypeScript!).
In the end we had two options - do a complete from-scratch rewrite in Rust, which could take years and yield an incompatible version of TypeScript that no one could actually use, or just do a port in Go and get something usable in a year or so and have something that's extremely compatible in terms of semantics and extremely competitive in terms of performance.
This is effectively the old Worse is Better argument: that it’s best to produce a simple and imperfect program that people can actually use. If you wait until you can ship a conceptually-perfect program, you will spend a lot of extra time solving problems that you don’t need to solve and you will be outmaneuvered by people who regularly ship to users and get feedback.
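To make the portability point concrete: the compiler’s internals lean on shared, mutable, back-referencing object graphs, which are unremarkable in a garbage-collected language like JavaScript or Go but fight Rust’s ownership model. Here is a sketch of the pattern in Typescript; the node shape is invented for illustration and is not tsc’s real API.

```ts
// A toy syntax-tree node with the kind of cyclic references the real
// compiler relies on. Everything here is illustrative, not tsc's actual types.
interface SyntaxNode {
  kind: string;
  parent: SyntaxNode | undefined; // children point back up at their parent...
  children: SyntaxNode[];         // ...while the parent points down at them
  symbol?: SymbolInfo;            // lazily attached by a later pass
}

interface SymbolInfo {
  name: string;
  declarations: SyntaxNode[];     // symbols point back into the tree, too
}

function addChild(parent: SyntaxNode, kind: string): SyntaxNode {
  const child: SyntaxNode = { kind, parent, children: [] };
  parent.children.push(child);
  return child;
}

// Shared ownership and cycles everywhere: parent <-> child <-> symbol.
const root: SyntaxNode = { kind: "SourceFile", parent: undefined, children: [] };
const decl = addChild(root, "FunctionDeclaration");
decl.symbol = { name: "main", declarations: [decl] };
```

A garbage collector shrugs at this. Porting it to Rust means reaching for Rc<RefCell<…>>, arenas, or unsafe code, which is exactly the trade-off described in the quote above.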
I also love that, in their push to improve Typescript’s compilation times, the maintainers ended up implementing the new compiler in Go. Go itself compiles unbelievably fast; even large binaries typically build in a few seconds. So the maintainers will get faster compilation times for the compiler itself, too. Everyone will win here.
Your developer inner loop deserves to be as optimized as possible. Waiting is wasted time. If you’ve only worked in languages that compile quickly, or scripting languages that don’t have a compilation pass, I am nothing but jealous of you.
How painful can this be?
In my first job out of college, I did systems programming on “SLAM” robotics projects (Simultaneous Localization and Mapping). SLAM basically means that autonomous robots wander around and map out their surroundings.
Since the company was a research shop that didn’t productize very much, the codebase was C++ and varied wildly in quality. Some developers wrote everything using templates. Some developers wrote C/C++. Some developers wrote hyper-optimized C with assembly targeted to specific x86 architectures.
I was often stuck working on the super-templated code. I even wrote a bunch of it myself, and pulled in some slow-compiling Boost libraries to boot. When in Rome! Depending on the program, a full recompile took tens of minutes, and incremental builds could easily take over 5 minutes.
Since I was the systems programmer, I wrote the code that actually made our navigation software drive the robot. Some robots had nice API wrappers, but usually I was given a printout of the binary layout of each control packet and had to talk to the robot over a serial port, from a computer we controlled mounted on top of it. And there’d be a bunch of random undocumented stuff: mystery packets that weren’t in the printout, and packets that had to be sent in a specific, undocumented order.
Imagine doing this when it takes 10 minutes from start of compilation to testing the code. I’d write the code and a test. I’d press “compile” and then drive the robot to my shared office and put it on blocks so that it couldn’t go anywhere. Then I could transfer the binary onto a USB stick (don’t forget to eject!), transfer it onto the robot’s tiny computer (don’t forget to eject!), run, and… nothing would happen. Typical. Ten minutes have gone by since I pressed compile. If it takes me 5 minutes to investigate and produce a fix, and an extra 10 to compile and transfer the binary, then I can only test 4 changes an hour. Oh, and 5 other people want access to the robot and they’re all mad that I have it instead of them.
So yeah, fast compilation is really important. The more you optimize your inner loop, the more swings you can take at the ball in a given amount of time.
And I know that frontend developers know that. They optimize the hell out of their inner loops. Look at Vite’s homepage. At the time of writing, almost all of its promises are related to speed.
Instant startup
Fast hot swapping
Optimized builds
If you’re not a frontend developer, I recommend starting a React project in one of the modern app creators like Vite. Holy hell, the experience is amazing. Run the dev server and it’ll open your browser for you, demo app ready to go. Find the “hello world” string, change it to “goodbye cruel world”, and hit save. The change is already in your browser, because Vite watches your files and immediately hot-swaps the updated module in.
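Under the hood, Vite’s dev server watches your files and pushes updated modules to the browser over a WebSocket; a module can opt into handling those updates through the import.meta.hot API. A minimal sketch, with a made-up counter module:

```ts
/// <reference types="vite/client" />
// counter.ts: a tiny module that opts into Vite's hot module replacement.
// The file name and state here are invented for illustration.
export let count = 0;

export function increment(): number {
  count += 1;
  return count;
}

// import.meta.hot only exists under the Vite dev server, so this check
// keeps the HMR handler out of production builds.
if (import.meta.hot) {
  import.meta.hot.accept((newModule) => {
    // Vite swaps the updated module in place instead of reloading the page,
    // which is why your edit shows up in the browser almost instantly.
    console.log("counter module hot-swapped", newModule);
  });
}
```

If a module doesn’t handle its own updates, Vite propagates the update up the import graph and, at worst, falls back to a full page reload. Either way, you wait milliseconds, not minutes.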
So yeah, I know for sure that the parts of the frontend community that use Typescript are going to love this. They know how important it is to optimize their inner loops, and their inner loops just got faster.
¹ The most common requests were Rust (presumably because it’s cool) and C# (presumably because it’s a Microsoft language like Typescript).