Is C++ backwards compatible or not?
I went down a rabbit hole this week exploring why so many people want C++ to break backwards compatibility (which is already something it does with every release).
I randomly came across a thread on the C++ subreddit about backwards compatibility. It seems like an innocent question but it nerd sniped me for a few hours.
Why keep back compatibility for so many years? - pnarvaja
My question/rant is about why the c++ language pushes so hard to mantain backwards compatibility with every release and never do a breaking realease and get rid of headers (like most modern languages) to benefit from faster compilation times and make the programming more comfortable? This will also imply to never bother about declaring a struct/class/enum before using it since the compiler would take care of that too. Most old programs dont even update to newer c++ version anyways, and those that do get affected by it have the option to keep with the old version.
And my first reaction was confusion. C++ is definitely not backwards compatible between releases! I haven’t worked professionally in C++ in 15 years, and even I know they remove features with each release. They removed trigraphs1, the most cursed language feature of all time! Compilers like Visual Studio aren’t even binary-compatible across different major releases! What are people talking about?
But then I thought for a second. C++ is definitely backwards compatible, right? If I installed all of the dependencies from one of my C++ projects from 2010, I’m confident that they would compile and run successfully. All of these deprecated features were deprecated because few people used them.
So I started digging more. What kinds of things do people want to be backwards compatible?
Let’s look at OP’s post to start. He is specifically complaining that C++’s new module system is not the only way to reference code in other files. Why did C++ need a new module system? Because the old way was terrible. Previously, C++ files included each other using the #include preprocessor directive. It was basically the simplest system imaginable. You just said “include this file” and C++’s preprocessor would replace the #include statement with the entire preprocessed contents of that file. As you can imagine, common includes might be processed thousands of times each in a large project. It is so slow. There are some tricks, like precompiled headers, that you could do, but they weren’t a full solution. You know the XKCD compiling meme? There’s real truth to it.
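To make the textual-inclusion point concrete, here’s roughly what the traditional approach looks like (a minimal sketch; the file and type names are made up for illustration):

```cpp
// widget.h -- a traditional header. The include guard only prevents
// double inclusion within one translation unit; every .cpp file that
// includes this header still re-parses its full text (and everything
// it includes in turn).
#ifndef WIDGET_H
#define WIDGET_H

#include <string>

struct Widget {
    std::string name;
    int id;
};

#endif // WIDGET_H

// main.cpp -- before compilation proper even starts, the preprocessor
// pastes the entire text of widget.h (and <string>, and so on) here.
#include "widget.h"

int main() {
    Widget w{"demo", 1};
    return w.id;
}
```

Multiply that pasting by every translation unit in the project and you get the build times people complain about.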
So C++ made a new module system that does what a sane module system would do: it makes a binary version of the module’s exported information so that the compiler can efficiently reference it.
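Here’s roughly what that looks like with C++20 modules (again a sketch; the file extension and build steps vary by compiler, and the module name is invented):

```cpp
// widget.cppm -- compiled once into a binary module interface.
module;            // global module fragment: old-style #includes go here
#include <string>

export module widget;

export struct Widget {
    std::string name;
    int id;
};

// main.cpp -- the compiler reads the compiled interface instead of
// re-parsing header text for every file that needs it.
import widget;

int main() {
    Widget w{"demo", 1};
    return w.id;
}
```

The interface gets parsed once and reused in compiled form, which is where the compile-time win comes from.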
They’re obviously never doing away with the old way, though. Was there any other interesting feedback from the community? I dug through more forum posts and found a variety of complaints: the language needs to be simpler, it needs to be safer, it should drop backwards compatibility with C, and it should fix the grammar to remove ambiguity. A lot of people had very specific niche complaints, and there was a common complaint that it’s still too easy to write unsafe programs.
From there, I wondered what the people who ran C++ thought about backwards compatibility. I found this talk from 2023 where Bjarne Stroustrup explains why they can’t really break backwards compatibility, and what they can do about it. The explanation of why they can’t make breaking changes is pretty simple: C++ projects are unbelievably huge. There are safety-critical codebases with millions and millions of lines of code. There’s just no feasible way to rewrite that mess.
So he goes over the three main options as he sees them. I’ll note that he seems to feel a lot of pressure specifically to make C++ safer, possibly in response to Rust. So a lot of his talk is from the perspective of “how do we adopt these advances in safer languages?” and “why are C++ people so obsessed with writing hybrid C/C++ code?2”
1. “Fuck it, let’s break backwards compatibility.” Sorry, there’s too much existing C++ code.
2. Let’s add another language on top of C++, much like C++ is built on C. This is interesting but a non-starter.
3. Let’s add “safety profiles,” i.e. allow users to define deterministic rules for what the compiler will and will not accept. For example: maybe one type of rule disables surprising implicit type conversions.
Safety profiles were kind of interesting. One of the classic C++ complaints is “you needed to decide what subset of the language you would use, since the language specification was far too large to use everything.” Safety profiles seem to try to perform the judo move of leaning into this reality and just allowing people to enforce the subset of the language that they are using.
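To give a flavor of what that implicit-conversion rule might catch, here’s some ordinary C++ that compiles today (a hypothetical illustration; the actual profile rules and syntax are still being designed):

```cpp
#include <vector>

int main() {
    double ratio = 0.75;
    int count = ratio;          // double -> int: silently truncates to 0

    int big = 300;
    char tiny = big;            // narrowing: 300 does not fit in a char

    std::vector<int> v(10);
    bool has_items = v.size();  // size_t -> bool: "is it nonzero?", probably not what was meant

    (void)count; (void)tiny; (void)has_items;
    return 0;
}
```

Under a profile that bans conversions like these, you’d presumably have to spell out a cast; brace initialization already rejects all three of these today.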
The idea seems to have some problems that make it less useful than Rust’s approach. The arguments are very technical and very specific to the design choices. And I’m trying to keep it light here3!
I’m not going to lie: reading all of this made me glad that I left C++ behind 15 years ago. I know that many people (aerospace contractors, game developers, etc.) are forced to work in C++ for their profession. But if you’re starting a project from scratch and there’s not a specific reason that you must choose C++, run (don’t walk) to the nearest other language that maps to your domain.
1. C (and by extension C++) supported environments with extremely limited character sets. How limited were they? You could not type #include "file.h" because it's not possible to type the "#". So they made a substitution table allowing three-character sequences to stand in for the untypable characters. So you could type ??=include "file.h" and the compiler would treat it like #include "file.h".
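For the morbidly curious, a complete trigraph-era program looked something like this (it only compiles with trigraph support, e.g. an older standard mode or GCC's -trigraphs flag):

```cpp
??=include <cstdio>          /* ??= stands in for #                    */

int main()
??<                          /* ??< stands in for {                    */
    std::printf("hi??/n");   /* ??/ stands in for \, so this is "hi\n" */
    return 0;
??>                          /* ??> stands in for }                    */
```

The string literal is the really cursed part: trigraph replacement happened before anything else, so ??/ turned into a backslash even inside quotes.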
2. I took 100 points of emotional damage from being reminded that this is a constant motif in the C++ community.
3. You know, chill. Relaxed. Like any 1000-word essay on backwards compatibility of C++ should be.