You should obviously still learn to code (if you want to)
Coding is one of the easy parts of being a software engineer. There are a whole host of coding-related activities and specialties that will be important for the foreseeable future.
The recent “you shouldn’t learn to code” conversation was kicked off by Dario Amodei — the CEO of Anthropic — being asked about jobs in relation to AI systems.
The Future of U.S. AI Leadership with CEO of Anthropic Dario Amodei
I think we’ll be there in three to six months, where AI is writing 90 percent of the code. And then in twelve months, we may be in a world where AI is writing essentially all of the code.
But the programmer still needs to specify what are the conditions of what you’re doing? What is the overall app you’re trying to make? What’s the overall design decisions? How do we collaborate with other code that’s been written? How do we have some common sense on whether this is a secure design, or an insecure design? So as long as there are these small pieces that a human programmer needs to do, the AI isn’t good at… I think human productivity will actually be enhanced.
But on the other hand, I think that eventually all those little islands will get picked off by AI systems. And then we will eventually reach the point where, you know, the AIs can do everything that humans can.
Amjad Masad at Replit took the ball and ran with it.
Amjad Masad, posted to Twitter/X on March 26th 2025
I no longer think you should learn to code.
If you click through to the Tweet and then watch the associated video, you’ll see that his position is moderated a bit. He says that if Amodei is correct and essentially all code will be AI-generated within 12 months, it would be a waste of time to learn how to code. To me this gives him some outs, like “it’s 12 months from now and wow that wasn’t true, obviously that changes my response.” But I’m going to take the Tweet as his most-recent belief since it was in response to the video.
If you're new to the industry: don't worry. It's still useful to know how to code now, even by Replit's standards. At the time of writing, Replit has seven open engineering positions: four Software Engineers, one SRE, one Head of Product Engineering, and one Design Engineer.
The CEO of Nvidia, Jensen Huang, also had a similar take, but on a longer time horizon.
Don’t learn to code: Nvidia’s founder Jensen Huang advises a different career path
Over the course of the last 10 years, 15 years, almost everybody who sits on a stage like this would tell you that it is vital that your children learn computer science. [That] everybody should learn how to program. And in fact, it’s almost exactly the opposite.
It is our job to create computing technology such that nobody has to program and that the programming language is human. Everybody in the world is now a programmer.
Let’s summarize their claims in terms of the timelines:
Amodei doesn't put a timeline on his larger prediction that AI will eventually do everything humans can. But he says in the video that it's the kind of prediction where he will be ridiculed on at least a 10-year time horizon. So this prediction is quite far into the future but quite specific: the full economic output of software engineering jobs will be captured by AI.
Huang thinks that current children should not learn to code. So let's say this is a 12-year prediction (i.e., you should dissuade a precocious 10-year-old from learning how to code if they were interested).
Masad thinks that nobody should learn to code. You can become a non-junior software engineer about 6 years after you begin learning1, so let’s say that he predicts that the net lifetime gain on learning coding will become negative within 6 years.
Yes, these are CEOs who hype AI coding for financial gain. But I’ll be fair to them. If they were 100% confident in their stated positions, how would they behave differently? I’ll assume that these arguments are in good faith and I’ll engage with the arguments directly.
First, let’s take the most aggressive timeline: Masad’s assertion that you should not learn how to code now, given that essentially all code will be written by AI in 12 months.
What does that imply? Let’s say that you’re entering a computer science program this fall. It is obviously important to get the best internship and jobs that you possibly can, regardless of whether you can code. You don’t know how the future of work will change, but “working at a world-class engineering company” is a good bet for catching the next wave. Plus the money is always nice.
For the first 3 years of college, you’re going to study all of the regular courses. Algorithms, operating systems, etc. You learn them well. You minor in something else. But you will complete all of your assignments with generative AI. All of your teachers need to accept that you use generative AI tools on their quizzes and exams.
During your 3rd year, you will land an internship. By then, every internship interview will need to accept generative AI coding tools as part of its screen.
You will use AI agents to code your way through your internship, gaining positive feedback and a reference you can use for your job search.
During your 4th year, you will need to interview with prospective companies using your generative coding interview skills and your reference from your internship.
During your 5th and 6th year, you are a productive member of the organization post-graduation. You can efficiently communicate with AI agents and use generative coding to deploy, and use generated graphs to monitor the system in flight. After 2 years, you get the good news: you are being promoted from Junior Software Engineer to Software Engineer. Congratulations! You have demonstrated your economic value as a software engineer.
And now let's couple this career path with the following: "The year is 2030. All of my company's code is produced by AI. Nobody at my company knows a single line of code we're running. This is the most optimal situation."
Why does this sound so far-fetched? It's because software engineering has a huge "last mile" problem. This is a phrase I'm borrowing from logistics. It's easy to ship freight between shipping hubs because there is a constant flow between them, but the "last mile" to the house or retail store is much more difficult. You actually need to drive through traffic to that house, find where it is, park your truck somewhere, go to the door, and try to collect a signature. Software engineering has its own last mile. Once you have a spec for the code, producing the code can be mechanical. However, generating a clear spec is difficult, and so is determining afterwards that the code is functioning as part of a working system.
In some ways, this reminds me of articles from 10+ years ago declaring that trucking would soon be autonomous. But then many of the major players in the space folded or kept pushing their deadlines back. Even now the furthest along seems to be Aurora. They only operate in Texas, and their website says that they still use vehicle operators. So the impending doom from 8 years ago turned out to be more than 10 years into the future, perhaps far more.
Let's talk through some of the last-mile problems that software engineers face. These are the "islands" that Dario Amodei mentioned in his response. There are a bunch of them, like "gaining organizational consensus," "making decisions that benefit the business," and "managing stakeholders," that engineers spend a lot of time on. Current LLM technology adds extra problems that you need expertise to resolve, like hallucinations, summarization errors, and so on. But I want to focus on the islands that relate to understanding code specifically.
Island one: Security
An incorrect model of security is "if every layer of the stack simply did security correctly, then there would never be a security vulnerability." But security problems often fall across several layers of the stack. Either every layer is functioning properly but the system as a whole is flawed, or several layers are misbehaving together. An example of collective misbehavior: when I was at Etsy, we ported our infrastructure from our own servers to Google Cloud. Shortly after, we received a security bounty report showing that an attacker could log into any account simply by pasting a large payload into the password field. This was several layers of the app failing together2:
We had accidentally configured one of the new Google infra bits to strip headers when they were above some threshold. It was something like 64Kb.
If the auth service received a request to / with no other information, it returned a 200.
The backend code checked for 200 response codes to see whether the password was accepted.
There was no single broken part of the stack; it turned out that the whole stack was wrong.
The Google infra bit needed to be changed to return a 4xx error.
The auth service needed a tighter contract, and additionally needed to return a specific payload like {"status": "ok"}.
The backend code needed to check both the response code and the expected payload.
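To make that last fix concrete, here is a minimal sketch in Python of the fragile check versus the hardened one. The names here (AUTH_URL, password_ok_fragile, password_ok_hardened) are hypothetical placeholders, not Etsy's actual code:

```python
import requests

AUTH_URL = "https://auth.internal/check"  # hypothetical endpoint, for illustration only


def password_ok_fragile(username: str, password: str) -> bool:
    # Fragile: any 200 from the auth service counts as success, even if the
    # request was mangled upstream and the service never saw the credentials.
    resp = requests.post(AUTH_URL, data={"user": username, "password": password})
    return resp.status_code == 200


def password_ok_hardened(username: str, password: str) -> bool:
    # Hardened: require both the status code and an explicit success payload,
    # so a degenerate "empty request -> 200" response no longer logs anyone in.
    resp = requests.post(AUTH_URL, data={"user": username, "password": password})
    if resp.status_code != 200:
        return False
    try:
        body = resp.json()
    except ValueError:
        return False
    return body.get("status") == "ok"
```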
Fixing only one layer doesn't reduce the fragility of the system. The security team really needed to second-guess each layer and determine its correct function. So you can't just point an AI agent at the problem and expect it to generate a holistic solution. If you just asked it to fix the problem, it would very likely just patch the Google infra bit.
And this is just a very simple example. For each security construct your application uses, it's important to understand the actual code constructs and their properties. When the AI tries deleting that CSP, removing encryption from a cookie, or checking the response code instead of the payload on your login form because that is the simplest way to satisfy your prompt, you need the perspective to understand why the check was there and what value it serves.
Island two: System design
At the moment, large software ecosystems are complex systems. They have the interesting property that they can exhibit emergent behavior3. That means that every component of a system can be functioning correctly, and yet the components can still fail when they work together. This means that you can't create a properly-functioning system simply by writing a bunch of components that each work well on their own. It also means that you cannot predict all of the failure cases for the system.
In fact, properly maintaining the system turns into a control problem. Think of your software system as an aircraft, and your graphs and alerts as its instruments. You are simply looking at all of the available data, and when you see the system deviate from the expected behavior, you need to react and move it back within normal operating parameters.
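As a rough illustration of that control loop, here is a minimal sketch in Python. The metric names, thresholds, and helper functions are invented for illustration; the real values only come from knowing your particular system:

```python
import time

# Invented thresholds for illustration; real values come from knowing your system.
ERROR_RATE_CEILING = 0.01      # more than 1% of requests failing is out of bounds
P99_LATENCY_CEILING_MS = 500   # p99 latency above 500ms is out of bounds


def read_instruments() -> dict:
    """Placeholder for querying your metrics store (Prometheus, Datadog, etc.)."""
    raise NotImplementedError


def page_oncall(reason: str, metrics: dict) -> None:
    """Placeholder: hand the deviation to a human who decides how to steer back."""
    print(f"PAGE: {reason}: {metrics}")


def control_loop() -> None:
    # Watch the instruments; when the system drifts outside normal operating
    # parameters, alert a human who can push it back.
    while True:
        metrics = read_instruments()
        if metrics["error_rate"] > ERROR_RATE_CEILING:
            page_oncall("error rate out of bounds", metrics)
        if metrics["p99_latency_ms"] > P99_LATENCY_CEILING_MS:
            page_oncall("latency out of bounds", metrics)
        time.sleep(60)  # check the instruments once a minute
```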
The AI is hampered by only having access to the code. I’m sure future systems will plug into your logs and your source code and your monitoring and boil an entire ocean to tell you that everything is happening within normal parameters. But at the moment, reading the code doesn’t tell you anything about the outside usage patterns. It doesn’t tell you what happened all of the previous times an approach caused bugs at a different company. This is where the human can own the code structure and organization, and go from a middling solution generated by the AI to the best solution for your particular domain.
Island three: Handling on-call issues
This is the flip side of the system design coin. When the complex system is no longer functioning correctly, you need to be alerted to this fact and then understand the problem so that you can fix it.
There is rarely a single reason that things fail. You can't just point an AI at an error message that is flooding your logs and expect it to fix the problem without being plugged into the instrument panel on the plane. Say your database is failing: requests hang and then time out. That could have any cause, from "the database is overloaded because your application is running too hot" to "the database is misconfigured" to "your cloud provider is struggling" to "your new query has an infinite loop." Until the AI system can be plugged into the instrument panel, it won't produce proper fixes for this, and you will need to understand the underlying code and the underlying system well enough to convert that call stack into an actionable plan. And then maybe the AI takes over and happily generates the fix.
Errors happen at specific points in code. When you are on call, it is your responsibility to look at that code and decide what is happening. Very often, this will be in a piece of business logic. Sometimes there are several options for remediating an error: do you ignore the error? Do you catch and log it at another level? Do you need to clean up the input source? Do you need to page the staff engineer on another team because nobody on the on-call rotation worked on that project and there's no cookie-cutter answer? What is the difference between "the best fix" and "what is sufficient at 3am"?
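To make a few of those options concrete, here is a hedged sketch of what they look like at the code level. The function and exception names are invented for illustration; the point is that the same error supports several remediations, and choosing one requires judgment:

```python
import logging

logger = logging.getLogger(__name__)


class UpstreamFormatError(Exception):
    """Hypothetical error: the partner feed sent a record we can't parse."""


def handle(record: dict) -> None:
    """Placeholder for the business logic that actually uses the record."""
    raise NotImplementedError


def process_record_ignore(record: dict) -> None:
    # Option 1: swallow the error. Cheap at 3am, but silently drops data.
    try:
        handle(record)
    except UpstreamFormatError:
        pass


def process_record_log(record: dict) -> None:
    # Option 2: catch and log at this level, so the batch keeps moving and
    # someone can clean up the bad records tomorrow.
    try:
        handle(record)
    except UpstreamFormatError:
        logger.warning("Skipping malformed record: %r", record)


def process_record_escalate(record: dict) -> None:
    # Option 3: let it propagate. The error means the input source itself
    # needs cleanup, and hiding it here just moves the page to someone else.
    handle(record)
```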
Furthermore, is your on-call rotation depending on generative AI to produce solutions to get you back on track? What if it goes down during your outage? Are you just going to wait for it to come back up? Is your CEO going to be happy with that answer?
So which CEO was the most correct?
Let’s go from the most aggressive prediction to the least aggressive.
Amjad Masad at Replit: If you are entering a computer science program today, it would be irresponsible to bet your whole career on learning computer science without learning how to code. In some ways, coding is the easy part of what a software engineer does. There is a whole host of interrelated activities that all inform the code, and are all informed by the code. There is only one conclusion you can draw: it is imperative that you learn how to code. I also think that it's important to learn how to use AI tools to accelerate this process. These tools are only going to get better, so you should learn how they can accelerate you.
Jensen Huang at Nvidia: He puts it on a longer timeframe, where children should not learn to code because they would be better served learning how to think. In this world, everyone becomes a programmer because the systems are so powerful that the most important thing is asking good questions and providing the system with good insight and input. It's still quite a rigid timeline though, and as we see from autonomous trucking, sometimes the devil is in the details. This prediction is plausible but at risk, given the specific timeframe.
Dario Amodei of Anthropic: His view seems the most plausible to me. I agree that coding is a small piece of the value of humans in software engineering. I can also imagine that eventually these AI systems will be so powerful that you can sit them in front of an AI-accelerated product manager, and they will have access to the current codebase and the logging and the monitoring and they can fly the plane by themselves, and there will no longer be humans involved in coding. But this does seem like a long way away. There are likely several generational improvements that need to happen before these systems can all coexist while using the amount of energy we have available on Earth.
So, yes. If you weren't sure whether you should learn to code nowadays, take comfort. A whole host of human activity still surrounds the actual code. So even if the AI were generating a lot of it, humans still need to be in the loop, understanding and refining the code, for the foreseeable future.
1. 4 years of college and 2 years of industry experience for your first promotion.
2. I wasn't on the security team, so this may not be the exact bug. It's close enough though.
3. To give an example: when I was at Google, someone had compiled a cool Google Doc entitled "The GMail Death Ray" that detailed all of the ways that GMail had caused outages in non-GMail services. It was stuff like "we had a rolling deployment, but about 5 minutes after starting, the service would crash. So the service started sloshing from datacenter to datacenter every 5 minutes: starting up, saying it was healthy, and then taking down everything in that datacenter before moving on to the next one."