<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Client/Server]]></title><description><![CDATA[A staff software engineer's view on current events, tech trends, and the occasional rant.]]></description><link>https://www.clientserver.dev</link><image><url>https://substackcdn.com/image/fetch/$s_!iozn!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64fe9671-5317-4979-abfa-13fa906f9bcb_1024x1024.png</url><title>Client/Server</title><link>https://www.clientserver.dev</link></image><generator>Substack</generator><lastBuildDate>Sat, 04 Apr 2026 13:02:52 GMT</lastBuildDate><atom:link href="https://www.clientserver.dev/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Jacob Voytko]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[clientserver@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[clientserver@substack.com]]></itunes:email><itunes:name><![CDATA[Jacob Voytko]]></itunes:name></itunes:owner><itunes:author><![CDATA[Jacob Voytko]]></itunes:author><googleplay:owner><![CDATA[clientserver@substack.com]]></googleplay:owner><googleplay:email><![CDATA[clientserver@substack.com]]></googleplay:email><googleplay:author><![CDATA[Jacob Voytko]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[How could Soham Parekh have improved his overemployment scheme?]]></title><description><![CDATA[Soham Parekh worked four tech jobs simultaneously but typically got caught at each company. 
Thought experiment: what would it take to get away with it?]]></description><link>https://www.clientserver.dev/p/how-could-soham-parekh-have-improved</link><guid isPermaLink="false">https://www.clientserver.dev/p/how-could-soham-parekh-have-improved</guid><dc:creator><![CDATA[Jacob Voytko]]></dc:creator><pubDate>Mon, 07 Jul 2025 12:02:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/64d2c4c9-3e98-4fcd-a105-70a44a8ac681_6000x4000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Thought experiment: you wake up and you are Soham Parekh. You want to extract as many resources out of the tech sector via overemployment as you can. What is your approach to becoming as overemployed as possible?</p><p>If you&#8217;re not familiar with the Soham Parekh story, Suhail Doshi <a href="https://x.com/Suhail/status/1940287384131969067">put out a PSA</a> that Soham has been running an overemployment scam by simultaneously working at multiple startups. He did this remotely from India. <a href="https://techcrunch.com/2025/07/03/who-is-soham-parekh-the-serial-moonlighter-silicon-valley-startups-cant-stop-hiring/">TechCrunch reports</a> that he had been sweetening the deal by taking a low salary and high equity at each of the jobs. He was caught once members of the Y Combinator community started comparing notes.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Client/Server! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>A <a href="https://news.ycombinator.com/item?id=44448461">Hacker News thread on the topic</a> was flooded with people saying that they hired him. In summary, he excels at <a href="https://leetcode.com/">LeetCode-style interviews</a>. But once it came time to work, he would give wild excuses about why he couldn&#8217;t meet deadlines. He would skip meetings, ship pull requests after massive delays, and get nothing done until he was eventually terminated.</p><p><strong>I think we can do better.</strong> He repeatedly got caught by companies, and since his compensation packages were always equity-heavy &#8212; which typically vests with a 1-year cliff &#8212; he was getting underpaid per job.</p><p>Let&#8217;s break down his strategy:</p><ul><li><p><strong>Focus on startups</strong>. These are companies that are growing, hungry to hire, and small enough to make rapid decisions.</p></li><li><p><strong>Be a good candidate</strong>. Master the skill of interviewing. Convince people that you are one of the top candidates.</p></li><li><p><strong>Look enticing</strong>. Proactively ask for a lowball salary offer in exchange for equity.</p></li><li><p><strong>Add uncertainty</strong>. Claim that you are working on your US visa renewal, but raise some doubt about whether it will go through.</p></li><li><p><strong>Make a good first impression</strong>. Attend any in-person meetings or orientations.</p></li><li><p><strong>Surf the long tail</strong>. Obviously your work suffers because you&#8217;re working at 4ish places at once. 
The company notices eventually and fires you. Cash the paychecks and start interviewing somewhere else.</p></li></ul><p>If you truly want to become overemployed in the tech industry, there are certainly better ways to do this.</p><p>First, trying to fleece startups is a bad idea. They actually need things to get done! And everyone knows everyone in a small company, so it will quickly become clear that you&#8217;re not producing any work. As companies grow, they develop more places to hide. I know several people who ended up in situations at Google where they didn&#8217;t need to do anything for 3-6 months. I&#8217;m curious what the record is. It was hard to engineer; usually it happened when a large team catastrophically failed and the organization started picking apart its remains.</p><p>Second, the output of his work could be compared against the output of other engineers in the company. He did full stack work, but startups employ tons of people who know about full stack work. So his output isn&#8217;t keeping up, and anyone can load up GitHub and see that he is slacking.</p><p>So how can we do better? By becoming a consultant in a third-party integration tech like Salesforce and targeting businesses with somewhere in the ballpark of 100-1000 employees. If you do this right, you could hold at least 2 gigs at the same time while raking in retainers. Yes, this does imply that you need to become an expert in Salesforce, but hear me out. This company size is the sweet spot: they might have decided that they definitely need Salesforce, but realized that they don&#8217;t have the in-house expertise to do a Salesforce integration, and they have had bad experiences paying Salesforce contracting fees to do parts of the integration. </p><p>First, you&#8217;re not going to do the integration yourself. You&#8217;re going to be the subject matter expert on a team that is building the integration so that there is no knowledge lost when you leave the team. 
You are going to host training sessions and produce documentation about how things should be done, but obviously you&#8217;ll leave out enough details that there&#8217;s room for error. You&#8217;re going to figure out which engineers on the project are completely clueless about Salesforce and assign them those parts of the project. Give yourself deliverables that are small and achievable. So at each meeting everyone will be underwater, and you will have your part done and have suggestions for how everyone can proceed &#8212; namely, by telling them everything you left out of the training and documents. If people ask you for help, just schedule a training two days out and invite everyone. You&#8217;re just always trying to produce artifacts, and a meeting is a good artifact.</p><p>The goal is to drag the project out while doing minimal work and giving it the illusion of forward progress.</p><p>Towards the end of the project, make sure that everyone on the team is trying to use <a href="https://www.clientserver.dev/p/salesforce-bets-on-storytelling-over">Agentforce</a>. That should create some great messes that you can fix.</p><p>And now the magic: your retainer! This will require some experimentation, but you want the company to buy as many hours as possible while using as few hours as possible. You might think, &#8220;I don&#8217;t want to sell 2 days per month and then have to work that,&#8221; but don&#8217;t be afraid! You already set the precedent that Salesforce work is really slow. You won&#8217;t be doing much, and if they give you something real to do, you can sign a separate contract for the overflow work. </p><p>Your goal is to pick up as many retainers as possible and have to work as few of them as you can. When you first start doing this gig, you might have to put some work in to generate buzz. You&#8217;ve gotta jump-start the consulting pipeline somehow. 
But you will eventually find companies that sign your retainer contracts and keep you around for a year or two &#8212;  or more.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Client/Server! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Meta’s OpenAI raid reminds me of the self-driving arms race]]></title><description><![CDATA[We&#8217;ve seen what happens when tech giants fight over researchers. Hint: it didn&#8217;t end quietly.]]></description><link>https://www.clientserver.dev/p/metas-openai-raid-reminds-me-of-the</link><guid isPermaLink="false">https://www.clientserver.dev/p/metas-openai-raid-reminds-me-of-the</guid><dc:creator><![CDATA[Jacob Voytko]]></dc:creator><pubDate>Thu, 03 Jul 2025 12:00:34 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a240e389-461f-43b7-8099-d4b8fed85ee5_8192x5461.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Meta has been in the news this week for <a href="https://www.wired.com/story/openai-meta-leadership-talent-rivalry/">poaching researchers across the industry, including from OpenAI</a>.</p><blockquote><p>The news comes on the heels of a major announcement from Zuckerberg. 
On Monday, the Meta CEO sent a memo to staff introducing the company&#8217;s new superintelligence team, which will be helmed by Alexandr Wang, formerly of Scale AI, and Nat Friedman, who previously led GitHub. The list of new hires also included a number of people from OpenAI, including Shengjia Zhao, Shuchao Bi, Jiahui Yu, and Hongyu Ren. OpenAI&#8217;s chief research officer, Mark Chen, told staff that it felt like &#8220;someone has broken into our home and stolen something.&#8221;</p></blockquote><p>On his brother&#8217;s podcast, Sam Altman says that the offers have <a href="https://www.youtube.com/watch?v=mZUG0pr5hBo&amp;t=1364s">approached $100 million per year</a>, a number also reported by <a href="https://www.wired.com/story/openai-meta-leadership-talent-rivalry/">Wired</a>.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Client/Server! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Every single article on this has the same 9 facts, and I certainly don&#8217;t have anything to add. So let&#8217;s go in a different direction: what happened the last time that tech talent poaching was in the news? 
How did it end up?</p><p>The biggest talent poaching spree in recent memory was in the self-driving car industry in the late 2010s<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>.</p><p>Back then, everyone was convinced that self-driving cars were close to being a solved problem, and they were spending as much money as possible to be the industry winner by the beginning of the 2020s.</p><p><a href="https://www.reddit.com/r/SelfDrivingCarsLie/comments/kvygic/the_autonomous_car_delusion_is_built_on_greed_in/?utm_source=chatgpt.com">Google had a self-driving program called Chauffeur.</a> The engineers on Chauffeur believed that their work could be worth billions, so at one point they rebelled. They threatened to leave the company together and get outside venture funding to build their own efforts and reap the rewards themselves.</p><blockquote><p>So Page authorized the creation of a new kind of compensation at Google. The idea was to motivate the team with the sort of incentives they would enjoy at their own startup. Chauffeur's members would remain employees, with regular salaries. The real money came in their bonuses. Every four years, Google would determine the value of the project and pay each teammate a given percentage of the sum, based on their role. Anyone who left before the four-year mark would get nothing.</p></blockquote><p>The problem, of course, is that the valuation for Chauffeur/Waymo skyrocketed. So they got huge payouts (one engineer named Anthony Levandowski<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> reportedly got $120 million) and in 2016, with the next payout 3 years into the future, everyone had enough &#8220;fuck you&#8221; money that there was no reason to wait for the next payout. Many left for more immediate paydays. 
I heard rumors at the time that some people were just taking their $20 million payouts and retiring, but I can&#8217;t find any official sources that confirm that idea. So treat that like the unsubstantiated rumor it is!</p><p>Many of the engineers left. Anthony Levandowski, for example, founded a self-driving trucking company called Otto, which was acquired by Uber that same year. A few years later, Uber would end up <a href="https://investor.uber.com/news-events/news/press-release-details/2020/Aurora-is-acquiring-Ubers-self-driving-unit-Advanced-Technologies-Group-accelerating-development-of-the-Aurora-Driver/default.aspx">selling their technology to Aurora</a>, which is &#8212; as of 2025 &#8212; starting to run autonomous trucking routes.</p><p>But yeah, Uber was hellbent on having self-driving technology. They famously partnered with CMU to create an <a href="https://www.cmu.edu/news/stories/archives/2015/february/uber-partnership.html">advanced research lab near the campus</a>, and then proceeded to <a href="https://www.theverge.com/transportation/2015/5/19/8622831/uber-self-driving-cars-carnegie-mellon-poached">poach 50 people who were involved with CMU&#8217;s autonomous vehicles program</a>.</p><p>By this point, everyone wanted a piece of the action. <a href="https://gaywheels.com/2018/06/apple-is-poaching-top-talent-to-win-the-self-driving-car-game/">Apple poached a Waymo employee</a>. <a href="https://electrek.co/2025/06/21/tesla-former-head-ai-warns-against-believing-self-driving-solved/">Tesla poached Andrej Karpathy</a> from OpenAI for their own autonomous vehicles efforts. 
Traditional car companies began to feel like they were falling behind and started acquiring startups, like <a href="https://www.vox.com/2016/9/17/12943214/sebastian-thrun-self-driving-talent-pool?utm_source=chatgpt.com">GM acquiring Cruise for a billion dollars</a>.</p><p>At the peak of this jostling, Google accused Uber of coordinating with Anthony Levandowski to steal a bunch of company secrets. Google had proof that Levandowski had downloaded tens of thousands of documents just before leaving Google. A civil suit and a criminal case followed. At the end of it, <a href="https://www.npr.org/sections/thetwo-way/2018/02/09/584522541/uber-googles-waymo-settle-case-over-trade-secrets-for-self-driving-cars">Waymo got $245 million in Uber stock in the civil trial</a>, and at the end of the criminal trial Anthony Levandowski was <a href="https://www.justice.gov/usao-ndca/pr/former-uber-executive-sentenced-18-months-jail-trade-secret-theft-google">sentenced to 18 months in prison</a>.</p><p>This is what&#8217;s exciting about Meta poaching employees from OpenAI. All of these corporations slugging it out in the news while we get to eat some popcorn and watch. Does history foreshadow what will happen? Once people get their $100 million payouts, will they feel any pressure to continue working at Meta? Will these companies start accusing each other of stealing industry secrets? No way, this is AI. Something weirder is going to happen. And I&#8217;m excited to see what it is.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>The self-driving car industry existed before all of the events in this post, but a large portion was driven by researchers trying to meet DARPA contracts and challenges. At a robotics conference in the early 2010s, I got a chance to ride in a self-driving car project run by a university team. 
It drove close to a large branch that was overhanging the unpaved parking lot, and brushed against it. It was a bit unnerving, especially since nobody was in the driver&#8217;s seat. It felt like being in a car that was rolling uncontrollably down a hill. A researcher explained to me that it didn&#8217;t do great with fine obstacles like that branch, but that they had to route it near the branch because otherwise it would have too much trouble making a turn so that it could return to the start. My reaction was some version of, &#8220;wow, that&#8217;s all very impressive. So remind me again, how do you stop this in an emergency?&#8221; But since it moved at all and stayed on its course, this was one of the better efforts I had heard of at the time.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>This name will come up again later.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[/r/golang draws a line on AI-generated projects]]></title><description><![CDATA[New rules aim to stop the subreddit from becoming a dumping ground for effortless LLM-generated tools.]]></description><link>https://www.clientserver.dev/p/rgolang-draws-a-line-on-ai-generated</link><guid isPermaLink="false">https://www.clientserver.dev/p/rgolang-draws-a-line-on-ai-generated</guid><dc:creator><![CDATA[Jacob Voytko]]></dc:creator><pubDate>Mon, 30 Jun 2025 12:00:38 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f97c5ed2-366a-43f4-acde-154c7346cec9_5184x3888.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>/r/golang recently expanded the scope of their AI policy to cover projects that are submitted as links. 
Historically, they have allowed people to post their own personal projects to the subreddit if they might be interesting to the community in some way.</p><p>Here&#8217;s the most notable section of the new policy:</p><blockquote><p><strong>Amount of AI Coding</strong></p><p>If your purpose is for review or feedback, please be clear about <em>the amount of AI coding used</em>, and if relevant, the amount of effort put into the project, which should be reflected in the project itself.</p><p>Using AI coding tools is not a disqualification for posting. However, in order to align the effort of creating a post-worthy project with reviewing it. <strong>the subreddit will remove posts for "vibe-coded" projects with little human input</strong>. This is not because such projects are "bad", but precisely because they are so easy to put out they are no longer noteworthy.</p></blockquote><p>What happened?</p><p>The subreddit has been flooded with low-effort posts recently. <a href="https://www.reddit.com/r/golang/comments/1ljvq23/this_subreddit_is_getting_overrun_by_ai_spam/">Here is a post</a> that lists some of the threads that people are complaining about. Since some of the links have been deleted and more may be in the future, here they are&#8230;</p><ul><li><p>A &#8220;production ready&#8221; high-speed logger that couldn&#8217;t even be benchmarked because it had a memory leak.</p></li><li><p>Something called &#8220;hands-on Go&#8221; that was an AI-slop repo.</p></li><li><p>A terminal-based notetaking app that seems to work, but was posted to Reddit with an LLM summary.</p></li><li><p>A monitoring tool <a href="https://www.reddit.com/r/golang/comments/1lj91r0/comment/mziah1o/">with astroturf support</a>. 
It seems to be a real tool, but again the Reddit post is obviously generated by AI.</p></li><li><p>A web framework that looks vibe-coded.</p></li><li><p>A &#8220;production ready&#8221; 7000-line message queue that was ostensibly implemented in a single commit.</p></li></ul><p>What&#8217;s interesting is that these all have different degrees of slop. Sometimes the objection is straightforward: the damn thing was obviously AI-generated and doesn&#8217;t work. But sometimes the objection is just that the post to Reddit was written with an LLM. It just goes to show: you can&#8217;t separate technical and social problems<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>.</p><p>This adds problems to one of the most thankless jobs: Reddit moderator<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>.</p><p>LLM-powered chatbots have obviously existed on Reddit as long as LLMs have been available, and bots have been creating accounts and spamming reposts or tried-and-true formulas for as long as I can remember<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>.</p><p>It&#8217;s already unpaid, most of your work is invisible, the visible parts of your work mostly come when the community is angry, and <a href="https://www.theverge.com/2023/6/20/23767848/reddit-blackout-api-protest-moderators-suspended-nsfw">Reddit can take your mod position at the drop of a hat</a>. And now you need to scale your effort to catch LLM slop flooding your subreddit, while still dealing with all of your previous moderation duties.</p><p>If you&#8217;re a moderator of a subreddit like /r/Golang, how much do you care about bots in your comments? If you have a bunch of bots spewing gibberish, that&#8217;s pretty bad. 
But if they&#8217;re staying on topic, not making obvious errors, and following the community rules? Is it your job to play Turing Test detective? No. Past a certain quality level, it&#8217;s Reddit&#8217;s job to stop them and not yours. You&#8217;re already doing them a favor by moderating. Also, I can say from 18 years of experience that the bottom of a Reddit thread has always been a hive of scum and villainy. So just by virtue of being long-time Redditors, mods develop some immunity to whatever trash is happening if the discussion is largely on topic and nobody is getting reported.</p><p>However, something has changed with the vibe-coding era. LLMs have gotten good enough that they can generate a fully-functional website, library, anything really. They are fantastic at zero-to-one implementation. They can generate the READMEs, they can create documentation, they can write the Reddit post summarizing it. And it&#8217;s all terrible.</p><p>I like how the /r/golang moderators approached the situation. It&#8217;s clear that something needed to be done, and they took a thoughtful approach. It&#8217;s also clear that LLMs are here to stay &#8212; especially for code generation &#8212; so by encouraging projects to be open about their LLM usage they are leaning into the fad instead of trying to uselessly block it.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Client/Server! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>One of my favorite essays of all time, &#8220;<a href="https://gwern.net/doc/technology/2005-shirky-agroupisitsownworstenemy.pdf">A group is its own worst enemy</a>,&#8221; does a deep dive on online moderation and comes up with this as the conclusion.</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>I don&#8217;t know how y&#8217;all do it.</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Over/under 50%, how many false positives would you get if you banned everyone who ever asked about sex or &#8220;how do you feel about these current events?&#8221; on /r/askreddit.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[There’s a lot missing from the Google Cloud outage incident report]]></title><description><![CDATA[Google&#8217;s incident report points to a null pointer bug, but many possible contributing factors like executive pressure remain unmentioned.]]></description><link>https://www.clientserver.dev/p/theres-a-lot-missing-from-the-google</link><guid 
isPermaLink="false">https://www.clientserver.dev/p/theres-a-lot-missing-from-the-google</guid><dc:creator><![CDATA[Jacob Voytko]]></dc:creator><pubDate>Mon, 16 Jun 2025 12:03:08 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a443e21f-cc84-42c8-a44e-521f8b9c50f8_810x540.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Google Cloud Platform had a 3-hour outage this past week.</p><p>Google&#8217;s <a href="https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1SsW">Incident Report</a> has a clear explanation of how the incident happened.</p><blockquote><p>On May 29, 2025, a new feature was added to Service Control for additional quota policy checks. This code change and binary release went through our region by region rollout, but the code path that failed was never exercised during this rollout due to needing a policy change that would trigger the code. As a safety precaution, this code change came with a red-button to turn off that particular policy serving path. The issue with this change was that it did not have appropriate error handling nor was it feature flag protected. Without the appropriate error handling, the null pointer caused the binary to crash. Feature flags are used to gradually enable the feature region by region per project, starting with internal projects, to enable us to catch issues. If this had been flag protected, the issue would have been caught in staging.</p><p>On June 12, 2025 at ~10:45am PDT, a policy change was inserted into the regional Spanner tables that Service Control uses for policies. Given the global nature of quota management, this metadata was replicated globally within seconds. This policy data contained unintended blank fields. Service Control, then regionally exercised quota checks on policies in each regional datastore. 
This pulled in blank fields for this respective policy change and exercised the code path that hit the null pointer causing the binaries to go into a crash loop. This occurred globally given each regional deployment.</p><p>[&#8230;]</p><p>Within some of our larger regions, such as us-central-1, as Service Control tasks restarted, it created a herd effect on the underlying infrastructure it depends on (i.e. that Spanner table), overloading the infrastructure. Service Control did not have the appropriate randomized exponential backoff implemented to avoid this.</p></blockquote><p>There has been a lot of discussion online, outlining different ways that the Google engineers must be the worst engineers that have ever walked the earth. I mean, there&#8217;s even someone to blame, right? The error handling is bad! There&#8217;s no feature flag, and the post promises that a feature flag cannot be misused and fixes problems like this! They didn&#8217;t implement randomized exponential backoff! Where are the tests?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>This is a classic blunder. In reality, you lack enough context to make a determination solely by reading the incident log. I&#8217;ve <a href="https://www.clientserver.dev/p/citigroup-and-extreme-overpayments">talked before</a> about how it&#8217;s important to step back and imagine the circumstances surrounding major outages.</p><p>What do I mean? Let&#8217;s ask ourselves some questions:</p><ul><li><p>Despite huge investment, Google Cloud is far behind Microsoft Azure and Amazon Web Services in marketshare. How much pressure do you think that executives feel to turn that around relative to prioritizing code quality complaints?</p></li><li><p>The incident report identifies this change as part of development of a new feature. 
Are we 100% sure that this was business as usual, or is it possible that this team, their group, or their product area has been under increased deadline pressure from executives?</p></li><li><p>The developers chose to enable this feature via a configuration method that replicates within seconds, instead of going through full region-by-region rollouts with feature flags. Were there reasons they might have preferred the faster approach?</p></li><li><p>A few days before the incident, Google <a href="https://www.cnbc.com/2025/06/10/google-buyouts-search-ads-unit.html">solicited voluntary buyouts</a> in other organizations within the company. The last time they did this, it preceded layoffs. Can you imagine individuals or teams feeling extra pressure from this, even in a different org? Can you imagine that pressure increasing when hearing people describe the &#8220;tough job market&#8221; in tech? Do they have reason to believe that their performance rating would be impacted if they didn&#8217;t launch these features?</p></li><li><p>The post talks about feature flags as a silver-bullet way to avoid an outage. When I was on Google Docs from 2010 to 2015, we shipped several bugs with bad feature flag eligibility logic. Have feature flags become idiot-proof?</p></li><li><p>The service did not have proper error handling in this code path. How rare is it for code paths in Service Control to lack proper error handling? Is it normalized to expect that errors will be seen and reverted in minutes with minimal splash damage?</p></li><li><p>Was it reasonable for the reviewer and the code author to have any context on whether there was top-level error handling in this part of the service?</p></li><li><p>How many techniques are as important as &#8220;randomized exponential backoff&#8221; in keeping the entire system working? How common is it for these techniques to have uneven implementation coverage across the entire suite of services? 
As new techniques are identified as critical, are old systems upgraded to use them?</p></li><li><p>How easy or hard was it to understand that the service did not implement randomized exponential backoff? Was it controlled by some BCL spaghetti that nobody knew how to read?</p></li></ul><p>Sure, maybe it&#8217;s the case that the engineers who made the change are out of line. Maybe the executives were chill, there was no deadline pressure, they recklessly rejected feature flags even though they knew the risk, they are definitely not worried about layoffs and the job market more broadly, feature flags are impossible to misuse, the reviewer and code author went to extraordinary lengths to add a code path that did not have error handling, and specific people have been negligent, failing to implement a single specific technique despite everyone knowing they weren&#8217;t doing it.</p><p>Personally, I find it unlikely that every single one of these circumstances is true. I find it likely that there was deadline pressure<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>. I find it likely that they had a lot of competing priorities, and it was difficult to prioritize system modernization/hardening versus new product work. I find it likely that engineering decisions are starting to change in parts of the company, ever so slightly, even if they won&#8217;t admit it, as the spectre of layoffs looms over their heads.</p><p>Are all of these things true? Are any of these things true? I have no idea. I lack the complete story because Google would never put any of it in the incident report, even if they are all relevant to why the outage happened. 
If an external agency &#8212; some kind of FAA for software outages &#8212; investigated the incident and identified all of the factors, they would <em>absolutely</em> document the surrounding circumstances, including things like executive pressure and job-safety concerns.</p><p>But again, I know that I lack the complete story. And accordingly I&#8217;m going to give everyone the benefit of the doubt until we learn more, if we ever do.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Client/Server! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Many will be unsurprised that one (of many) possible citations is the URL <a href="https://news.ycombinator.com/item?id=44275870">https://news.ycombinator.com/item?id=44275870</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Especially given what I hear from Googlers I know.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[A staff engineer's advice to high schoolers who 
want to work in tech]]></title><description><![CDATA[Are you a high school student interested in working in tech? Here's my advice as someone who has worked at everything from FAANG to research labs, and has been in the industry since 2008.]]></description><link>https://www.clientserver.dev/p/a-staff-engineers-advice-to-high</link><guid isPermaLink="false">https://www.clientserver.dev/p/a-staff-engineers-advice-to-high</guid><dc:creator><![CDATA[Jacob Voytko]]></dc:creator><pubDate>Fri, 13 Jun 2025 17:46:18 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d4d51555-dd86-43b9-b003-ffbc5101cf2e_1024x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Photo credit: Lisette Voytko, taken when I was a sophomore in college. I was on the train of life, looking backwards through rose-colored glasses.</em></p><p>A friend of mine solicited advice for high school students who have the goal of working in tech. I had planned to skip my second post of this week, but I liked the writing prompt. So here you go!</p><p>Background: I&#8217;m 39. I&#8217;m a staff engineer. I&#8217;ve been coding since I was 14. I currently work at Hinge, the dating app. Previously I&#8217;ve worked at Google, Etsy, and Sarnoff Corporation (now part of SRI). I went to a state school for college, and got internships/gigs at Google Summer of Code (for Boost C++) and Johnson and Johnson. </p><p>Everyone will have different experiences and different opinions, but these are mine.</p><p>Anyways, in no particular order:</p><h2>The default advice is still good</h2><p>If you don&#8217;t know how to program, start learning. I am so jealous of the wealth of educational resources available today.</p><p>Take a programming class as soon as you can.</p><p>In college, try getting an internship in tech. Failing that, try to get the most tech-adjacent job you can. I worked for my school&#8217;s IT department until I got real internships. 
Internships are part of the full-time engineer recruitment process, so do your best to excel if you land one.</p><p>You should be able to point to real technical work that you&#8217;ve done. Did you make a small game in Unity? Did you write the software that hosts your own blog? Did you build a face recognition app by following a tutorial? That&#8217;s awesome! Make a short YouTube video that demos it, so you have something to share with people. You&#8217;ll want some way to show that you went above and beyond your coursework, which is a sign to employers that you have great potential.</p><h2>Dive all the way down</h2><p>Learn everything you can about core computing stacks while you still have dedicated time to learn. There is almost always a positive return on that knowledge. Learn about CPU architecture, DMA, programming language design, network protocols, and operating systems. I think about these things all the time.</p><h2>If you have a dream job, make sure it&#8217;s real</h2><p>Do you have a specific job that you want? Then you should be able to prove that it actually exists. Is it a company? Do they hire new grads or interns? Do they only recruit interns from specific schools, or would you have a realistic shot if you applied?</p><p>You should be able to find real job postings for entry-level positions. If you can&#8217;t, then isn&#8217;t that a bad sign? Yes!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Client/Server! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>Pay attention to industry trends</h2><p>Trends and fads are real even if they&#8217;re temporary. They shape companies. They affect where funding is allocated. People will be hired and fired because of them. Entire companies will exist to chase them. In 4 years there will be something else. There always is. Pay attention to the industry.</p><p>For example, the big trend right now is AI, and especially LLMs and agents. In 10 years it&#8217;ll be something else. If you&#8217;re in college and hoping to get an internship, you should be able to explain to an interviewer why LLMs and agents are in the news so much. Ideally you&#8217;d have some experience with them.</p><h2>Increase your luck surface</h2><p>Try presenting your projects at meetups. Talk to people about them. Participate in online communities. Get to know your professors and learn which ones have industry contacts. Before I graduated, I had 2 different full-time job offers because professors referred me to companies in the area. I got my job at Etsy because someone saw a talk I gave about a <a href="https://github.com/jakevoytko/colorblind">colorblind-simulating Twitterbot I wrote</a>.</p><p>There are many dimensions along which you can increase your luck surface. One of the best is to attach yourself to something that&#8217;s growing. It can be an industry, it can be a company, it can be anything really. If something is growing, you grow along with it.</p><p>At the end of the day, the computer industry is made up of people. 
If you get someone excited at the prospect of hiring you, don&#8217;t you think they&#8217;d rather look at your resume than the 1200 LLM submissions they got when they posted their job on the job board?</p><p>A lot of increasing your luck surface will involve encountering roadblocks and failure. That&#8217;s okay. Try to learn from every interaction.</p><h2>Be humble</h2><p>The instant you stop learning is the instant that you start to become obsolete. You will need to learn and relearn in this industry. Many people have something to teach you. Figure out what it is. Get a brain dump from them. I do this all the time.</p><h2>There&#8217;s a whole world outside of tech</h2><p>When I remember my past, I remember the moments I experienced and not the tech that I built. Even when I think about work, I think about the moments that I had with my coworkers while we built something.</p><p>There&#8217;s an entire world outside of your computer monitor. There are a practically infinite number of things you could do. Travel. Bike. Run. Go to watch parties. Go to a trivia night. Fall in love.</p><p>If you have spare credits in college, prefer to take the class that is interesting instead of the class that is easy.</p><h2>To restate an earlier point</h2><p>There is no substitute for comprehension. Comprehension is not a substitute for mastery.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Client/Server! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Go won't make error handling easier. Cursor says "just press tab"]]></title><description><![CDATA[Go closes the door on error syntax proposals, citing lack of consensus. Behind the scenes, LLMs quietly fill the gap.]]></description><link>https://www.clientserver.dev/p/go-wont-make-error-handling-easier</link><guid isPermaLink="false">https://www.clientserver.dev/p/go-wont-make-error-handling-easier</guid><dc:creator><![CDATA[Jacob Voytko]]></dc:creator><pubDate>Mon, 09 Jun 2025 12:02:04 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/e4f2623c-150a-471a-a6c1-9794faaf6470_1262x733.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Gopher logo license<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p><strong>Programming note</strong>: If you don&#8217;t know a lot about Golang, check out the Appendix below for all the background you need.</p><p>Go <a href="https://go.dev/blog/error-syntax">recently announced</a> that they will not pursue syntactic improvements for error checking. They managed to do this without addressing the elephant in the room, which is that LLM code generation has gotten to the point where syntactic sugar doesn&#8217;t matter as much as it used to.</p><blockquote><p><a href="https://go.dev/blog/error-syntax">[ On | No ] syntactic support for error handling</a></p><p> Still, no attempt to address error handling so far has gained sufficient traction. 
If we are honestly taking stock of where we are, we can only admit that we neither have a shared understanding of the problem, nor do we all agree that there is a problem in the first place. With this in mind, we are making the following pragmatic decision:</p><p><em>For the foreseeable future, the Go team will stop pursuing syntactic language changes for error handling. We will also close all open and incoming proposals that concern themselves primarily with the syntax of error handling, without further investigation.</em></p></blockquote><p>There are also a few introspective questions that have not featured heavily in the Online Discourse, but are actually really important to hammer out.</p><blockquote><p>We don&#8217;t really know how much the issue is the straightforward syntactic verbosity of error checking, versus the verbosity of good error handling: constructing errors that are a useful part of an API and meaningful to developers and end-users alike. This is something we&#8217;d like to study in greater depth.</p></blockquote><p>This is an important point that gets lost in the discussion. Do errors fail to serve both end-users and developers? If you&#8217;ve used a lot of third-party libraries in Golang, you know that the answer is &#8220;lol yes.&#8221; Since Go errors are just types, you&#8217;re at the mercy of every single layer of your stack to thoughtfully provide and handle errors. This includes all of the layers that your dependencies transitively pull in. There&#8217;s nothing more fun than a random error bubbling up from a library &#8212; stack trace not included &#8212; that says something like &#8220;config error&#8221; and you&#8217;re just screaming &#8220;What failed to initialize? Do I need to set another config option? Is something misspelled? 
<a href="https://www.reddit.com/r/xkcd/comments/mglkd/wisdom_of_the_ancients/">What did you see</a>?&#8221;</p><p>If you think to yourself, &#8220;why don&#8217;t they just do $myfavoritesolutiontothisproblem,&#8221; I highly recommend skimming the <a href="https://github.com/golang/go/issues/40432">umbrella issue</a> that Ian Lance Taylor assembled to gather more serious proposals into one place. Why do they need an umbrella issue? Because people keep coming up with the same proposals over and over again. They&#8217;ve been debated. They&#8217;re all missing something.</p><p>I&#8217;ve only been a professional Golang coder for 2 years, but I&#8217;ve been coding Golang since before the 1.0 release. It&#8217;s my favorite language for web servers.</p><p>I was hyped for the initial <code>check</code> proposal. But now, with the advent of LLM tools, it doesn&#8217;t feel that pressing.</p><p>Go is a great language for code generation. <a href="https://fly.io/blog/youre-all-nuts/#but-its-bad-at-rust">I&#8217;m not the only one who has noticed this</a>. LLMs are excellent at automatically providing the correct error handling for the context. Do you just pass them through? Just press Tab. Do you wrap them? Just press tab.</p><p>So now you get the best of all worlds: the error handling is out in the open, and the LLM generates all of the checks. Very little effort is actually expended &#8220;writing&#8221; the code. In a world where it takes the same amount of effort to type ? as it does to type 4 lines of error handling, do you really want to waste your time with syntactic proposals?</p><p>I did wonder briefly why the Go team didn&#8217;t mention this. But pretend that you&#8217;re designing a language. It would actually be weird if you did mention it! You&#8217;re just desining a language. The tooling that exists to generate the language might not exist in 5 years. 
What if everyone decided that AI was a money-losing investment and LLM code generation tools started being charged at cost instead of subsidized to win market share? That puts you in an awkward position as a language designer. You designed your language with the assumption that tooling would go in a specific direction, and now it&#8217;s going in another direction.</p><p>So obviously, you just put out a blog post that says &#8220;we&#8217;re not going to talk about syntactic changes anymore.&#8221;</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Client/Server! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h3>Appendix: background for non-Go programmers</h3><p>Are you a programmer who doesn&#8217;t know Golang? Here&#8217;s all the context that you need!</p><p>Go has two kinds of errors: panics and error values.</p><p><code>panic</code> is an unchecked exception. It behaves how you might expect: the stack is automatically unwound until either the panic is handled or it reaches the top level and the application is terminated. 
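That unwind-and-recover behavior fits in a few lines. A minimal sketch (the <code>safely</code> helper is my own illustrative name, not a standard library function):

```go
package main

import "fmt"

// safely runs f and converts any panic back into an ordinary error,
// stopping the stack unwinding at this frame instead of letting it
// terminate the program.
func safely(f func()) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered from panic: %v", r)
		}
	}()
	f()
	return nil
}

func main() {
	fmt.Println(safely(func() { panic("boom") })) // recovered from panic: boom
	fmt.Println(safely(func() {}))                // <nil>
}
```

The deferred <code>recover()</code> turns the in-flight panic into a value you can log or wrap, which is exactly the trick web frameworks use.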
To give webservers as an example, you might wrap all of your HTTP handlers in a function that recovers from the panic, logs the error associated with the panic, and then returns a &#8220;500: Internal Server Error&#8221; to the request.</p><p>Go intends these to be reserved for truly exceptional events, like dereferencing nil pointers.</p><p>Nothing stops you from using these as exceptions in your program. However, if you <code>panic()</code> for a non-fatal error, the Go community will call your code &#8220;not idiomatic.&#8221; When the Go community says &#8220;not idiomatic,&#8221; they are punishing you. This is meant to be on par with excommunication from your church, or being left alone naked on a desert island. All of this is to say: you&#8217;re only supposed to panic for truly fatal problems.</p><p>The other type of error return is the <code>error</code> interface. This is an extra value returned from any function that can fail. By convention, it is the last return value. These are just types that implement an interface and don&#8217;t have any special handling. There are even some fun &#8220;gotcha!&#8221;s around Go&#8217;s nil interface semantics.</p><p>By design, Go&#8217;s errors are intended to be checked with manual if checks. There is no linguistic support for this; just use the regular control flow structures available in the language. In practice, almost every check just looks like this:</p><pre><code><code>a, err := somepackage.SomeFunction()
if err != nil {
    return nil, err
}</code></code></pre><p>Of course, the handling can get arbitrarily complex. For example, you might examine the error type to determine whether an I/O error is fatal or whether the request can be retried. Or you might return a wrapped error to accumulate a stack trace for yourself<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>.</p><p>In theory you could forget to check the errors, but there are linters that force you to.</p><p>So, dear programmer who doesn&#8217;t know Go, the question facing the community is: &#8220;should errors in Go be handled as if they were normal types, or should errors have extra language support?&#8221;</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>The Go gopher was designed by <a href="https://www.tiktok.com/@renee.french?lang=en">Renee French</a>.<br>The design is licensed under the Creative Commons 4.0 Attribution license.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Errors don&#8217;t have stack traces by default in Go. Yes, sometimes it&#8217;s a problem.</p></div></div>]]></content:encoded></item><item><title><![CDATA[What ESLint learned from their 9.0 release]]></title><description><![CDATA[A year after shipping several breaking changes in a single release, ESLint published a retrospective on the process. 
Their document has good takeaways that are applicable beyond frontend.]]></description><link>https://www.clientserver.dev/p/what-eslint-learned-from-their-90</link><guid isPermaLink="false">https://www.clientserver.dev/p/what-eslint-learned-from-their-90</guid><dc:creator><![CDATA[Jacob Voytko]]></dc:creator><pubDate>Tue, 03 Jun 2025 12:02:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!iozn!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64fe9671-5317-4979-abfa-13fa906f9bcb_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In April 2024, <a href="https://eslint.org/">ESLint</a> released version 9.0, which contained major breaking changes.</p><ul><li><p>They changed their config format.</p></li><li><p>They removed support for really old Node.js versions.</p></li><li><p>They changed defaults in their config file format.</p></li><li><p>They changed a lot of stuff in their rules engine.</p></li></ul><p>This was a hard upgrade. Since there were so many breaking changes at once, it was difficult to pinpoint a single problem. It&#8217;s not hard to find <a href="https://www.reddit.com/r/reactjs/comments/1jdt0vx/eslint_v9_migration_lessons_learned_the_hard_way/">Reddit</a> or <a href="https://news.ycombinator.com/item?id=39972086">Hacker News</a> threads of people complaining about it, but I would just skim <a href="https://www.neoxs.me/blog/migration-to-eslint-v9">Yacine Kharoubi&#8217;s blog post about upgrading</a>. The upgrade can be extensive depending on your setup.</p><p>Recently, <a href="https://eslint.org/blog/2025/05/eslint-v9.0.0-retrospective/">ESLint released a retrospective</a> on the upgrade, with the benefit of a year between the initial release and now. 
The retro obviously focuses on the ESLint project, but there are great takeaways even if you&#8217;re not a frontend developer.</p><p><strong>Users are lazy</strong></p><p>Since the upgrade was so extensive, it affected multiple classes of users:</p><ol><li><p>People who produce ESLint rules and plugins.</p></li><li><p>End users who run ESLint as part of their work, which may also include rules and plugins.</p></li></ol><p>ESLint bet that the plugin and rule developers would be proactive about the change, but expected the end users to lag more. But in reality, nobody was proactive.</p><p>There are a trillion concepts that say the same thing. &#8220;Inertia.&#8221; &#8220;Users don&#8217;t read.&#8221; &#8220;Defaults are powerful.&#8221; There&#8217;s an argument that a proactive plugin developer could get involved early in the development process and provide good advice.</p><p>But in reality, who wants to develop against an alpha API? Can you imagine spending a few weeks upgrading your code to account for some hairy change, and then the implementation detail changes and you need to spend those few weeks again? I think the ESLint developers deserve a lot of credit for trying to approach the upgrade thoughtfully, but they would be wise to pay attention to this in future upgrades that affect rules and plugins.</p><p><strong>Stay the course</strong></p><p>I really appreciated that they stuck to their initial migration timeline.</p><blockquote><p>Some suggested delaying the v9.0.0 release to give the ecosystem more time to catch up. We decided against this for several reasons:</p><ul><li><p>Users weren&#8217;t required to upgrade immediately. ESLint v8.x remained fully compatible with the ecosystem and continued receiving bug fixes, so those who didn&#8217;t want to upgrade could continue using it.</p></li><li><p>It was unclear how long such a delay would last. How could we determine when the ecosystem had &#8220;caught up&#8221;? 
Should we keep v9.0.0 in limbo while only providing bug fixes for v8.x indefinitely? That didn&#8217;t seem like a viable solution.</p></li><li><p>The <code>@eslint/eslintrc</code> package already offered substantial compatibility for eslintrc plugins and shareable configs with v9.0.0, addressing the most common issues we encountered.</p></li><li><p>We had communicated the upcoming changes for 18 months, with increasing reminders as we neared the first alpha release. Although adoption was slower than expected, we saw momentum building and wanted to maintain that pace. Delaying the release could have sent the wrong message and allowed further delays to snowball.</p></li></ul></blockquote><p>They faced pressure to delay once the community started to popularize the idea that the upgrade was difficult. But without the launch deadline, there wouldn&#8217;t be any pressure applied to the lazy developers. Most people would just wait until they were forced to do the upgrade.</p><p><strong>Support mitigates pain</strong></p><p>A lot has been written about how painful this upgrade is. But first, I want to highlight how thoughtfully the ESLint team approached the upgrade. First, they had a policy of updating the migration guide in the same pull request where changes were introduced.</p><blockquote><p>We also introduced a new process: the <a href="https://eslint.org/docs/latest/use/migrate-to-9.0.0">v9.0.0 migration guide</a> was updated in the same pull request as each new feature. Previously, we wrote the guide after all features had been merged, increasing the chance of omissions. This new approach helped ensure nothing was missed.</p></blockquote><p>They also had a great support story for their community. They supported version 8 and 9 side-by-side for six months. After that, they had a formalized support policy and an official commercial partner for companies that needed support for version 8 longer than ESLint could provide it. 
They also improved the migration tools, documentation, and front-line support over time. This allowed them to make migrations easier for people who adopted later.</p><p>Six months is not a lot of time in the grand scheme of things. Hell, Python 2 and Python 3 coexisted for a decade. But maintaining out-of-date versions is a major drain on resources, especially for major releases like this. So I commend them for defining an explicit support path.</p><p><strong>Avoid bundling lots of breaking changes together</strong></p><p>It&#8217;s easy to argue in favor of lumping breaking changes together. &#8220;You only go through the pain once. We don&#8217;t want to have a reputation of always breaking everything on releases. We can always support the two side-by-side for a specific period of time.&#8221; The ESLint project even had their own project-specific reasons. &#8220;Well, we can&#8217;t launch language plugins without all of these features.&#8221;</p><blockquote><p><strong>Too many breaking changes</strong></p><p>The biggest mistake was bundling too many breaking changes into a single major release. A key example was introducing the new configuration system alongside the <a href="https://eslint.org/blog/2023/09/preparing-custom-rules-eslint-v9/">rule API changes</a>, both of which were necessary steps to enable <a href="https://eslint.org/docs/latest/extend/languages">language plugins</a>. This often made it difficult to pinpoint the cause of issues with existing plugins. Many assumed the new configuration system was to blame, creating a narrative that it was &#8220;broken&#8221; or &#8220;not ready.&#8221; In reality, the rule API changes were just as disruptive.</p><p>We were so focused on the configuration system rollout that we underestimated the impact of the rule API changes.</p></blockquote><p>This is a good reminder that the project is about the users. What incentivizes them to upgrade? What do they gain out of it? 
Is it a slog that they are doing for our benefit? Are the benefits concrete for us and abstract for them?</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Client/Server! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[You have the time to run a technical newsletter]]></title><description><![CDATA[A dad's playbook for running a newsletter when you have very little free time.]]></description><link>https://www.clientserver.dev/p/you-have-the-time-to-run-a-technical</link><guid isPermaLink="false">https://www.clientserver.dev/p/you-have-the-time-to-run-a-technical</guid><dc:creator><![CDATA[Jacob Voytko]]></dc:creator><pubDate>Thu, 29 May 2025 12:02:42 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7cd17c59-fb62-4e40-b7d3-84ac8e48a4e2_1278x852.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This is the 6 month anniversary of this newsletter&#8217;s first subscriber. As of today, I have 267 subscribers. It&#8217;s not much, but I didn&#8217;t have a social media following when I started, and I built it on very little time per week.</p><p>Starting this newsletter was a big deal for me. It was the first serious hobby I had since my daughter was born in 2023. And even now, I only get an hour or two per day to myself. 
But I&#8217;ve still managed to publish one to two times per week.</p><p>I&#8217;m so grateful for all of my readers. To pay it back, this post has everything that I&#8217;ve learned about running a newsletter on minimal free time. Even if you can only cobble together a few hours a month, you can still publish one post per month.</p><p>&#8220;How do you know that I have the time to run a newsletter? You don&#8217;t know my life.&#8221; Buddy, if you have the time to argue with strangers on the internet, I don&#8217;t want to hear it.</p><p>&#8220;Why would I start a newsletter?&#8221; That&#8217;s a separate post. Right now, I&#8217;m assuming that you are motivated and just want a playbook. Anyways, here we go!</p><h2>Where should I publish my work?</h2><p>You have limited time. Pick an existing platform instead of trying to write your own. You have plenty of options: Substack, Ghost, Wordpress, Medium, etc. Just do some basic research. For example, Substack has good discovery features that can get you extra subscribers, but its code blocks are terrible. So if you were going to share a lot of code in your posts, you&#8217;ll want to use a different platform.</p><h2>Who is my audience?</h2><p>The most important thing isn&#8217;t &#8220;what,&#8221; but &#8220;who.&#8221; Pick an audience and write each post to them. I personally imagine a junior engineer when I&#8217;m writing. They understand the fundamentals. But I have experience that they don&#8217;t, so it provides a natural premise for explanation. It&#8217;s also fun for more senior people to follow along. Finance nerds subscribe to Money Stuff even though it contains a lot of really basic explanations. Similarly, technical people aren&#8217;t scared away when the target audience is more junior than them.</p><h2>How should I get my topics?</h2><p>News is the best story source. There&#8217;s always something new happening. 
There might be a slow news week, but it&#8217;ll always pick up again.</p><p>You will want to scan for stories as quickly as possible. Here&#8217;s what you&#8217;re going to do:</p><ul><li><p>Make a list of all of the subreddits, social media accounts, industry publications, etc. that you can think of. Basically anything that has news on your beat.</p></li><li><p>Whenever you start writing, open up every single one of these sources.</p></li><li><p>Is there anything that you find interesting? Is anything funny? Is there anything that you know more than average about? These are great story opportunities.</p></li></ul><p>Of course, there are plenty of things to write about besides news. You could pick a <a href="https://www.clientserver.dev/p/markdown-is-inevitable">facet of a technology that you find interesting</a>. You could tell a <a href="https://www.clientserver.dev/p/war-story-the-hardest-bug-i-ever">war story</a>. Whatever it is, you want to keep the topic focused. Don&#8217;t write a complete guide to every frontend technology; you&#8217;ll never finish. But is there an interesting use case for HTMX that people might not realize? A cool trending GitHub project that people don&#8217;t know about yet? Those sound achievable in a few hours.</p><p>Another useful format I&#8217;ve found is &#8220;be the guy who actually reads things.&#8221; For example, for <a href="https://www.clientserver.dev/p/how-did-googles-illegal-ad-monopoly">this post</a> on Google&#8217;s illegal ad monopoly, I went through the court filing, took the time to understand the arguments in the case, and then explained why it was interesting to the reader.</p><h2>How do I write a post?</h2><p>Write an outline. It&#8217;s the first thing you should do. I write most of my posts across two days. On the first day, I research topics and then write an outline.
The next day, I turn the outline into a post, and then schedule it for the next morning and go to bed.</p><p>There are two reasons that you should write an outline. First, moving items around a bulleted list is really easy. Rewriting a few pages of text is a waste of time. Second, I find that the outline loads the topic into my brain. When I finally sit down to write the post, it&#8217;s been rattling around my brain for about 24 hours. The writing flows much more easily.</p><h2>How do I promote a post?</h2><p>This is the part that I&#8217;m worst at. Some weeks I just don&#8217;t have the energy to write AND promote, and the promotion falls by the wayside. But every single subscriber has found my newsletter because I promoted it, so it&#8217;s clearly important.</p><p>First, assemble a list of possible places to post. I have tried posting on Hacker News, Reddit, Mastodon, Bluesky, and LinkedIn. Obviously there are an almost unlimited number of sites and aggregators you could post to.</p><p>Next, don&#8217;t wear out your welcome in any of them. Most places have clearly-posted rules about self promotion. Follow them. Be a good citizen. Don&#8217;t get banned. Over time, you&#8217;ll get a feel for what posts are appropriate for what places.</p><p>You should also pay attention to any promotional tools that your platform offers. I&#8217;ve gotten some subscribers from Substack&#8217;s Notes feature, even though I&#8217;m terrible at using them.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Client/Server! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>What would I do if I had more time?</h2><p>I would likely add one more post per week. I&#8217;ve also played with the idea of publishing three posts per week without spending more time. I&#8217;ve been mulling over the logistics of running a news roundup issue every Sunday, and moving my first weekly post to Tuesday. For a while I was dissuaded because <a href="https://www.pragmaticengineer.com/">Pragmatic Engineer</a> has one. But my links would be different and I would have different things to say, so maybe it&#8217;d be worth it?</p><p>I would be more diligent about self-promotion. First, I would set up Client/Server social media accounts and automate posting across all of them. Same with Substack notes. I would schedule extra promotional posts on the days that I don&#8217;t publish on the main newsletter.</p><p>I would start looking for cross-collaboration opportunities with other newsletters.</p><h2>What else should I know?</h2><p>Rapid fire:</p><ul><li><p>Tell people that you are doing it. Post your first post to your LinkedIn and Facebook. Mention it at lunch. Tell your friends. I feel blessed by how many of the people I know have told me that they like my posts, or that they think that I&#8217;m doing a good job. The people around you are a built-in source of encouragement. Use them.</p></li><li><p>Most of your posts are going to be duds. But the posts that do blow up will be huge. I have 43 posts, and only six posts have over 1,000 views. The two biggest posts have around 50,000 views each.</p></li><li><p>Every post is a reminder to unsubscribe.
You should prefer not clicking &#8220;publish&#8221; over publishing a half-baked story.</p></li><li><p>You won&#8217;t always have a topic to write about. Sometimes it&#8217;s just a slow news week. If you want a challenge, try to find a new format and see if you&#8217;re happy with the results. Otherwise, just skip this issue.</p></li><li><p>Ask people to subscribe. Put a &#8220;Subscribe&#8221; call-to-action somewhere in your Substack post<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>.</p></li><li><p>You can use LLMs as much or as little as you want. It&#8217;s your site. For Client/Server, I write every single word in the body of the posts. My personal viewpoint is &#8220;what would be the point if a robot did my hobby?&#8221; I do use ChatGPT to help me brainstorm titles and subheads. I tried using it for proofreading and was underwhelmed.</p></li></ul><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Substack will prompt you to do this when you publish. 
There&#8217;s a reason for this.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[My "30-minute rule" for LLM coding agents]]></title><description><![CDATA[This one is simple: LLM agents should save me 30 minutes over the next-best alternative while they still need well-specified problems and need my focus every few minutes.]]></description><link>https://www.clientserver.dev/p/my-30-minute-rule-for-llm-coding</link><guid isPermaLink="false">https://www.clientserver.dev/p/my-30-minute-rule-for-llm-coding</guid><dc:creator><![CDATA[Jacob Voytko]]></dc:creator><pubDate>Tue, 27 May 2025 12:02:50 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2501e2ec-f4d6-4d9f-8724-acd7401f5526_5184x3456.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em><strong>TL;DR</strong>: To recoup my lost flow state, LLM agents should save me &#8212; at minimum &#8212; 30 minutes of work per task over what Cursor provides.</em></p><p>A number of coding agents have been released since the beginning of the year. If you&#8217;re curious, you can check out some of the announcements here:</p><ul><li><p><a href="https://blog.google/technology/google-labs/jules/">Jules</a> is Google&#8217;s effort.</p></li><li><p><a href="https://openai.com/index/introducing-codex/">Codex</a> is OpenAI&#8217;s effort.</p></li><li><p><a href="https://code.visualstudio.com/blogs/2025/02/24/introducing-copilot-agent-mode">Copilot agent mode</a> is GitHub/Microsoft&#8217;s effort.</p></li></ul><p>I tried out Jules this weekend out of curiosity. I gave it two changes:</p><ol><li><p>Update a dependency in my <a href="https://github.com/jakevoytko/crbot/">Discord chatbot</a> that my friends and I use.</p></li><li><p>Deprecate voting functionality in the chatbot, since I noticed that Discord finally added polls last year.</p></li></ol><p>Adding the dependency went fine. I could have done it faster, but Jules also reran the tests for me.
My only intervention was testing it against my test account. A promising start!</p><p>The second task went poorly. Part of this was my own inexperience with these agentic workflows; for example, I told it to &#8220;remove&#8221; the feature and it just commented everything out. After it was done, it asserted that it ran the tests. While reading the code, I suspected it hadn&#8217;t actually run the tests. I asked it to run them again. It gave me a passive-aggressive remark but tried to run them. Sure enough, the tests were failing and it spun its wheels trying to fix the tests.</p><p>After 15 minutes, I started worrying that Jules wouldn&#8217;t finish during my kid&#8217;s naptime. So I fired up Visual Studio Code and started racing Jules. I didn&#8217;t have any LLM augmentation, but it was mostly deletion so I didn&#8217;t need it. 20 minutes later, I was testing my finished change. Jules was still spinning its wheels.</p><p>I don&#8217;t want to overextrapolate from this example. Jules is in beta, I&#8217;m not used to these longer-lived agentic workflows, and I also intentionally underspecified the task to see how it would handle it.</p><p>At the same time, I&#8217;m disappointed that it failed as hard as it did. It has access to the full commit history of the project. It can actually go and see how the feature was added, and what the project looked like before it was added. If the task was underspecified, then it could have told me that! Instead, it just didn&#8217;t make much progress beyond its initial salvo. I want it to behave like a human who is capable of reasoning and communicating its needs.</p><p>In that 15 minutes, I had a lot of time<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> to reflect on the workflow. It required a lot of interaction. It would work for 2 or 6 or 10 minutes and then show me its progress.
I needed to keep checking the tab to see whether it was done. By doing this, I learned that Jules threatens to auto-approve its own plans if you don&#8217;t respond quickly enough.</p><p>How am I supposed to get work done during that time? What engineer is most productive in the 2 and 6 and 10 minutes between their interruptions?</p><p>This leads me to my &#8220;30-minute rule&#8221; for LLM agents: in order to recoup my lost focus, I need to save at least 30 minutes over what Cursor does for me with prompting.</p><p>Why 30 minutes? Because I lose time writing a task with more exacting precision than I would for a human, then I lose time handling the questions, corrections, and reviews, and then I spend 10-20 minutes getting into a flow state on another task, then I go back and review what it does. Let&#8217;s give the LLM some leeway since you can run multiple agents at once. Let&#8217;s say that I break even on the agent when it can save me 30 minutes over what I did before.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Client/Server! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Let&#8217;s pretend that Jules completed the task within the 35 minutes instead of failing. In comparison, it only took me 20 minutes &#8212; and I was alt-tabbing back to Jules constantly. I lost 15 minutes because I used the agent.
In reality, I would need to run 4 Jules instances on 4 different problems at once, and have all of them solve their problems instantly and correctly, just to break even.</p><p>This also made me reflect on how the Jules LLM experience stacked up against human engineers. I like working with other engineers, and I&#8217;m happy to specify a task to the level that an engineer needs and handle their questions and review their code. Why does it feel worse with the LLM? Because my coworkers are effective engineers and they save me time. When I needed a benchmarking suite last year and gave it to a senior engineer? Man, you should have seen what he wrote. It was great and saved me a ton of time. I basically told him, &#8220;we need to benchmark this thing under load. Here&#8217;s what I&#8217;ve done so far. Ideally it would have similar characteristics to our production load.&#8221; And he knocked it out of the park. Jules would have never made progress on that task.</p><p>When you&#8217;re giving a task to an engineer, you need to adjust the task to their level. A senior engineer might be able to convert a well-scoped business objective into an engineering project, but a junior engineer might never finish that project. So you&#8217;d break it down for them.</p><p>What kind of scope and specification does each engineering level require?</p><p>Here&#8217;s how I imagine each classic engineering level should be able to interact with scope and specification. This is an oversimplification, but it suffices here.</p><ul><li><p><strong>Intern</strong>: They can handle a well-specified problem with minimal scope. You can scale the number of hints they get, depending on their experience.</p></li><li><p><strong>Junior engineer</strong>: They can handle a well-specified problem with small scope.
They may also need some hints and pointers to get started.</p></li><li><p><strong>Senior engineer</strong>: They can handle a scoped problem that is underspecified.</p></li><li><p><strong>Staff engineer</strong>: Give them an unscoped and underspecified problem.</p></li><li><p><strong>Beyond staff</strong>: You&#8217;ve evolved beyond this. You identify the problems. You prioritize the problems. You create the problems. You are the problem.</p></li></ul><p>There&#8217;s also the contractor dimension. Here are a few archetypes that I&#8217;ve run into over the years, among contractors who can complete assignments:</p><ul><li><p><strong>Brute-force contractor</strong>: They somehow follow the contract to the letter. What about the spirit? Only if you have a spirit clause in the contract. Is there a logical contradiction? The task is impossible? They&#8217;ll somehow torture the wording to produce a deliverable. Your codebase will be worse than it was before they started.</p></li><li><p><strong>Long-term relationship contractor</strong>: This is the contractor who tries to understand the business needs and guides each contract back to sanity. These are the types of contractors who eventually become full-time employees<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>.</p></li><li><p><strong>&#8220;I&#8217;m trying to save your business&#8221; contractor</strong>: This is the true domain expert. They attend the conference every year. They explain how your project fits into the greater ecosystem, how you&#8217;re failing, and necessary changes that you must make. This is unpopular because not everyone wants to hear what they have to say, and not everyone is talented enough to do it. But these people are worth their weight in gold.</p></li></ul><p>I would be tempted to rate my interactions with Jules as being at the level of an intern, but that&#8217;s an insult to interns.
First of all, you kinda want your intern bothering you every 2 or 6 or 10 minutes if they legitimately have a question. Investing in your interns pays dividends, since (a) they become more capable engineers, and (b) it is one of the best recruiting strategies imaginable. So when your intern asks you a question every 6 or 10 minutes at first, you are simply doing your job as a host by responding to them. Your focus should be on them. They will grow, the questions will get further and further apart, and you will need to direct them less and less.</p><p>On the other hand, the LLM doesn&#8217;t grow from this<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>. My experience with Jules was more along the lines of the &#8220;brute-force contractor,&#8221; where it was just doing everything that it could to get the project completed to the letter of what I wrote. And in this situation, it is a huge loss to have to answer questions every 6 or 10 minutes.
The LLM isn&#8217;t growing from the conversation, and my project isn&#8217;t moving forward.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>About 15 minutes.</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>If they survive for years without the company terminating their contract for the most petty reason imaginable.</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>At least, not enough.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[With a 3% workforce cut, Microsoft shows nobody is safe]]></title><description><![CDATA[Microsoft's cuts included high-performing engineers across the company. I go over the management dynamics that lead to great engineers being terminated.]]></description><link>https://www.clientserver.dev/p/with-a-3-workforce-cut-microsoft</link><guid isPermaLink="false">https://www.clientserver.dev/p/with-a-3-workforce-cut-microsoft</guid><dc:creator><![CDATA[Jacob Voytko]]></dc:creator><pubDate>Mon, 19 May 2025 12:01:29 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1572994c-7541-47d8-b0b3-dbee3a1e2f0c_500x750.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a href="https://www.cnbc.com/2025/05/13/microsoft-is-cutting-3percent-of-workers-across-the-software-company.html">Microsoft announced that it is laying off about 3% of its workforce</a>.
This is about 7,000 people.</p><p>Some outlets <a href="https://www.theregister.com/2025/05/16/microsofts_axe_software_developers/">reported</a> that the layoffs hit developers hard. But if you chase down the numbers this doesn&#8217;t seem right; <a href="https://www.bloomberg.com/news/articles/2025-05-14/microsoft-layoffs-hit-software-engineers-as-industry-touts-ai-savings">Bloomberg is reporting that 40% of the workers laid off were software engineers</a>. Yet in 2021 &#8212; the most recent year I could find accurate data about Microsoft&#8217;s software engineer employment counts &#8212; Microsoft said that they had <a href="https://devblogs.microsoft.com/engineering-at-microsoft/welcome-to-the-engineering-at-microsoft-blog/">over 100,000 developers</a> and their 2021 10-K lists them as having <a href="https://microsoft.gcs-web.com/node/29516/html">181,000 employees</a>. So estimating that the company is about 55% developers, that suggests that developers were spared relative to other departments. If you have more accurate numbers, HMU!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Client/Server! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>But it&#8217;s still astonishing to see the list of developers that were laid off.</p><ul><li><p><a href="https://www.theregister.com/2025/05/16/microsofts_axe_software_developers/">The &#8220;faster CPython&#8221; project</a> was all laid off including 3 core Python developers.</p></li><li><p><a href="https://www.linkedin.com/feed/update/urn:li:activity:7328198436225220608/">A director of AI</a> was let go.</p></li><li><p>I previously wrote about <a href="https://www.clientserver.dev/p/typescript-70-will-compile-10x-faster">Microsoft speeding up Typescript compilation by 10x</a>; <a href="https://x.com/rbuckton/status/1922364558426911039">at least one of the people responsible was laid off</a>.</p></li><li><p>If you look on forums, it&#8217;s easy to find (unsubstantiated) posts stating that the people laid off were <a href="https://www.reddit.com/r/cscareerquestions/comments/1klrb8o/comment/ms54rce/">not low performers</a>.</p></li><li><p>Microsoft themselves said that they made cuts without regard to performance.</p></li></ul><p>The situation is unbelievable from the bottom-up approach. You just look at the list and you think, &#8220;my god, they laid off the guy that improved Typescript compilation speed by 10x. He made a substantial improvement that could end up helping millions of developers. I work in this menial area of the codebase. My contributions could never hope to match his. How did this happen? 
Why am I still here and they are gone?&#8221;</p><p>I&#8217;ve <a href="https://www.clientserver.dev/p/salesforce-bets-on-storytelling-over">previously written</a> about the disconnect between executives and leaf-node employees and how they view layoffs. This is because you are trying to apply top-down logic from your bottom-up perspective. In the bottom-up perspective, you are using all of the facts on the ground to make your decisions. You are actually looking at the person, their history with the company, the weight of their contributions, etc.</p><p>But let&#8217;s flip this around to the top-down view again. This is going to be an oversimplified view of organizational staffing, but it will suffice for the discussion. Companies look at roles in terms of having a seniority component (entry-level, junior, senior, staff, etc.) and a role (backend, iOS, Android, ML, etc.). Sometimes a large company will further specialize by domain, so you might have a Senior Backend Engineer in Payments or a Junior iOS Engineer in Adtech.</p><p>So periodically the executives need to plan for the future. They go through a bunch of planning exercises to figure out &#8220;given where we want to be in N years, what organization do we need? What engineering roles are needed to support those goals? What seniority does each of those roles require?&#8221;</p><p>So from the top down, &#8220;we&#8217;re not doing enough on AI&#8221; could translate into many shifts. Sometimes if you are misaligned with your future goals, you can fix it through attrition and headcount allocation. But if you need to change more quickly, you likely need to conduct a layoff. Sometimes the management layer will do a nod towards the bottom-up view<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>, but they&#8217;ve already made peace with the idea that talented people will be let go.
These decisions are mostly &#8220;Ah, I see that you are a ::checks notes:: staff compiler engineer on ::checks notes:: the Typescript compiler. It turns out that we have 1 more of these than we need and it seems that you are pretty expensive, so you will now be routed into the severance pipeline.&#8221;</p><p>And that&#8217;s where the disconnect happens! From the top-down perspective it&#8217;s largely just a numbers exercise, but from the bottom-up perspective everyone is just shouting &#8220;you let go of THIS PERSON? And you kept THIS PERSON, who everyone knows is terrible? Make it make sense.&#8221;</p><p>If you&#8217;re worried about this happening to you, then what should you do? You should try to find a way to work on your company&#8217;s top priorities. And then you should do good work<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>. Personally, I&#8217;ve asked myself what I want to accomplish with my work. I want to be proud of the impact of my work. So my current philosophy is, &#8220;I want to work for a business model that I feel good about, and I want to be on a team that directly impacts it.&#8221; I know that I would have been reasonably happy on something like productivity software, but I&#8217;m much happier in my current role at a dating app. But these don&#8217;t have to be your values. You should think really hard about what you want out of this life, and what would make you proud looking back.
And then you should try to find a company that does that as its business model.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>They might, for example, ask lower-level managers for an employee who should be spared.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>For someone who has written about gaming promotion criteria, this may seem like lame advice, but you always have to start with the fundamentals.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Malicious compliance by booking an available meeting room]]></title><description><![CDATA[In 2011, Larry Page became CEO of Google and tried to fix meetings. But his new policies were no match for Google Calendar pedants.]]></description><link>https://www.clientserver.dev/p/malicious-compliance-by-booking-an</link><guid isPermaLink="false">https://www.clientserver.dev/p/malicious-compliance-by-booking-an</guid><dc:creator><![CDATA[Jacob Voytko]]></dc:creator><pubDate>Thu, 15 May 2025 12:02:59 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/04c57421-a089-4ec7-aee4-e2e7abc40f9f_524x419.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Back in 2011, Larry Page became the CEO of Google in place of Eric Schmidt. This happened at a time when Google was feeling the growing pains of becoming a huge company. It had 30,000 employees and was growing rapidly. But you could really feel the weight; projects were getting more ambitious, taking longer, and often failing more spectacularly.</p><p>At the time, I remember an anecdote told by Larry Page. He said that companies like Yahoo!
used to be a punchline at Google because it would take them weeks to get something onto their homepage. Google could accomplish the same thing in a few hours, or a few days at worst. But now he was the CEO of a company where it took weeks to get something onto the homepage, and he was sure that he was the butt of some startup&#8217;s jokes.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Client/Server! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Anyways, all of this clearly bothered Larry Page. He wanted to fix it. One of his first actions was to shutter tons of projects that didn&#8217;t make tactical or strategic sense, and focus on fewer efforts. 
This came with the catchphrase &#8220;more wood behind fewer arrows.&#8221; For example, they shuttered <a href="https://en.wikipedia.org/wiki/Google_Buzz">Google Buzz</a> so that it wouldn&#8217;t distract from <a href="https://en.wikipedia.org/wiki/Google%2B">Google+</a>.</p><p>And second, Larry Page emailed the whole company <a href="https://www.businessinsider.com/this-is-how-larry-page-changed-meetings-at-google-after-taking-over-last-spring-2012-1">a ham-fisted attempt to revamp how meetings were done</a>.</p><ul><li><p>Every meeting needed a &#8220;decision-maker.&#8221;</p></li><li><p>Meetings should be capped at 10 people.</p></li><li><p>Everybody in a meeting should give input or they shouldn&#8217;t be in the meeting.</p></li><li><p>Hour-long meetings should be only 50 minutes to give the participants an opportunity to use the restroom between meetings.</p></li></ul><p>They later softened some of the language by saying that these were properties of &#8220;decision-oriented meetings,&#8221; implying there were other types of meetings that someone might need to attend. But you could never shake the feeling that Larry Page had to make decisions all day long and forgot that sometimes people meet for other reasons.</p><p>Anyways, let&#8217;s focus on the fact that Larry Page wanted hour-long meetings to only be 50 minutes. This is a good thing! It gives people a chance to stretch, go to the bathroom, grab a snack, etc. During a Q&amp;A on the changes<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>, someone asked him whether Google Calendar should default to 25 and 50 minutes for meeting lengths instead of 30 and 60 minutes. Larry Page said &#8220;yes.&#8221; And then someone on the Google Calendar team implemented this.</p><p>And then nothing changed. When 2:50 rolled around and your meeting was supposed to end, do you think people actually ended the meeting? Noooooo.
Absolutely not! Meetings continue until the participants of the next meeting are clawing at your door like a pack of zombies.</p><p>At one point, one team in the NYC office noticed that their standups were about 10 minutes long. They didn&#8217;t want to compete with meetings that respected the half-hour boundaries. And why would they need to? Every meeting room had free slots in the last 10 minutes of every hour because people were now booking 50-minute meetings. So they did what any rational engineering team would do: they started booking their standup in the tiny 10-minute time slices that were free on the calendar of every meeting room.</p><p>I found this out when I saw them knock on the door to a meeting room by my desk. 2:50 rolls around and someone knocks on the door and says, &#8220;I have the meeting room.&#8221;</p><p>The person in the room responds, &#8220;No you don&#8217;t, it&#8217;s 2:50.&#8221;</p><p>&#8220;Look again at the room&#8217;s calendar. You booked a 50-minute meeting, we have the room for the last 10 minutes of the hour for our standup.&#8221;</p><p>I could hear the muffled exasperation. &#8220;You&#8217;ve got to be joking me.&#8221;</p><p>&#8220;We have the room, sorry.&#8221;</p><p>Then everyone shuffled out of the room, looking vaguely pissed off. And who could blame them! Can you imagine if someone actually held you to this policy? You&#8217;re there stammering &#8220;it&#8217;s the default, I meant for the room to be ours for an hour&#8221; and they counter with the fact that their names are listed as the active participants? I mean, I&#8217;d personally tell them that I wasn&#8217;t going to leave the room, but surely it worked a lot?</p><p>I wish I knew the identities of these brave meeting crashers. I saw them pull this stunt twice and then ride off into the sunset, and I never got to learn what team they were on. I wish I knew their motivations. Were they true believers in the 50-minute policy? Were they bored pedants?
Were they wraiths, cursed to hunt the office for available meeting rooms? I&#8217;ll never know for sure.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Likely at a TGIF, but truthfully I don&#8217;t remember.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Ubuntu Is Betting Big on sudo's Rust rewrite]]></title><description><![CDATA[Canonical is rolling out sudo-rs by default in Ubuntu 25.10, embracing Rust&#8217;s safety while shedding some of sudo&#8217;s legacy baggage]]></description><link>https://www.clientserver.dev/p/ubuntu-is-betting-big-on-sudos-rust</link><guid isPermaLink="false">https://www.clientserver.dev/p/ubuntu-is-betting-big-on-sudos-rust</guid><dc:creator><![CDATA[Jacob Voytko]]></dc:creator><pubDate>Mon, 12 May 2025 12:03:02 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/840df665-1492-40f8-8b3f-8dc6bbe2ab92_480x480.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a href="https://discourse.ubuntu.com/t/adopting-sudo-rs-by-default-in-ubuntu-25-10/60583">Ubuntu will include a Rust-only rewrite of </a><code>sudo</code><a href="https://discourse.ubuntu.com/t/adopting-sudo-rs-by-default-in-ubuntu-25-10/60583"> in its 25.10 release</a>, slated for October of this year.</p><blockquote><p><a href="https://discourse.ubuntu.com/t/adopting-sudo-rs-by-default-in-ubuntu-25-10/60583">Adopting sudo-rs By Default in Ubuntu 25.10</a></p><p>The <code>sudo-rs</code> project is designed to be a drop in replacement for the original tool. For the vast majority of users, the upgrade should be completely transparent to their workflow. That said, <code>sudo-rs</code> is not a &#8220;blind&#8221; reimplementation. The developers are taking a &#8220;less is more&#8221; approach. 
This means that some features of the original <code>sudo</code> may not be reimplemented if they serve only niche, or more recently considered &#8220;outdated&#8221; practices.</p></blockquote><p>This effort is part of Canonical&#8217;s north star of <a href="https://discourse.ubuntu.com/t/carefully-but-purposefully-oxidising-ubuntu/56995/1">&#8220;oxidising&#8221; Ubuntu</a>. They want to improve Ubuntu&#8217;s stability and resilience by replacing the most core programs with memory-safe alternatives. And obviously <code>sudo</code> is a great place to start. Its role is to let trusted user accounts run commands as another user &#8212; including root. A vulnerability in <code>sudo</code> is a worst-case scenario: if an attacker could trick sudo into providing root access, then it&#8217;s game over and the system is fully compromised.</p><p>Since sudo is written in C, it has no built-in protection against memory-safety bugs. In fact, attacks have happened in recent memory.
In 2021, Qualys <a href="https://blog.qualys.com/vulnerabilities-threat-research/2021/01/26/cve-2021-3156-heap-based-buffer-overflow-in-sudo-baron-samedit">published a CVE</a> describing a heap-based buffer overflow: calling <code>sudoedit -s '\'</code> lets an attacker use environment variables to get root access on any default sudo configuration. This bug had been lurking for roughly ten years, which is about as close to a &#8220;holy shit&#8221; moment as I usually get with security issues.</p><p>So what&#8217;s left before sudo-rs can ship by default in Ubuntu? They have a few things to finish before launch: proper internationalization support, and some work on reducing the binary size.</p><p>I also want to revisit a fun point from the initial announcement: they&#8217;re intentionally dropping features and breaking compatibility with regular sudo!</p><p>Naturally, they&#8217;re doing this in cases where they don&#8217;t expect that it will matter. <a href="https://github.com/trifectatechfoundation/sudo-rs?tab=readme-ov-file#differences-from-original-sudo">You can see the complete list here</a>, and it ranges from mundane stuff like &#8220;the sudoers file must be valid UTF-8,&#8221; which is fair enough in the Year of our Lord 2025, to stuff like &#8220;this will only work in a system with PAM; we should not rely on the sudoers file to specify things like umasks.&#8221; But then there was the one that really surprised me: they are removing sendmail support from sudo.</p><p>Wait, sudo can send email? And not just to the regular mail spool destinations inside of Linux, but it can actually fire up sendmail and send an honest-to-God email directly? This surprised me, although I&#8217;m no system administrator.
I can easily imagine aggroing a real administrator, who would be very happy to tell me that there are obviously systems where sendmail and sudo both run without any kind of integrated mail spooler. It&#8217;s a variation on Zawinski&#8217;s law<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> for sure, and since <code>sudo</code> is a command runner, it can read mail given the right input command. It&#8217;s only natural that it should want to send email back.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Zawinski&#8217;s Law: &#8220;Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.&#8221;</p></div></div>]]></content:encoded></item><item><title><![CDATA[How did Spotify ship that iOS payment app update so fast?]]></title><description><![CDATA[A judge ruled that Apple violated her previous order. The next day, Spotify shipped an update linking to their external purchase flow, among other things.
But how did they do it so quickly?]]></description><link>https://www.clientserver.dev/p/how-did-spotify-ship-that-ios-payment</link><guid isPermaLink="false">https://www.clientserver.dev/p/how-did-spotify-ship-that-ios-payment</guid><dc:creator><![CDATA[Jacob Voytko]]></dc:creator><pubDate>Mon, 05 May 2025 12:03:42 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/99480597-b330-47e9-8480-d9090c66a0b1_827x844.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>On April 30th, a district judge <a href="https://storage.courtlistener.com/recap/gov.uscourts.cand.364265/gov.uscourts.cand.364265.1508.0_2.pdf">ruled</a> that Apple violated a 2021 injunction to loosen restrictions on alternative payment methods.</p><p>The next day, Spotify <a href="https://newsroom.spotify.com/2025-05-01/following-landmark-court-ruling-spotify-submits-new-app-update-to-apple-to-benefit-u-s-consumers/">submitted a new release</a> of their iOS app for consideration.</p><blockquote><p>Once Apple approves our update, U.S. consumers:</p><ul><li><p>Can finally see how much something costs in our app, including pricing details on subscriptions and information about promotions that will save money;</p></li><li><p>Can click a link to purchase the subscription of choice, upgrading from a Free account to one of our Premium plans;</p></li><li><p>Can seamlessly click the link and easily change Premium subscriptions from Individual to a Student, Duo, or Family plan;</p></li><li><p>Can use other payment options beyond just Apple&#8217;s payment system&#8212;we provide a wider range of options on our website; and</p></li><li><p>Going forward, this opens the door to other seamless buying opportunities that will directly benefit creators (think easy-to-purchase audiobooks)</p></li></ul></blockquote><p>First, let&#8217;s reflect on the absurdity of this. These were not allowed by Apple before a week ago. 
Spotify refused to use in-app purchases for their premium accounts. <a href="https://www.youtube.com/watch?v=V7YNAqQNQis">Accordingly, they could only show the text &#8220;You can&#8217;t upgrade to Premium in the app. We know, it&#8217;s not ideal&#8221; in addition to the name of your current plan</a>. The user had to deduce that they should visit Spotify&#8217;s website in a browser. They weren&#8217;t even allowed to say how much it cost! That&#8217;s egregious! I get a little angry just looking over Spotify&#8217;s list of what their app update does.</p><p>That&#8217;s all well and good.</p><p>However, I&#8217;m super curious about one facet of this news article: how did Spotify turn around their release in 24 hours? Corporations aren&#8217;t supposed to move quickly.</p><p>Let&#8217;s think about the naive option: that they wrote it from scratch in under 24 hours. It&#8217;s some links, it uses their pre-existing branding. How hard could it be?</p><p>But now let&#8217;s think about what would be necessary to get a project started and completed in a corporation:</p><ul><li><p>A product manager specifies behavior and goals.</p></li><li><p>A designer produces a mockup or prototype of the change.</p></li><li><p>A separate branding person either produces or reviews the copy in the design.</p></li><li><p>A developer needs to implement the change.</p></li><li><p>The change needs to be &#8220;QA&#8221;d in a number of scenarios: large system fonts, other languages, etc.</p></li><li><p>Someone in the manager chain ensures that all of the resources are available.</p></li><li><p>Someone in Legal needs to review to make sure you stayed within the bounds of what the injunction permits.</p></li><li><p>Someone reaches out to the relevant backend teams to ensure that they have the capacity for you to deploy your change.</p></li><li><p>And then someone needs to actually build the release and submit it to Apple.</p></li></ul><p>And don&#8217;t forget that everyone needs to be available and working at top
speed. And note the fact that many of these steps must be done serially.</p><p>Is it possible that they were able to slash through the red tape and have one or two people crank out new screens of the app that required minimal testing, minimal legal review, and Just Worked with no problems? Sure, it&#8217;s possible. But it&#8217;s unlikely.</p><p>Okay, so that option is out. What about the next option: they are using React Native or Electron or something. What if they are simply able to remove all of their iOS-specific flags within the application code and push a new update to users after some testing?</p><p>This one is trivially easy to debunk if you look at <a href="https://www.lifeatspotify.com/locations/new-york">open job listings for iOS engineers at Spotify</a>. Here are three bullet points from a listing open at the time of this writing:</p><blockquote><ul><li><p>You know how to write readable, idiomatic, and maintainable Swift and are willing to follow already defined coding guidelines and workflows.</p></li><li><p>You are experienced with a variety of iOS frameworks.</p></li><li><p>You have a deep understanding of Cocoa design patterns and API design.</p></li></ul></blockquote><p>What does this mean? iOS engineers are expected to understand the iOS platform. And this means that they are writing their native applications from scratch instead of using a framework like React Native to produce the mobile apps<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>.</p><p>So that only leaves the final option, which is frankly the most impressive: they built this in anticipation that it would be used at the conclusion of the court battle, and they&#8217;ve been potentially maintaining it for years.</p><p>If so, this is an impressive amount of foresight for a corporation, especially a public one. The original injunction was issued three years ago. What would need to happen? 
Sometime between then and now, an executive mandated that this iOS paywall be built. Ever since, some team has been thanklessly cultivating this garden, just waiting for the moment that it would finally be ready to deploy.</p><p>It doesn&#8217;t cost <em>that much</em> to keep a feature on life support once it&#8217;s written. But it does cost some small amount of engineering effort! You&#8217;d expect that most pieces of the app are updated at least every few months, and some pieces would be updated multiple times a day. They were surely working in this area of the app, performing other work and maintenance. So they&#8217;ve been putting up with the annoyance for potentially years, all in exchange for being able to produce this update a week earlier than they would have otherwise.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Client/Server!
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>It does appear that Spotify uses web views for their Desktop apps.</p></div></div>]]></content:encoded></item><item><title><![CDATA[War story: I fixed this bug after 3 months with a shower thought]]></title><description><![CDATA[I talk over a bug from a computer vision prototype from 2009 that required hardware fixes, software fixes, BIOS fixes, Windows Registry fixes, and a tour through distributed queuing.]]></description><link>https://www.clientserver.dev/p/war-story-i-fixed-this-bug-after</link><guid isPermaLink="false">https://www.clientserver.dev/p/war-story-i-fixed-this-bug-after</guid><dc:creator><![CDATA[Jacob Voytko]]></dc:creator><pubDate>Mon, 28 Apr 2025 12:03:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!vljR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01a8351d-0edb-4f87-b03b-b603a80c07df_516x600.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Back in 2009-2010, I worked on a DARPA contract. The contract was to create an augmented-reality <s>video game</s> training simulator. Soldiers would enter a fake Iraqi or Afghan village, strap on VR goggles, and start a mission with their team. A training scenario would play through the goggles that was synchronized across every member of the team. 
They could walk through the world normally, and the characters in the scenario were shown on the goggles.</p><p>For the curious: the hardware setup was 4 cameras on a helmet, an <a href="https://en.wikipedia.org/wiki/Inertial_measurement_unit">IMU</a>, VR goggles, and a plastic gun. The gun was placed into the scene using computer vision. Tying all of this together was a top-of-the-line gaming laptop in a backpack.</p><p>Here is baby me, wearing the finished research prototype in Camp Pendleton:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!vljR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01a8351d-0edb-4f87-b03b-b603a80c07df_516x600.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!vljR!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01a8351d-0edb-4f87-b03b-b603a80c07df_516x600.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!vljR!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01a8351d-0edb-4f87-b03b-b603a80c07df_516x600.jpeg 848w, https://substackcdn.com/image/fetch/$s_!vljR!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01a8351d-0edb-4f87-b03b-b603a80c07df_516x600.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!vljR!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01a8351d-0edb-4f87-b03b-b603a80c07df_516x600.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!vljR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01a8351d-0edb-4f87-b03b-b603a80c07df_516x600.jpeg" width="516" height="600" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/01a8351d-0edb-4f87-b03b-b603a80c07df_516x600.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:600,&quot;width&quot;:516,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:39757,&quot;alt&quot;:&quot;Me, wearing a helmet with mounted cameras, wearing a backpack with a laptop, holding a fake gun, while in a fake desert environment in a warehouse&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.clientserver.dev/i/161936832?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01a8351d-0edb-4f87-b03b-b603a80c07df_516x600.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Me, wearing a helmet with mounted cameras, wearing a backpack with a 
laptop, holding a fake gun, while in a fake desert environment in a warehouse" title="Me, wearing a helmet with mounted cameras, wearing a backpack with a laptop, holding a fake gun, while in a fake desert environment in a warehouse" srcset="https://substackcdn.com/image/fetch/$s_!vljR!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01a8351d-0edb-4f87-b03b-b603a80c07df_516x600.jpeg 424w, https://substackcdn.com/image/fetch/$s_!vljR!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01a8351d-0edb-4f87-b03b-b603a80c07df_516x600.jpeg 848w, https://substackcdn.com/image/fetch/$s_!vljR!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01a8351d-0edb-4f87-b03b-b603a80c07df_516x600.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!vljR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01a8351d-0edb-4f87-b03b-b603a80c07df_516x600.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 
13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>As usual, my company overpromised our capabilities so they would win the contract, and that was the engineers&#8217; problem. It was an 18-month-long death march.</p><p>We did have some advantages. We had research prototypes where camera pairs could be tracked through 3D space. We had a new GPU-powered <a href="https://en.wikipedia.org/wiki/Computer_stereo_vision">stereo vision</a> algorithm that performed well on real-world occlusions like branches and fingers. We had some code that used a &#8220;landmark database&#8221; to correct any dead-reckoning drift that had accumulated in the system. These were all required by the system.</p><p>The advantages got us like 5% of the way. We still needed working hardware. We needed all of the systems code to interop with that hardware. We still needed to identify VR goggles that could work with the system. We still needed to manage a fleet of these machines remotely during the scenario. And my boss demanded that we repurpose a pipes-and-filters library from an old project at the company, so we had to throw away all of our normal executables and rewrite everything within this framework.</p><p>About a third of the way into the project, we finally had all of the hardware and software ready for a first test run of the independent backpack system. 
This was just a small part of the project &#8212; it didn&#8217;t include the guns, the scenarios, etc &#8212; but it was proof of life. The hardware worked okay on the bench. The software worked okay on the bench. Let&#8217;s see what happens when we run it together! We turned everything on, strapped a researcher into the system, created a scene where a guy was standing still in front of him, and then started the scenario.</p><p>It actually worked! The researcher was slowly shuffling around<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> and looking at the virtual bad guy. Then he stopped. &#8220;Something&#8217;s wrong. The FPS dropped. It&#8217;s barely working. This isn&#8217;t even showing 1 frame per second.&#8221;</p><p>We passed around the helmet and looked. The video was really choppy.</p><p>We did the most reasonable debugging step: we turned everything off and on again. The same thing happened. It worked fine for a few seconds, and then it got unusably choppy.</p><p>I opened the flap of the laptop. Heat whooshed up into my face. I took the laptop out. It was almost too hot to handle. We checked the CPU temperature and it was through the roof. Well, that&#8217;s bad news! We needed this to operate in deserts, and it wasn&#8217;t even working on a crisp fall day in New Jersey.</p><p>My boss wanted to see what happened if we prevented the laptop from doing any thermal throttling. We found a BIOS setting related to throttling. I crawled over MSDN and developer forums looking for any documentation or posts about this and found a Windows Registry setting we could try.</p><p>We started the system again. &#8220;I think it worked,&#8221; the researcher with the helmet said. And then the laptop powered down. At first it wouldn&#8217;t turn on, but eventually it sputtered back to life. 
The laptop got so hot that it tripped a temperature sensor and the machine powered off rather than suffer damage.</p><p>Someone said, &#8220;Just cut it open, this backpack idea is junk. Just cut as much of the fabric off as you can. Tie the laptop to the straps.&#8221;</p><p>So some Guy With A Knife materialized and cut the backpack down to the straps and we ziptied it down. Then we started the system again.</p><p>The researcher got up and shuffled around. It lasted longer than it had before. We started doing experiments, but after a couple of minutes the video got really choppy again.</p><p>I wanted to run an experiment. &#8220;If this is heat, shouldn&#8217;t it recover when it cools off? Can we just leave it on the desk for a few minutes and see if it gets better?&#8221;</p><p>We restarted the scenario, waited until it started stuttering, put it on the desk, and checked back 15 minutes later. The framerate never recovered and the computer&#8217;s temperature was fine.</p><p>One of the systems guys wondered if there was separate power-based throttling. Since the machine wasn&#8217;t plugged in, he hypothesized that either Windows or the BIOS was slowly scaling the clock speed below the performance we needed. The deadlines were pretty tight: it took just under 100 milliseconds to run the stereo algorithm, and we were taking 10 pictures per second. We did some more research and discovered (1) yes it was probably doing power throttling, and (2) we couldn&#8217;t disable it on this specific laptop. So we did the next best thing: we had the researcher carry around a huge portable battery with the laptop plugged in. The system lasted longer! But after about 10 minutes, it started stuttering again.</p><p>The hardware guys started spec&#8217;ing their own portable battery that could fit onto the backpack to prevent the power throttling. 
Meanwhile, my boss pulled me aside and told me he thought there was a system problem and that we needed to do much more intensive bench testing. He wanted me to run everything on the bench all day and try to identify the problem. In some ways, he was saying, &#8220;Please argue with everyone for an entire day.&#8221; There was only one working unit and everyone needed it.</p><p>And this created a huge buzz around the office! My boss asked me about the system problem whenever he saw me. People kept asking me how the system problem was going. My vice president popped his head in and asked about the system problem. I was quite argumentative 15 years ago, and even I couldn&#8217;t argue with this. There probably <em>was</em> something happening in this new pipes-and-filters code I had written!</p><p>On Monday, I put our only working prototype on the bench and turned it on. I started swatting away everyone who tried to take the system for whatever they were doing. After about 10 minutes, it suddenly started exhibiting the problem. Was it always about 10 minutes? I turned it on and waited a half hour. No stuttering. I went to lunch and came back. It was stuttering again.</p><p>Okay, how did our pipes-and-filters code work? It was controlled by an XML file<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> that specified a series of processing nodes in a graph. Each node named a DLL and class that should be loaded into the application, and allowed the user to specify three things:</p><ol><li><p>The node&#8217;s configuration: name, any node-specific configuration, input queue depth, output queue depth.</p></li><li><p>All of the data in the node&#8217;s output.</p></li><li><p>A list of other nodes that are the inputs to this node.</p></li></ol><p>And then the framework would load all of the DLLs, connect the graph together, etc.
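From memory, a node definition in that config looked something like the sketch below. Every tag, attribute, and name here is my own reconstruction for illustration; the real schema is long gone.

```xml
<!-- Hypothetical reconstruction of one processing node; not the real schema. -->
<node name="OctopusMerge">
  <dll>MergeFilters.dll</dll>
  <class>TimestampMergeFilter</class>
  <config inputQueueDepth="16" outputQueueDepth="16"/>
  <!-- 2: all of the data in this node's output -->
  <outputs>
    <output>MergedFrame</output>
  </outputs>
  <!-- 3: the upstream nodes feeding this one -->
  <inputs>
    <node>VisualOdometry</node>
    <node>GpuStereo</node>
    <node>ImuReader</node>
  </inputs>
</node>
```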
This was a framework that was written for a previous project. My boss had been eyeing it for some time, and this was finally his chance to split it into a standalone library. My boss also had some axes to grind. The elders who ran the previous project had strict rules: keep all pipelines linear &#8212; do NOT run things in parallel! &#8212; pass along only exactly what needs to be output, and write very granular filters. My boss had the exact opposite philosophy: the pipeline should be maximally parallelizable, and every filter should pass along every packet it got, &#8220;because you&#8217;ll always need to correlate something with something else eventually, and there&#8217;s just so much churn constantly editing the configs as you develop.&#8221;</p><p>How often did data flow through the system? The cameras took pictures at 10 FPS, since the laptop could only run the stereo vision GPU algorithm in about 100 milliseconds, give or take. The IMUs generated position data at 200 packets per second. The sensor data fanned out to a few calculator modules &#8212; visual odometry and a few things that I don&#8217;t remember &#8212; then went to the GPU and a few other modules, and then all of the data flowed into an octopus-merge sink that merged all of the position and stereo data with the camera images and sent all of that to the video game engine for display. I&#8217;m a little sorry for the vagueness, but this was quite some time ago. The point is, we did as much fanout as possible and then merged it all back together for the final publication to the video game server.</p><p>At this point in the investigation, I had no idea where to start, so I started looking at each of the individual components.</p><p>Each node had fixed-length queues. Were these somehow filling up? I doubled their lengths. Within a half hour the system was stuttering. I tried tuning other configuration settings. No dice.</p><p>I pored over all of the code I had written.
I wrote a merge filter that was a heavy abuse of the C++ STL&#8217;s map type, and I was quite proud of it. Was it somehow the problem? I added some logging and ran the system until the problem happened again. It didn&#8217;t seem to be the culprit.</p><p>At this point, I went over everything I had done with my boss. We brainstormed one or two more things it could be, but over the next few days I tested each of them, and the system would still seize up after 15-30 minutes.</p><p>And at that point, my boss decided that we should stop investigating it. It&#8217;s just a research prototype; we&#8217;re not shipping this to customers. We just needed to make it easy to restart a scenario remotely when it acted up. So for a few months, we just lived with this.</p><p>And then one day three months later, I was in the middle of a meeting for an unrelated project. I was speaking a sentence that had nothing to do with the project. Suddenly a moment of clarity overcame me.</p><p>The stuttering isn&#8217;t caused by a single component. The stuttering is caused by EVERYTHING. The sensor sample rate. The fanout. The fan-in. The GPU algorithm. Passing every packet along.</p><p>So here&#8217;s what was happening: we had about 240 sensor samples per second &#8212; 200 IMU samples, plus four cameras each producing 10 images. Do you remember the rules that our elders had for using this system? Never parallelize, and only pass along data that you will use? Well, it turned out those didn&#8217;t just exist to annoy my boss.</p><p>So let&#8217;s say all of these sensor messages fan out to 4 modules. Each of those 4 modules gets 240 sensor messages per second.
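The back-of-the-envelope message rates, with a fanout of 4 modules standing in for the real graph:

```python
# Rough message-rate arithmetic for the pipeline described above.
# The fanout of 4 is illustrative; the real graph had a similar shape.
imu_rate = 200                # IMU position packets per second
camera_rate = 4 * 10          # four cameras at 10 frames per second
sensor_rate = imu_rate + camera_rate   # messages/s entering the pipeline

fanout = 4                    # parallel modules, each forwarding every packet
merge_rate = fanout * sensor_rate      # messages/s arriving at the merge sink

print(sensor_rate)  # 240
print(merge_rate)   # 960, i.e. almost 1000 messages per second
```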
If they then pass all of these messages to a merge, that merge has to process almost 1000 messages per second, as does every downstream module.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zFGK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8a58c96-d292-42ff-8313-368e6af0dbb3_836x538.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zFGK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8a58c96-d292-42ff-8313-368e6af0dbb3_836x538.png 424w, https://substackcdn.com/image/fetch/$s_!zFGK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8a58c96-d292-42ff-8313-368e6af0dbb3_836x538.png 848w, https://substackcdn.com/image/fetch/$s_!zFGK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8a58c96-d292-42ff-8313-368e6af0dbb3_836x538.png 1272w, https://substackcdn.com/image/fetch/$s_!zFGK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8a58c96-d292-42ff-8313-368e6af0dbb3_836x538.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zFGK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8a58c96-d292-42ff-8313-368e6af0dbb3_836x538.png" width="836" height="538" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c8a58c96-d292-42ff-8313-368e6af0dbb3_836x538.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:538,&quot;width&quot;:836,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:73306,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.clientserver.dev/i/161936832?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8a58c96-d292-42ff-8313-368e6af0dbb3_836x538.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zFGK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8a58c96-d292-42ff-8313-368e6af0dbb3_836x538.png 424w, https://substackcdn.com/image/fetch/$s_!zFGK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8a58c96-d292-42ff-8313-368e6af0dbb3_836x538.png 848w, https://substackcdn.com/image/fetch/$s_!zFGK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8a58c96-d292-42ff-8313-368e6af0dbb3_836x538.png 1272w, https://substackcdn.com/image/fetch/$s_!zFGK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8a58c96-d292-42ff-8313-368e6af0dbb3_836x538.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The downstream modules all have input queues. When the queues in this system filled up, they stopped accepting new messages and just dropped the others. We thought by setting the queue length in the thousands, we would always be guaranteed to catch up.</p><p>However, we had an extremely tight budget on our stereo vision. It took a little less than 100ms to run, and needed to process 10 frames per second. Did it really take a little less than 100ms to run? No, in practice that was the floor. It wasn&#8217;t uncommon for it to take 120 or 130 milliseconds. The laptop was doing a lot of work. RAM was filled to the gills, the CPUs were running close to max, and the GPUs were also fully saturated. 
Just because of system scheduling, sometimes it took even longer to finally get around to reading the GPU response.</p><p>The problem is that the octopus merge effectively waited for the stereo packet to synchronize all of the outputs of the various streams, so it had the same frame rate as the stereo module. When it finally finished processing a frame, it would gallop through a bunch of IMU packets until it reached one with camera frames it hadn&#8217;t yet seen. But every delay would slowly make the queue longer, and longer, and longer. Eventually, the whole queue would be completely filled up, and new camera frames would mostly be dropped on the floor. Suddenly, most of the images don&#8217;t actually get merged or sent downstream, because they&#8217;re just never added to the queue. And the final few steps of the system &#8212; merge and send to the video game server &#8212; were very cheap, so nobody noticed that they were doing appreciably less work.</p><p>So I removed the &#8220;pass through unnecessary packets&#8221; behavior and put a system on a bench. Multiple hours later, it was still running smoothly. It even seemed a little more responsive than before! I finally caught up with my boss and delivered the news.</p><p>He went through all seven stages of grief. He was shocked that I even suggested this. Denial came quick. Anger came quicker. But we worked through the stages until he reached the inevitable conclusion: he couldn&#8217;t just pass through all information to each filter. We actually needed to spend a few minutes modifying the config whenever we needed data passed through.</p><p>What are the takeaways? Well, first, having good monitoring would have made this a non-issue. However, that was a bit challenging in 2009; it was years before common monitoring tools like Prometheus and Grafana were developed. 
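</p><p>To see what a queue-depth metric would have caught, here is a toy simulation of the dynamic described above. All of the numbers are assumptions reconstructed from my description (10 FPS camera, a nominal 100 ms stereo budget that sometimes runs to 130 ms, a queue capped at 1,000 entries), not measurements:</p>

```python
import random

def simulate(seconds, queue_cap=1000):
    """Toy model: a producer enqueues frames at 10 FPS; the consumer gets
    100 ms of wall time per frame interval, but each frame actually costs
    95-130 ms of GPU time. Returns (final queue depth, frames dropped)."""
    random.seed(0)
    queue = []         # per-frame GPU costs waiting for the stereo module
    in_progress = 0.0  # time left on the frame currently being processed
    dropped = 0
    for _ in range(seconds * 10):        # one step per 100 ms of wall time
        if len(queue) < queue_cap:
            queue.append(random.uniform(0.095, 0.130))
        else:
            dropped += 1                 # queue full: frame hits the floor
        budget = 0.100                   # wall time available this step
        while budget > 0:
            if in_progress <= 0:
                if not queue:
                    break
                in_progress = queue.pop(0)   # start the next queued frame
            work = min(budget, in_progress)
            in_progress -= work
            budget -= work
    return len(queue), dropped

for minutes in (1, 5, 15, 30):
    print(minutes, "min:", simulate(minutes * 60))
```

<p>With these assumed numbers, the backlog grows by roughly a frame per second, so a 1,000-entry queue buys only about 15 minutes before frames start getting dropped, which lines up with the 15-30 minute stutter window. 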
Second, this may have been the first <a href="https://www.clientserver.dev/p/citigroup-and-extreme-overpayments">&#8220;emergent behavior&#8221; bug</a> that I debugged, where no individual component of the system was malfunctioning but the whole system was misbehaving. When I was 24, I was still fixated on the idea that some individual component must be misbehaving. In a way, I was the component that was misbehaving, because I was one of the technical owners of the application graph.</p><p></p><p><em>Do you have a war story that you&#8217;d like to share? If so, email a brief pitch to jakevoytko@gmail.com</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Client/Server! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>The cameras on the helmet were spaced really far apart to make stereo vision work. It was like 2x a normal interocular distance. When you viewed this through goggles it basically made you unable to move, because your body stopped trusting how far away anything was. 
We never overcame this, and this was probably the primary takeaway for the government.</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>What a depressing thought.</p></div></div>]]></content:encoded></item><item><title><![CDATA[How did Google's illegal ad monopoly work?]]></title><description><![CDATA[I dug through the ruling that declares Google an illegal monopoly. I explain the ad market and outline the ways the court accuses them of acting anticompetitively.]]></description><link>https://www.clientserver.dev/p/how-did-googles-illegal-ad-monopoly</link><guid isPermaLink="false">https://www.clientserver.dev/p/how-did-googles-illegal-ad-monopoly</guid><dc:creator><![CDATA[Jacob Voytko]]></dc:creator><pubDate>Mon, 21 Apr 2025 12:03:11 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!nTTU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf2afa9c-42bf-4dc3-9b6d-d3f9f2ba1437_782x375.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last week, Google was found to be operating an illegal monopoly in the ad tech space. <a href="https://fingfx.thomsonreuters.com/gfx/legaldocs/zgpojydmzvd/GOOGLE%20ruling.pdf">I highly recommend reading the ruling</a> if you want to understand the case. It provides a great description of how modern adtech stacks work, and how Google abused them  unfairly. You probably need a few hours to fully absorb it.</p><p>You don&#8217;t have a few hours? You&#8217;ve come to the right place. I don&#8217;t have any qualifications &#8212; I&#8217;m not a lawyer and I never worked in adtech. But unlike many of you, I read the ruling and I&#8217;m doing my best to summarize it. 
If you want something authoritative, then find someone who knows what they&#8217;re talking about.</p><p>Anyways, here are my notes:</p><ul><li><p>Google was accused of having an illegal monopoly in three parts of the adtech stack: publisher ad servers, ad exchanges, and advertiser-side ad networks like AdWords. The complaint focuses heavily on their abuse of the publisher ad server and ad exchange, saying that they were able to unfairly exclude other exchanges from competing because of anticompetitive features in the publisher ad server.</p></li><li><p>To describe the adtech stack, imagine that a publisher like the New York Times wants to run advertisements. Their view of the ecosystem might look something like this:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!nTTU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf2afa9c-42bf-4dc3-9b6d-d3f9f2ba1437_782x375.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!nTTU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf2afa9c-42bf-4dc3-9b6d-d3f9f2ba1437_782x375.png 424w, https://substackcdn.com/image/fetch/$s_!nTTU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf2afa9c-42bf-4dc3-9b6d-d3f9f2ba1437_782x375.png 848w, https://substackcdn.com/image/fetch/$s_!nTTU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf2afa9c-42bf-4dc3-9b6d-d3f9f2ba1437_782x375.png 1272w, 
https://substackcdn.com/image/fetch/$s_!nTTU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf2afa9c-42bf-4dc3-9b6d-d3f9f2ba1437_782x375.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!nTTU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf2afa9c-42bf-4dc3-9b6d-d3f9f2ba1437_782x375.png" width="782" height="375" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bf2afa9c-42bf-4dc3-9b6d-d3f9f2ba1437_782x375.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:375,&quot;width&quot;:782,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:73349,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.clientserver.dev/i/161645092?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf2afa9c-42bf-4dc3-9b6d-d3f9f2ba1437_782x375.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!nTTU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf2afa9c-42bf-4dc3-9b6d-d3f9f2ba1437_782x375.png 424w, https://substackcdn.com/image/fetch/$s_!nTTU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf2afa9c-42bf-4dc3-9b6d-d3f9f2ba1437_782x375.png 848w, https://substackcdn.com/image/fetch/$s_!nTTU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf2afa9c-42bf-4dc3-9b6d-d3f9f2ba1437_782x375.png 1272w, 
https://substackcdn.com/image/fetch/$s_!nTTU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf2afa9c-42bf-4dc3-9b6d-d3f9f2ba1437_782x375.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The NYTimes contacts an ad server (run by Google) that fires off a query when someone visits the page. The ad server contacts competing exchanges (one of which is owned by Google) to determine which exchange can provide the best bid. 
Each exchange itself runs an auction across all of the demand-side platforms to determine what to bid for the given request.</p></li><li><p>The NYTimes&#8217; ad server is supposed to make decisions that are in the best interest of the New York Times. Otherwise, why would they use it? However, Google repeatedly took advantage of the fact that it had a monopoly on ad servers AND ran its own exchange. It gave its own ad exchange preferential treatment, deliberately harming the competing exchanges.</p></li><li><p>Google added features across the publisher platform / ad exchange ecosystem to limit competition.</p><ul><li><p><strong>First look</strong>: any publisher using Google&#8217;s publisher platform was required to offer Google&#8217;s exchange a buyout price. So Google&#8217;s ad exchange could buy inventory even though another ad exchange might have bid higher.</p></li><li><p><strong>Open Bidding</strong>: to counteract Google&#8217;s stranglehold on the market, publishers began a practice called &#8220;header bidding&#8221;, where they could offer inventory across many ad exchanges simultaneously. Google added Open Bidding to its own ad platform so that publishers could easily use Google-provided header bidding instead of rolling their own. However, there was a big difference: when it was done on-platform, Google would charge non-Google ad exchanges a 5% fee.</p></li><li><p><strong>Last look</strong>: after an auction ended across the exchanges, Google&#8217;s ad exchange was allowed to submit a bid 1 cent higher than the winner and steal the auction.</p></li><li><p><strong>Project Poirot</strong>: Google&#8217;s ad-buying tools submitted significantly lower bids to non-Google exchanges, shifting spend toward Google&#8217;s own exchange. 
On average, spending on non-Google ad exchanges dropped 15%.</p></li><li><p><strong>Unified pricing rules</strong>: Worried about appearing anticompetitive, Google removed Last Look and introduced &#8220;Unified Pricing Rules.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>&#8221; Some publishers had grown concerned about their reliance on Google. To counter this, they set their price floors for Google&#8217;s exchange much higher than for non-Google exchanges. In theory, this should have sent Google&#8217;s exchange much less traffic. Unified Pricing Rules forced the floor for Google&#8217;s exchange to be the same as for every other exchange, so publishers could no longer steer demand away from Google.</p></li></ul></li><li><p>After all of these changes, many publishers were disgruntled over their reliance on Google&#8217;s ad tech stack. They had explicitly tried to break that reliance through &#8220;header bidding&#8221; that would allow them to contact multiple exchanges. But Google ended up capturing them by moving a Google-friendly header-bidding implementation into the publisher ad server. The judgment notes that Google retained 99 of their top 100 publishers because there were no realistic alternatives to Google&#8217;s publisher stack.</p></li><li><p>Google tried to argue that the entire market is a two-sided marketplace. But the ruling notes that across the adtech industry, each of the components is considered its own separate line of business and functions as its own marketplace.</p></li><li><p>Google&#8217;s assertions that there were alternative stacks like Facebook&#8217;s or Amazon&#8217;s were not compelling, because it&#8217;s not like the New York Times could sell its inventory exclusively through Facebook or Amazon. 
They have inventory on their site that they need to fill, and Google&#8217;s publisher platform is the only viable game in town.</p></li><li><p>In conclusion: the court agreed that their publisher ad server and ad exchange constituted monopolies. Additionally, it found that through the anticompetitive practices listed above (and some unlisted), Google acted illegally.</p></li></ul><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Client/Server! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><p>When I left Google in 2015 after almost 5 years on the Google Docs team, I asked everyone I knew the following question: &#8220;Did Google make money off of me?&#8221; I got every answer under the sun.</p><ul><li><p>&#8220;Obviously! You helped write features that are checklist items in enterprise contracts.&#8221;</p></li><li><p>&#8220;Obviously! The product is mostly free, but you are responsible for any downstream revenue it may generate in perpetuity.&#8221;</p></li><li><p>&#8220;Obviously not! If Google Docs were a standalone business it would be wildly unprofitable.&#8221;</p></li><li><p>&#8220;Obviously not! 
It&#8217;s silly to believe that Docs is the primary reason that anyone buys an enterprise contract over Microsoft.&#8221;</p></li></ul><p>But to me, the most correct answer came from an engineer on the Google Sheets team. He said, &#8220;They obviously made money on you. It&#8217;s incalculably large. You participated in creating an ecosystem between Docs, Chrome, the other editors, and every future integration that Docs has. Strengthening a part of the ecosystem strengthens the whole thing, and causes everything to become more valuable as a unit.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>&#8221;</p><p>The ad money was obviously the secret sauce to this ecosystem. Nobody pretended that the Docs/Drive organization could stand alone. Microsoft would have killed us before breakfast<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>. But Docs was granted years and years where the only objective was growth. Nobody cared if the growth was enterprise or consumer. They just wanted growth. Google Docs became huge and eventually the focus shifted towards enterprise customers.</p><p>It also enabled lots of bad behavior across the company. In truth, the ad money erased all of the risk. There was no consequence for the management chain lacking vision. There was no consequence to being an also-ran in a new industry. 
Many of the engineering &#8220;best practices&#8221; that I learned only make sense when money and delivery timelines don&#8217;t matter.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Client/Server! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>If you haven&#8217;t worked in tech for long, anytime a company removes a controversial thing and adds a new thing in the same breath, that new thing is somehow worse.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Yes, I did believe the take that maximized my contribution. 
No, I will not be taking questions at this time.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Back then, Microsoft&#8217;s tactic when competing against Google for a contract was telling the prospective client, &#8220;If you choose us, we will give you every feature in Google&#8217;s offering for free.&#8221; How do you compete with that?</p></div></div>]]></content:encoded></item><item><title><![CDATA[Slopsquatting targets LLM coders with supply-chain attacks]]></title><description><![CDATA[Sometimes LLMs generate fake package names. Attackers know this, and publish fake packages under these hallucinated names.]]></description><link>https://www.clientserver.dev/p/slopsquatting-targets-llm-coders</link><guid isPermaLink="false">https://www.clientserver.dev/p/slopsquatting-targets-llm-coders</guid><dc:creator><![CDATA[Jacob Voytko]]></dc:creator><pubDate>Fri, 18 Apr 2025 12:00:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/9f0acccc-7958-47e1-91d4-f7d7ad2e8421_3079x2049.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Researchers have warned us about LLM package hallucinations for a while. But now, researchers have discovered that the LLM hallucinations are stable and frequent. 5-20% of package names (depending on the LLM) are fake. Attackers know this, and they are squatting on these fake names. They are doing this today.</p><p>To back up: In <a href="https://web.archive.org/web/20230606134351/https://vulcan.io/blog/ai-hallucinations-package-risk/">June of 2023 </a>and <a href="https://arxiv.org/abs/2406.10279">June of 2024</a>, security researchers explored LLMs hallucinating package names in popular languages. 
This was a variant of &#8220;<a href="https://en.wikipedia.org/wiki/Typosquatting">typosquatting</a>&#8221;, where an attacker publishes a package with an easily-confused name and waits for people to accidentally download it.</p><p>The idea behind the slopsquatting attack is simple:</p><ul><li><p>LLMs generate code that references nonexistent packages.</p></li><li><p>An attacker publishes packages under these names. Maybe there is a malicious payload now. Maybe a later update will add one.</p></li><li><p>A developer generates code containing the fake package name.</p></li><li><p>The package exists and seems to work. The developer accepts the changes into their project.</p></li></ul><p>To prove their point, one of the researchers <a href="https://www.bankinfosecurity.com/hackers-use-ai-hallucinations-to-spread-malware-a-24793">published a fake empty package</a> under the name <code>huggingface-cli</code>.</p><blockquote><p><a href="https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/">The Register - AI hallucinates software packages and devs download them &#8211; even if potentially poisoned with malware</a></p><p>The result, he claims, is that <code>huggingface-cli</code> received more than 15,000 authentic downloads in the three months it has been available.</p><p>"In addition, we conducted a search on GitHub to determine whether this package was utilized within other companies' repositories," Lanyado said in <a href="https://www.lasso.security/blog/ai-package-hallucinations">the write-up</a> for his experiment.</p><p>"Our findings revealed that several large companies either use or recommend this package in their repositories. 
For instance, instructions for installing this package can be found in the README of a repository dedicated to research conducted by Alibaba."</p><p>Alibaba did not respond to a request for comment.</p><p>Lanyado also said that there was a Hugging Face-owned project that incorporated the fake huggingface-cli, but that <a href="https://github.com/huggingface/diffusers/commit/56b68459f50f7d3af383a53b02e298a6532f3084">was removed</a> after he alerted the biz.</p></blockquote><p>And I have some sympathy for the Hugging Face employee who added a dependency on the fake package<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. It&#8217;s lazy to point a finger at them. However, the modern package manager situation is unbelievable. I&#8217;m supposed to just download these third-party libraries with dozens of dependencies across dozens of authors and hope that it&#8217;s all going to work out in the end? You&#8217;ll wake up screaming every night if you think too hard about it.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.clientserver.dev/subscribe?"><span>Subscribe now</span></a></p><p>A fake package with a plausible name is one thing. But how often do LLMs actually cause this to happen? 
Thanks to researchers, <a href="https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/">now we have some data</a>.</p><p>First, LLMs hallucinate fake packages a lot.</p><blockquote><p>In a recent study, researchers found that about 5.2 percent of package suggestions from commercial models didn't exist, compared to 21.7 percent from open source or openly available models.</p></blockquote><p>The names they hallucinate tend to be persistent.</p><blockquote><p>The recurrence appears to follow a bimodal pattern - some hallucinated names show up repeatedly when prompts are re-run, while others vanish entirely - suggesting certain prompts reliably produce the same phantom packages.</p><p>As <a href="https://socket.dev/blog/slopsquatting-how-ai-hallucinations-are-fueling-a-new-class-of-supply-chain-attacks">noted</a> by security firm Socket recently, the academic researchers who explored the subject last year found that re-running the same hallucination-triggering prompt ten times resulted in 43 percent of hallucinated packages being repeated every time and 39 percent never reappearing.</p></blockquote><p>The researchers also noted that open-source models tended to hallucinate package names about 4 times as often as their commercial counterparts. So, as of September 2024 &#8212; which is when this research was published &#8212; you get what you pay for.</p><p>&#8220;But that was September, and it&#8217;s April now! The LLM world moves so fast! Surely all of the models have had time to adapt.&#8221; If you believe that, then don&#8217;t bother checking your dependencies. That&#8217;s way above my own personal risk tolerance. Personally, I&#8217;m going to keep reviewing the code that they generate until I&#8217;m made obsolete as a reviewer.</p><p>But take heart! 
People are doing this in the wild and training each other.</p><blockquote><p>He also noted that recently a threat actor using the name "_Iain" published a playbook on a dark web forum detailing how to build a blockchain-based botnet using malicious npm packages.</p><p>Aboukhadijeh explained that _Iain "automated the creation of thousands of typo-squatted packages (many targeting crypto libraries) and even used ChatGPT to generate realistic-sounding variants of real package names at scale. He shared video tutorials walking others through the process, from publishing the packages to executing payloads on infected machines via a GUI. It&#8217;s a clear example of how attackers are weaponizing AI to accelerate software supply chain attacks."</p></blockquote><p>I tried to find these videos, but this was futile since I don&#8217;t know anything about the dark web. The extent of my l33t haxxing was running a ton of search queries against Bing and Yandex instead of Google.</p><p>I think the only silver lining is that this guy is selling video courses. If he were wildly successful with this approach he wouldn&#8217;t need to grind out the videos to make extra money. But you know what they say about attacks: they never get worse. They only get better and better.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/p/slopsquatting-targets-llm-coders?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Client/Server! 
This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/p/slopsquatting-targets-llm-coders?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.clientserver.dev/p/slopsquatting-targets-llm-coders?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>As long as they didn&#8217;t have YOLO mode enabled. Not even once.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Bazel is too pure for this world]]></title><description><![CDATA[At the 10-year anniversary of Bazel's announcement, I reflect on my disappointment that Bazel never became a viable build system for small-time development.]]></description><link>https://www.clientserver.dev/p/bazel-is-too-pure-for-this-world</link><guid isPermaLink="false">https://www.clientserver.dev/p/bazel-is-too-pure-for-this-world</guid><dc:creator><![CDATA[Jacob Voytko]]></dc:creator><pubDate>Mon, 14 Apr 2025 12:03:18 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f3618fe7-1bff-43bc-ac7a-b8a4f108ecda_959x1278.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A few weeks ago was the 10-year anniversary of Bazel&#8217;s public announcement. An engineer from the founding team <a href="https://blog.engflow.com/2024/10/01/birth-of-the-bazel/">published a retrospective</a> outlining the history of the project.</p><p>As a Blaze-pilled former Googler, I originally wished Bazel would become a good build system for every language. 
I didn&#8217;t want it to be the <em>only</em> build system for each language. I didn&#8217;t care if it was the best one. I just wanted to be able to easily use it everywhere. But it never reached &#8220;viable&#8221; in many situations, for reasons that I&#8217;ll get into below. I&#8217;ve come to think that Bazel is for three groups of people:</p><ul><li><p>Blaze-pilled former Googlers<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>.</p></li><li><p>Companies that are large enough to staff a build infrastructure team.</p></li><li><p>Companies with insanely slow builds or tests.</p></li></ul><p>There are certainly other people who use it. They could probably live without it.</p><p>Why are its use cases so narrow? It&#8217;s because Google is a separate branch of engineering evolution. If Google hired an experienced engineer off the street today, they would think, &#8220;why is all of this so alien? Why can&#8217;t I recognize any of this?&#8221; It&#8217;s because Google is an old company by internet standards. They had to invent their ability to scale as they went along. So they invented a bunch of new engineering, and then Google ossified around the engineering. So now onboarding into Google is learning this ossified way, the Google way<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>.</p><p>Bazel was born from Google&#8217;s build infrastructure. It has the same virus: Bazel makes you do things the Bazel way. This is a huge problem for Bazel. The world has drifted away from Bazel&#8217;s happy place.</p><p>Bazel wants every target to be fully specified. It should have an exact set of inputs and a well-defined configuration. It should be able to clearly define all of its outputs ahead of time. And everything must be reproducible: the same inputs should always produce the same outputs. 
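To make &#8220;fully specified&#8221; concrete, here is roughly what Bazel asks of you for even a tiny Go library. This is a hypothetical sketch; the rule names come from rules_go, and the exact load path varies by rules version:

```starlark
# BUILD.bazel -- hypothetical example; rule names from rules_go
load("@io_bazel_rules_go//go:def.bzl", "go_library", "go_test")

go_library(
    name = "parser",                      # the target other rules depend on
    srcs = ["parser.go", "lexer.go"],     # exact input files, enumerated
    importpath = "example.com/app/parser",
    deps = ["//internal/tokens"],         # every dependency, spelled out
    visibility = ["//visibility:public"],
)

go_test(
    name = "parser_test",
    srcs = ["parser_test.go"],
    embed = [":parser"],                  # test compiles against the library
)
```

With plain Go tooling, none of this file exists: <code>go build ./...</code> infers all of it from the filesystem and import statements.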
The build system also wants to touch every directory of your project. Ideally, you would define per-module BUILD files and maintain them over time.</p><p>In practice, the world has moved more aggressively towards &#8220;convention over configuration&#8221; and not worrying too much about the details. Take Go for instance: you can just run &#8220;<code>go install</code>&#8221; or &#8220;<code>go get</code>&#8221; or &#8220;<code>go build</code>&#8221; and it Just Works. The same goes for building up a <a href="https://nextjs.org/docs/app/getting-started/project-structure">Next.js directory structure</a>: follow the conventions and the tooling figures out the rest.</p><p>And the world has also moved towards <a href="https://www.clientserver.dev/p/the-human-urge-to-replace-makefiles">command runners</a> and layers of compilers. For example, a modern frontend project would not have a monolithic compiler that understood how to use every single layer of the app. You might have the Tailwind compiler that dealt with the Tailwind, then an SCSS compiler that dealt with the SCSS, then some React compilers and plugins, and throw in a &#8220;CSS-in-JavaScript&#8221; plugin. Then all you need to do is run the right <code>npm run</code> command, and the build tooling invokes all of these compiler layers under the hood.</p><p>Bazel has tried to adapt to this over time. 
For example, additions like <a href="https://github.com/bazel-contrib/bazel-gazelle">Gazelle</a>, the <a href="https://github.com/aspect-build/rules_js">modern frontend rules</a>, and <a href="https://github.com/bazel-contrib/rules_foreign_cc">rules_foreign_cc</a> acknowledge that it&#8217;s difficult to build and maintain BUILD.bazel files, and that much of the world is built with external build systems that Bazel must interface with.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Client/Server! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>And yet, every time I use Bazel, I fall into some corner case that just doesn&#8217;t work, and then I&#8217;m arms-deep in someone else&#8217;s rule trying to figure out why the quoting is broken or something like that. I always hated running into some obscure command quoting problem when using a rule, or writing patch files to imported third-party libraries to make them work in the hermetic build environment, or discovering that a third-party dependency I&#8217;m downloading doesn&#8217;t always have a consistent hash, or upgrading Bazel and getting a host of deprecation warnings<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>. 
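For a concrete taste of the quoting problem: a genrule's <code>cmd</code> attribute is a string that goes through Make-style variable expansion and is then handed to a shell, so quoting bugs live one level removed from where you wrote them. A hypothetical example:

```starlark
# Hypothetical genrule: Bazel expands $(SRCS) and $@, then hands cmd to Bash.
genrule(
    name = "bundle_licenses",
    srcs = glob(["licenses/*.txt"]),
    outs = ["LICENSES.md"],
    # One layer of Make-style expansion, then shell word-splitting: a
    # filename with a space in it, or a macro that assembles this string
    # with one quote level too few, breaks in ways Bazel itself can't see.
    cmd = "cat $(SRCS) > $@",
)
```

And because <code>cmd</code> runs under Bash, this target needs a Bash on the host even though nothing about concatenating files requires one (genrule does have Windows-oriented variants like <code>cmd_bat</code>/<code>cmd_ps</code>, but most published rules don't provide them).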
I just don&#8217;t run into these problems when I don&#8217;t use Bazel.</p><p>Another common problem I&#8217;ve had: I work on OSX about 70% of the time and my Windows gaming desktop about 30% of the time. Thanks to a job I had 15 years ago, I&#8217;m comfortable developing in Windows and would rather not use WSL unless forced to. And Bazel always forces you, since many rules are powered by genrules &#8212; glorified Bash scripts &#8212; and many libraries do not include the Windows-equivalent commands. If you think &#8220;lul who uses Windows?&#8221; you may be surprised to learn that <a href="https://survey.stackoverflow.co/2024/technology">a plurality (and almost a majority) of developers use Windows professionally</a>.</p><p>And empirically, developers often don&#8217;t want Bazel. How do I know? I went through all of the open-source libraries on <a href="https://bazel.build/community/users">Bazel&#8217;s &#8220;Who's Using Bazel&#8221; page</a>. Some of them are abandoned micro-projects that never should have been listed. Even worse, some had already deleted their Bazel files by the time they were abandoned.</p><p>But I want to draw your attention to the following two projects:</p><ul><li><p><a href="https://github.com/google/nomulus">https://github.com/google/nomulus</a>, Google&#8217;s cloud service for operating TLDs. Bazel was removed in favor of Gradle.</p></li><li><p><a href="https://github.com/kubernetes/kubernetes">https://github.com/kubernetes/kubernetes</a>, the project that Google open-sourced, which became the popular orchestration layer we all know and love. Bazel was removed in favor of Make.</p></li></ul><p>Nomulus is owned by Google. Kubernetes started at Google. If anyone could figure out how to make them work, it&#8217;s these two projects. 
But it turns out that Bazel just isn&#8217;t a killer feature here, relative to &#8220;we have lowered the barriers for high-quality contributions to our codebase.&#8221;</p><p>In an open-source context, you should just follow open-source conventions. Nobody wants to learn the <code>BUILD.bazel</code> syntax. Nobody wants to know what a genrule or a custom rule is. If it&#8217;s a Java project, contributors just want to invoke <code>gradlew</code>; elsewhere, <code>npm run</code> or <code>make</code> or <code>pip</code> or whatever people typically do.</p><p>But oh man, if you&#8217;re in a company that can afford to maintain Bazel? It&#8217;s incredible. Building code at Google was incredible. I can see why entire companies have spawned to provide consulting services or incremental technology improvements for Bazel. And I can see why ex-Googlers everywhere are torturing their new employers with the threat of Bazel<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>.</p><p>You can do some really cool things with Bazel. When I was at Etsy, the Java codebase that powered the search stack was tragically slow to work on. Builds took a while, and rerunning the test suite took forever. It noticeably impacted development speed.</p><p>An engineer &#8212; somehow not ex-FAANG &#8212; prototyped a rewrite in Bazel. Of course, you need to do things The Bazel Way, so it took him a few weeks to break all of the circular dependencies that their previous build system was chill with, but Bazel absolutely COULD NOT accept. But he broke the circles one at a time, and eventually he had something that could compile.</p><p>The project was immediately greenlit when he demoed building the project and running their test suite. The build was already faster than it was before. And then he made a single change and reran the tests. Only the two tests that depended on the file rebuilt and reran. 
Instead of taking minutes to build and run the whole suite, it was over in a few seconds. After that, they went even further and used <code>rules_k8s</code> to quickly push containerized builds to any of their clusters. It was super cool, and I later tried it myself and had zero problems doing this pattern. It just worked.</p><p>But inevitably, I ran into problems somewhere else.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Client/Server! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>A director at a mid-size tech company once told me, &#8220;I love hiring ex-FAANG engineers. Preferably when they&#8217;ve worked somewhere between FAANG and here.&#8221; I knew exactly what he meant.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Google tried recruiting me back a few years ago. 
They hadn&#8217;t reached out to me in a while, so I asked, &#8220;why now?&#8221; The recruiter admitted that it was disproportionately difficult to onboard Staff+ engineers into Google&#8217;s engineering culture because Google does things differently. So they were trialing interviewing ex-Googlers who had leveled up outside of the company.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>This is a vast improvement over what would happen 10 years ago, which is that your project would be hopelessly broken every time you upgraded.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>About once every 6 months, when we run into some random technical problem on our team, I suggest &#8220;I think this is the inflection point where we should use Bazel&#8221; just to troll my manager. He falls for it every time. I will not stop doing this.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Helping Shopify employees game their AI promotion criteria]]></title><description><![CDATA[Shopify's CEO just announced that AI usage is expected of employees, and they will get peer feedback on AI usage. 
So why not help them out and brainstorm how they can game it.]]></description><link>https://www.clientserver.dev/p/helping-shopify-employees-game-their</link><guid isPermaLink="false">https://www.clientserver.dev/p/helping-shopify-employees-game-their</guid><dc:creator><![CDATA[Jacob Voytko]]></dc:creator><pubDate>Thu, 10 Apr 2025 12:03:28 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/9f29ee1b-80ad-4e46-8c51-e3f4d8612e78_1270x953.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Shopify&#8217;s CEO, Tobi Lutke, posted a company-wide memo declaring that using AI is now expected of Shopify employees.</p><p>After it leaked, he <a href="https://x.com/tobi/status/1909251946235437514">posted it to X/Twitter</a> in its entirety. The full memo is quite long, so below is the most important section.</p><blockquote><h2><strong>What This Means</strong></h2><ol><li><p><strong>Using AI effectively is now a fundamental expectation of everyone at Shopify.</strong> It's a tool of all trades today, and will only grow in importance. Frankly, I don't think it's feasible to opt out of learning the skill of applying AI in your craft; you are welcome to try, but I want to be honest I cannot see this working out today, and definitely not tomorrow. Stagnation is almost certain, and stagnation is slow-motion failure. If you're not climbing, you're sliding.</p></li><li><p><strong>AI must be part of your GSD Prototype phase</strong>. The prototype phase of any GSD project should be dominated by AI exploration. Prototypes are meant for learning and creating information. AI dramatically accelerates this process. You can learn to produce something that other team mates can look at, use, and reason about in a fraction of the time it used to take.</p></li><li><p><strong>We will add AI usage questions to our performance and peer review questionnaire</strong>. Learning to use AI well is an unobvious skill. 
My sense is that a lot of people give up after writing a prompt and not getting the ideal thing back immediately. Learning to prompt and load context is important, and getting peers to provide feedback on how this is going will be valuable.</p></li><li><p><strong>Learning is self directed, but share what you learned</strong>. You have access to as much of the cutting edge AI tools as possible. There is <a href="http://chat.shopify.io/">chat.shopify.io</a>, which we had for years now. Developers have <a href="https://proxy.shopify.ai/">proxy</a>, Copilot, Cursor, Claude code, all pre-tooled and ready to go. We&#8217;ll learn and adapt together as a team. We&#8217;ll be sharing Ws (and Ls!) with each other as we experiment with new AI capabilities, and we&#8217;ll dedicate time to AI integration in our monthly business reviews and product development cycles. Slack and Vault have lots of places where people share prompts that they developed, like <code>#revenue-ai-use-cases</code> and <code>#ai-centaurs</code>.</p></li><li><p><strong>Before asking for more Headcount and resources</strong>, teams must demonstrate why they cannot get what they want done using AI. What would this area look like if autonomous AI agents were already part of the team? This question can lead to really fun discussions and projects.</p></li></ol></blockquote><p>I&#8217;ve previously written about <a href="https://www.clientserver.dev/p/meta-terminates-its-low-performers">gaming performance management criteria at big companies</a>, which is where you selectively overperform on tasks that are valued by the promotion criteria, and try to avoid doing anything that is not rewarded by the promotion criteria.</p><p>Throughout the past 20 years, there have been common playbooks for gaming promotions in big companies. For example: let&#8217;s say that you&#8217;re a manager and you want to get promoted to director or higher. 
Career ladders normally say that as you get promoted, you will be managing progressively larger teams (or teams of teams). The naive approach is to deliver impact with your small team and then gradually get more engineers under you as you gain trust.</p><p>But that will take a long time. Let&#8217;s game the promotion criteria. Isn&#8217;t it better to find a company priority, work on functionality critical to the initiative&#8217;s success, and use that position to argue for as much headcount as possible<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>? As the team grows, you can even convince some engineers under you to become managers of a few employees each. Boom! Suddenly you&#8217;re managing a team that normally a director would. If you don&#8217;t mess up the execution, you&#8217;re a shoo-in for that promotion.</p><p>But the AI trend will require us to find new ways to game promotion criteria. So I want to help all of the Shopify employees who are now figuring out what this means. Obviously I don&#8217;t have access to your career ladder. But I have access to that memo and I&#8217;m feeling spicy. 
Let&#8217;s brainstorm ways that career-minded Shopify employees can game their promotion system.</p><p>First, let&#8217;s look at &#8220;AI values&#8221; that the memo calls out.</p><ul><li><p>A good Shopify employee&#8230;</p><ul><li><p>Uses AI to get 100x the work done</p></li><li><p>Improves by 20-40% every year</p></li><li><p>Learns from other Shopify employees</p></li><li><p>Follows the Shopify values &#8220;Be a constant learner&#8221; and &#8220;thrive on change&#8221;</p></li></ul></li><li><p>A good team&#8230;</p><ul><li><p>Uses AI during their &#8220;Get shit done&#8221; prototyping phase</p></li><li><p>Can demonstrate why AI cannot perform the work they want from a new headcount</p></li></ul></li><li><p>The performance review process&#8230;</p><ul><li><p>Will have AI usage questions on their performance and peer-review questionnaire</p></li><li><p>Will explicitly involve getting feedback from peers on AI usage</p></li></ul></li></ul><p>Okay, first things first: make sure that an AI agent can&#8217;t replace you. Can it give engineering feedback on designs and product specs? Can it take a design and product spec and produce an engineering spec? Can it produce code from the product spec? Can it write tests for that code? Can it fix code based on comments on your PR?</p><p>Is your job safe? Great! Let&#8217;s become a 100x AI-powered engineer.</p><p>We know that sharing knowledge is important. Document what it can and cannot do. I don&#8217;t know what Shopify uses. Google Docs? Notion? Cram your findings into there and share this with your immediate team.</p><p>First, it&#8217;s clear that Tobi wants the company to use AI to rapidly prototype. Does vibe coding come naturally to you? No? Then you need to find someone who will teach you. And here&#8217;s the trick: instead of just having them show you, have them give a public demo. 
Be the person who asks in a Slack channel, &#8220;hey, <code>@soandso</code> is going to demo how they build large prototypes quickly in Cursor. DM me if you have something neat you&#8217;d like to present, and react :raised-hand: if you want an invite.&#8221;</p><p>Why would you set up a meeting? Because there are going to be peer review questions involving the use of AI. You want to remind your coworkers early and often that you are a 100x AI Engineer.</p><p>Now that you know how to prototype rapidly using AI, use that knowledge to show everyone that you are a 100x AI engineer. Build an &#8220;AI Solutions&#8221; site. Why? As an outsider, I noticed that Tobi referenced Slack channels as the knowledge repositories for prompting. Slack is a great place to coordinate work and shitpost, and a bad place to store persistent institutional memory. So you&#8217;re going to have the AI spit out an internal website. Index it by outcome (prototype, debugging, etc.). Your first story will be a prototyping story showing the prompts you used to build the prototyping site. Next you&#8217;re going to add the example from the <code>@soandso</code> talk you organized. And then finally you&#8217;re going to have an AI scrape all of the messages from that Slack channel to fill out content. Remember: we&#8217;re being a 100x engineer, so if it sounds hard you&#8217;re just going to throw the AI at it.</p><p>Go ahead and share your new prototype site in the Slack channels where people have been collecting prompts. Mention that anyone who wants to add one should reach out to you. At this point, you have the option to never think about this again. Well, you should probably present it to your team or group or whatever. Who will be writing your peer reviews for your performance review? Make sure that they are all included in that group.</p><p>Now let&#8217;s look at your current project. What are the parts that you&#8217;re ashamed of? What are the parts that kinda suck? 
Try to throw AI at it and see what sticks. We&#8217;re trying to 100x, so don&#8217;t waste your time if it&#8217;s not working out. If something doesn&#8217;t look promising just immediately bin it.</p><p>Do your services have the READMEs they should? Well, they do now! Give Cursor some of the important files in your service and an example README and it&#8217;ll spit out a pretty reasonable one. Just fix it up and send it out.</p><p>Point it to particularly fragile parts of your project and ask it &#8220;what are the most likely bugs in this project?&#8221; and &#8220;how can this be refactored so that it is easier to modify X?&#8221;</p><p>If anything actually works, make sure you make a big deal about how you generated it with AI. You can even mention it in the commit message or the PR description&#8217;s body. Since you&#8217;re going to get peer review feedback on the use of AI, you want everyone to know that you are an AI engineer.</p><p>If you&#8217;re used to seeing people game promotion criteria in companies, this may look different than you&#8217;re used to! In most large companies you scale your impact with the size of the team or the value of the technical contribution. But in a regime where rapid prototyping and proper value sharing is important, you&#8217;re trying to scale your impact based on how much you can share without someone saying, &#8220;stop showing us all of this junk.&#8221;</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.clientserver.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Client/Server! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>If you want an alternative: you can create a newsletter, get all of your coworkers to subscribe, and then never shut up about generative AI.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[You should obviously still learn to code (if you want to)]]></title><description><![CDATA[Coding is one of the easy parts of being a software engineer. There are a whole host of coding-related activities and specialties that will be important for the foreseeable future.]]></description><link>https://www.clientserver.dev/p/you-should-obviously-still-learn</link><guid isPermaLink="false">https://www.clientserver.dev/p/you-should-obviously-still-learn</guid><dc:creator><![CDATA[Jacob Voytko]]></dc:creator><pubDate>Mon, 07 Apr 2025 12:03:45 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/e05509ac-0099-47d9-bd6b-ceabea704f4c_1276x957.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The recent &#8220;you shouldn&#8217;t learn to code&#8221; conversation was kicked off by Dario Amodei &#8212; the CEO of Anthropic &#8212; being asked about jobs in relation to AI systems.</p><blockquote><p><a href="https://www.youtube.com/live/esCSpbDPJik">The Future of U.S. AI Leadership with CEO of Anthropic Dario Amodei</a></p><p>I think we&#8217;ll be there in three to six months, where AI is writing 90 percent of the code. 
And then in twelve months, we may be in a world where AI is writing essentially all of the code.</p><p>But the programmer still needs to specify what are the conditions of what you&#8217;re doing? What is the overall app you&#8217;re trying to make? What&#8217;s the overall design decisions? How do we collaborate with other code that&#8217;s been written? How do we have some common sense on whether this is a secure design, or an insecure design? So as long as there are these small pieces that a human programmer needs to do, the AI isn&#8217;t good at&#8230; I think human productivity will actually be enhanced. </p><p>But on the other hand, I think that eventually all those little islands will get picked off by AI systems. And then we will eventually reach the point where, you know, the AIs can do everything that humans can.</p></blockquote><p>Amjad Masad at Replit took the ball and ran with it.</p><blockquote><p><a href="https://x.com/amasad/status/1905103640089825788">Amjad Masad, posted to Twitter/X on March 26th 2025</a></p><p>I no longer think you should learn to code.</p></blockquote><p>If you click through to the Tweet and then watch the associated video, you&#8217;ll see that his position is moderated a bit. He says that if Amodei is correct and essentially all code will be AI-generated within 12 months, it would be a waste of time to learn how to code. To me this gives him some outs, like &#8220;it&#8217;s 12 months from now and wow that wasn&#8217;t true, obviously that changes my response.&#8221; But I&#8217;m going to take the Tweet as his most-recent belief since it was in response to the video.</p><p>If you&#8217;re new to the industry: don&#8217;t worry. It&#8217;s still useful to know how to code now, even by Replit&#8217;s standards. 
At the time of writing, <a href="https://jobs.ashbyhq.com/replit">Replit has seven open engineering positions</a>: four Software Engineers, one SRE, one Head of Product Engineering, and one Design Engineer.</p><p>The CEO of Nvidia, Jensen Huang, also had a similar take but on a longer time horizon.</p><blockquote><p><a href="https://vulcanpost.com/853029/dont-learn-to-code-jensen-huang-on-career/">Don&#8217;t learn to code: Nvidia&#8217;s founder Jensen Huang advises a different career path</a></p><p>Over the course of the last 10 years, 15 years, almost everybody who sits on a stage like this would tell you that it is vital that your children learn computer science. [That] everybody should learn how to program. And in fact, it&#8217;s almost exactly the opposite.</p><p>It is our job to create computing technology such that nobody has to program and that the programming language is human. <strong>Everybody in the world is now a programmer. </strong></p></blockquote><p>Let&#8217;s summarize their claims in terms of the timelines:</p><ol><li><p>Amodei doesn&#8217;t put a timeline on his prediction. But he says in the video that it&#8217;s the kind of prediction where he will be ridiculed on at least a 10-year time horizon. So this prediction is far into the future but quite specific: the full economic output of software engineering jobs will be captured by AI.</p></li><li><p>Huang thinks that current children should not learn to code. So let&#8217;s say this is a 12-year prediction (i.e., you should dissuade a precocious 10-year-old from learning how to code if they were interested).</p></li><li><p>Masad thinks that nobody should learn to code. 
You can become a non-junior software engineer about 6 years after you begin learning<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>, so let&#8217;s say that he predicts that the net lifetime gain on learning coding will become negative within 6 years.</p></li></ol><p>Yes, these are CEOs who hype AI coding for financial gain. But I&#8217;ll be fair to them. If they were 100% confident in their stated positions, how would they behave differently? I&#8217;ll assume that these arguments are in good faith and I&#8217;ll engage with the arguments directly.</p><p>First, let&#8217;s take the most aggressive timeline: Masad&#8217;s assertion that you should not learn how to code now, given that essentially all code will be written by AI in 12 months.</p><p>What does that imply? Let&#8217;s say that you&#8217;re entering a computer science program this fall. It is obviously important to get the best internship and jobs that you possibly can, regardless of whether you can code. You don&#8217;t know how the future of work will change, but &#8220;working at a world-class engineering company&#8221; is a good bet for catching the next wave. Plus the money is always nice.</p><ul><li><p>For the first 3 years of college, you&#8217;re going to study all of the regular courses. Algorithms, operating systems, etc. You learn them well. You minor in something else. But you will complete all of your assignments with generative AI. All of your teachers need to accept that you use generative AI tools on their quizzes and exams.</p></li><li><p>During your 3rd year, you will land an internship. 
By then, every internship interview screen will need to allow generative AI coding tools.</p></li><li><p>You will use AI agents to code your way through your internship, gaining positive feedback and a reference you can use for your job search.</p></li><li><p>During your 4th year, you will need to interview with prospective companies using your generative coding interview skills and your reference from your internship.</p></li><li><p>During your 5th and 6th years, you are a productive member of the organization post-graduation. You can efficiently communicate with AI agents, use generative coding to deploy, and use generated graphs to monitor the system in flight. After 2 years, you get the good news: you are being promoted from Junior Software Engineer to Software Engineer. Congratulations! You have demonstrated your economic value as a software engineer.</p></li></ul><p>And now couple this career path with the following claim: &#8220;The year is 2030. All of my company&#8217;s code is produced by AI. Nobody at my company knows a single line of code we&#8217;re running. This is the most optimal situation.&#8221;</p><p>Why does this sound so far-fetched? It&#8217;s because software engineering has a huge &#8220;last mile&#8221; problem. This is a phrase that I&#8217;m borrowing from logistics. It&#8217;s easy to ship between shipping hubs because there is a constant flow of freight between them, but the &#8220;last mile&#8221; to the house or retail store is much more difficult. You actually need to drive through traffic to that house, find where it is, park your truck somewhere, go to the door, and try to collect a signature. Software engineering has its own last mile. Once you have a spec for the code, producing the code can be mechanical. 
However, generating a clear spec for the code is difficult, and so is determining afterwards that the code functions as part of a working system.</p><p>In some ways, this reminds me of <a href="https://www.latimes.com/projects/la-fi-automated-trucks-labor-20160924/">articles from 10+ years ago declaring that trucking would soon be autonomous</a>. But then many of the major players in the space folded or kept pushing their deadlines back. Even now the furthest along seems to be <a href="https://aurora.tech/">Aurora</a>. They only operate in Texas, and their website says that they still use vehicle operators. So the impending doom from 8 years ago appears to be more than 10 years into the future, perhaps far more.</p><p>Let&#8217;s talk through some of the last-mile problems that software engineers face. These are the &#8220;islands&#8221; that Dario Amodei mentioned in his response. There are a bunch of them, like &#8220;gaining organizational consensus,&#8221; &#8220;making decisions that benefit the business,&#8221; and &#8220;managing stakeholders,&#8221; that engineers spend a lot of time on. 
Current LLM technology adds extra problems that you need expertise to resolve, like hallucinations and summarization errors. But I want to focus on the islands that relate to understanding code specifically.</p><p><strong>Island one: Security</strong></p><p>An incorrect model of security is &#8220;if every layer of the stack simply did security correctly, then there would never be a security vulnerability.&#8221; But security problems often fall across several layers of the stack. Sometimes every layer functions properly but the system as a whole is flawed; sometimes several layers misbehave together. An example of collective misbehavior: when I was at Etsy, we ported our infrastructure from our own servers to Google Cloud. Shortly after, we received a security bounty report that an attacker could log into any account simply by pasting any large payload into the password field. This was several layers of the app failing together<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>:</p><ul><li><p>We had accidentally configured one of the new Google infra bits to strip headers when they were above some threshold. It was something like 64 KB.</p></li><li><p>If the auth service received a request to / with no other information, it returned a 200.</p></li><li><p>The backend code checked for 200 response codes to see whether the password was accepted.</p></li></ul><p>There was no single broken part of the stack; the whole stack was wrong together.</p><ul><li><p>The Google infra bit needed to be changed to return a 4xx error.</p></li><li><p>The auth service needed a tighter contract, and additionally needed to return a specific payload like <code>{&#8220;status&#8221;: &#8220;ok&#8221;}</code>.</p></li><li><p>The backend code needed to check for both the response code and the expected payload.</p></li></ul><p>Fixing only one layer doesn&#8217;t reduce the fragility of the system. 
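</p><p>Here is a minimal sketch of the backend-side part of that fix. The names and shapes are invented for illustration, not Etsy&#8217;s actual code; the lesson is that the backend should accept a login only when the status code and an explicit payload both agree.</p>

```typescript
// Illustrative sketch only: names and shapes are invented, not Etsy's code.

// The fragile original check: any 200 counts as a successful login,
// including a bare "default" 200 from a misrouted request.
function isLoginAcceptedFragile(status: number): boolean {
  return status === 200;
}

// The tightened contract: the auth service must explicitly answer
// {"status": "ok"}; anything else is treated as a rejected login.
function isLoginAccepted(status: number, body: string): boolean {
  if (status !== 200) return false;
  try {
    const payload = JSON.parse(body);
    return payload !== null && payload.status === "ok";
  } catch {
    return false; // empty or unparseable body: reject
  }
}

// The stripped-header bug in action: the auth service saw a bare request
// to / and answered 200 with an empty body.
console.log(isLoginAcceptedFragile(200)); // true -- anyone is "logged in"
console.log(isLoginAccepted(200, ""));    // false
console.log(isLoginAccepted(200, '{"status": "ok"}')); // true
```

<p>The fragile version is exactly the kind of check an AI agent will happily leave in place, because it still satisfies the prompt.</p><p>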
The security team really needed to second-guess each layer and determine its correct function. So you can&#8217;t just point an AI agent at the problem and expect it to generate a holistic solution. If you just asked it to fix the problem, it would very likely just fix the Google infra bit.</p><p>And this is just a very simple example. For each security construct used by your application, it&#8217;s important to understand the actual code constructs and their properties. When the AI tries deleting that CSP, removing encryption from a cookie, or checking the response code instead of the payload on your login form (because that is the simplest way to satisfy your prompt), you need the perspective to understand why the check was there and what value it serves.</p><p><strong>Island two: System design</strong></p><p>At the moment, large software ecosystems are complex systems. They have the interesting property that they can have emergent behavior<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>. That means that every component of a system can be functioning correctly, yet the system fails when they work together. This means that you can&#8217;t create a properly-functioning system by simply writing a bunch of components that each work well on their own. It also means that you cannot predict all of the failure cases for the system.</p><p>In fact, properly maintaining the system turns into a <a href="https://www.clientserver.dev/p/citigroup-and-extreme-overpayments">control problem</a>. Think of your software system as an aircraft, and your graphs and alerts as its instruments. You are simply looking at all of the available data. And when you see the system deviate from the expected behavior, you need to react and move it back within normal operating parameters.</p><p>The AI is hampered by only having access to the code. 
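</p><p>To make &#8220;emergent behavior&#8221; concrete, here is a toy illustration of my own (not from any real incident): per-layer retry policies that are each locally reasonable, but that multiply into a storm when layers stack.</p>

```typescript
// Toy illustration: each layer in a call path retries its downstream
// call a "reasonable" number of times. Because every retry re-issues the
// entire downstream chain, the per-layer budgets multiply.
function totalAttempts(retryBudgets: number[]): number {
  return retryBudgets.reduce((total, budget) => total * budget, 1);
}

// One layer retrying 3 times is fine in isolation...
console.log(totalAttempts([3])); // 3

// ...but three stacked layers hammer a struggling backend 27 times
// per user action, even though each layer is "functioning correctly".
console.log(totalAttempts([3, 3, 3])); // 27
```

<p>No single layer&#8217;s code is wrong here, which is exactly why reading the code alone will not surface the failure.</p><p>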
I&#8217;m sure future systems will plug into your logs and your source code and your monitoring and boil an entire ocean to tell you that everything is happening within normal parameters. But at the moment, reading the code doesn&#8217;t tell you anything about the outside usage patterns. It doesn&#8217;t tell you what happened all of the previous times an approach caused bugs at a different company. This is where the human can own the code structure and organization, and go from a middling solution generated by the AI to the best solution for your particular domain.</p><p><strong>Island three: Handling on-call issues</strong></p><p>This is the flip side of the system-design coin. When the complex system is no longer functioning correctly, you need to be alerted to this fact and then understand the problem so that you can fix it.</p><p>There aren&#8217;t singular reasons that things fail. You can&#8217;t just point an AI at an error message that is flooding your logs and expect it to fix all problems without being plugged into the instrument panel on the plane. Say your database requests are hanging and then failing. This could have any cause, from &#8220;the database is overloaded because your application is running too hot&#8221; to &#8220;the database is misconfigured&#8221; to &#8220;your cloud provider is struggling&#8221; to &#8220;your new query has an infinite loop.&#8221; Until the AI system can be plugged into the instrument panel, it won&#8217;t produce proper fixes for this, and you will need to understand the underlying code and the underlying system well enough to convert that call stack problem into an actionable plan. And then maybe the AI takes over and happily generates the fix.</p><p>Errors happen at points in code. When you are on call, it is your responsibility to look at that code and decide what is happening. Very often, this will be in a piece of business logic. 
Sometimes there are several options for remediating an error: do you ignore the error? Do you catch and log the error at another level? Do you need to clean up the input source? Do you need to page the staff engineer on another team because nobody on the on-call rotation worked on that project and there&#8217;s no cookie-cutter answer? What is the difference between &#8220;the best fix&#8221; and &#8220;what is sufficient at 3am&#8221;?</p><p>Furthermore, does your on-call rotation depend on generative AI to produce solutions to get you back on track? What if it goes down during your outage? Are you just going to wait for it to come back up? Is your CEO going to be happy with that answer?</p><p><strong>So which CEO was the most correct?</strong></p><p>Let&#8217;s go from the most aggressive prediction to the least aggressive.</p><p>Amjad Masad at Replit: If you were entering a computer science program today, it would be irresponsible to bet your whole career on learning computer science without learning how to code. In some ways, coding is the easy part of what a software engineer does. There is a whole host of interrelated activities that all inform the code, and are all informed by the code. There is only one conclusion you can draw: it is imperative that you learn how to code. I also think that it&#8217;s important to learn how to use AI tools to accelerate this process. These tools are only going to get better, so you should learn how they can accelerate you.</p><p>Jensen Huang at Nvidia: This puts it on a longer timeframe, where children should not learn to code because they would be better served learning to think. In this world, everyone becomes a programmer because the systems are so powerful that the most important thing is asking good questions and providing the system good insight and input. It&#8217;s still quite a rigid timeline though, and as we see from autonomous trucking, sometimes the devil is in the details. 
This prediction is plausible but at risk, given the specific timeframe.</p><p>Dario Amodei of Anthropic: His view seems like the most plausible to me. I agree that coding is a small piece of the value of humans in software engineering. I can also imagine that eventually these AI systems will be so powerful that you can sit them in front of an AI-accelerated product manager, and they will have access to the current codebase and the logging and the monitoring and they can fly the plane by themselves, and there will no longer be humans involved in coding. But this does seem like a long way away. There are likely several generational improvements that need to happen before these systems can all coexist while using the amount of energy we have available on Earth.</p><p>So, yes. If you weren&#8217;t sure whether you should learn to code nowadays, take comfort. There is still a whole host of human activity surrounding the actual code. So even if the AI were generating a lot of it, humans still need to be in the loop understanding and refining the code for the foreseeable future.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>4 years of college and 2 years of industry experience for your first promotion.</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>I wasn&#8217;t on the security team, so this may not be the exact bug. It&#8217;s close enough though.</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>To give an example: when I was at Google, someone had compiled a cool Google Doc entitled &#8220;The GMail Death Ray&#8221; that detailed all of the ways that GMail had caused outages in non-GMail services. It was stuff like &#8220;we had a rolling deployment, but about 5 minutes after starting, the service would crash. So the service started sloshing from datacenter to datacenter every 5 minutes: 
starting up, saying it was healthy, and then taking down everything in that datacenter before moving on to the next one.&#8221;</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Next.js middleware was completely optional until 2 weeks ago]]></title><description><![CDATA[I break down a CVE where an attacker could bypass middleware in every Next.js version. What was it trying to do? How did it break? Why do we use middleware at all?]]></description><link>https://www.clientserver.dev/p/nextjs-middleware-was-completely</link><guid isPermaLink="false">https://www.clientserver.dev/p/nextjs-middleware-was-completely</guid><dc:creator><![CDATA[Jacob Voytko]]></dc:creator><pubDate>Thu, 03 Apr 2025 12:03:22 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/36ec56d1-87ac-460f-a8ed-d77a30fddb5a_958x1277.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Two weeks ago, Next.js had a critical vulnerability that affected every version that has ever been released. This has a 9.1/10 <a href="https://en.wikipedia.org/wiki/Common_Vulnerability_Scoring_System">CVSS score</a>, meaning that the attack is easy to perform and has severe consequences. An attacker can craft an HTTP request that causes Next.js to skip running any middleware that should have run.</p><p>Researchers Yasser Allam and Rachid Allam put out an <a href="https://zhero-web-sec.github.io/research-and-things/nextjs-and-the-corrupt-middleware">in-depth explanation</a> of the vulnerability that they had discovered. I highly recommend reading it.</p><p>They identified three sets of vulnerabilities in Next.js&#8217; handling of recursive calls. It looks like these are subrequests that are intended to be made after the middleware has executed. So Next.js adds these headers to notify the handler that the middleware can be skipped. 
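</p><p>A stripped-down sketch of the flawed pattern (my own reconstruction, not Next.js&#8217; actual code): the server trusts a header to decide whether middleware may be skipped, and nothing stops an external client from sending that same header.</p>

```typescript
// Stripped-down reconstruction of the flawed pattern; not Next.js' code.
// Internal subrequests are marked with a header, and the server later
// trusts that header to decide whether middleware may be skipped.
const SUBREQUEST_HEADER = "x-middleware-subrequest";

function shouldRunMiddleware(headers: Map<string, string>): boolean {
  const marker = headers.get(SUBREQUEST_HEADER) ?? "";
  // Recent versions counted recursion depth by counting the middleware
  // name in the header; at depth 5 the middleware is skipped.
  const depth = marker.split(":").filter((part) => part === "middleware").length;
  return depth < 5;
}

// An ordinary external request runs the middleware...
console.log(shouldRunMiddleware(new Map())); // true

// ...but nothing authenticates the header, so an attacker can simply
// claim to be a deeply-recursed internal subrequest.
const forged = new Map([
  [SUBREQUEST_HEADER, "middleware:middleware:middleware:middleware:middleware"],
]);
console.log(shouldRunMiddleware(forged)); // false -- middleware skipped
```

<p>The variants the researchers describe are essentially version-specific ways of forging that marker.</p><p>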
However, the system didn&#8217;t treat these headers like they could be supplied by an untrusted user. So it blindly accepted an attacker&#8217;s request to disable middleware.</p><ol><li><p>In versions of Next.js prior to 12.2, simply passing the header &#8220;<code>x-middleware-subrequest: pages/_middleware</code>&#8221; causes the middleware to be skipped and the page to be executed.</p></li><li><p>After version 12.2, it gets a little more complicated. Now it could be either &#8220;<code>x-middleware-subrequest: middleware</code>&#8221; or &#8220;<code>x-middleware-subrequest: src/middleware</code>&#8221;, depending on whether the application placed the middleware in the first or second location.</p></li><li><p>In recent versions, Next.js allows calls to execute recursively up to 5 times before skipping the middleware. So now you can simply pass &#8220;<code>x-middleware-subrequest: middleware:middleware:middleware:middleware:middleware</code>&#8221; to trick Next.js into believing that the middleware has already recursed 5 times, so it simply does not execute it.</p></li></ol><p>This bug checks all of my boxes for &#8220;is this a fun CVE?&#8221; As someone who often works on hidden load-bearing layers of applications, I love it when those infrastructure layers turn out to have juicy bugs. I love security vulnerabilities that are simple enough to explain in a casual blog post. And who doesn&#8217;t love a complete bypass?</p><p>Why is the impact so high?</p><p>Many news publications advertised this as an &#8220;auth bypass.&#8221; It&#8217;s a lot worse than that, but it&#8217;s also true that this can be an auth bypass. It&#8217;s common for authorization and authentication middleware to redirect to login screens when the user is not logged in, or to read the user ID from a secure cookie and provide it within the request&#8217;s context. 
Normally you&#8217;d expect the endpoint to try to access this user object and then break because of a null pointer exception, but it depends on how the endpoint is written. </p><p>You can imagine the appeal of auth libraries that work exclusively with middleware, especially in the frontend environment where Developer Experience is king. You can just tell users &#8220;Installation is soooo easy. Just install this middleware and it runs everywhere! Add a React login form in minutes!&#8221; And then you hope nobody notices that the <a href="https://nextjs.org/docs/pages/building-your-application/authentication#authorization">Next.js docs have heavy-handed recommendations</a> that put middleware as an optional up-front step, backstopped by architectural decisions.</p><blockquote><p>For both cases, we recommend:</p><ul><li><p>Creating a <a href="https://nextjs.org/docs/pages/building-your-application/authentication#creating-a-data-access-layer-dal">Data Access Layer</a> to centralize your authorization logic</p></li><li><p>Using <a href="https://nextjs.org/docs/pages/building-your-application/authentication#using-data-transfer-objects-dto">Data Transfer Objects (DTO)</a> to only return the necessary data</p></li><li><p>Optionally use <a href="https://nextjs.org/docs/pages/building-your-application/authentication#optimistic-checks-with-middleware-optional">Middleware</a> to perform optimistic checks.</p></li></ul></blockquote><p>If the CVE were just an authentication or authorization bypass, it would still be a severe bug. But this can bypass all middleware. The researchers give another motivating example: is a pesky Content Security Policy getting in your way? Well, run the bypass and that CSP goes away. My own motivating example is more fun-oriented: is a middleware script enforcing a max field size and preventing you from stuffing the entire contents of the Bee Movie script into every text field in your app? 
Put a bypass on it.</p><p>There was another interesting bit of drama: Cloudflare attempted to block requests in this format, and then <a href="https://x.com/elithrar/status/1903526240847331362">had issues with their fix.</a> It turned out that a lot of requests had valid reasons to include the header, so there was no way for them to filter attacker traffic without also blocking valid use cases. So at the moment, Cloudflare&#8217;s implementation is opt-in.</p><p>The internet is held together with magic intercepting scripts. I&#8217;ve worked for several web companies, and every company has a magic, hidden implementation of critical functionality that completely abstracts all of the hard stuff away from their endpoints. It turns out that you just don&#8217;t want to repeatedly write the same integration code.</p><p>In practice, writing endpoints is tedious. You need to declare the route and the HTTP method used to reach the endpoint. Any authentication code needs to run. Any authorization code needs to run. If the user is not authenticated, they need to get redirected to /login, and if they&#8217;re not authorized, they get an error. Then you need to unpack and validate the request. If there are any JWTs, gotta check those. Was this larger than the max request size? That sounds important. Let&#8217;s also make sure that exceptions get caught and logged somewhere. Maybe something with distributed tracing. 
And once you&#8217;ve performed all of these steps, you can finally start serving the request.</p><p>Or, hear me out: you can just write middleware that handles all of this for every endpoint that needs it.</p>]]></content:encoded></item></channel></rss>