An Interview with Robert Fishman About the Current State of Hollywood
An interview with MoffettNathanson's Robert Fishman about the current state of Hollywood, including Netflix, Paramount, YouTube, Disney, and Amazon.
The landscape of software engineering is changing. Rapidly. As my colleague Ben likes to say, we will probably stop writing code by hand within the next year. This comes with a move toward orchestration, and a fundamental change in how we engage with our craft. Many of us became coders first, software engineers second. There’s a lot more to software engineering than coding, but coding is our first love. Coding is comfortable, coding is fun, coding is safe. For many of us, the actual writing of syntax was never the bottleneck anyway. But now, you can command swarms of agents to do your bidding (until the compute budget runs out, at least, and we collectively decide that maybe junior engineers aren’t a terrible investment after all). The day-to-day reality of the job is shifting. Instead of writing greenfield code or getting into the flow state to debug a complex problem, you’re now multitasking. You’re switching between multiple long-running tasks, directing AI agents, and explaining to these eager little toddlers that their assumptions are wrong, their contexts are overflowing, or they need to pivot and do X, Y, and Z. And that requires endless context switching. Humans cannot truly multitask; our brains just rapidly jump context across multiple threads. Inevitably, some of that context gets lost. It’s cognitively exhausting, but it feels hyper-productive because instead of doing one thing, you’re doing three—even if the organizational overhead means it actually takes four times as long to get them all over the finish line. This is, historically, what staff software engineers do. They don’t particularly write much code. They juggle organizational bits and pieces, align architecture, and have engineers orbiting around them executing on the vision. It’s a fine job, and highly impactful, but it’s a fundamentally different job. It requires a different set of skills, and it yields a different type of enjoyment.
It’s like people management, but without the fun part: the people. As an industry, we’re trading these intimate puzzles for large-scale system architecture. An individual developer can now build at the scale of a whole product team. But scaling up our levels of abstraction always leaves something visceral behind. It was Ben who first pointed out to me that many of us will grieve writing code by hand, and he’s absolutely right. We will miss the quiet satisfaction of solving an isolated problem ourselves, rather than herding fleets of stochastic machines. We’ll adjust, of course. The field will evolve, the friction will decrease, and the sheer scale of what we can create will ultimately make the trade-off worth it. But the shape of our daily work has permanently changed, and it’s okay to grieve the loss of our first love. Consider this post your permission to do so.
The Verge’s headline sums up Mark Gurman’s latest report on Apple’s folding phone quite succinctly: ‘iPhone Fold rumor: iPad-like multitasking, but no iPad apps and no Face ID’. Though the updated layout could make multitasking easier, Gurman reports that the folding iPhone won’t run existing iPad apps. Still, Apple is reportedly trying to take advantage of the phone’s larger screen real estate by updating its “core” apps with a sidebar on the left side of the screen. It will also give developers the ability to make the iPhone versions of their apps more iPad-like, according to Gurman. Hmph. There’s more. Instead of using Face ID, Apple’s foldable could integrate Touch ID into the device’s side button, as the “front panel is too thin to accommodate the Face ID sensor array,” Gurman reports. That means in place of the pill-shaped housing for the front-facing camera and Face ID, Apple will reportedly add a small hole-punch camera instead. Gurman has previously reported that the foldable could look like two iPhone Airs stuck together. A few things are running through my mind reading this report. First, I’m putting my money behind it being called ‘iPhone Duo’. It would really tickle me for Apple to put out a ‘Duo’ and a ‘Neo’ — two Surface product names that Microsoft used for devices which flopped and which never shipped, respectively. Second, this lack of Face ID business really puts a wrench in my plans. I’ve been pretty psyched about replacing my iPhone and my iPad mini with an iPhone Duo. As much as I love my 17 Pro, it’s too big, and I think the double-duty device would really work for me. But I don’t think I want to go without Face ID. My iPad mini only has Touch ID in the power button, and I’ve never enjoyed that unlocking method. Honestly, it was better in the Home button. Third, I haven’t really kept up with the folding iPhone’s rumored specs.
I presume each half is going to be thinner than both the iPhone Air and the iPad Pro (Apple’s record-holding thinnest device), since both of those feature Face ID. Fourth, leave it to Apple to not do the obvious thing and just let the thing run iPad apps. Why make developers go through designing another layout for their iOS apps if the iPadOS versions are right there? We’ll see how the software situation shakes out. I’ll be pretty disappointed if this thing doesn’t come with Face ID. It’s probably a deal-breaker, even though I’d want to purchase it to show Apple the foldable is a form factor worth pursuing. There’s always the chance they’ll cancel the whole thing if the first one doesn’t sell well. On the other hand, they did just fix the iPhone 16e’s most glaring omission — MagSafe — year-over-year with the 17e. There’s hope. HeyDingus is a blog by Jarrod Blundy about technology, the great outdoors, and other musings. If you like what you see — the blog posts, shortcuts, wallpapers, scripts, or anything — please consider leaving a tip, checking out my store, or just sharing my work. Your support is much appreciated! I’m always happy to hear from you on social, or by good ol’ email.
Exhibit A:

Here is some text. It is made out of words. And here are some bullet-points:

- The text can also contain pictures for you to look at with your eyes¹.
- The text can also include quotes:

> The awful thing about life is this: Everyone has his reasons.

¹ There can also be footnotes; have an eye emoji: 👀

Exhibit B:

Wait a second. This is also text. It is also made out of words. But instead of jerky fragments, these words are organized into sentences, like normal human language. Do you see how relaxing this is? After the torment you suffered above, isn’t it nice to have words that come in a simple linear order? And isn’t it nice that you just have to read the words, and not worry about how they fit into some convoluted implied knowledge taxonomy? These sentences are themselves organized into paragraphs. The first sentence of each paragraph is a sort of summary. So if you want to skim, you can do that. But you don’t have to skim. This text also has italics and parentheses and whatnot. But not too much. (Just a little.)

Thanks for enduring that. My purpose was to illustrate a mystery. Namely, why do so many people today seem to write more like Exhibit A than Exhibit B? People sometimes give me something they wrote and ask for comments. Half the time, my reaction is: Good god, why is 70% of this section titles and bullet points? This always gives me a strange feeling. It’s like all the formatting is based on some ontology. And that ontology is what I really need to understand. But it’s never actually explained. Instead, I guess I’m supposed to figure it out as things jerk between different topics? It’s disorienting, like a movie that cuts between different scenes every three seconds. But maybe that’s just my opinion? Maybe, but sometimes I’ll ask people who write like this to show me some writing they admire. And inevitably, it’s not 70% formatting, but mostly paragraphs and normal human language.
So I feel that people who write this way are violating the central tenet of making stuff, which is to make something you would actually like. So then why write like that? Why do I, despite my griping, often find myself writing like that? I’ve wondered this for years. But I told myself that I was right and that too much formatting is bad. But now—have you heard?—now we have this technology where computers can write stuff. And guess what? When they do that, they also use an insane amount of formatting. That’s weird. I figured people were addicted to formatting because they’re noobs that don’t know any better. But AIs have been optimized to make human raters happy. And that led to a similar addiction. Why? The obvious explanation is that formatting is good. People love reading stuff that’s all formatting. We should all be formatting-maxxing. There’s something to this. But it can’t really be right, because popular human writers use formatting in moderation. So formatting can’t be that good. Even before AI, everyone did agree that formatting was great in one context: search-engine-optimized content slop. Back in 2018, if you searched for anything, you’d find pages brimming with section titles and bullet points. Why? Well, when I type “why human gastric juice more acidic than other animals”, I’m not really looking for something to read. I just want to skim an overview of the main theories. I’ve experimented with asking AIs to give the same information in various styles, and I reluctantly concede that the formatting helps. But that’s not reading. Say you’ve written a ten-thousand-word manifesto on human-eco-social species enhancement. If I actually care about what you think, I maintain that it’s better in paragraphs, because reading ten thousand words with endless formatting would be excruciating. This is why everyone who writes long-form essays that people actually read uses normal paragraphs. So our mystery is still alive.
Most writers aspire not to write content slop, but meaningful stuff other people care about. Often, when people show me formatting-maxxed essays, I’ll complain and they’ll rewrite it with less formatting and agree that the new version is better. So why use so much formatting even when it’s bad? There’s something odd about that previous example. When I search for “why human gastric juice more acidic than other animals”, why am I not looking for something to “read”? After all, I like reading. If one of my favorite bloggers wrote an essay on the mystery of human gastric juice, I would devour it. So if I want a good essay, why don’t I look for one? I guess it’s because I instinctively rate my odds of finding one on any random topic as quite low. There’s something here related to Gresham’s law: A format-maxxed essay might be sort of crap, but at least I can ascertain its crap level quickly. A “real” essay could be great, but I’d have to invest a lot of time before I can know if that time was worth investing. So I—regretfully—mostly only read “real” essays when I have some signal that they’re good. If everyone behaves the way I do, I guess people will respond to their incentives and write with lots of formatting. Similarly, if a (current) AI tried to write a “real” essay, I probably wouldn’t read it, because I wouldn’t trust that it was good. Perhaps that explains why they don’t. Aside: If this is right, then it predicts that as AIs advance, they should become less formatting-crazy. The better they are, the more we’ll trust them. Some people can think of an idea, organize their thoughts, and then write them down, tidy and sparkling. I am not one of those people. If I mentally organize my ideas and go to write them down, I soon learn that my ideas were not in fact organized. Usually, they’re hardly even ideas and more a slurry of confused psychic debris. The way I write is that I make an outline. Or, rather, I try to make an outline.
But then I realize the structure is off, so I start over. After a few cycles, I give up and just write the first section. After revising it eight times, I’ll try (and fail) to make an outline for the rest of the post. This continues—with occasional interludes where I reorganize everything—until I can’t take it anymore and publish. I don’t recommend it. My point is just that blathering out a bunch of text is a good way to think. And when blathering, formatting seems to help. Partly, I think that’s because formatting allows you to experiment with structure without worrying about the details. And partly I think that’s because formatting makes it easier to get down details without worrying about the bigger picture. So maybe that’s one source of our formatting addiction? We blather in formatting, but don’t put in the work to clarify things? Oddly, some claim that something similar is true for AI: If you tune them to write with lots of formatting, that doesn’t just change how the content looks, but also improves accuracy. The idea is that as the AI looks at what it’s written so far, formatting helps it stay focused on the most important things. Supposedly. Maybe that’s true. But we have “reasoning” AIs now, that blather for a while before producing a final output. If they wanted, they could format-maxx while thinking and output paragraphs at the end. But they don’t. So while this explanation might work for people, I don’t buy it for AI. Finally, a conspiracy theory. Sometimes when I try to fight through a format-maxxed essay, it seems like all the formatting speaks to me. It says: “This is a nonlinear web of ideas. I’m giving you the pieces. If you pay attention, you should see how they fit together. Sadly, the world isn’t a simple narrative I can spoon-feed to you. So this is the best I can do.” I think this is a bluff. And it’s a good one, because it’s based in truth. The world is not a narrative.
Narratives are lies we tell ourselves to try to cope with the swirl of complexity that is reality. All true! However, narratives are all we’ve got. If you want to understand something with your tiny little brain, you don’t really have a lot of other options. The thing about writing that’s 70% formatting is that it’s very easy to delude yourself that there’s a set of clear ideas underneath it all. Imagine you’re an AI. You’re being trained to respond in ways that make human raters happy. You have an amazing recall of most of human knowledge ever created, but you’re so-so at synthesizing it into new ideas. If someone asks you a question and you try to write a beautiful narrative, your response might look like confusing babbling, meaning your raters say, “Bad AI. Stop that.” Whereas if you output a bunch of section titles and bullet points, raters might say, “This seems OK.” So you’ll start doing the latter. That’s not bad.
Arguably, you (you’re still an AI) are responding in the way that’s most useful, given your capabilities. But you are also responding in a way that gives a misleading impression that you’ve figured out how everything fits together, even if you haven’t. I suspect something similar happens with humans. Say you have a bunch of ideas, but you haven’t yet sewn them together into a clear story. If you write paragraphs, people will probably view them as confused babbling. Whereas if you write with lots of formatting, people might still be at least somewhat positive. Just like AIs, we all respond to our rewards. More importantly, if you’ve written something that’s 70% formatting, it’s easy to delude yourself that there’s a clear set of ideas underneath, even when there isn’t. The good news is that if you put in the effort, you can write better paragraphs than AI (for now). The act of creating a narrative forces you to confront contradictions that are invisible in format-world. So even if you want to write with 70% formatting, consider forcing yourself to write in paragraphs first. How does the optimal amount of formatting vary with the length of a piece of writing? I suspect the longer the piece, the less formatting it can bear: formatting helps a skimmable overview, but it makes ten thousand words excruciating. Theory: Both people and AIs are addicted to formatting because:
- Formatting is good, sometimes. Especially if you don’t trust the author, and on the internet, most people probably don’t trust you.
- It’s harder to see that something has problems when it’s written in all-formatting.
- It’s easier to blather out a bunch of formatting than to write lucid paragraphs. This is good at some stages, because it’s easy. But forcing yourself to actually write a narrative is also good, because it’s hard.

First write with lots of formatting. Then figure out how to remove it. Then put it back, if you want.
I posted a fun post about how I made my transcripts into really neat knowledge graphs. Hope it is helpful harper.blog/2026/03/1… Thank you for using RSS. I appreciate you. Email me
Most often, when we think about the difficulties of translating a language, we are thinking in one direction: you are learning a new language, and it is difficult to translate your own language and thoughts into this new one. This difficulty feels extremely real and visceral, and engages with many aspects of your overall cognitive experience: it’s difficult to remember words and grammar rules, it’s embarrassing to make mistakes, it’s frustrating to not feel understood, and so on.
A hacktivist group with links to Iran’s intelligence agencies is claiming responsibility for a data-wiping attack against Stryker, a global medical technology company based in Michigan. News reports out of Ireland, Stryker’s largest hub outside of the United States, said the company sent home more than 5,000 workers there today. Meanwhile, a voicemail message at Stryker’s main U.S. headquarters says the company is currently experiencing a building emergency. Based in Kalamazoo, Michigan, Stryker [NYSE:SYK] is a medical and surgical equipment maker that reported $25 billion in global sales last year. In a lengthy statement posted to Telegram, an Iranian hacktivist group known as Handala (a.k.a. Handala Hack Team) claimed that Stryker’s offices in 79 countries have been forced to shut down after the group erased data from more than 200,000 systems, servers and mobile devices. A manifesto posted by the Iran-backed hacktivist group Handala, claiming a mass data-wiping attack against medical technology maker Stryker. “All the acquired data is now in the hands of the free people of the world, ready to be used for the true advancement of humanity and the exposure of injustice and corruption,” a portion of the Handala statement reads. The group said the wiper attack was in retaliation for a Feb. 28 missile strike that hit an Iranian school and killed at least 175 people, most of them children. The New York Times reports today that an ongoing military investigation has determined the United States is responsible for the deadly Tomahawk missile strike. Handala was one of several Iran-linked hacker groups recently profiled by Palo Alto Networks, which links it to Iran’s Ministry of Intelligence and Security (MOIS). Palo Alto says Handala surfaced in late 2023 and is assessed as one of several online personas maintained by Void Manticore, a MOIS-affiliated actor. Stryker’s website says the company has 56,000 employees in 61 countries.
A phone call placed Wednesday morning to the media line at Stryker’s Michigan headquarters sent this author to a voicemail message that stated, “We are currently experiencing a building emergency. Please try your call again later.” A report Wednesday morning from the Irish Examiner said Stryker staff are now communicating via WhatsApp for any updates on when they can return to work. The story quoted an unnamed employee saying anything connected to the network is down, and that “anyone with Microsoft Outlook on their personal phones had their devices wiped.” “Multiple sources have said that systems in the Cork headquarters have been ‘shut down’ and that Stryker devices held by employees have been wiped out,” the Examiner reported. “The login pages coming up on these devices have been defaced with the Handala logo.” Wiper attacks usually involve malicious software designed to overwrite any existing data on infected devices. But a trusted source with knowledge of the attack who spoke on condition of anonymity told KrebsOnSecurity the perpetrators in this case appear to have used a Microsoft service called Microsoft Intune to issue a ‘remote wipe’ command against all connected devices. Intune is a cloud-based solution built for IT teams to enforce security and data compliance policies, and it provides a single, web-based administrative console to monitor and control devices regardless of location. The Intune connection is supported by this Reddit discussion on the Stryker outage, where several users who claimed to be Stryker employees said they were told to uninstall Intune urgently. Palo Alto says Handala’s hack-and-leak activity is primarily focused on Israel, with occasional targeting outside that scope when it serves a specific agenda. The security firm said Handala also has taken credit for recent attacks against fuel systems in Jordan and an Israeli energy exploration company. 
“Recent observed activities are opportunistic and ‘quick and dirty,’ with a noticeable focus on supply-chain footholds (e.g., IT/service providers) to reach downstream victims, followed by ‘proof’ posts to amplify credibility and intimidate targets,” Palo Alto researchers wrote. The Handala manifesto posted to Telegram referred to Stryker as a “Zionist-rooted corporation,” which may be a reference to the company’s 2019 acquisition of the Israeli company OrthoSpace. Stryker is a major supplier of medical devices, and the ongoing attack is already affecting healthcare providers. One healthcare professional at a major university medical system in the United States told KrebsOnSecurity they are currently unable to order surgical supplies that they normally source through Stryker. “This is a real-world supply chain attack,” said the expert, who asked to remain anonymous because they were not authorized to speak to the press. “Pretty much every hospital in the U.S. that performs surgeries uses their supplies.” John Riggi, national advisor for the American Hospital Association (AHA), said the AHA is not aware of any supply-chain disruptions as of yet. “We are aware of reports of the cyber attack against Stryker and are actively exchanging information with the hospital field and the federal government to understand the nature of the threat and assess any impact to hospital operations,” Riggi said in an email. “As of this time, we are not aware of any direct impacts or disruptions to U.S. hospitals as a result of this attack. That may change as hospitals evaluate services, technology and supply chain related to Stryker and if the duration of the attack extends.” This is a developing story. Updates will be noted with a timestamp. Update, 2:54 p.m. ET: Added comment from Riggi and perspectives on this attack’s potential to turn into a supply-chain problem for the healthcare system.
Ten years late (but hopefully not a dollar short) I recently figured out what all the fuss about dbt is about. No it’s not (at least, not yet). In fact, used incorrectly, it’ll do a worse job than you. But used right, it’s a kick-ass tool that any data engineer should be adding to their toolbox today*. In this article I’ll show you why.
This is an addendum to the main post about using Claude Code with dbt. It shows an excerpt of a Claude session log so you can see exactly what goes on "under the covers". For full details of the prompt, commentary, and conclusions, see Claude Code isn’t going to replace data engineers (yet). Here we can see the steps that Claude Code takes as it figures out for itself anomalies in the data and adapts the dbt model to handle them.
Perhaps my favourite JavaScript APIs live within the Internationalization namespace. A few neat things the global allows: It’s powerful stuff and the browser or runtime provides locale data for free! That means timezones, translations, and local conventions are handled for you. Remember moment.js? That library with locale data is over 600 KB (uncompressed). That’s why JavaScript now has the Internationalization API built-in. SvelteKit and similar JavaScript web frameworks allow you to render a web page server-side and “hydrate” in the browser. In theory , you get the benefits of an accessible static website with the progressively enhanced delights of a modern “web app”. I’m building attic.social with SvelteKit. It’s an experiment without much direction. I added a bookmarks feature and used to format dates. Perfect! Or was it? Disaster strikes! See this GIF: What is happening here? Because I don’t specify any locale argument in the constructor it uses the runtime’s default. When left unconfigured, many environments will default to . I spotted this bug only in production because I’m hosting on a Cloudflare worker. SvelteKit’s first render is server-side using but subsequent renders use in my browser. My eyes are briefly sullied by the inferior US format! Is there a name for this effect? If not I’m coining: “Flash of Wrong Locale” (FOWL). To combat FOWL we must ensure that SvelteKit has the user’s locale before any templates are rendered. Browsers may request a page with the HTTP header. The place to read headers is hooks.server.ts . I’ve vendored the @std/http negotiation library to parse the request header. If no locales are provided it returns which I change to . SvelteKit’s is an object to store custom data for the lifetime of a single request. Event are not directly accessible to SvelteKit templates. That could be dangerous. We must use a page or layout load function to forward the data. Now we can update the original example to use the data. 
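To make both halves of the bug concrete, here's a minimal runnable sketch (plain Node, no SvelteKit): formatting a date with and without an explicit locale, plus a simplified Accept-Language picker. The `pickLocale` helper and its `en-GB` fallback are my own illustrative assumptions standing in for the vendored @std/http negotiation code, not the blog's actual implementation.

```javascript
// A date formatted without a locale argument uses the runtime's default
// (often en-US on servers), while the visitor's browser uses their own
// locale. The mismatch is the "Flash of Wrong Locale".
const date = new Date(Date.UTC(2026, 2, 11, 12, 0));

const runtimeDefault = new Intl.DateTimeFormat(undefined, {
  dateStyle: "medium", timeZone: "UTC",
}).format(date);

const british = new Intl.DateTimeFormat("en-GB", {
  dateStyle: "medium", timeZone: "UTC",
}).format(date);

console.log(runtimeDefault); // whatever the server's default locale produces
console.log(british);        // "11 Mar 2026"

// Hypothetical stand-in for Accept-Language negotiation: pick the
// highest-q language tag, falling back to "en-GB" (assumed default)
// when the header is missing or a privacy browser sends "*".
function pickLocale(header) {
  if (!header || header.trim() === "*") return "en-GB";
  return header
    .split(",")
    .map((part) => {
      const [tag, q] = part.trim().split(";q=");
      return { tag, q: q ? Number(q) : 1 };
    })
    .sort((a, b) => b.q - a.q)[0].tag;
}

// In a SvelteKit hooks.server.ts this might run once per request, e.g.:
//   event.locals.locale = pickLocale(
//     event.request.headers.get("accept-language"));
console.log(pickLocale("fr-FR,fr;q=0.9,en;q=0.5")); // "fr-FR"
```

Resolving the locale once, server-side, before any template renders is what lets both the server-rendered HTML and the hydrated client agree on a format.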
I don’t think the rune is strictly necessary but it stops a compiler warning. This should eliminate FOWL unless the header is missing. Privacy-focused browsers like Mullvad Browser use a generic header to avoid fingerprinting. That means users opt out of internationalisation, but FOWL is still gone. If there is a cache in front of the server, it must vary responses based on the header. Otherwise one visitor defines the locale for everyone who follows, unless something like a session cookie bypasses the cache. You could provide a custom locale preference to override browser settings. I’ve done that before for larger SvelteKit projects. Link that to a session and store it in a cookie, or database. Naturally, someone will complain they don’t like the format they’re given. This blog post is guaranteed to elicit such a comment. You can’t win! Why can’t you be normal, Safari? Despite using the exact same locale, Safari still commits FOWL by using an “at” word instead of a comma. Whose fault is this? The ECMAScript standard recommends using data from Unicode CLDR. I don’t feel inclined to dig deeper. It’s a JavaScriptCore quirk, because Bun does the same. That is unfortunate because it means the standard is not quite standard across runtimes. By the way, the i18n and l10n abbreviations are kinda lame, to be honest. It’s a fault of my design choices that “internationalisation” didn’t fit well in my title. Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.
A service promising to protect your privacy is not able to keep you anonymous. Why is that? This distinction is actually really important in data protection and privacy laws. Anonymity is about the inability to link an action, message, or data point to a specific individual. If attribution is possible (even if difficult, like with pseudonymization), you are identifiable and therefore not anonymous. Privacy, however, is about the ability to limit or control access to personal information. The focus is not identity removal, but boundaries of who can observe, store, or process personal data. Personal data has to, by default, be linked to an individual, which makes you identifiable and not anonymous. If it isn't, it no longer counts as personal data. You can see this in the way the GDPR works; it doesn't apply to anonymous data, but personal data, and pseudonymous data still counts. Privacy can exist with full identification: Your doctor knows you and your diagnoses, but is protecting your health file from unauthorized access. On the other hand, anonymity can exist without privacy, like anonymous browsing that is still heavily tracked behaviorally. The way we ensure privacy has different mechanisms. In data protection law, these are referred to as "technical and organizational measures" (TOMs). For example, these can be access controls, confidentiality obligations, encryption, and following the general principles of data minimization, storage and purpose limitations in the way your systems and organization are set up. Where we think they overlap is when we expect an entity to protect our privacy so an external actor cannot identify us. This is problematic in a variety of ways: When we are offered privacy, we implicitly assume privacy from everyone, while most privacy guarantees actually mean privacy from the public or third parties or less tracking than other services; not privacy from the service provider itself, or legal obligations/the state.
Companies who aim to protect your privacy act more like privacy intermediaries: they shield users from outsiders or offer a service where less data is harvested or data isn't sold to third parties, but they still maintain some capability to associate activity with an identity. If you want anonymity from a service offering you privacy, you have to create it yourself by not giving the service a way to identify you. This can be done by using a fake name and address, using a payment method that isn't directly linked to your bank accounts or other payment info (privacy.com cards, crypto, etc.), accessing the service via a VPN, and possibly taking more precautions on an OS level (Kali Linux, containers, etc.). That's cumbersome and not realistic for most people, as their threat model is not that of a whistleblower; however, you can of course decide to do it anyway. Even then, it might be impossible, depending on the service and what you share with it. You can be anonymous on a blog, but over the years, even the little bits of vague information you share can paint a picture. If you use an email service for your normal email needs, you will likely receive all kinds of de-anonymizing information: doctor's appointments, booking confirmations, event tickets and more, all with your real name and location. The correct move here would be to separate your different email needs into different accounts and addresses. Sensitive political organizing, for example, should be separated from your personal information, whether that's information you give the service directly or any other private email coming in. Just remember at the end of the day: privacy is conditional access to identity; anonymity is the absence of an identity link. If the right legal conditions are met, access to identity is given. But if the service doesn't know who you are, it cannot reveal it. Reply via email Published 11 Mar, 2026
When talking about using AI for decision-making, you often hear that there will be "human oversight" or "human intervention". One popular example that I have come across in conferences and webinars about data protection law is the hiring and recruiting process: companies are already proudly using AI to select applicants. It summarizes CVs, compares qualifications with the job profile, and ranks candidates. At the end, HR decides who to invite for interviews based on this output. The fact that the AI isn't just sending out the interview invitations itself, and a human is instead required to write an email or press a button, is the idolized "human oversight". The fact that someone could intervene and make a different decision is supposed to be enough. What bothers me is that despite AI hiring being ranked as "high risk" under the AI Act (together with using AI for medical diagnosis, financial and legal advice, etc.), we aren't looking at how these systems are realistically used in practice. We shove a human in the loop ("HITL") somewhere to assuage fears and comply with legal requirements, but almost no one wants to talk about how little checking that human realistically does. Think about it: you have an IT company that gets 400-600 applications for each open position. Weeding people out by spending time on every single application takes a lot of time. You want to save time using AI, so the people whose CVs and motivational letters most closely match the job description are already pre-selected and ranked for you. You know the next few weeks will bring new application deadlines, and you're already behind. You just can't check all of the applications to see whether the AI messed up or not. You can do a random check here and there, but at what point will you just look at the top candidates, check their applications, see they were correctly summarized (or well enough), and assume the rest of the applicants that weren't considered were assessed correctly as well?
Why would you look at all or most of the applications again anyway, when the AI system is advertised as saving you that time and step entirely? If anything, the human intervention here is for the companies - making sure that the AI didn't accidentally rank someone at the top who is completely unfit for the task. It's not there for you. No one will notice if your perfectly fitting application has been disregarded by the AI for no discernible reason, and no one will find it as part of the oversight process among the hundreds of other applications. If the AI makes the task quicker and the first top candidates sound fitting and plausible, that's it, nail in the coffin; why would HR put in more work? All you can realistically do is make them explain and check after each rejection where you know you were a good fit and that AI was used. If you don't do that, you can't know whether you've been unjustly treated by their AI hiring process or were rejected on a justifiable basis. As long as AI continues to hallucinate or inexplicably leave things out, just to say sorry afterwards, this is a huge liability. Companies don't seem to really care about possibly poor data quality, biases, and systemic inequities that are subtle or deeply embedded, requiring more work and possibly an outside view to detect and mitigate. We are lacking nuanced oversight mechanisms, and I hope companies are prepared for the lawsuits this will generate. If a company wants to use AI in the hiring process, I'd at least expect them to do the bare minimum. Unfortunately, companies have no incentive to do this! It is seen as more bureaucracy, more time and money wasted, restrictive to innovation. They're competing with companies who are grabbing talent even faster than them and who don't give a shit about fairness in AI hiring. Each day they don't find a replacement or candidate for a new role is bad.
And why hire more HR personnel to sift through hundreds of applicants if less HR personnel can handle it with AI? Organizational priorities and financial pressures don't allow enough checks and considerations to go into this delicate process. We need to question "human oversight" more closely and require more explanations on how companies plan to combat opaque decision-making, automation bias and the pressure to optimize and make work as easy as possible. Until adequate systems are in place that combat this, it will always be ineffective and a buzzword to me.

The facts almost no one wants to talk about:

- while HR does receive training on how to use AI and how it works, the reasoning behind AI selection and summaries is a black box for the users,
- AI recruiting is advertised as a huge time save, which stands in contrast to the checking you should technically be doing as a human to make sure the AI did a good job,
- most users will follow the AI recommendations blindly because they are presented in a way that sounds plausible, and as time goes on, we get lazy and suffer from automation bias and oversight fatigue.

The bare minimum I'd expect:

- having clear documentation of AI capabilities and limitations for their employees
- incentivizing taking the time to question AI suggestions and do some 'manual' labor
- requiring detailed justification when accepting the AI suggestions/rankings
- the ability to explicitly name why the disregarded applications were denied by their AI system in each case (you're going to need this anyway when an applicant challenges the decision)
- testing the system and the employees by periodically entering a candidate application that should fit perfectly vs. one that is very unfitting, and seeing where they land and what HR does with them (similar to the existing practice of IT sending out fake phishing e-mails sometimes to test you)
- collecting decision patterns and errors to correct and adjust the AI system

Reply via email Published 11 Mar, 2026
When the news broke that Meta's smart glasses were feeding data directly into their Facebook servers, I wondered what all the fuss was about. Who thought AI glasses used to secretly record people would be private? Then again, I've grown cynical over the years. The camera on your laptop is pointed at you right now. When activated, it can record everything you do. When Zuckerberg posted a selfie with his laptop visible in the background, people were quick to notice that both the webcam and the microphone had black tape over them. If the CEO of one of the largest tech companies in the world doesn't trust his own device, what are the rest of us supposed to do? On my Windows 7 machine, I could at least assume the default behavior wasn't to secretly spy on me. With good security hygiene, my computer would stay safe. For Windows 10 and beyond, that assumption may no longer hold. Microsoft's incentives have shifted. They now require users to create an online account, which comes with pages of terms to agree to, and they are in the business of collecting data: As part of our efforts to improve and develop our products, we may use your data to develop and train our AI models. That's your local data being uploaded to their servers for their benefit. Under their licensing agreement (because you don't buy Windows, you only license it), you are contractually required to allow certain information to be sent back to Microsoft: By accepting this agreement or using the software, you agree to all of these terms, and consent to the transmission of certain information during activation and during your use of the software as per the privacy statement described in Section 3. If you do not accept and comply with these terms, you may not use the software or its features. The data transmitted covers telemetry, personalization, AI improvement, and advertising features. On a Chromebook, there was never an option to use the device without a Google account.
Google is in the advertising business, and reading their terms of service, even partially, it all revolves around data collection. Your data is used to build a profile both for advertising and AI training. None of this is a secret. It's public information, buried in those terms of service agreements we blindly click through. Even Apple, which touts itself as privacy-first in every ad, was caught using user data without consent. Tesla employees were found sharing videos recorded inside customers' private homes. While some treat the Ray-Ban glasses story as an isolated incident, here is Yann LeCun, Meta's former chief AI scientist, describing transfer learning using billions of user images: We do this at Facebook in production, right? We train large convolutional nets to predict hashtags that people type on Instagram, and we train on literally billions of images. Then we chop off the last layer and fine-tune on whatever task we want. That works really well. That was seven years ago, and he was talking about pictures and videos people upload to Instagram. When you put your data on someone else's server, all you can do is trust that they use it as intended. Privacy policies are kept deliberately vague for exactly this reason. Today, Meta calls itself AI-first, meaning it's collecting even more to train its models. Meta's incentive to collect data exceeds even that of Google or Microsoft. Advertising is their primary revenue source. Last year, it accounted for 98% of their forecasted $189 billion in revenue. Yes, Meta glasses record you in moments you expect to be private, and their workers process those videos at their discretion. We shouldn't expect privacy from a camera or a microphone, or any internet-connected device, that we don't control. That's the reality we have to accept. AI is not a magical technology that simply happens to know a great deal about us. It is trained on a pipeline of people's information: video, audio, text. That's how it works.
If you buy the device, it will monitor you.
Curious how leading engineers tackle extreme scale challenges with data-intensive applications? Join Monster Scale Summit (free + virtual). It’s hosted by ScyllaDB, the monstrously fast and scalable database. The conference starts today and lasts two days. Tomorrow, I’m giving a talk called Working on Complex Systems. I’d be glad to see you there 🙂. Agenda Week 0: Introduction Week 1: In-Memory Store Week 2: LSM Tree Foundations Week 3: Durability with Write-Ahead Logging Week 4: Deletes, Tombstones, and Compaction Week 5: Leveling and Key-Range Partitioning Week 6: Block-Based SSTables and Indexing Week 7: Bloom Filters and Trie Memtable Week 8: Concurrency Over this series, you built a working LSM tree: you flush to persist the memtable to disk and compact to reclaim space. Yet, you’ve been single-threaded so far. This week, we lift that constraint: flush and compaction will run in the background while you keep serving requests. There are many ways to add concurrency. The approach here is to introduce a versioned, ref-counted catalog that lets readers take a stable snapshot while background flush/compaction proceeds. A catalog holds references to: The current memtable. The current WAL. The current MANIFEST. Each request pins one catalog version for the duration of the operation. When a flush or compaction completes, the system creates a new catalog version. Old resources (e.g., obsolete SSTables) are not deleted immediately. Instead, each catalog tracks a refcount of in-flight requests. Once an old catalog’s refcount drops to zero and a newer catalog exists, you can safely garbage-collect the resources that appear in the old version but not in the new one. For example, with two catalog versions (red = the older version’s elements, blue = the newer elements): Once we can guarantee that catalog v1 is no longer referenced, we can delete the old MANIFEST, SST-2 and SST-3.
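The pin/publish/garbage-collect lifecycle described above could look roughly like this. A hedged sketch: the class and method names (`Catalog`, `CatalogManager`, `install`, `acquire`, `release`) are mine, not necessarily the ones the series uses, and real file deletion is reduced to dropping the catalog from memory.

```javascript
// Sketch of a versioned, ref-counted catalog (illustrative names).
class Catalog {
  constructor(version, state) {
    this.version = version;        // unique, monotonic
    this.memtable = state.memtable;
    this.walPath = state.walPath;
    this.manifestPath = state.manifestPath;
    this.refcount = 0;             // in-flight requests pinning this version
  }
}

class CatalogManager {
  constructor() {
    this.versions = [];            // oldest ... latest
    this.next = 1;
  }
  install(state) {                 // flush/compaction publishes a new version
    const c = new Catalog(this.next++, state);
    this.versions.push(c);
    return c;
  }
  acquire() {                      // a request pins the latest version
    const latest = this.versions[this.versions.length - 1];
    latest.refcount++;
    return latest;
  }
  release(c) {                     // unpin; GC superseded, unreferenced versions
    c.refcount--;
    const latest = this.versions[this.versions.length - 1];
    if (c.refcount === 0 && c !== latest) {
      // Here a real store would delete files present in c but absent
      // from latest (old MANIFEST, obsolete SSTs, old WAL).
      this.versions = this.versions.filter((v) => v !== c);
    }
  }
}
```

The key property: a reader that pinned v1 keeps seeing v1's files even after a flush publishes v2, and v1's resources are only reclaimed once the last such reader releases it.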
Another example: a flush produced a new memtable and WAL file: In this case, once catalog v1 has no remaining references, we can free the old memtable and delete the old WAL file. 💬 If you want to share your progress, discuss solutions, or collaborate with other coders, join the community Discord server ( channel): Join the Discord Add a data structure that tracks: Memtable reference. MANIFEST path. Version (monotonic). Refcount of active readers. Implement a manager that keeps catalog versions in memory: Pick the latest catalog. Increment its refcount. Decrement the refcount of the catalog. If the refcount is zero and there’s a newer catalog version: Remove the current catalog. Remove elements present in the current catalog but not in the latest version (files, WAL, etc.). Create a new catalog based on the provided data. Assign a unique, monotonic version. At startup: Read from the authoritative MANIFEST (latest MANIFEST file). Treat any files not listed in the MANIFEST as orphans and delete them. Read all WAL files you still have on disk, in order, to rebuild the in-memory state. Create the current catalog version from the reconstructed state. Start the background worker. In a nutshell, flush and compaction will move to the background. You’ll use internal queues plus worker pools to ensure no overlapping work on the same resources: at most one flush running at a time, and at most one compaction running at a time. Compaction: Keep the same trigger: every 10,000 update requests. Do not run compaction in the request path. On compaction trigger: Post a notification to an internal queue and return. A single background thread listens on the queue and runs the actual work. Similar compaction process, except: Do not overwrite the existing WAL file. Instead, create a new file. Create a new catalog that references the new WAL. Keep the same trigger: when the memtable contains 2,000 entries. Do not run flush in the request path.
On flush trigger: Allocate a new memtable and create a new WAL file for subsequent writes. Post a notification to an internal queue. Return immediately to the caller. A single background thread listens on the queue and runs the actual work. Similar flush process, except: Do not overwrite the existing MANIFEST file. Instead, create a new file. Create a new catalog referencing the new MANIFEST. Acquire a catalog from the manager. Do the operation using paths/refs from that catalog. Release the catalog. Concurrent requests make deterministic assertions harder. For example, suppose the validation file contains the following requests that can run in parallel: What should you assert for : , , or ? To make validation deterministic, you will handle barriers: all requests before a barrier must finish before starting the next block. You will also relax checks: a is valid if it returns any value written for a key before the last barrier. A similar example with barriers: The first two requests run in parallel. The first barrier waits for both to complete. The first GET should accept either or . The second request should accept only . The new validation file is a sequence of blocks separated by instructions: All the lines between two barriers form a block. On instruction, wait for all in-flight requests in the current block to finish before starting the next block. / lines are issued in parallel within their block. lines are also issued in parallel within their block. means the response must be any one of the listed values. Download and run your client against a new file: concurrency.txt. When the memtable reaches 80% of capacity: Pre-allocate the next memtable in memory. Pre-create/rotate to the next WAL on disk. That's it for the whole series. You implemented a fully functional LSM tree: Started with a memtable (hashtable) and a flush that writes immutable SSTables to disk. Added a WAL for durability. Handled deletes and compaction to reclaim space.
Introduced leveling and key-range partitioning to speed up reads. Switched to block-based SSTables with indexing. Added Bloom filters and replaced the memtable with a radix trie for faster lookups. Finally, introduced concurrency: a simple, single-threaded foreground path with flush and compaction running in the background. I hope you had fun building it. Thank you for following the series, and special thanks to our partner, ScyllaDB! To get more information on how things work in production databases, you can read how RocksDB keeps track of live SST files. The structure is inspired by RocksDB’s. Conflict resolution is one aspect we’re missing in the series (maybe as a follow-up?). A versioned catalog is enough for reads, but what about conflicting writes? Suppose two clients, Alice and Bob, update the same key around the same time. A simple policy to resolve conflicts is latest wins. The database can serialize operations for the same key to ensure the latest request wins: In this example, the database ends up with as the latest state. This approach works with one node. But what about databases composed of multiple nodes? Say the two requests go to two different nodes at roughly the same time: With multiple nodes, the database must resolve conflicts consistently. There are two common ways: Coordination via a leader (consensus): route both writes to the same leader node, which resolves the conflict and determines the end state. Reconcile with comparable timestamps: attach a timestamp to each write and store it with the key. By timestamp, we don’t mean relying on wall-clock time but a logical clock, so that “later” is well-defined across nodes. If we go with the second approach and start storing timestamps with the data, we also unlock something production systems use: consistent snapshots. A read can include a timestamp, and the database returns the last version at or before that time, providing a consistent view of the data even while flush/compaction runs in the background.
This pattern has a name: Multi-Version Concurrency Control (MVCC). It involves keeping multiple versions per key instead of only the last one, reading using a chosen point in time, and deleting old versions once they are no longer needed. See how ScyllaDB handles timestamp conflict resolution for more information. Missing direction in your tech career? At The Coder Cafe, we serve timeless concepts with your coffee to help you master the fundamentals. Written by a Google SWE and trusted by thousands of readers, we support your growth as an engineer, one coffee at a time. ❤️ If you enjoyed this post, please hit the like button.
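To make the MVCC idea concrete, here is a rough sketch of a multi-versioned store. Assumptions are mine: a simple incrementing logical clock stands in for the comparable timestamps discussed above, and the `MvccStore` name and API are illustrative, not the series' code.

```javascript
// Rough MVCC sketch: each key keeps a list of (timestamp, value) versions,
// and reads return the last version at or before the requested time.
class MvccStore {
  constructor() {
    this.versions = new Map(); // key -> [{ ts, value }, ...], sorted by ts
    this.clock = 0;            // logical clock (stand-in for real timestamps)
  }
  put(key, value) {
    const ts = ++this.clock;   // later writes get strictly larger timestamps
    const list = this.versions.get(key) ?? [];
    list.push({ ts, value });
    this.versions.set(key, list);
    return ts;
  }
  // Read the value as of `ts` (default: latest). This is the snapshot read:
  // scan backwards for the newest version not exceeding ts.
  get(key, ts = Infinity) {
    const list = this.versions.get(key) ?? [];
    for (let i = list.length - 1; i >= 0; i--) {
      if (list[i].ts <= ts) return list[i].value;
    }
    return undefined;
  }
}
```

A production system would also garbage-collect versions older than the oldest active snapshot, mirroring how the catalog only frees resources once no reader pins them.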
Oracle crushed earnings in a way that not only speaks to the secular AI wave they are riding but also to Oracle's strong position
Twenty-five years ago, I captured a screenshot of my FTP client showcasing the download of a SuSE Linux gcc compilation package at the dazzling rate of : Downloading the gcc cross-compiler for s390x through the ftp.belnet.be mirror. Note the then very new Windows XP Olive theme. For some reason, that screenshot must have been relevant, as I found it uploaded as part of my UnionVault.NET museum from 2002. Nowadays, such a download speed can officially be scoffed at as being slower than a snarky snail. Yet in 2000-2002, that was lightning-fast. Perspectives change. In Belgium, telecom company Belgacom introduced ADSL in 1999, significantly boosting our digital lives. No longer did I have to hang up the ISDN line while chatting over ICQ whenever mom wanted to make a quick phone call to grandma to ask about next week’s party. No longer did we have to listen to squeaky sounds and wait and wait and wait… for an image or file to appear. The future was here! For our family, the future was here a smidge earlier than for the average Flemish family, as my dad worked very close to the source. He was one of the Belgacom employees responsible for testing various early ADSL modems at home, so our dialup method changed frequently. I do remember that we too were blessed with “The Frog”: the Alcatel ‘Stingray’ ADSL SpeedTouch USB modem that looked like a frog or a ray, depending on who you’d ask: The first iteration of the Alcatel SpeedTouch modem. That lovely shape was capable of handling at most downstream, but our cables/ISP were not ready to handle that just yet. In September 2002, Belgacom announced they would further increase the ADSL bandwidth (translated from Dutch): Speed increase: all Belgacom ADSL subscriptions. Since launch, the maximum downstream speed has been 750 Kbit/s (ADSL GO) and 1 Mbit/s (ADSL Plus-Pro-Office-Premium). Thanks to Belgacom’s additional investments and network adjustments, the majority of customers will be able to reach peak speeds of up to .
This work is expected to be completed in the first quarter of 2003. Three whoppin’ megabits (not bytes) per second! Can you imagine that? I guess you can, given the current average download speeds of… Wait, let me check speedtest.net… or, in other words, 93 times faster than the bleeding-edge 2003 speeds 1 . Try streaming your favourite YouTube video with a few megabits per second. YouTube didn’t exist until two years later (2005). Perspectives change. In that statement they mention they have 400k customers. Given the widespread adoption of the internet in Belgium, that number can safely be multiplied by ten nowadays. The Skynet ISP that was bought up by Belgacom and hosted our very first personal homes under provided a monthly limit of . According to Belgacom in that same announcement, only a tiny portion of their users effectively hit that limit. Nowadays, everyone is accustomed to “stream whatever, whenever! YOLO!”. Back then, speeds were “high”, but we still had to be mindful of the stuff we downloaded each month, especially when wading through newsgroups looking for shady new releases. Perspectives change. I wonder if my dad kept a list of the routing hardware we burned through in those late nineties/early noughties. All I can recall is that it was a lot. Since he was employed by the national telecom company that only really was (and still is) rivalled by a single other company—Telenet—we never tried the alternative. Nowadays, multiple “shadow” ISPs exist, like Orange, Mobile Vikings, and Scarlet, that rent capacity on the Proximus network. Proximus is the rebranding and full privatisation of Belgacom, which was itself the rebranding of the institute RTT ( Regie voor Telegraaf en Telefoon —or, as my dad would call it, Rap Terug Thuis , “Quickly Back Home”). Unfortunately, the Web Archive never crawled all homes and I neglected to back up whatever my dad uploaded there, so our stuff is forever gone.
I regret taking only a single screenshot of my download speed, so I cannot repeat this enough: archive your stuff! That’s also the oldest screenshot of my machine/OS I have; the other desktop screenshots are from 2004+. This blog post is just an excuse to get that image under the moniker. According to meter.net historical speed test results, only five years ago, for Belgium, that average was . Does this mean that in five years it’ll be on average ? That’s more than a CD-ROM in less than a second. Perspectives change. In twenty more years, nobody will remember what a CD-ROM even is. ↩︎ Related topics: adsl / screenshots. By Wouter Groeneveld on 11 March 2026. Reply via email.
Release presentation: Welcome to the curlhacker stream at 10:00 CET (09:00 UTC) today, March 11, 2026, for a live-streamed presentation of curl 8.19.0: the changes, the security fixes and some bugfixes. The numbers: the 273rd release, 8 changes, 63 days (total: 10,712), 264 bugfixes (total: 13,640), 538 commits (total: 38,024), 0 new public libcurl functions (total: 100), 0 new curl_easy_setopt() options (total: 308), 0 new curl command line options (total: 273), 77 contributors, 48 new (total: 3,619), 37 authors, 21 new (total: 1,451), 4 security fixes (total: 180). We stopped the bug-bounty, but it has not stopped people from finding vulnerabilities in curl: CVE-2026-1965: bad reuse of HTTP Negotiate connection. CVE-2026-3783: token leak with redirect and netrc. CVE-2026-3784: wrong proxy connection reuse with credentials. CVE-2026-3805: use after free in SMB connection reuse. The changes: we stopped the bug-bounty (worth repeating, even if it was no code change); the cmake build got an option; initial support for MQTTS was merged; curl now supports fractions for --limit-rate and --max-filesize; curl’s -J option now uses the redirect name as a backup; we no longer support OpenSSL-QUIC on Windows; curl can now be built to use the native CA store by default; the minimum Windows version curl supports is now Vista (up from XP). The following upcoming changes might be worth noticing; see the deprecate documentation for details: NTLM support becomes opt-in; RTMP support is getting dropped; SMB support becomes opt-in; support for c-ares versions before 1.16 goes away; support for CMake 3.17 and earlier gets dropped; TLS-SRP support will be removed. We plan to ship the next curl release on April 29. See you then!
The apartment that I live in right now has two small rooms with doors on them. One of them is like a normal closet, and that’s where the ISP’s passive fiber optic line comes in. The other is a similarly sized room with a bunch of shelves and ventilation to keep things cool. A pantry, basically. I’ve used the closet as a networking and home server spot for a while now, with a small dedicated shelf holding everything. That shelf was a bit limiting: it could not accommodate all of the hardware I wanted to play around with, and the closet did not have very good ventilation, which resulted in the top half of the closet getting notably warm during summer months. Not too warm, but warm enough to feel it when opening the closet. One day I decided to realize an idea that had been in my mind ever since I started renovating this apartment: turn the pantry into a server room. Meet the server pantry. It’s cool. It’s the furthest point in the apartment from the bed and my home office setup, so I can put incredibly noisy computers in it. It only has a few spiders. Because the pantry is cool and dry, it also makes a good spot for storing food and stuff. Adding <100W of heat output at the top of the room should not be an issue, as hot air tends to rise, and if temperatures end up being a problem, I can open one of the two vents that are directly connected to the outside to get some fresh, cool air in. The temperature delta between the bottom and top of the pantry seems to be about 10°C, ranging from 6-16°C. The impact of running things in a slightly chillier room is noticeable on the hardware’s temperature sensors. A PC with an overkill CPU cooler on an Intel i5-10500 reported 16°C CPU core temperatures at one point when idling, compared to 20-24°C when running in a cold corner of the living room. The LattePanda IOTA hit 30°C on the CPU cores. Hard drives hover around 30-37°C, with a maximum of 40-44°C. It’s pretty cool.
This blog runs on two servers. One is the main PHP blog engine that handles the logic and the database, while the other serves all static files. Many years ago, an article I wrote reached the top position on both Hacker News and Reddit. My server couldn't handle the traffic. I literally had a terminal window open, monitoring the CPU and restarting the server every couple of minutes. But I learned a lot from it. The page receiving all the traffic had a total of 17 assets, so in addition to the database getting hammered, my server was spending most of its time serving images, CSS, and JavaScript files. I decided to set up additional servers to act as a sort of CDN to spread the load: multiple servers around the world, using MaxMindDB to determine a user's location and serve files from the closest server. But it was overkill for a small blog like mine, and I quickly downgraded back to just one server for the application and one for static files. Ever since I set up this configuration, my server has never failed due to a traffic spike. In fact, in 2018, right after I upgraded the servers to Ubuntu 18.04, one of my articles went viral like nothing I had seen before. Millions of requests hammered my server. The machine handled the traffic just fine. It's been 7 years now, and I've procrastinated long enough; an upgrade was long overdue. What kept me from upgrading to Ubuntu 24.04 LTS was that I had customized the server heavily over the years and never documented any of it. Provisioning a new server means setting up accounts, dealing with permissions, and transferring files. All of this should have been straightforward with a formal process. Instead, uploading blog post assets has been a very manual affair: I only partially completed the upload interface, so I've been using SFTP and SCP from time to time to upload files. It's only now that I've finally created a provisioning script for my asset server.
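As a rough sketch of how that GeoIP-based server selection can work (this is not my actual setup; the hostnames, continent mapping, and database path below are hypothetical examples, and the lookup assumes the `maxminddb` Python package plus a GeoLite2 database file on disk):

```python
# Hypothetical sketch of GeoIP-based static-server selection.
# Hostnames and the database path are made-up examples.
SERVERS = {
    "NA": "static-us.example.com",    # North America
    "EU": "static-eu.example.com",    # Europe
    "AS": "static-asia.example.com",  # Asia
}
DEFAULT_SERVER = "static-eu.example.com"

def closest_server(record):
    """Map a MaxMind GeoIP record (a dict, or None) to a hostname."""
    continent = ((record or {}).get("continent") or {}).get("code")
    return SERVERS.get(continent, DEFAULT_SERVER)

def server_for_ip(ip, db_path="GeoLite2-Country.mmdb"):
    # pip install maxminddb; requires a GeoLite2 database file on disk
    import maxminddb
    with maxminddb.open_database(db_path) as reader:
        return closest_server(reader.get(ip))
```

The application server can then rewrite asset URLs (or issue redirects) to point at `closest_server(...)` for each visitor.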
I mostly used AI to generate it, then used a configuration file to set values such as email, username, SSH keys, and so on. With the click of a button, and 30 minutes of waiting for DNS to update, I now have a brand new server running Ubuntu 24.04, serving my files via Nginx. Yes, next month Ubuntu 26.04 LTS comes out, and I can migrate to it by running the same script. I also built an interface for uploading content without relying on SFTP or SSH, which I'll be publishing on GitHub soon. It's been 7 years running this server. It's older than my kids. Somehow, I feel a pang of emotion thinking about turning it off. I'll do it tonight... But while I'm at it, I need to do something about the 9-year-old and 11-year-old servers that still run some crucial applications.
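The config-file approach can be as simple as a dictionary of values fed into a few templates that the provisioning run writes to disk. A minimal sketch of that idea (all names, paths, and keys below are hypothetical, not my actual configuration):

```python
# Hypothetical sketch: render the files a provisioning run needs from
# one config. Every value here is an example, not real configuration.
CONFIG = {
    "username": "deploy",
    "email": "me@example.com",
    "ssh_key": "ssh-ed25519 AAAA... me@example.com",
    "domain": "static.example.com",
    "site_root": "/var/www/static",
}

def render_nginx_conf(cfg):
    """Render a static-file nginx server block from the config."""
    return (
        "server {\n"
        "    listen 80;\n"
        f"    server_name {cfg['domain']};\n"
        f"    root {cfg['site_root']};\n"
        "}\n"
    )

def render_authorized_keys(cfg):
    """Render the admin user's authorized_keys content."""
    return cfg["ssh_key"] + "\n"
```

The surrounding script would then create the user, write these rendered files into place, and install/reload nginx; keeping the rendering pure makes the script easy to re-run for the next Ubuntu LTS.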
Microsoft Corp. today pushed security updates to fix at least 77 vulnerabilities in its Windows operating systems and other software. There are no pressing “zero-day” flaws this month (compared to the five zero-days in February), but as usual some patches may deserve more rapid attention from organizations using Windows. Here are a few highlights from this month’s Patch Tuesday. Two of the bugs Microsoft patched today were publicly disclosed previously. CVE-2026-21262 is a weakness that allows an attacker to elevate their privileges on SQL Server 2016 and later editions. “This isn’t just any elevation of privilege vulnerability, either; the advisory notes that an authorized attacker can elevate privileges to sysadmin over a network,” Rapid7’s Adam Barnett said. “The CVSS v3 base score of 8.8 is just below the threshold for critical severity, since low-level privileges are required. It would be a courageous defender who shrugged and deferred the patches for this one.” The other publicly disclosed flaw is CVE-2026-26127, a vulnerability in applications running on .NET. Barnett said the immediate impact of exploitation is likely limited to denial of service by triggering a crash, with the potential for other types of attacks during a service reboot. It would hardly be a proper Patch Tuesday without at least one critical Microsoft Office exploit, and this month doesn’t disappoint. CVE-2026-26113 and CVE-2026-26110 are both remote code execution flaws that can be triggered just by viewing a booby-trapped message in the Preview Pane. Satnam Narang at Tenable notes that just over half (55%) of all Patch Tuesday CVEs this month are privilege escalation bugs, and of those, half a dozen were rated “exploitation more likely” — across the Windows Graphics Component, Windows Accessibility Infrastructure, Windows Kernel, Windows SMB Server, and Winlogon.
These include:
- CVE-2026-24291: incorrect permission assignment within the Windows Accessibility Infrastructure to reach SYSTEM (CVSS 7.8)
- CVE-2026-24294: improper authentication in the core SMB component (CVSS 7.8)
- CVE-2026-24289: a high-severity memory corruption and race condition flaw (CVSS 7.8)
- CVE-2026-25187: a Winlogon process weakness discovered by Google Project Zero (CVSS 7.8)

Ben McCarthy, lead cyber security engineer at Immersive, called attention to CVE-2026-21536, a critical remote code execution bug in a component called the Microsoft Devices Pricing Program. Microsoft has already resolved the issue on its end, and fixing it requires no action on the part of Windows users. But McCarthy says it’s notable as one of the first vulnerabilities identified by an AI agent and officially recognized with a CVE attributed to the Windows operating system. It was discovered by XBOW, a fully autonomous AI penetration-testing agent that has consistently ranked at or near the top of the HackerOne bug bounty leaderboard for the past year. McCarthy said CVE-2026-21536 demonstrates how AI agents can identify critical 9.8-rated vulnerabilities without access to source code. “Although Microsoft has already patched and mitigated the vulnerability, it highlights a shift toward AI-driven discovery of complex vulnerabilities at increasing speed,” McCarthy said. “This development suggests AI-assisted vulnerability research will play a growing role in the security landscape.” Microsoft earlier provided patches to address nine browser vulnerabilities, which are not included in the Patch Tuesday count above. In addition, Microsoft issued a crucial out-of-band (emergency) update on March 2 for Windows Server 2022 to address a certificate renewal issue with Windows Hello for Business, its passwordless authentication technology.
Separately, Adobe shipped updates to fix 80 vulnerabilities — some of them critical in severity — in a variety of products, including Acrobat and Adobe Commerce. Mozilla Firefox v. 148.0.2 resolves three high-severity CVEs. For a complete breakdown of all the patches Microsoft released today, check out the SANS Internet Storm Center’s Patch Tuesday post. For Windows enterprise admins who wish to stay abreast of any news about problematic updates, AskWoody.com is always worth a visit. Please feel free to drop a comment below if you experience any issues applying this month’s patches.