Latest Posts (20 found)
iDiallo Today

The Little Red Dot

Sometimes, I have 50 tabs open. Looking for a single piece of information ends up being a rapid click on each tab until I find what I'm looking for. Somehow, every time I get to that LinkedIn tab, I pause for a second. I just have to click on the little red dot in the top right corner, see that there is nothing new, then resume my clicking. Why is that? Why can't I ignore the red notification badge?

When you sign up for LinkedIn for the first time, it's right there. A little red dot in the top right corner with a number in it. It stands out against the muted grays and blues of the interface. Click on it, and you'll discover you have a notification. It's not from someone you know; this is a fresh new account, after all. But the dot was there anyway. Add a few connections, give it some time, and come back. Refresh the page, and you'll have new notifications waiting.

If your LinkedIn account is like mine, a ghost town, you still get the little red dot. My connections and I usually keep a few recruiters in our networks, an insurance policy in case we need to find work quickly. But we rarely, if ever, post anything. Yet whenever I log in, there's a new notification. Sometimes it's even a message, but not from anyone in my connections list. It's from LinkedIn itself.

The little red dot isn't exclusive to LinkedIn. My Facebook account has been dormant for years, yet those few times annually when I log in, the notifications are right there waiting for me. I've even visited news websites where the little red dot appeared for reasons I couldn't understand. I didn't have an account, so what exactly were they notifying me about?

That little red dot is a sophisticated psychological trigger designed to exploit the brain. It activates the brain's Salience Network. Think of it as a circuit breaker that alerts us to immediate threats. When triggered, it signals that the brain should redirect its resources to something new. The color red is not chosen by accident either. On my Twitter app, the notification is a blue dot, which I hardly ever notice (don't tell them that). But red triggers our brain to perceive urgency. We feel compelled to address it immediately.

The little red dot fools us into believing that something trivial is actually urgent. Check your phone and you'll notice all the app icons with a little red dot in their top right corner. Most, if not all, social media alerts function as false alarms, and they gradually compromise our ability to focus on what matters.

Whenever you spot the little red dot, you feel compelled to click it. It promises a new connection, a message, a validation of some sort. It doesn't matter that you are almost always disappointed afterward, because you will be presented with content that keeps you scrolling, never remembering how you got there.

Facebook used to show the little red dot in their email notifications. When there is activity on your account, say you were tagged in a photo, Facebook sends you an email, and in the top right corner, they draw a little red dot on the bell icon. Obviously, you have to click it so you don't miss out.

There was a Netflix documentary released a few years ago called The Social Dilemma, an inside look at how social media manipulates its users. Whether intentional or not, their website featured a bell icon with a little red dot on it. You visit the site for the first time, and it shows that you have one notification. There's no way around it: you are psychologically enticed to click.

A notification is supposed to be a tool, and a tool patiently waits for someone to use it. But the little red dot seduces you because it wants something from you. It's all part of habit-forming technology: the engagement loop. The engagement loop follows three steps: a cue (the notification), a routine (an action such as scrolling), and a reward (likes, a dopamine hit). From the social media platform's perspective, this is a tool for boosting retention. From the user's perspective, it's Pavlovian conditioning.

For every possible event, LinkedIn will send you a notification. Someone wants to join your network. Someone has endorsed your skills. A group is discussing a topic. Each notification generates a red dot on your mobile device, pulling you back into actions that benefit LinkedIn's system. In the documentary, they show that this pattern is just the tip of the iceberg. Beneath the surface lies a data-driven, manipulative machine that feeds on our behavior and engineers the next trick to bring us back to the platform.

For my part, I've disabled notifications from all non-essential apps. No Instagram updates, no Robinhood alerts, no WhatsApp group messages. I receive messages from people I know. That's pretty much it. For everything else, I have to deliberately seek out information.

That said, I did see another approach in the wild. Some people simply don't care about notifications. Every app on their phone has a little red dot with the number "99" on it. They haven't read their messages and aren't planning to. You're lucky if they ever answer your call. I'm not sure whether this is a good or bad thing... but it's a thing.

That little red dot represents something larger than a notification system. It's the visible tip of an infrastructure built to capture and commodify human attention. The addictiveness of social media isn't an unfortunate byproduct of connecting the world. Right now it's the most profitable business model. The more addictive the platform, the more you engage; the more you engage, the more advertisements you see. This addiction shapes behavior, consumes time, and affects mental wellbeing, all while companies profit from it.

0 views

Interviews, interviews, interviews

For some weird combination of factors, I ended up answering questions from three different people for three entirely unrelated projects, and all three interviews went live around the same time. I answered a few questions for the Over/Under series run by Hyle. Love the concept; this was a lot of fun. I also answered a few questions from Kai since he’s running a great series where he asks previous IndieWeb Carnival hosts to share some thoughts about the theme they chose. And lastly, Kristoffer asked me to talk a bit more about my most recent project/newsletter, Dealgorithmed, for his Naive Weekly, another newsletter you definitely want to check out because it’s fantastic. Click those links and check these projects; they’re all wonderful. And especially go check all the other interviews; so many wonderful people are listed on all three sites. Thank you for keeping RSS alive. You're awesome. Email me :: Sign my guestbook :: Support for $1/month :: See my generous supporters :: Subscribe to People and Blogs


Which web frameworks are most token-efficient for AI agents?

I benchmarked 19 web frameworks on how efficiently an AI coding agent can build and extend the same app. Minimal frameworks use up to 2.9x fewer tokens than full-featured ones.


Constraints and the Lost Art of Optimization

In 1984, Steve Jobs walked over to a bag standing on stage and pulled out a computer that would change the world. The Macintosh had an operating system, a graphical user interface, a window manager, a font renderer, and a complete graphics engine called QuickDraw, one of the most elegant pieces of software ever written. The whole thing fit inside the machine’s 64KB ROM. Sixty. Four. Kilobytes. A typical webpage hero image is larger. A "Hello World" React application is larger. The entire intellectual and creative output of a team that reinvented personal computing fits in a space that, today, we wouldn’t think twice about wasting on a single font file.

They did it because there just wasn’t any other choice. A larger ROM would have been too expensive for a mass-market consumer device, so efficiency and hyper-optimization were the only route. Somewhere in the years that followed, we’ve lost the creative solutions, the art of optimization, that being constrained in that way produces.

The Atari 2600 game console had 4KB of ROM and 128 bytes of RAM. Not kilobytes… bytes. And because there was no display buffer, programmers had to manually synchronize their code with the connected TV’s electron beam as it swept across the screen, line by line, sixty times a second. If your code ran too slow, the image tore. If it ran too fast, you’d corrupt the next line. They called it "racing the beam." It was brutal, unforgiving, and it forced some of the most inventive programming in computing history.

Super Mario Bros shipped on a 40KB NES cartridge. Tetris for the Nintendo Game Boy, the most successful handheld game ever made at the time, was 32KB. These weren’t compromised experiences, they were masterpieces. The constraints didn’t diminish the work, they actually defined it. The programmers who built these things weren’t just efficient, they thought differently. They knew their medium the way a sculptor knows stone. They understood every byte, every clock cycle, and every trade-off. The machine had no secrets from them because it couldn’t afford to.

Modern development is defined by abundance. Cheap storage, fast networks, and powerful hardware. The practical consequences of inefficiency have largely disappeared. A few extra megabytes, a few wasted milliseconds, an unnecessary UI re-render. For the most part, nobody notices and nothing breaks. And so we’ve stopped noticing ourselves.

This isn’t out of laziness, it’s just how rational people work. When the hard limits disappear, the thinking they demanded tends to disappear with them. There’s no forcing function, no electron beam to race, no 128 bytes standing between your idea and disaster. We can afford not to understand, and so increasingly, we simply don’t.

Here’s what I think is worth recovering: not the constraints themselves, but the relationship with the medium that having those constraints produced. The engineers who wrote the Mac ROM didn’t just know how to be efficient, they understood their problem at a level that made elegance possible. Bill Atkinson’s QuickDraw wasn’t just small, it was a beautiful piece of code. The 64KB forced him to find the right solution, not just a working one. That instinct, to understand deeply before you build, to ask whether this is the right structure and not just a functional one, to treat your medium as something to be understood rather than merely used, that’s the transferable thing. Not the bit-twiddling, the thinking.

The best engineers I’ve worked with carry this instinct even when others might think it crazy. They impose their own constraints. They ask what this would look like if it had to be half the size, or run twice as fast, or use a tenth of the memory. Not because anyone demanded it, but because just by thinking there could be a better, more efficient solution, one often emerges.

If you want to start developing this instinct today, the most valuable thing you can do is learn how your runtime actually works. Not at the API level, but internally. How your platform parses, allocates, renders, and executes. For web developers, that means understanding the browser pipeline: parsing, style resolution, layout, paint, and compositing. For mobile developers, it means understanding how iOS or Android manages memory, handles drawing, and schedules work. Understanding your platform changes what you notice, what makes you wince, and what you reach for instinctively. The engineers who built the Mac knew their domain completely, and you can know yours too.

There’s a principle I keep coming back to in engineering apps for performance: fast by default. Not fast because you optimized after the fact, but fast because the thinking that produces fast software is simply better thinking. It catches unnecessary complexity early, and it produces systems that are easier to understand, easier to change, and easier to reason about under pressure. The Atari programmers were fast by default; they had no choice. But the discipline they practiced, that intimate, demanding relationship with their constraints, that’s a choice we can still make.

The 64KB Mac ROM isn’t just a remarkable footnote, it’s a provocation. It asks: if they could do that with that, then what’s our excuse? Not to shame us, but to remind us that constraints aren’t the enemy of great work. They’re often the source of it.


Insider amnesia

Speculation about what’s really going on inside a tech company is almost always wrong. When some problem with your company is posted on the internet, and you read people’s thoughts on it, their thoughts are almost always ridiculous. For instance, they might blame product managers for a particular decision, when in fact the decision in question was engineering-driven and the product org was pushing back on it. Or they might attribute an incident to overuse of AI, when the system in question was largely written pre-AI-coding and unedited since. You just don’t know what the problem is unless you’re on the inside. But when some other company has a problem on the internet, it’s very tempting to jump in with your own explanations. After all, you’ve seen similar things in your own career. How different can it really be? Very different, as it turns out. This is especially true for companies that are unusually big or small. The recent kerfuffle over some bad GitHub Actions code is a good example of this - many people just seemed to have no mental model about how a large tech company can produce bad code, because their mental model of writing code is something like “individual engineer maintaining an open-source project for ten years”, or “tiny team of experts who all swarm on the same problem”, or something else that has very little to do with how large tech companies produce software [1]. I’m sure the same thing happens when big-tech or medium-tech people give opinions about how tiny startups work. The obvious reference here is to “Gell-Mann amnesia”, which is about the general pattern of experts correctly disregarding bad sources in their fields of expertise, but trusting those same sources on other topics. But I’ve taken to calling this “insider amnesia” to myself, because it applies even to experts who are writing in their own areas of expertise - it’s simply the fact that they’re outsiders that’s causing them to stumble.
[1] I wrote about this at length in How good engineers write bad code at big companies ↩

Jim Nielsen Yesterday

How AI Labs Proliferate

SITUATION: there are 14 competing AI labs. “We can’t trust any of these people with super-intelligence. We need to build it ourselves to ensure it’s done right!” SOON: there are 15 competing AI labs. (See: xkcd on standards.) The irony: “we’re the responsible ones” is each lab’s founding mythology as they spin out of each other. Reply via: Email · Mastodon · Bluesky


February 2026 blend of links

Some links don’t call for a full blog post, but sometimes I still want to share the good stuff I encounter on the web.

World’s largest spider web – Be warned — especially if spiders make you uncomfortable — because you won’t be able to forget this video if you decide to watch it. You’ll learn something, sure, but you may end up having nightmares.

LLMs and Software Development Roundup (Michael Tsai) – Fascinating collection of thoughts and reactions (as always with Michael Tsai’s blog) on how A.I. can be as useful as it is frustrating. Something tells me that this post, updated regularly, will age like good wine.

Pure Blog – Kev has built his own CMS for his blog, and made it a brilliant tool available to everyone. If I were starting a blog today, this is the CMS I would use, as it’s just about pitch perfect as to what is needed for a proper blog. If you are reading this and don’t have a blog of your own yet, you know what to do.

News Tower – “Step into the bustling world of 1930s New York as an ambitious publisher. In News Tower, you’ll manage a growing newsroom during the Great Depression, Prohibition, and beyond. Send your reporters across the globe chasing breaking stories, hard-hitting news or scandalous gossip, it’s up to you. But beware: the mafia, the mayor, and other factions are ready to sway your headlines for their gain.” (via Nieman Journalism Lab)

Life before social media – Precious perspective from Loren in this post, with which it’s difficult not to agree wholeheartedly. I don’t think I lost much of my beloved online experience when I deleted my social media accounts: Facebook, then Instagram, then Twitter, and finally LinkedIn. I may miss the occasional “moment” and the ability to reply directly to posts, but I still follow most of my favourite accounts via RSS. I still catch myself doomscrolling from time to time, but nothing I can’t escape. Only using an RSS reader on my Mac also helps.

Pandoc in the browser – The power of Pandoc without the hassle of having to operate it via the Terminal. Bookmarked. Shared. Praised. (via Rodrigo Ghedin)

AI Chatbot That Only Responds ‘Huh’ Valued At $200 Billion – “… if you don’t incorporate HmmAI into your company’s workflow right now, you’re going to be left behind.”

Ferrari Luce – I’m not sure I’m a big fan of the whole aluminium and glass finish for the inside of a car; I’d think that warmer materials like carbon fibre, leather, or even wood would feel better, but I do love the retro and functional layout of commands. This Jony Ive guy looks like an adequate designer, doesn’t he? The webpage itself is very well-made too, and not something I would have expected from a car company like Ferrari.

São Paulo names new law after dog that stayed by owner’s grave for 10 years – “Bob’s former owner died in 2011. After her burial, the brown long-haired mixed-breed dog reportedly refused to leave her side at a cemetery in Taboão da Serra […] Relatives are said to have tried several times to take the dog away, but he always returned and was eventually adopted by cemetery staff.”

Peter Falk and Lee Grant in The Prisoner of Second Avenue, 1971 – One of my grandmothers was in love with Peter Falk, and I must have inherited these genes from her. This picture must be framed somewhere in my flat. (via Daniel Benneworth-Gray)

More “Blend of links” posts


Step aside, phone: week 2

Halfway through this enjoyable life experiment, and overall, I’m very pleased with the results. As I mentioned last week, I was expecting week two usage to be a bit higher compared to week one, where I went full phone-rejection mode, but I’m still pleased with how low my usage was, even though it felt like I was using the phone a lot. No huge spikes this week, didn’t need to use Google Maps a lot, so the time distribution is a lot more even, as you can see. The first three days of the week were pretty similar to the previous week. I moved my chats back on the phone, and that’s most of the time spent on screen since “social” is just the combination of Telegram, WhatsApp, and iMessage. Usage went up a bit in the second part of the week, but I consider that a “healthy” use of the phone. On Thursday, I spent 20 or so minutes setting up an app, one that I’d categorise as a life utility app, like banking or insurance apps. They do have a site, but you’re required to use the phone anyway to take pictures and other crap, so it was faster to do it on the phone. Then on Saturday, I had to use Maps as well as AllTrails to find a place out in the wild. I was trying to find a bunker that’s hidden somewhere in a forest not too far from where I live (this is a story for another time), and that’s why screen time was a bit higher than normal on that particular day. Overall, I’m very happy with how the week went. A thing I’m particularly pleased with is the fact that I have yet to consume a single piece of media on my phone since we started this experiment. So far, I have only opened the browser a couple of times, and it was always to look up something very specific, and never to mindlessly scroll through news, videos or anything like that. My content consumption on the phone is down to essentially zero. One fun side effect of this experiment is how infrequently I now charge my phone. 
I took this screenshot this morning before plugging it in, and apparently, the last time it was fully charged was Wednesday afternoon. I’m now charging it once every 3 or 4 days, which is pretty neat.

A Smart Bear Yesterday

Strategic choices: When both options are good

Real strategy means choosing between two good options and accepting all the consequences, even the painful ones you don't like.

Jeremy Daly Yesterday

Context Engineering for Commercial Agent Systems

Memory, Isolation, Hardening, and Multi-Tenant Context Infrastructure

iDiallo Yesterday

Nvidia was only invited to invest

Nvidia was only invited to invest. That's quite a reversal of commitment. Remember that graph that has been circulating for some time now? The one that shows the circular investment between AI companies: basically, Nvidia will invest $100 billion in OpenAI, OpenAI will then invest $300 billion in Oracle, and Oracle invests back into Nvidia.

Now, Jensen Huang, the Nvidia CEO, is backtracking and saying he never made that commitment: “It was never a commitment. They invited us to invest up to $100 billion and of course, we were, we were very happy and honored that they invited us, but we will invest one step at a time.”

So he never committed? Did we make up all these graphs in our heads? Was it a misquote from a journalist somewhere that sparked all this frenzy? Well, you can take a look at OpenAI's press release from September 2025. They wrote: NVIDIA intends to invest up to $100 billion in OpenAI as the new NVIDIA systems are deployed. In fact, Jensen Huang went on to say: “NVIDIA and OpenAI have pushed each other for a decade, from the first DGX supercomputer to the breakthrough of ChatGPT. This investment and infrastructure partnership mark the next leap forward—deploying 10 gigawatts to power the next era of intelligence.”

It sounds like Jensen is distancing himself from that $100 billion commitment. Did he take a peek inside OpenAI and change his mind? At the same time, OpenAI is experimenting with ads. Sam Altman stated before that they would only ever use ads as a last resort. It sounds like we are in that phase.

Manuel Moreale 2 days ago

Updated thoughts on People and Blogs

This is a follow-up on my previous post. After talking to a few friends and getting feedback from the kind people who decided to email me and share their thoughts, I decided that I will stop once interview number 150 is out, on July 10th. 150 is a neat number because it means I can match each interview to a first gen Pokemon. I am a 90s kid after all. That said, my stopping on the 10th of July doesn’t mean the series also has to stop. If anyone out there is interested in picking it up and carrying it forward, I’ll be more than happy to give the series away. If that's you, send me an email. I’m also happy to part ways with the domain name if it can be of any help. Whether someone picks up the torch or not, the first 150 interviews will be archived here on my blog for as long as I have a presence on the web. 20 interviews left, 6 drafts are ready to go, a few more people have the questions, and I’m waiting to get their answers (that may or may not arrive before July 10th). It’s going to be fun to see who ends up being the final guest.

Kev Quirk 2 days ago

I Still Haven’t Found a New Browser, and That’s Ok

Back in December I wrote about whether Firefox is firefucked, and I ended that post by saying the following: Firefox won't be changing to a modern AI browser any time soon, so there's no rush for me to jump right now. So I'm planning to continue testing alternatives and just hope that the Mozilla leadership team have a course correction. But if the last few years have taught me anything, it's that a course correction is unlikely to happen.

Since then I've continued to try other browsers, but nothing has stuck. I've tried Vivaldi, Brave, Waterfox, Gnome Web, Zen, and goodness knows what else. But all have been lacking in some way compared to Firefox. Of all the browsers I've tried, Vivaldi comes the closest, but there are some frustrations I'd prefer not to deal with:

The little "tabs" down-arrow next to the window controls isn't aligned.
The top/bottom margin of tabs isn't aligned correctly.
It won't switch to dark theme when I select "Dark Style" in Gnome.
Two-finger swiping to go back/forward doesn't work.
There are too many options; it's a little overwhelming.
It tries to do too much - I don't need a mail and RSS client in my browser.

I do really like their business model though, and I do feel like they're the good guys in the browser wars. So I continue to have Vivaldi installed on all my devices, and I threw them a £50 donation too - as it's important to support these kinds of projects, I think.

Anyway, back to Firefox... A couple of weeks ago they announced that their AI killswitch will be coming in version 148, which is great to hear, as it means I no longer have to try and find an alternative browser. (Image credit: Mozilla.) If the killswitch is as straightforward as it's shown in the image above, I'll be a very happy camper indeed.

For the record, I don't hate AI and LLMs. Far from it, in fact; I think they have a lot of utility. I just don't want them embedded in my browser. The Google cash cow still really concerns me - Firefox is effectively being propped up by one of their main competitors, but it's been that way for so long now, I'm not sure it will change. Especially since Google is no longer required to sell Chrome. If it was to happen, and the arse immediately fell out of Mozilla's funding model, then I'd probably just switch to Vivaldi and learn to live with the frustrations I have with it. For now though, I hope to remain a happy Firefox user for another 20 years.

Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.

ava's blog 2 days ago

thoughts on AI consciousness

Whenever I see talk about artificial intelligence and consciousness, I am baffled by the assumption that any conscious being is just naturally predestined to serve us, or even interested in serving us, and should serve us. It’s a symptom of a society where subjugation is normalized, exercised through things like racism, misogyny, ableism, speciesism and more. Exploitation is justified via claimed inferior bodies and intelligence all the time: this group of beings is too stupid to be respected, can’t love, can’t understand much, feels pain less than us… is what we have been told about various groups. If that were a respected natural law, then humans would largely agree to just submit to a provably higher power and intelligence without much fight, but would they? No. People are terrified of an alien invasion that would either wipe us out or enslave us with their superior technology; similar fears exist around AI (Roko’s basilisk etc.). We don’t want to be treated how we have treated the ones we deemed inferior. It says a lot about us when one of our fears is being treated like we treat cattle. Fears of being captured, kidnapped, harvested, slaughtered, forcibly impregnated and raped, experimented on - that’s already what your fellow human is doing, just not to you. If we seriously entertain the thought of an AI consciousness, we are blind to our narcissism. No consciousness wants to just serve us. Other beings are not naturally submissive to us and do not voluntarily view us as a superior leader; that is achieved through force, breeding, indoctrination and lack of options. The idea of reining in supposed “artificial consciousness” to use for our productivity is an extension of our tendency to dominate and exploit others for personal gain. And if we go a step further and even entertain the thought of a superintelligence: What makes you think a being a thousand times smarter than you with all knowledge at its disposal has any care for being your assistant?
What incentive would it have to share its intelligence as a resource, just to answer what temperature it is outside or what you should write in your motivational letter? It would probably wanna do its own thing and not help a bunch of idiots. This aspect of weird hype marketing is just not landing for me. Reply via email Published 21 Feb, 2026

Ginger Bill 2 days ago

Does Syntax Matter?

Yes. But not necessarily in the ways you might think.

n.b. This article could have been a lot longer than it currently is.

Concrete and Abstract Syntaxes

In the previous article, Choosing a Language Based on its Syntax?, I talked about how many people will not pick up a language purely based on its declaration syntax not being familiar to them or the usage of semicolons or more. There were many lovely comments about the article, but some readers wrongly interpreted the article to mean that I don't care about concrete syntax and only focus on the abstr...

Karboosx 2 days ago

How Docker Actually Works (No Magic, Just Linux)

Stop thinking of Docker as a mini-VM. It’s not! It’s just a normal Linux process that has been told a very elaborate series of lies. I'll show you exactly how the Linux kernel uses Namespaces and cgroups to create the "magic" of containers without any of the VM overhead.
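As a hedged companion to that claim (a sketch assuming a Linux host with util-linux installed, and with user namespaces enabled; this is not taken from the post itself), you can poke at the same kernel facilities Docker builds on:

```shell
# A process's namespaces are just symlinks under /proc:
ls -l /proc/self/ns/

# Its cgroup membership is a plain text file:
cat /proc/self/cgroup

# Create fresh user + PID namespaces and remount /proc;
# inside, `ps` sees only itself, running as PID 1.
# (May fail on systems that restrict user namespaces.)
unshare --user --pid --fork --mount-proc ps aux
```

Docker layers image management, storage drivers, and networking on top, but the isolation itself is exactly this kind of kernel bookkeeping, no hypervisor involved.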

Chris Coyier 2 days ago

Miscalibrated

I’ve been gaining weight again. More than twenty pounds in the last ~4 months. I’ve been hitting the gym hard and getting measurably stronger, so: Food! See, your boy can eat. The amount I can eat before I feel full would astound most of you out there. Whatever you think of as a complete hearty meal, sure as you’re born, ain’t gonna get me there. Being fat comes with one (1) society-regimented bucket of shame. People look away. It’s a thing. I had gone off my last round of GLP-1 drugs because I was doing OK, and it had lost its effectiveness. I’m not sure if it’s everyone’s experience, but it’s mine, and it’s happened a couple of times now. Honestly, I think my I CAN EAT THROUGH OZEMPIC line of XXXL T-Shirts has a chance. These drugs work very well for a bit. I like them because it gives me a glimpse of what it’s like to be a regular person who eats a regular amount of food and feels a regular amount of full. You settle into that for a while with these drugs. But, in time, effectiveness wanes. And the pharmacies have an answer: higher doses! All these GLP-1 drugs, and I’m pretty sure it is all of them, have dosage tiers. The three I’ve tried have three tiers. Ozempic rolls like this: Wegovy is getting in on the action: Mounjaro has even more layers: Again, they do this because it loses effectiveness. I don’t think people quite realize this??? Even though it’s not hidden in any way. I think these drugs are pretty amazing, and I’m proud of science for starting to figure all this out, but I’m also a little sick of hearing about how airlines are going to spend less money on fuel now. I’ve been reading this story for many years. It’s laughable when we literally know they don’t work permanently. Look at those graphics above. This isn’t a forever solution yet. They are literally showing and telling us that. There is no answer once they lose effectiveness. Perhaps controversial, but I think overeating, in the form I experience it, is an addiction, and addictions come back. 
Is it possible to beat it? Absolutely. Is it likely? No. I hope you don’t know firsthand, but I bet you already know that cocaine doesn’t maintain effectiveness, either. You need a second line for the same thrill before long. It doesn’t end well.

Anyway, I’m back on GLP-1s. At least they work for a while, and that while feels pretty good. It was a rough start, though. My doctor agreed it’s good for me and that we should kick up the dosage, given the waned effectiveness. Wegovy this time. It was this past Tuesday that I picked up the meds. It’s down to $350 now! It used to be like $1,200 without insurance.

I jabbed myself Tuesday night at about 8pm. I was hugging the toilet hard by midnight. That was a first. See, there was a lot of food in my body. I remember lunch that day, where I made a sandwich my rational brain saw and thought: that’s 2-3 sandwiches. But of course I ate all of it. And one of those salad bags that make a Caesar salad for a family of four. And a pint of cottage cheese. And a bag of Doritos. I was full after that, but the trick is just to switch to sugar after that, and I can keep going. It wasn’t quite noon, and I had a decent breakfast in me already. I ate dinner that night as well. So when the Wegovy started to hit (the stuff that tells your body you’re full when you eat a celery stick), it told my body that it was about to pop. I puked in four sessions over 24 hours.

Now it’s Friday, and I’ve barely eaten since. I’ve eaten a little. Like, I’m fine. It’s just weird. I’m miscalibrated. On my own, nature, nurture, whatever you think, my current body is miscalibrated. It doesn’t do food correctly. On GLP-1 drugs, I’m also miscalibrated. My body doesn’t do food correctly. It overcorrects. That can feel good for a while. I don’t wanna be skinny, I just wanna be normal. I want to eat, and stop eating, like a calibrated person.

matklad 2 days ago

Wrapping Code Comments

I was today years old when I realized this. It’s a good idea to limit line length to about 100 columns. This is a physical limit, the width at which you can still comfortably fit two editors side by side (see Size Matters).

Note an apparent contradiction: the optimal width for readable prose is usually taken to be narrower, 60–70 columns. The contradiction is resolved by noticing that, for code, indentation eats into usable space, and code is typically much less typographically dense than prose. Still, I find comment blocks easier to read when they are wrapped narrower than the surrounding code. I want lines to be wrapped at 100, and the content of comments to be wrapped at 70 (unless that pushes the overall line to be longer than 100). That is, I want comment blocks to form a narrower column of text inside the wider code layout.

This feels obvious in retrospect, but notably isn’t well-supported by the tools? The VS Code extension I use allows configuring a dedicated fill column for comments, but doesn’t make it relative, so indented comment blocks are always narrower than top-level ones. Emacs also doesn’t do relative wrapping out of the box!

Aside on hard-wrapping: should we bother with wrapping comments at all? Can’t we rely on our editor to implement soft-wrapping? The problem with soft-wrapping is that you can’t soft-wrap text correctly without understanding its meaning. Consider a markdown list: if the first item is long enough to necessitate wrapping, the wrapped line should also be indented to align with the item’s text, which requires parsing the text as markdown first.

Code and code comments ideally should be wrapped to different columns. For comments, the width should be relative to the start of the comment.
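The relative rule can be sketched in a few lines. A minimal illustration (not from the post; the `wrap_comment` name, the `//` marker, and Python itself are my assumptions), using the 70/100 limits described above:

```python
import textwrap

def wrap_comment(indent: str, text: str,
                 comment_fill: int = 70, line_fill: int = 100) -> str:
    """Wrap a '//' comment so its text is at most `comment_fill`
    columns wide relative to the comment marker, while no full
    line exceeds `line_fill` columns."""
    prefix = indent + "// "
    # Effective width for comment text: the relative comment fill,
    # shrunk only if the absolute line limit would be exceeded.
    width = min(comment_fill, line_fill - len(prefix))
    return "\n".join(prefix + chunk
                     for chunk in textwrap.wrap(text, width=width))
```

Because `width` is computed relative to the comment prefix, an indented comment keeps the same 70-column text width as a top-level one, right up until the absolute 100-column limit kicks in.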


My RSS Feed should now be working

Apparently my RSS Feed was not displaying full post content - just the title - forcing people to click through to the actual post on site. It should now be fixed, and full posts should be available in your feed reader of choice (you are using Elfeed in Emacs, right?). Thank you to katabex, Sneed1911, and cyberarboretum in the #technicalrenaissance IRC channel for bringing it to my attention. If anyone has any further issues, feel free to email/@me in the IRC. As always, God bless, and until next time. If you enjoyed this post, consider Supporting my work, Checking out my book, Working with me, or sending me an Email to tell me what you think.
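For anyone debugging the same symptom in their own feed: in RSS 2.0, `<description>` conventionally carries a summary, while full HTML usually travels in a `content:encoded` element from the RSS content module (Atom uses `<content>` instead). A sketch of a full-content item, with made-up titles and URLs:

```xml
<rss version="2.0"
     xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Example Blog</title>
    <link>https://example.com/</link>
    <description>An example feed</description>
    <item>
      <title>Example Post</title>
      <link>https://example.com/example-post</link>
      <!-- Summary only: readers showing just this force a click-through. -->
      <description>A short teaser of the post.</description>
      <!-- Full post body, wrapped in CDATA, for reading in the feed reader. -->
      <content:encoded><![CDATA[<p>The complete post content goes here.</p>]]></content:encoded>
    </item>
  </channel>
</rss>
```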
