Posts in Science (20 found)

Down The Atomic Rabbit Hole

Over the years, I’ve been chewing on media related to nuclear weapons. This is my high-level, non-exhaustive documentation of that consumption — with links! If you’ve got recommendations I didn’t mention, send them my way.

📖 The Making of the Atomic Bomb by Richard Rhodes. This is one of those definitive histories (it’s close to 1,000 pages and won a Pulitzer Prize). It starts with the early discoveries in physics, like the splitting of the atom, and goes up to the end of WWII. I really enjoyed this one. A definite recommendation.

📖 Dark Sun: The Making of the Hydrogen Bomb by Richard Rhodes is the sequel. If you want to know how we went from atomic weapons to thermonuclear ones, I think this one will do it. It was a harder read for me though. It got into a lot of the politics and espionage of the Cold War and I fizzled out on it (plus my library copy had to be returned; somebody else had it on hold). I’ll probably go pick it up again and finish it — eventually.

📖 The Bomb: A Life by Gerard J. DeGroot. This one piqued my interest because it covers more history of the bomb after its first use, including the testing that took place in Nevada not far from where I grew up. Having had a few different friends growing up whose parents died of cancer attributed to being “downwinders,” this part of the book hit close to home. Which reminds me of:

🎥 Downwinders & The Radioactive West from PBS. Again, growing up amongst locals who saw some of the flashes of light from the tests and experienced the fallout come down in their towns, this doc hit close to home. I had two childhood friends who lost their dads to cancer (and their families received financial compensation from the government for it).

📖 Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety by Eric Schlosser. Read this one years ago when it first came out. It’s a fascinating look at humans bumbling around with terrible weapons.

🎥 Command and Control from PBS is the documentary version of the book. I suppose watch this first, and if you want to know more, there’s a whole book for you.

📖 Nuclear War: A Scenario by Annie Jacobsen. Terrifying. 🎥 House of Dynamite just came out on Netflix and is basically a dramatization of aspects of this book.

📖 The Button: The New Nuclear Arms Race and Presidential Power from Truman to Trump by William J. Perry and Tom Z. Collina. How did we get to a place where a single individual has sole authority to destroy humanity at a moment’s notice? Interesting because it’s written by former people in Washington, like the Sec. of Defense under Clinton, so you get a taste of the bureaucracy that surrounds the bomb.

🎧 Hardcore History 59 – The Destroyer of Worlds by Dan Carlin. First thing I’ve really listened to from Dan. It’s not exactly cutting-edge scholarship and doesn’t have academic-level historical rigor, but it’s a compelling story about how humans made something they’ve nearly destroyed themselves with several times. The part in here about the Cuban Missile Crisis is wild. It led me to:

📖 Nuclear Folly: A History of the Cuban Missile Crisis by Serhii Plokhy, a deep look at the crisis. This is a slow-burning audiobook I’m still chewing through. You know how you get excited about a topic and you’re like “I’m gonna learn all about that thing!” And then you start and it’s way more than you wanted to know, so you kinda back out? That’s where I am with this one.

🎥 The Bomb by PBS. A good, short primer on the bomb. It reminds me of:

🎥 Turning Point: The Bomb and the Cold War on Netflix, which is a longer, multi-episode look at the bomb during the Cold War.

📝 Last, but not least, I gotta include at least one blog! Alex Wellerstein, a historian of science and creator of the nukemap, blogs at Doomsday Machines if you want something for your RSS reader.


Converting hot dog plasma video to sound with OpenCV

When you ground a hot dog to an AM radio tower, it generates plasma. While the hot dog's flesh is getting vaporized, a tiny plasma arc moves the air around it back and forth. And because this tower is an AM tower, it uses Amplitude Modulation , where a transmitter changes the amplitude of a carrier wave up and down. Just like a speaker cone moving up and down, the plasma arc from the hot dog turns that modulation into audible sound.
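The post’s title mentions doing the conversion with OpenCV; the author’s code isn’t shown here, but a rough sketch of the idea could look like the following: treat the mean brightness of each video frame as one audio sample, resample to an audio rate, and write a WAV file. The filename and sample rate are placeholders, and because there is only one sample per frame, a normal-speed clip can only capture frequencies up to half the frame rate, so a high-speed recording of the arc is assumed.

```python
# Rough sketch (not the author's pipeline): turn per-frame brightness of a
# plasma-arc video into an audio waveform. Assumes a high-frame-rate clip
# named "hotdog.mp4" exists in the working directory.
import cv2
import numpy as np
from scipy.io import wavfile

cap = cv2.VideoCapture("hotdog.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)

brightness = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    brightness.append(gray.mean())                # one "sample" per frame

cap.release()

signal = np.asarray(brightness, dtype=np.float64)
signal -= signal.mean()                           # drop the DC offset (average glow)
signal /= max(np.abs(signal).max(), 1e-9)         # normalize to roughly [-1, 1]

# Resample from the video frame rate to an audio sample rate.
audio_rate = 44100
t_video = np.arange(len(signal)) / fps
t_audio = np.arange(0.0, t_video[-1], 1.0 / audio_rate)
audio = np.interp(t_audio, t_video, signal).astype(np.float32)

wavfile.write("hotdog.wav", audio_rate, audio)
```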

The Coder Cafe 1 week ago

Horror Coding Stories: Therac-25

📅 Last updated: March 9, 2025 🎃 Welcome to The Coder Cafe! Today, we examine the Therac-25 accidents, where design and software failures resulted in multiple radiation overdoses and deaths. Make sure to check the Explore Further section to see if you’re able to reproduce the deadly issue. Get cozy, grab a pumpkin spice latte, and let’s begin!

Therac-25

Treating cancers used to require a mix of machines, depending on tumor depth: shallow or deep. In the early 1980s, a new generation promised both from a single system. That was a big deal for hospitals: one machine instead of several meant lower maintenance and fewer systems to manage. That was the case with the Therac-25. The Therac-25 offered two therapies with selectable modes:

Electron beam: Low-energy electrons for shallow tumors (e.g., skin cancer).
X-ray photons: High-energy radiation for deep tumors (e.g., lung cancer).

Earlier Therac models allowed switching modes with hardware circuits and physical interlocks. The new version was smaller, cheaper, and computer-controlled. Less hardware and fewer parts meant lower costs. However, what no one realized soon enough was that it also removed an independent safety net.

On a routine day, a radiology technologist sat at the console and began entering a plan. By habit, she selected X-ray (deep mode), then immediately corrected it to Electron (shallow mode) and hit start. The machine halted with a message. The operator’s manual didn’t explain the code. Service materials listed the number but gave no useful guidance, so she resumed and triggered the radiation. The patient was receiving his ninth treatment. Immediately, he knew something was different. He reported a buzzing sound, later recognized as the accelerator pouring out radiation at maximum. The pain came fast; paralysis followed. He later died from radiation injury.

Weeks later, a second patient endured the same incident on the same model. Initially, the radiology technologist entered ‘ ’ for X-ray (‘▮’ is the cursor and ‘ ’ are other fields). She immediately hit Cursor Up to go back and correct the field to ‘ ’. After a rapid sequence of Return presses, she moved back down to the command area. From her perspective, the screen showed the corrected mode, so she hit return and started the treatment.

Behind the scenes, the Therac-25 software ran several concurrent tasks:

Data-entry task: Monitored operator inputs and edited a shared treatment-setup structure.
Hardware-control task: On a periodic loop, snapshotted that same structure and positioned the turntable and magnets based on user input.

Because both tasks read the same memory with no mutual exclusion, there was a short window (on the order of seconds) in which the hardware-control task used a different value than the one displayed on the screen. As a result:

The UI showed Electron mode, which looked correct to the operator.
The hardware-control task had snapshotted stale data and marked the system as ready even though critical elements (e.g., turntable position, scanning magnets/accessories) were not yet aligned with electron mode.
When treatment was started, the machine delivered an effectively unscanned, high-intensity electron beam, causing a massive overdose.

This is a race condition: the outcome depends on the timing of events, in this case the input cadence of the technologist. Depending on the timing, the system could enter a fatal state, with one process seeing ‘ ’ while another saw ‘ ’. The manufacturer later confirmed the error could not be reproduced reliably in testing.
The timing had to line up just right, which made the bug elusive. They initially misdiagnosed it as a hardware fault and applied only minor fixes. Unfortunately, the speed of operator editing was the key trigger that exposed this software race. The problem could have stopped here, but it didn’t.

Months later, another fatal overdose occurred, this time caused by a different software defect. It wasn’t a timing race. This time, the issue was a counter overflow within the control program. The software used an internal counter to track how many times certain setup operations ran. After the counter exceeded its maximum value, it wrapped back to zero. That arithmetic overflow created a window where a critical safety check was bypassed, allowing the beam to turn on without the proper accessories in place. Again, the Therac-25 fired a high-intensity beam without the proper hardware configuration.

Both the race condition and the counter overflow stemmed from the same design flaw: the belief that software alone could enforce safety. The Therac-25 showed, in tragic terms, that without independent safeguards, small coding errors can have catastrophic consequences. We should know that whether it’s software, hardware, or a human process, every single safeguard has inherent flaws. Therefore, in complex systems, safety should be layered, as illustrated by the Swiss cheese model. In total, there were six known radiation overdoses involving the Therac-25, and at least three were fatal.

Resources:
The Worst Computer Bugs in History: Race conditions in Therac-25
Killed By A Machine: The Therac-25
An Investigation of the Therac-25 Accidents

Explore Further: I created a Docker image based on a C implementation from an MIT course simulating the operator console of the Therac-25 interface. You can run the UI using Docker. Simulator commands: Beam Type: ‘ ’ or ‘ ’; Command: ‘ ’ for beam on, or ‘ ’ to quit the simulator. 👉 Try to trigger the error based on the scenario discussed.
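Separately from that simulator, here is a minimal, hypothetical Python sketch of the same class of bug (it is not Therac-25 code and not the MIT simulator): a data-entry task and a hardware-control task share a setup structure with no mutual exclusion, so a fast correction by the operator can leave the control task acting on a stale snapshot while the screen shows the corrected value.

```python
# Hypothetical illustration of the Therac-25-style race (not the real code):
# two tasks share treatment setup with no lock, so the control task can latch
# a stale mode while the display shows the operator's correction.
import threading
import time

setup = {"mode": None, "entry_complete": False}   # shared, unprotected state

def hardware_control_task():
    # Wait for data entry, then snapshot the mode ONCE and configure hardware.
    while not setup["entry_complete"]:
        time.sleep(0.01)                          # control loop period
    latched_mode = setup["mode"]                  # snapshot; never re-read
    time.sleep(0.05)                              # pretend to move turntable/magnets
    print(f"Screen shows:  {setup['mode']}")
    print(f"Beam fires as: {latched_mode}")       # differs if the operator edited quickly

def data_entry_task():
    setup["mode"] = "XRAY"                        # selected by habit
    setup["entry_complete"] = True                # operator hits return
    time.sleep(0.02)                              # rapid cursor-up correction...
    setup["mode"] = "ELECTRON"                    # ...screen now shows ELECTRON

control = threading.Thread(target=hardware_control_task)
operator = threading.Thread(target=data_entry_task)
control.start(); operator.start()
control.join(); operator.join()
```

Whether the two printed values disagree depends entirely on the sleep timings, which is exactly what made the real bug so hard to reproduce.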


Undersense

James Hillman does not want you to interpret your dreams: Analytical tearing apart is one thing, and conceptual interpretation another. We can have analysis without interpretation. Interpretations turn dream into its meaning. Dream is replaced with translation. But dissection cuts into the flesh and bone of the image, examining the tissue of its internal connections, and moves around among its bits, though the body of the dream is still on the table. We haven’t asked what does it mean, but who and what and how it is. That is, to interpret the dream is to exploit it, as a capitalist exploits a vein of coal, transforming those fossilized remains into a commodity, something that can be measured, evaluated, bought and sold. Hillman is demanding that you not turn the dream into something else but that you let it be what it is, that you approach it as a keen and attentive observer, not trying to transform it but accepting it, acknowledging it, living with it. (As I read this, I had a sharp image of Rowan in The Lost Steersman, dissecting the body of a creature from the outer lands, finding organs and tissues whose purpose she could not fathom but could—and did—describe in intricate detail.) There’s an attitude here that I think can be expanded to any work in which observation, noticing, witnessing what is before us is privileged over trying to make it into something else. There is a fundamental humility to working in this way, to acknowledging that our understanding of the world around us is always incomplete. This is an incompleteness without judgment: not incomplete as inferior or flawed but incomplete as open-ended, infinite, wondrous. We can move in this direction by means of hermeneutics, following Plato’s idea of hyponoia, “undersense,” “deeper meaning,” which is an ancient way of putting Freud’s idea of “latent.” The search for undersense is what we express in common speech as the desire to understand. We want to get below what is going on and see its basis, its fundamentals, how and where it is grounded. The need to understand more deeply, this search for deeper grounding, is like a call from Hades to move toward his deeper intelligence. All these movements of hyponoia, leading toward an understanding that gains ground and makes matter, are work. Work is the making of matter, the movement of energy from one system to another. The work of making sense, of digging for undersense, is work that matters. I take undersense to mean, in part, a kind of feeling or exploration, of reaching your hands into the dirt, of tearing apart the body of the dream with no preconceived notions of what you will find. And not only dreams. The search for undersense is worthy also of the waking world, the world of daylight. In a world in which the creation and persistence of knowledge is threatened and fragile, we need under sense more than under standing, the exploration and observation that gains ground and makes matter. There’s an argument here for the kind of knowledge that you feel in your bones, that gets under your fingernails, that can’t be lifted away and perverted by a thieving bot. Knowledge that is steady, solid, rooted in the way roots hold tightly to the earth, defended from rain and flood, from being washed away with each passing storm.

iDiallo 1 week ago

Galactic Timekeeping

Yes, I loved Andor. It was such a breath of fresh air in the Star Wars universe. The kind of storytelling that made me feel like a kid again, waiting impatiently for my father to bring home VHS tapes of Episodes 5 and 6. I wouldn't call myself a die-hard fan, but I've always appreciated the original trilogy. After binging both seasons of Andor, I immediately rewatched Rogue One , which of course meant I had to revisit A New Hope again. And through it all, one thing kept nagging at me. One question I had. What time is it? In A New Hope , Han Solo, piloting the Millennium Falcon through hyperspace, casually mentions: "We should be at Alderaan about 0200 hours." And they are onto the next scene with R2D2. Except I'm like, wait a minute. What does "0200 hours" actually mean in an intergalactic civilization? When you're travelling through hyperspace between star systems, each with their own planets spinning at different rates around different suns, what does "2:00 AM" even refer to? Bear with me, I'm serious. Time is fundamentally local. Here on Earth, we define a "day" by our planet's rotation relative to the Sun. One complete spin gives us 24 hours. A "year" is one orbit around our star. These measurements are essentially tied to our specific solar neighborhood. So how does time work when you're hopping between solar systems as casually as we hop between time zones? Before we go any further into a galaxy far, far away, let's look at how we're handling timekeeping right now as we begin exploring our own solar system. NASA mission controllers for the Curiosity rover famously lived on "Mars Time" during their missions . A Martian day, called a "sol", is around 24 hours and 40 minutes long. To stay synchronized with the rovers' daylight operations, mission control teams had their work shifts start 40 minutes later each Earth day. They wore special watches that displayed time in Mars sols instead of Earth hours. Engineers would arrive at work in California at what felt like 3:00 AM one week, then noon the next, then evening, then back to the middle of the night. All while technically working the "same" shift on Mars. Families were disrupted. Sleep schedules were destroyed. And of course, "Baby sitters don't work on Mars time." And this was just for one other planet in our own solar system. One team member described it as living " perpetually jet-lagged ." After several months, NASA had to abandon pure Mars time because it was simply unsustainable for human biology. Our circadian rhythms can only be stretched so much. With the Artemis missions planning to establish a continuous human presence on the Moon, NASA and international space agencies are now trying to define an even more complicated system: Lunar Standard Time. A lunar "day", from one sunrise to the next, lasts about 29.5 Earth days. That's roughly 14 Earth days of continuous sunlight followed by 14 Earth days of darkness. You obviously can't work for two weeks straight and then hibernate for two more. But that's not all. On the moon, time itself moves differently. Because of the moon's weaker gravity and different velocity relative to Earth, clocks on the Moon tick at a slightly different rate than clocks on Earth. It's a microscopic difference (about 56 microseconds per day), but for precision navigation, communication satellites, and coordinated operations, it matters. 
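To get a feel for the numbers in the last few paragraphs, here is a small back-of-the-envelope script using the approximate figures mentioned above (a roughly 40-minute-per-day offset between a sol and an Earth day, and a lunar clock running about 56 microseconds per day fast). The exact values vary by source, so treat the output as order-of-magnitude only.

```python
# Back-of-the-envelope arithmetic for the timekeeping pains described above.
SOL_MINUTES = 24 * 60 + 40         # approximate length of a Martian sol
EARTH_DAY_MINUTES = 24 * 60
LUNAR_DRIFT_US_PER_DAY = 56        # approximate lunar vs. Earth clock rate difference

# How far "Mars time" walks around the Earth clock during a rover team's stint.
shift_per_sol = SOL_MINUTES - EARTH_DAY_MINUTES        # ~40 minutes per sol
for sols in (1, 7, 30, 90):
    drifted_hours = sols * shift_per_sol / 60
    print(f"after {sols:3d} sols, the 'same' shift starts {drifted_hours:5.1f} hours later on the Earth clock")

# How much a lunar clock drifts from an Earth clock over a year.
days = 365
drift_ms = days * LUNAR_DRIFT_US_PER_DAY / 1000
print(f"lunar clock drift over {days} days: about {drift_ms:.1f} milliseconds "
      "(irrelevant to humans, significant for navigation and ranging)")
```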
NASA is actively working to create a unified timekeeping framework that accounts for these relativistic effects while still allowing coordination between lunar operations and Earth-based mission control. And again, this is all within our tiny Earth-Moon system, sharing the same star. If we're struggling to coordinate time between two bodies in the same gravitational system, how would an entire galaxy manage it?

In Star Wars, the solution, according to the expanded universe lore, is this: "A standard year, also known more simply as a year or formally as Galactic Standard Year, was a standard measurement of time in the galaxy. The term year often referred to a single revolution of a planet around its star, the duration of which varied between planets; the standard year was specifically a Coruscant year, which was the galactic standard. The Coruscant solar cycle was 368 days long with a day consisting of 24 standard hours."

So the galaxy has standardized on Coruscant, the political and cultural capital, as the reference point for time. We can think of it as Galactic Greenwich Mean Time, with Coruscant serving as the Prime Meridian of the galaxy. This makes a certain amount of political and practical sense. Just as we arbitrarily chose a line through Greenwich, England, as the zero point for our time zones, a galactic civilization would need to pick some reference frame. Coruscant, as the seat of government for millennia, is a logical choice.

But I'm still not convinced that it is this simple. Are those "24 standard hours" actually standard everywhere, or just on Coruscant? Let's think through what Galactic Standard Time would actually require: Tatooine has a different rotation period than Coruscant. Hoth probably has a different day length than Bespin. Some planets might have extremely long days (like Venus, which takes 243 Earth days to rotate once). Some might rotate so fast that "days" are meaningless. Gas giants like Bespin might not have a clear surface to even define rotation against. For local populations who never leave their planet, this is fine. They just live by their star's rhythm. But the moment you have interplanetary travel, trade, and military coordination, you need a common reference frame.

This was too complicated for me to fully grasp, but here is how I understood it. The theory of relativity tells us that time passes at different rates depending on your velocity and the strength of the gravitational field you're in:
A clock on a planet with stronger gravity runs slower than one on a planet with weaker gravity
A clock on a fast-moving ship runs slower than one on a stationary planet
Hyperspace travel, which somehow exceeds the speed of light, would create all kinds of relativistic artifacts

We see this in our own GPS satellites. They experience time about 38 microseconds faster per day than clocks on Earth's surface because they're in a weaker gravitational field, even though they're also moving quickly (which slows time down). Both effects must be constantly corrected or GPS coordinates would drift by kilometers each day.

Now imagine you're the Empire trying to coordinate an attack. One Star Destroyer has been orbiting a high-gravity planet. Another has been traveling at relativistic speeds through deep space. A third has been in hyperspace. When they all rendezvous, their clocks will have drifted. How much? Well, we don't really know the physics of hyperspace or the precise gravitational fields involved, so we can't say. But it wouldn't be trivial.

Even if you had perfectly synchronized clocks, there's still the problem of knowing what time it is elsewhere. Light takes time to travel. A lot of time. Earth is about 8 light-minutes from the Sun. Meaning if the Sun exploded right now, we wouldn't know for 8 minutes.
Voyager 1, humanity's most distant spacecraft, is currently over 23 light-hours away. A signal from there takes nearly a full Earth day to reach us. The Star Wars galaxy is approximately 120,000 light-years in diameter (according to the lore again). Even with the HoloNet (their faster-than-light communication system), there would still be transmission delays, signal degradation, and the fundamental question of "which moment in time are we synchronizing to?"

If Coruscant sends out a time signal, and a planet on the Outer Rim receives it three days later, whose "now" are they synchronizing to? In relativity, there is no universal "now." Time is not an absolute, objective thing that ticks uniformly throughout the universe. It's relative to your frame of reference. On Earth, we all roughly share the same frame of reference, so we can agree on UTC and time zones. But in a galaxy with millions of worlds, each moving at different velocities relative to each other, each in different gravitational fields, with ships constantly jumping through hyperspace. Which frame of reference do you pick? You could arbitrarily say "Coruscant's reference frame is the standard," but that doesn't make the physics go away. A ship traveling at near-light-speed would still experience time differently. Any rebel operation requiring split-second timing would fall apart.

Despite all this complexity, the characters in Star Wars behave as if time is simple and universal. They "seem" to use a dual-time system.

Galactic Standard Time (GST) would be for official, galaxy-wide coordination:
Military operations ("All fighters, attack formation at 0430 GST")
Senate sessions and government business
Hyperspace travel schedules
Banking and financial markets
HoloNet news broadcasts

When Mon Mothma coordinates with Rebel cells across the galaxy in Andor, they're almost certainly using GST. When an X-Wing pilot gets a mission briefing, the launch time is in GST so the entire fleet stays synchronized.

Local Planetary Time (LPT) is for daily life:
Work schedules
Sleep cycles
Business hours
Social conventions ("let's meet for lunch")

The workday on Ferrix follows Ferrix's sun. A cantina on Tatooine opens when Tatooine's twin suns rise. A farmer on Aldhani plants crops according to Aldhani's seasons. A traveler would need to track both. Like we carry smartphones with clocks showing both home time and local time. An X-Wing pilot might wake up at 0600 LPT (local dawn on Yavin 4) for a mission launching at 1430 GST (coordinated across the fleet).

This is something I couldn't let go when watching the show. In Andor, Cassian often references "night" and "day". Saying things like "we'll leave in the morning" or "it's the middle of the night." When someone on a spaceship says "it's the middle of the night," or even "Yesterday," what do they mean? There's no day-night cycle in space. They're not experiencing a sunset. The most logical explanation is that they've internalized the 24-hour Coruscant cycle as their personal rhythm. "Night" means the GST clock reads 0200, and the ship's lights are probably dimmed to simulate a diurnal cycle, helping regulate circadian rhythms. "Morning" means 0800 GST, and the lights brighten. Space travelers have essentially become Coruscant-native in terms of their biological and cultural clock, regardless of where they actually are. It's an artificial rhythm, separate from any natural cycle, but necessary for maintaining order and sanity in an artificial environment.

I really wanted to present this in a way that makes sense. But the truth is, realistic galactic timekeeping would be mind-numbingly complex. You'd somehow need:
Relativistic corrections for every inhabited world's gravitational field
Constant recalibration for ships entering and exiting hyperspace
A faster-than-light communication network that somehow maintains causality
Atomic clock networks distributed across the galaxy, all quantum-entangled or connected through some exotic physics
Sophisticated algorithms running continuously to keep everything synchronized
Probably a dedicated branch of the Imperial bureaucracy just to maintain the Galactic Time Standard

It would make our International Telecommunication Union's work on UTC look like child's play. But Star Wars isn't hard science fiction. It's a fairy tale set in space.
A story about heroes, empires, and rebellions. The starfighters make noise in the vacuum of space. The ships bank and turn like WWII fighters despite having no air resistance. Gravity works the same everywhere regardless of planet size. So when Han Solo says "0200 hours," just pretend he is in Kansas. We accept that somewhere, somehow, the galaxy has solved this complex problem. Maybe some genius inventor in the Old Republic created a McGuffin that uses hyperspace itself as a universal reference frame, keeping every clock in the galaxy in perfect sync through some exotic quantum effect. Maybe the most impressive piece of technology in the Star Wars universe isn't the Death Star, which blows up. Or the hyperdrive, which seems to fail half the time. The true technological and bureaucratic marvel is the invisible, unbelievably complex clock network that must be running flawlessly, constantly, behind the scenes across 120,000 light years. It suggests deep-seated control, stability, and sheer organizational power for the Empire. That might be the real foundation of galactic power, hidden right there in plain sight. ... or maybe the Force did it! Maybe I took this a bit too seriously. But along the way, I was having too much fun reading about how NASA deals with time, and the deep lore behind Star Wars. I'm almost starting to understand why the Empire is trying to keep those pesky rebels at bay. I enjoyed watching Andor. Remember, Syril is a villain. Yes, you are on his side sometimes, they made him look human, but he is still a bad guy. There, I said it. They can't make a third season because Rogue One is what comes next. But I think I've earned the right to just enjoy watching Cassian Andor glance at his chrono and say "We leave at dawn", wherever and whenever that is.

ava's blog 1 week ago

pain management

Allow me to crash out for a second. For roughly a month now, I’ve been experiencing a flareup in my spondyloarthritis (Ankylosing Spondylitis or Bechterew’s disease…). This is a type of arthritis that primarily affects the spine and usually some other joints. I first noticed it in the base of my right thumb, which was painful and a bit stiff (this has now mostly resolved), and as plantar fasciitis (the fascia in your foot arch, basically; my body loves attacking in this area for some reason, as I used to have frequent Achilles tendonitis as a teen). This first caused unexpected pain in some moments of walking and also resulted in issues using my phone, using a controller, and doing everyday stuff that needs thumb mobility and pressure on the thumb. I also noticed general aches, especially after resting and following some exercise. One example was having weirdly stiff elbows and shoulders after indoor cycling, which I hadn’t had in quite a while after treatment worked. This was followed by sacroiliitis (inflammation where hip and spine meet in the lower back), first on the right and now on both sides, and sharp pain in the upper thoracic spine (between the shoulder blades). That means while walking, sitting, and lying down, I have pain in the whole area of my lower back and hips, and as I breathe and my upper spine moves, I am in pain as well. Every time I breathe in, there’s a knife in my back. As nerves are affected too, I have shooting pains down my legs and into my shoulders and neck. My right leg occasionally collapses from under me due to this, but I haven’t fallen yet.

Unfortunately, everything gets worse with rest (both sitting and lying down), but obviously, I can’t exercise 24/7. It’s generally difficult to hit the sweet spot each day where exercise helps and doesn’t further aggravate everything. I recently had such a great workout (30 minutes treadmill, 20 minutes cycling, 20 mins mix of yoga and pilates) that made me feel as if I had just gotten a dose of heavy painkillers, but that relief only lasted for about two hours max. I still need to sleep, study, and do an office job.

I tried to go back to a low dose of Prednisone and it obviously helps a bit, but I don’t wanna be on it - I was on 80mg last year, tapered down to 50mg, and then couldn’t go lower for months until new treatment worked. I had the whole experience of side effects, even medically induced Cushing’s Disease and issues with my blood sugar. When I recently tried between 2mg and 4mg, I was immediately back with the constant thirst and peeing (= blood sugar issues). It was so disruptive I had to stop.

It’s sad seeing everything fall apart again. I see it in the way more stuff is lying around in the apartment than usual. Chores take longer or get procrastinated on. I am low energy. I barely go to the gym anymore and prefer to exercise at home. I heat up a heating pad for my back like 4 times a day; it’s not more than that only because I’m often too lazy and stubborn to do it more often.

I try so hard not to take painkillers. You aren’t supposed to take ibuprofen with Crohn’s disease, but I have to sometimes. But when I max out my limit for it, I add paracetamol, which works less well but helps at least some. I’m especially careful with that so I don’t harm my liver. So it all becomes this big monster of trying to get the energy to exercise and making time for it in my day, then holding myself over with heating pads and stretches and distractions, before turning to painkillers as a last resort, and alternating/mixing them.
I almost treat it like a luxury good, something to indulge in, because of weird shaming around it. I remember this absolutely disrespectful interview with a doctor I read this year in which he was clutching his pearls about people taking ibuprofen and that it’s so dangerous and poisonous and that people should just stop. He talked about it as if people just take these for fun over a papercut. I wish I could shit on his doormat. Peak example of a healthy and non-menstruating person with zero empathy. So every couple days, I allow myself to take them, and my inner monologue is really like “Oh well, I deserve this. I’m splurging on it. It’s okay for today, I held out long enough. But it is kind of extra. Maybe I could have skipped this one too. Is it even bad enough?” And then they kick in and I truly realize how bad it was. You get used to it after a while, your brain kind of tuning out some of it, but it’s still this constant static sound in the background that ruins everything. Realistically, if I’m being honest, I would need painkillers every morning and evening every single day. And if we’re being even more real, they would not be the freely available pills, but the highly controlled patches. But that also opens up a whole lot of other possible issues. It sucks! It fucking sucks. I throw myself into my studies, into my volunteer work, into lengthy blog posts and anything like that so there is finally some focus away from my body. If I’m in a flow state, I don’t have to be in here, I don’t have to witness this. I love slowly getting tired on the sofa and falling asleep while doing something else (like watching something) and I love being busy with something (like studying late) until I’m dead tired and then crashing into bed, falling asleep quickly. Because the alternative is going to bed in a timely manner and lying awake, being hyperaware of everything that hurts, and it starts hurting more and more as time goes on, and I’m lying there wondering how I can possibly manage the next 30 years like this, wishing it was over. I don’t have to endure this forever, of course. This flareup just needs to pass, or I need to switch medications, or I finally try and get a proper pain management going for these phases, and then everything goes back to normal. But in these moments, none of that matters. I just want it to be over. Every morning I get teleported back into this hurtful mess, and everything that would help causes more issues. It makes me angry and close to tears all the time, and makes me worry if I’ve developed antibodies to infliximab. My injection this week changed nothing. Next week will be super busy with traveling and attending events, and I’m tired of portioning out the relief. I’ll take what I need to make it, and I hope the rheumatology appointment the week after will be helpful. If anyone takes anything away from this, it should be the obvious fact that not all pain can be successfully treated with lifestyle changes and people aren’t necessarily taking “the easy way out” with painkillers. And if you look at people and think you know what causes their pain, you should consider that you never know what came first - the pain or the other things. With pain like that, it’s no wonder many people choose to avoid exercise, eat to feel happy, or self-medicate with drugs that are easier to get than a fent patch; and if people regularly get stuck on months of Prednisone, that does not help. My usually ~58kg self ballooned up to 75kg on ~6 months of Prednisone. 
After a year off, I’m 10kg down, 7 more to go. Published 26 Oct, 2025

Maurycy 2 weeks ago

Some hot rocks:

I recently went on a rock collecting trip, but apart from the usual — quartz, K feldspar crystals, garnet, etc. — I found some slightly radioactive rocks: All of these were found using my prospecting scintillator, but I took measurements with a Radiacode 102 — a very common hobbyist detector — so that other people can compare readings. Despite being small, it is still a gamma scintillator, so the count rates are much higher than those of any G-M tube. None of these are crazy hot, but they were all collected off the surface: I didn’t bring any good digging equipment on the trip. (I really should have, considering how well my detector picks up deeply buried specimens.) The biggest hazard with my rocks is dropping them on your toes. Even if you were to grind them up and inhale the dust, the host rock is much more of a danger than the radioactivity. I’ve personally been in multiple residential and office buildings that are more radioactive than my specimens because of the stone that was used to construct them. Also, if you have any “Anti-Radiation” or “Bio Energy” or “Quantum Energy” wellness products: they are quite the opposite. (And many are spicier than my rocks.) … or how about some nice decorative glass? It glows

iDiallo 2 weeks ago

Why We Don't Have Flying Cars

Imagine this: You walk up to your driveway where your car is parked. You reach for the handle that automatically senses your presence, confirms your identity, and opens to welcome you in. You sit down, the controls appear in front of you, and your seatbelt secures itself around your waist. Instead of driving forward onto the pavement, you take off. You soar into the skies like an eagle and fly to your destination. This is what technology promises: freedom, power, and something undeniably cool. The part we fail to imagine is what happens when your engine sputters before takeoff. What happens when you reach the sky and there are thousands of other vehicles in the air, all trying to remain in those artificial lanes? How do we deal with traffic? Which directions are we safely allowed to go? And how high? We have flying cars today. They're called helicopters. In understanding the helicopter, we understand why our dream remains a dream. There's nothing romantic about helicopters. They're deafeningly loud and incredibly expensive to buy and maintain. They require highly skilled pilots, are dangerously vulnerable to engine failure, and present a logistical nightmare of three-dimensional traffic control. I can't even picture what a million of them buzzing between skyscrapers would look like. Chaos, noise pollution, and a new form of gridlock in the sky. Even with smaller drones, as the technology evolves and becomes familiar, cities are creating regulations around them, sucking all the fun and freedom out in favor of safety and security. This leads me to believe that the whole idea of flying cars and drones is more about freedom than practicality. And unregulated freedom is impossible. This isn't limited to flying cars. The initial, pure idea is always intoxicating. But the moment we build a prototype, we're forced to confront the messy reality. In 1993, a Japanese man brought a video phone to demo for my father as a new tech to adopt in our embassy. I was only a child, but I remember the screen lighting up with a video feed of the man sitting right next to my father. I could only imagine the possibilities. It was something I thought only existed in sci-fi movies. If this was possible, teleportation couldn't be too far away. In my imagined future, we'd sit at a table with life-like projections of colleagues from across the globe, feeling as if we were in the same room. It would be the end of business travel, a world without borders. But now that the technology is ubiquitous, the term "Zoom fatigue" is trending. It's ironic when I get on a call and see that 95% of my colleagues have their cameras turned off. In movies, communication was spontaneous. You press a button, your colleague appears as a hologram, and you converse. In reality, there's a calendar invite, a link, and the awkward "you're on mute!" dance. It's a scheduled performance, not an organic interaction. And then there are people who have perfect lighting, high-speed internet, and a quiet home office. And those who don't. Video calls have made us realize the importance of physical space and connection. Facebook's metaverse didn't resolve this. Imagine having a device that holds all of human knowledge at the click of a button. For generations, this was the ultimate dream of librarians and educators. It would create a society of enlightened, informed citizens. And we got the smartphone. Despite being a marvel of technology, the library of the world at your fingertips, it hasn't ushered us into utopia.
The attention economy it brought along has turned it into a slot machine designed to hijack our dopamine cycles. You may have Wikipedia open in one tab, but right next to it is TikTok. The medium has reshaped the message from "seek knowledge" to "consume content." While you have access to information, misinformation is just as rampant. The constant stimulation kills moments of quiet reflection, which are often the birthplace of creativity and deep thought. In The Machine Stops by E.M. Forster, every desire can be delivered by pulling a lever on the machine. Whether it's food, a device, or toilet paper. The machine delivers everything. With Amazon, we've created a pretty similar scenario. I ordered replacement wheels for my trash bin one evening, expecting them to arrive after a couple of days. The very next morning, they were waiting at my doorstep. Amazing. But this isn't magical. Behind it are real human workers who labor without benefits, job security, or predictable income. They have an algorithmic boss that can be more demanding than a human one. That promise of instant delivery has created a shadow workforce of people dealing with traffic, poor weather, and difficult customers, all while racing against a timer. The convenience for the user is built on the stress of the driver. The dream of a meal from anywhere didn't account for the reality of our cities now being clogged with double-parked delivery scooters and a constant stream of gig workers. Every technological dream follows the same pattern. The initial vision is pure, focusing only on the benefit. The freedom, the convenience, the power. But reality is always a compromise, a negotiation with physics, economics, and most importantly, human psychology and society. We wanted flying cars. We understood the problems. And we got helicopters with a mountain of regulations instead. That's probably for the best. The lesson isn't to stop dreaming or stop innovating. It's to dream with our eyes open. When we imagine the future, we need to ask not just "what will this enable?" but also "what will this cost?" Not in dollars, but in human terms. In stress, inequality, unintended consequences, and the things we'll lose along the way. We're great at imagining benefits and terrible at predicting costs. And until we get better at the second part, every flying car we build will remain grounded by the weight of what we failed to consider.

iDiallo 3 weeks ago

5 Years Away

AGI has been "5 years away" for the past decade. The Tesla Roadster? Five years away since 2014. Tesla's Level 5 self-driving? Promised by 2017, then quietly pushed into the perpetual five-year window. If you've been paying attention, you've probably noticed this pattern extends far beyond Silicon Valley. Why do we keep landing on this specific timeframe? Psychologically, five years is close enough to feel relevant. We can easily imagine ourselves five years from now, still affected by these innovations. Yet it's distant enough to seem plausible for ambitious goals. More importantly, it's far enough away that by the time five years passes, people have often moved on or forgotten the original prediction. This isn't limited to consumer electronics. Medical breakthroughs regularly make headlines with the same promise: a revolutionary cancer treatment, five years away. Carbon nanotubes will transform renewable energy, five years. Solid-state batteries will solve range anxiety... five years. Andrew Ng, in his course "AI for Everyone," offered the most honest perspective on AGI I've ever read. He suggested it might be decades, hundreds, or even thousands of years away. Why? Before we can meaningfully predict a timeline, we need several fundamental technological breakthroughs that we haven't achieved yet. Without those foundational advances, there's nothing to build on top of. With all the "5 years away" predictions, there is an assumption of linear progress. But the reality is that transformative technologies require non-linear leaps. They require discoveries we can't yet foresee, solving problems we don't yet fully understand. If we want our predictions to mean something, we need a clearer framework. At a minimum, a technology can be labeled "5 years away" only if we have a demonstrated proof of concept, even if it is at a small scale. We need to have identified the major engineering challenges remaining. There should be reasonable pathways to overcome those challenges with existing knowledge. And finally, there needs to be a semblance of economic viability on the horizon. Anything less than this is speculation dressed up as prediction. If I say "we've built a prototype that works in the lab, now we need to scale manufacturing," that may in fact be five years away. But if I say "we need multiple fundamental breakthroughs in physics before this is even possible," I am in a science fiction timeline. Inflated predictions aren't harmless. Government policy may be planned around them, they can distort investment decisions, and, worse, they give the public false expectations. When we promise self-driving cars by 2017 and fail to deliver, it erodes trust not just in that company, but in the entire field. When every medical breakthrough is "5 years away," people become cynical about real advances. The "5 years away" framing can make us complacent. If fusion power is always just around the corner, why invest heavily in less glamorous but available renewable technologies today? If AGI will solve everything soon, why worry about the limitations and harms of current AI systems? It's not the most pressing problem in the world, but wouldn't it be better to have more realistic predictions? When reading news articles about any technology, try to distinguish between engineering challenges and scientific unknowns. A realistic prediction will be explicit, saying things like "This will be ready in 5 years, assuming we solve X, Y, and Z."
The public needs to learn to celebrate incremental progress as well. When all you read about is moonshots, you dismiss important work being done to improve our everyday lives. And of course, the public should also learn to ignore engagement bait. Real innovation is hard enough without pretending we can see further into the future than we actually can. Five years is a number. What matters is the foundation beneath it. Without that foundation, we're not counting down to anything. We're just repeating a comfortable fiction that lets us feel like the future is closer than it really is. The most honest answer to "When will this technology arrive?" is often the least satisfying: "We don't know yet, but here's what needs to happen first." That answer respects both the complexity of innovation and the intelligence of the audience. Maybe it's time we used it more often.

Sean Goedecke 3 weeks ago

We are in the "gentleman scientist" era of AI research

Many scientific discoveries used to be made by amateurs. William Herschel, who discovered Uranus, was a composer and an organist. Antoine Lavoisier, who laid the foundation for modern chemistry, was a politician. In one sense, this is a truism. The job of “professional scientist” only really appeared in the 19th century, so all discoveries before then logically had to have come from amateurs, since only amateur scientists existed. But it also reflects that any field of knowledge gets more complicated over time. In the early days of a scientific field, discoveries are simple: “air has weight”, “white light can be dispersed through a prism into different colors”, “the mass of a burnt object is identical to its original mass”, and so on. The way you come up with those discoveries is also simple: observing mercury in a tall glass tube, holding a prism up to a light source, weighing a sealed jar before and after incinerating it, and so on. The 2025 Nobel prize in physics was just awarded “for the discovery of macroscopic quantum mechanical tunnelling and energy quantisation in an electric circuit”. The press release gallantly tries to make this discovery understandable to the layman, but it’s clearly much more complicated than the examples I listed above. Even understanding the terms involved would take years of serious study. If you want to win the 2026 Nobel prize in physics, you have to be a physicist: not a musician who dabbles in physics, or a politician who has a physics hobby in their spare time. You have to be fully immersed in the world of physics 1 . AI research is not like this. We are very much in the “early days of science” category. At this point, a critical reader might have two questions. How can I say that when many AI papers look like this? 2 Alternatively, how can I say that when the field of AI research has been around for decades, and is actively pursued by many serious professional scientists? First, because AI research discoveries are often simpler than they look. This dynamic is familiar to any software engineer who’s sat down and tried to read a paper or two: the fearsome-looking mathematics often contains an idea that would be trivial to express in five lines of code. It’s written this way because (a) researchers are more comfortable with mathematics, and so genuinely don’t find it intimidating, and (b) mathematics is the lingua franca of academic research, because researchers like to write to far-future readers for whom Python syntax may be as unfamiliar as COBOL is to us. Take group-relative policy optimization, or GRPO, introduced in a 2024 DeepSeek paper. This has been hugely influential for reinforcement learning (which in turn has been the driver behind much LLM capability improvement in the last year). Let me try and explain the general idea. When you’re training a model with reinforcement learning, you might naively reward success and punish failure (e.g. how close the model gets to the right answer in a math problem). The problem is that this signal breaks down on hard problems. You don’t know if the model is “doing well” without knowing how hard the math problem is, which is itself a difficult qualitative assessment. The previous state of the art was to train a “critic model” that makes this “is the model doing well” assessment for you. Of course, this brings a whole new set of problems: the critic model is hard to train and verify, costs much more compute to run inside the training loop, and so on. Enter GRPO.
Instead of a critic model, you gauge how well the model is doing by letting it try the problem multiple times and computing how well it does on average . Then you reinforce the model attempts that were above average and punish the ones that were below average. This gives you good signal even on very hard prompts, and is much faster than using a critic model. The mathematics in the paper looks pretty fearsome, but the idea itself is surprisingly simple. You don’t need to be a professional AI researcher to have had it. In fact, GRPO is not necessarily that new of an idea. There is discussion of normalizing the “baseline” for RL as early as 1992 (section 8.3), and the idea of using the model’s own outputs to set that baseline was successfully demonstrated in 2016 . So what was really discovered in 2024? I don’t think it was just the idea of “averaging model outputs to determine a RL baseline”. I think it was that that idea works great on LLMs as well . As far as I can tell, this is a consistent pattern in AI research. Many of the big ideas are not brand new or even particularly complicated. They’re usually older ideas or simple tricks, applied to large language models for the first time. Why would that be the case? If deep learning wasn’t a good subject for the amateur scientist ten years ago, why would the advent of LLMs change that? Suppose someone discovered that a rubber-band-powered car - like the ones at science fair competitions - could output as much power as a real combustion engine, so long as you soaked the rubber bands in maple syrup beforehand. This would unsurprisingly produce a revolution in automotive (and many other) engineering fields. But I think it would also “reset” scientific progress back to something like the “gentleman scientist” days, where you could productively do it as a hobby. Of course, there’d be no shortage of real scientists doing real experiments on the new phenomenon. However, there’d also be about a million easy questions to answer. Does it work with all kinds of maple syrup? What if you soak it for longer? What if you mixed in some maple-syrup-like substances? You wouldn’t have to be a real scientist in a real lab to try your hand at some of those questions. After a decade or so, I’d expect those easy questions to have been answered, and for rubber-band engine research to look more like traditional science. But that still leaves a long window for the hobbyist or dilettante scientist to ply their trade. The success of LLMs is like the rubber-band engine. A simple idea that anyone can try 3 - train a large transformer model on a ton of human-written text - produces a surprising and transformative technology. As a consequence, many easy questions have become interesting and accessible subjects of scientific inquiry, alongside the normal hard and complex questions that professional researchers typically tackle. I was inspired to write this by two recent pieces of research: Anthropic’s “skills” product and the Recursive Language Models paper . Both of these present new and useful ideas, but they’re also so simple as to be almost a joke. “Skills” are just markdown files and scripts on-disk that explain to the agent how to perform a task. Recursive language models are just agents with direct code access to the entire prompt via a Python REPL. There, now you can go and implement your own skills or RLM inference code. I don’t want to undersell these ideas. 
It is a genuinely useful piece of research for Anthropic to say “hey, you don’t really need actual tools if the LLM has shell access, because it can just call whatever scripts you’ve defined for it on disk”. Giving the LLM direct access to its entire prompt via code is also (as far as I can tell) a novel idea, and one with a lot of potential. We need more research like this! Strong LLMs are so new, and are changing so fast, that their capabilities are genuinely unknown 4 . For instance, at the start of this year, it was unclear whether LLMs could be “real agents” (i.e. whether running with tools in a loop would be useful for more than just toy applications). Now, with Codex and Claude Code, I think it’s pretty clear that they can. Many of the things we learn about AI capabilities - like o3’s ability to geolocate photos - come from informal user experimentation. In other words, they come from the AI research equivalent of 17th century “gentleman science”.

1. Incidentally, my own field - analytic philosophy - is very much the same way. Two hundred years ago, you could publish a paper with your thoughts on “what makes a good act good”. Today, in order to publish on the same topic, you have to deeply engage with those two hundred years of scholarship, putting the conversation out of reach of all but professional philosophers. It is unclear to me whether that is a good thing or not. ↩
2. Randomly chosen from recent AI papers on arXiv. I’m sure you could find a more aggressively-technical paper with a bit more effort, but it suffices for my point. ↩
3. Okay, not anyone can train a 400B param model. But if you’re willing to spend a few hundred dollars - far less than Lavoisier spent on his research - you can train a pretty capable language model on your own. ↩
4. In particular, I’d love to see more informal research on making LLMs better at coming up with new ideas. Gwern wrote about this in LLM Daydreaming, and I tried my hand at it in Why can’t language models come up with new ideas? ↩

0 views

Why formalize mathematics - more than catching errors

I read a good post by one of the authors of the Isabelle theorem prover that got me thinking. The author, Lawrence Paulson, observed that most math proofs are trivial, but that writing them out (preferably with a proof assistant) is a worthwhile activity, for reasons similar to safety checklists - “Not every obvious statement is true.” As I have been a bit obsessed with doing formalized mathematics, this got me thinking about why I have been excited to spend so many hours recently writing formalized proofs in Lean for exercises from Tao’s Real Analysis (along with this recent attempt to write a companion to Riehl’s Category Theory in Context). On a very personal level, I just like math, computers and puzzles, and writing Lean proofs feels like doing all three at once. But I do believe formalization is important beyond nerd-sniping folks like me.
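For a taste of what this looks like in practice, here is a tiny made-up example (not one of Tao’s exercises), including an “obvious” statement that the proof assistant rightly refuses to accept:

```lean
-- Addition on the natural numbers is commutative: Lean accepts this
-- because a complete proof (Nat.add_comm) exists.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- "Not every obvious statement is true": natural-number subtraction
-- truncates at zero, so (a - b) + b = a fails, e.g. for a = 0, b = 1.
example : (0 - 1) + 1 ≠ (0 : Nat) := by decide
```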

0 views
Maurycy 4 weeks ago

Please don't give Reflect Orbital money:

There’s this company promising to generate solar power at night using space-based mirrors to bounce sunlight down to solar farms. This is the single dumbest startup I’ve ever seen… and people are actually giving them money.

“We could do solar power at night.”

Their stated plan is to produce a ground brightness roughly equivalent to the full moon, but that’s not nearly enough for anything. The full moon is around a million times dimmer than the sun — about 0.3 lux compared to roughly 100,000 lux [1] — so a solar farm that normally produces 5 megawatts (enough to power a small town) would produce only 5 watts: About enough for a small lightbulb or to run a single cell phone. A single AA battery can produce around 3 watts: your TV remote has access to more power than a million-dollar solar farm with this Sunlight-as-a-Service. They would gain hundreds of times more from installing a single panel.

… and that’s if you believe the marketing claims. Doing some basic math using the published parameters — a 10 by 10 meter reflector 625 km above the ground — the maximum possible brightness is one 288,000th [2] of the sun’s. But if you’ve ever seen reflective mylar, you will know it’s far from an optically perfect mirror: The actual spot size will be hundreds of times the theoretical one, so “full moon” brightness is quite the stretch.

“… but once the satellite’s up there, we can use it forever.”

No. The planned 625 km altitude is well within Low Earth orbit, and is not fully outside of the earth’s atmosphere. The proposed satellite will be lightweight and have a huge surface area: It’s a sail in 30,000 kilometer per hour winds [3] trying to bring it crashing down to earth. Without continual refueling, it will deorbit in somewhere between a few weeks and a few months. Predicting the exact time is hard without knowing more details, but it won’t take very long.

“… but search and rescue and all that stuff.”

A bigger problem is that Low Earth orbit is, well, low. If you don’t have sunlight, odds are the satellite doesn’t either: They will be useless at night.

“Assume a spherical cow in vacuum that isotropically emits…”

Ok, let’s give the satellites thrusters with infinite fuel, portals so they have sunlight when behind the earth, and thermodynamics-violating, million-percent-efficient solar panels. Even if everything magically works, it still wouldn’t be a good idea. You can’t turn a mirror off, and satellites travel over the surface at 8 kilometers per second. The inevitable result of this is random flashes of light all over the earth. These flashes would only be about as bright as the full moon, but because they come from a point source, they will be dangerous for the same reasons lasers are: A 5 watt light bulb is kinda dim, but a 5 watt laser is a retina-destroying beast that can cause instant blindness if mishandled. If you happen to be looking in the same area of the sky, these satellite-flares-from-hell could damage your eyes. If observed through optical aids like binoculars or telescopes, they could blind you for much the same reason that looking at a solar eclipse can. … and I don’t think I have to explain how big of a problem this would be for anyone (or any animal) trying to get a good night’s sleep.

“…”

Right now, the company is likely an outright scam: It’s making impossible promises and has an impossible plan. These are not “we don’t have the technology yet” problems; they are “the earth isn’t transparent” problems. However, if people throw enough money at them, they will try to do something, and it won’t end well.
In either case: Do not give them any of your money, and don’t entrust it to anyone who will.

[1]: Yes, I know that 100,000 / 0.3 is just above 300 thousand, not a million. But this is not a calculation where a factor of 3 in either direction would change anything. This is the same reason you can safely ignore anyone talking about “percentage increases” in solar panel efficiency: It’s not a matter of a few percent, but that the idea won’t work by a factor of several hundred thousand.

[2]: Per the conservation of etendue, the spot size produced by a perfect mirror is limited by the angular size of the light source; the rest is just some simple geometry.

[3]: Satellites don’t stay up because they are too far away to experience gravity: They stay up because they are moving so fast that they fall and miss the ground. The lower a satellite is, the less time it has before it would hit the ground, so the faster it has to move. In Low Earth orbit, the needed speed is around 8 kilometers per second, or 30,000 kilometers per hour.
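For anyone who wants to redo the arithmetic, here is the back-of-the-envelope version (my own numbers: roughly 100,000 lux for direct sun and a 0.53-degree solar disc; it gives about 15 W for the farm and a dilution factor in the same few-hundred-thousand range as the 1/288,000 figure above):

```python
import math

SUN_LUX = 100_000            # direct sunlight, order of magnitude
MOON_LUX = 0.3               # full moon
ALTITUDE_M = 625_000         # stated orbit altitude
MIRROR_AREA_M2 = 10 * 10     # 10 x 10 m reflector
SUN_ANGULAR_DIAM_RAD = math.radians(0.53)

# A "full moon bright" solar farm: scale nameplate power by the brightness ratio.
farm_nameplate_w = 5_000_000
print(farm_nameplate_w * MOON_LUX / SUN_LUX)      # ~15 W, i.e. basically nothing

# Conservation of etendue: even a perfect mirror cannot make a spot smaller
# than the sun's angular size projected over the 625 km path.
spot_diameter_m = ALTITUDE_M * SUN_ANGULAR_DIAM_RAD
spot_area_m2 = math.pi * (spot_diameter_m / 2) ** 2
print(spot_diameter_m / 1000)                     # spot is ~5.8 km across
print(spot_area_m2 / MIRROR_AREA_M2)              # sunlight diluted ~260,000x
```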

0 views
iDiallo 4 weeks ago

Designing Behavior with Music

A few years back, I had a ritual. I'd walk to the nearest Starbucks, get a coffee, and bury myself in work. I came so often that I knew all the baristas and their schedules. I also started noticing the music. There were songs I loved but never managed to catch the name of, always playing at the most inconvenient times for me to Shazam them. It felt random, but I began to wonder: Was this playlist really on shuffle? Or was there a method to the music?

I never got a definitive answer from the baristas, but I started to observe a pattern. During the morning rush, around 8:30 AM when I'd desperately need to take a call, the music was always higher-tempo and noticeably louder. The kind of volume that made phone conversations nearly impossible. By mid-day, the vibe shifted to something more relaxed, almost lofi. The perfect backdrop for a deep, focused coding session when the cafe had thinned out and I could actually hear myself think. Then, after 5 PM, the "social hour" began. The music became familiar pop, at a volume that allowed for easy conversation, making the buzz of surrounding tables feel part of the atmosphere rather than a distraction.

The songs changed daily, but the strategy was consistent. The music was subtly, or not so subtly, encouraging different behaviors at different times of day. It wasn't just background noise; it was a tool. And as it turns out, my coffee-fueled hypothesis was correct. This isn't just a Starbucks quirk; it's a science-backed strategy used across the hospitality industry. The music isn't random. It's designed to influence you.

Research shows that we can broadly group cafe patrons into three archetypes, each responding differently to the sonic environment. Let's break them down.

The first is you and me, with a laptop, hoping to grind through a few hours of work. Our goal is focus, and the cafe's goal is often to prevent us from camping out all day on a single coffee.

What the Research Says: A recent field experiment confirmed that fast-tempo music leads to patrons leaving more quickly. Those exposed to fast-tempo tracks spent significantly less time in the establishment than those who heard slow-tempo music or no music at all. For the solo worker, loud or complex music creates a higher "cognitive load," making sustained concentration difficult. That upbeat, intrusive morning music isn't an accident; it's a gentle nudge to keep the line moving.

When you're trying to write code or draft an email and the music suddenly shifts to something with a driving beat and prominent vocals, your brain has to work harder to filter it out. Every decision, from what to name a variable to which sentence structure to use, becomes just a little more taxing. I'm trying to write a function and a song is stuck in my head. "I just wanna use your love tonight!" After an hour or two of this cognitive friction, packing up and heading somewhere quieter starts to feel like a relief rather than an inconvenience.

The second archetype is the pair there for conversation. You meet up with a friend you haven't seen in some time. You want to catch up, and the music acts as a double-edged sword.

What the Research Says: The key here is volume. Very loud music can shorten a visit because it makes conversing difficult. You have to lean in, raise your voice, and constantly ask "What?" Research on acoustic comfort in cafes highlights another side: music at a moderate level acts as a "sonic privacy blanket."
It masks their conversation from neighboring tables better than silence, making the pair feel more comfortable and less self-conscious. I've experienced this myself. When catching up with a friend over coffee, there's an awkward awareness in a silent cafe that everyone can hear your conversation. Are you talking too loud about that work drama? Can the person at the next table hear you discussing your dating life? But add a layer of moderate background music, and suddenly you feel like you're in your own bubble. You can speak freely without constantly monitoring your volume or censoring yourself.

The relaxed, mid-day tempo isn't just for solo workers. It's also giving pairs the acoustic privacy to linger over a second latte, perhaps order a pastry, and feel comfortable enough to stay for another thirty minutes.

The third archetype, the group of three or more, is there for the vibe. Their primary goal is to connect with each other, and the music is part of the experience.

What the Research Says: Studies on background music and consumer behavior show that for social groups, louder, more upbeat music increases physiological arousal, which translates into a sense of excitement and fun. This positive state is directly linked to impulse purchases and a longer stay. "Let's get another round!" The music effectively masks the group's own noise, allowing them to be loud without feeling disruptive.

The familiar pop tunes of the evening are an invitation to relax, stay, and spend. That energy translates into staying longer, ordering another drink, maybe splitting some appetizers. The music gives permission for the group to match its volume and enthusiasm. If the cafe is already vibrating with sound, your group's laughter doesn't feel excessive, it feels appropriate. The music is not random, it's calculated.

I have a private office in a coworking space. What I find interesting is that whenever I go to the common area, where most people work, there's always music blasting. Not just playing. Blasting. You couldn't possibly get on a meeting call in the common area, even though this is basically a place of work. For that, there are private rooms that you can rent by the minute.

Let that sink in for a moment. In a place of work, it's hard to justify music playing in the background loud enough to disrupt actual work. Unless it serves a very specific purpose: getting you to rent a private room.

The economics make sense. I did a quick count on my floor. The common area has thirty desks but only eight private rooms. If everyone could take calls at their desks, those private rooms would sit empty. But crank up the music to 75 decibels, throw in some upbeat electronic tracks with prominent basslines, and suddenly those private rooms are booked solid at $5 per 15 minutes. That's $20 per hour, per room, eight rooms, potentially running 10 hours a day. The music isn't there to help people focus. It's a $1,600 daily revenue stream disguised as ambiance.

And the best, or worst, part is that nobody complains. Because nobody wants to be the person who admits they need silence to think. We've all internalized the idea that professionals should be able to work anywhere, under any conditions. So we grimace, throw on noise-canceling headphones, and when we inevitably need to take a Zoom call, we sheepishly book a room and swipe our credit card.

Until now, this process has been relatively manual. A manager chooses a playlist or subscribes to a service (like Spotify's "Coffee House" or "Lofi Beats") and hopes it has the desired effect.
It's a best guess based on time of day and general principles. But what if a cafe could move from curating playlists to engineering soundscapes in real time? This is where generative AI will play a part. Imagine a system where:

- Simple sensors can count the number of customers in the establishment and feed real-time information to an AI.
- Point-of-sale data shows the average ticket per customer and table turnover rates.
- The AI receives a constant stream: "It's 2:30 PM. The cafe is 40% full, primarily with solo workers on laptops. Table turnover is slowing down; average stay time is now 97 minutes, up from the target of 75 minutes."
- An AI composer, trained on psychoacoustic principles and the cafe's own historical data, generates a unique, endless piece of music. It doesn't select from a library. It is created in real time.
- The manager has set a goal: "Gently increase turnover without driving people away." The AI responds by subtly shifting the generated music to a slightly faster BPM, maybe from 98 to 112 beats per minute. It introduces more repetitive, less engrossing melodies. Nothing jarring, nothing that would make someone consciously think "this music is annoying," but enough to make that coding session feel just a little more effortful.
- The feedback loop measures the result. Did the solo workers start packing up 15 minutes sooner on average? Did they look annoyed when they left, or did they seem natural? Did anyone complain to staff? The AI learns and refines its model for next time, adjusting its parameters. Maybe 112 BPM was too aggressive; next time it tries 106 BPM with slightly less complex instrumentation.

This isn't science fiction. The technology exists today. We already have:

- Generative AI that can create music in any style (MusicLM, MusicGen)
- Computer vision that can anonymously track occupancy and behavior
- Point-of-sale systems that track every metric in real time
- Machine learning systems that can optimize for complex, multi-variable outcomes

Any day now, you'll see a startup providing this service, where the ambiance of a space is not just curated but designed. A cafe could have a "High Turnover Morning" mode, a "Linger-Friendly Afternoon" mode, and a "High-Spend Social Evening" mode, with the AI seamlessly transitioning between them by generating the perfect, adaptive soundtrack.

One thing I find frustrating with AI is that when we switch to these types of systems, you'd never know. The music would always feel appropriate, never obviously manipulative. It would be perfectly calibrated to nudge you in the desired direction while remaining just below the threshold of conscious awareness. A sonic environment optimized not for your experience, but for the business's metrics.

When does ambiance become manipulation? There's a difference between playing pleasant background music and deploying an AI system that continuously analyzes your behavior and adjusts the environment to influence your decisions. One is hospitality; the other is something closer to behavioral engineering. And unlike targeted ads online, which we're at least somewhat aware of and can block, this kind of environmental manipulation is invisible, unavoidable, and operates on a subconscious level. You can't install an ad blocker for the physical world.

I don't have answers here, only questions. Should businesses be required to disclose when they're using AI to manipulate ambiance? Is there a meaningful difference between a human selecting a playlist to achieve certain outcomes and an AI doing the same thing more effectively? Does it matter if the result is that you leave a cafe five minutes sooner than you otherwise would have? These are conversations we need to have as consumers, as business owners, as a society.
Now we know that the quiet background music in your local cafe has never been just music. It's a powerful, invisible architect of behavior. And it's about to get a whole lot smarter.
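If you want to picture the mechanics, here is a deliberately crude sketch of the feedback loop described above (every name and threshold in it is invented for illustration; a real system would also feed the measured outcome back into the rule):

```python
# Purely hypothetical sketch of an "engineered soundscape" control loop.
# None of these functions or thresholds come from a real product; they stand
# in for occupancy sensors, point-of-sale data, and a generative music model.

TARGET_STAY_MIN = 75
BPM_MIN, BPM_MAX = 90, 115

def adjust_tempo(current_bpm, avg_stay_min, occupancy_pct):
    """Nudge tempo up when people linger past the target, down when the room empties."""
    bpm = current_bpm
    if avg_stay_min > TARGET_STAY_MIN and occupancy_pct > 30:
        bpm += 4   # small, below-conscious-notice push toward turnover
    elif avg_stay_min < TARGET_STAY_MIN - 15:
        bpm -= 4   # let people settle in and order more
    return max(BPM_MIN, min(BPM_MAX, bpm))

# One tick of the loop, with made-up readings matching the example above:
new_bpm = adjust_tempo(current_bpm=98, avg_stay_min=97, occupancy_pct=40)
print(new_bpm)  # 102; a generative model would then render music at this tempo
```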

1 views
Jeff Geerling 1 month ago

How much radiation can a Pi handle in space?

Late in the cycle while researching CubeSats using Pis in space, I got in touch with Ian Charnas [1], the chief engineer for the Mark Rober YouTube channel. Earlier this year, Crunchlabs launched SatGus, which is currently orbiting Earth taking 'space selfies'.

0 views
A Working Library 1 month ago

Beyond credibility

In the 1880s, a French neurologist named Jean-Martin Charcot became famous for hosting theatrical public lectures in which he put young, “hysterical” women in a hypnotic trance and then narrated the symptoms of the attacks that followed. Charcot’s focus was on documenting and classifying these symptoms, but he had few theories as to their source. A group of Charcot’s followers—among them Pierre Janet, Joseph Breuer, and Sigmund Freud—would soon eagerly compete to be the first to discover the cause of this mysterious affliction.

Where Charcot showed intense interest in the expression of hysteria, he had no curiosity for women’s own testimony; he dismissed their speech as “vocalizations.” But Freud and his compatriots landed on the novel idea of talking to the women in question. What followed were years in which they talked to many women regularly, sometimes for hours a day, in what can only be termed a collaboration between themselves and their patients. That collaboration revealed that hysteria was a condition brought about by trauma. In 1896, Freud published The Aetiology of Hysteria, asserting:

I therefore put forward the thesis that at the bottom of every case of hysteria there are one or more occurrences of premature sexual experiences, occurrences which belong to the earliest years of childhood, but which can be reproduced through the work of psycho-analysis in spite of the intervening decades. I believe that this is an important finding, the discovery of a caput Nili in neuropathology.

Judith Herman, in Trauma and Recovery, notes that The Aetiology remains one of the great texts on trauma; she describes Freud’s writing as rigorous and empathetic, his analysis largely in accord with present-day thinking about how sexual abuse begets trauma and post-traumatic symptoms, and with methods that effect treatment. But a curious thing happened once this paper was published: Freud began to furiously backpedal from his claims.

[Freud’s] correspondence makes clear that he was increasingly troubled by the radical social implications of his hypothesis. Hysteria was so common among women that if his patients’ stories were true, and if his theory were correct, he would be forced to conclude that what he called “perverted acts against children” were endemic, not only among the proletariat of Paris, where he had first studied hysteria, but also among the respectable bourgeois families of Vienna, where he had established his practice. This idea was simply unacceptable. It was beyond credibility. Faced with this dilemma, Freud stopped listening to his female patients.

The turning point is documented in the famous case of Dora. This, the last of Freud’s case studies on hysteria, reads more like a battle of wits than a cooperative venture. The interaction between Freud and Dora has been described as an “emotional combat.” In this case Freud still acknowledged the reality of his patient’s experience: the adolescent Dora was being used as a pawn in her father’s elaborate sex intrigues. Her father had essentially offered her to his friends as a sexual toy. Freud refused, however, to validate Dora’s feelings of outrage and humiliation. Instead, he insisted upon exploring her feelings of erotic excitement, as if the exploitative situation were a fulfillment of her desire. In an act Freud viewed as revenge, Dora broke off the treatment.
Rather than believe the women he had collaborated with, and so be forced to revise his image of the respectable men in his midst, he chose to maintain that respectability by refusing the validity of his own observations. He would go on to develop theories of human psychology that presumed women’s inferiority and deceitfulness—in a way, projecting his own lies onto his patients. Is this not how all supremacy thinking works? To believe that one people are less human or less intelligent or less capable is to refuse to see what’s right in front of you, over and over and over again. In order to recant his own research, Freud had to cleave his mind in two. We must refuse to tolerate supremacists in our midst because their beliefs do real and lasting harm, because their speech gives rise to terrible violence. But we must also refuse them because they are compromised. They cannot trust their own minds. And so cannot be trusted in turn. View this post on the web , subscribe to the newsletter , or reply via email .

0 views
A Working Library 1 month ago

Hurry-up-quick!

I’ve written before about the Army intelligence tests: an experiment in which millions of Army recruits were subject to an early version of the IQ test. As Stephen Jay Gould documents, the tests were chaotically—almost deliriously—managed. Illiterate recruits were given a version of the test in which proctors walked around yelling inscrutable instructions and pointing at pictures on sheets of paper; many of these recruits did not speak English as their first language, and had never before used a pencil. Gould shares some of the instructions given to the proctors:

The idea of working fast must be impressed upon the men during the maze test. Examiner and orderlies walk around the room, motioning men who are not working, and saying, “Do it, do it, hurry up, quick.” At the end of 2 minutes, examiner says, “Stop! Turn over the page to test 2.”

This is, as Gould notes, diabolical. How could a test given under these conditions possibly evaluate some innate quality of “intelligence”? But the designers of the test were so enamored of their theories of racial hierarchy that they either couldn’t perceive the irrationality of their own design, or else they knew it for a facade. The practice of the eugenicist is invariably that of the error or the con.

But that phrase, hurry up, quick, struck a bell—I had heard it before. In Le Guin’s The Word for World Is Forest, human colonizers arrive on the planet Athshea, seven light-years from Earth and rich in trees—a rarity on their deforested home world. The Athshean people are small, furred, and green; the humans name them “creechies,” deem them to be of lesser intelligence (an error, as it turns out), and proceed to enslave them, rape them, and kill them with impunity. In the opening pages, we see the Captain of New Tahiti Colony rise in the morning, and yell to an Athshean:

“Ben!” he roared, sitting up and swinging his bare feet onto the bare floor. “Hot water get-ready, hurry-up-quick!”

Le Guin’s concatenation of the phrase transforms it from merely extreme into something sinister: the way the words roll out all together escalates the inane redundancy, the empty urgency. Speed is not useful to the task at hand; the hurried pot does not boil faster. Rather, the purpose of the haste is to prevent any semblance of rest, to prohibit even a moment of peace. But rest is reserved for those deemed sufficiently wise, and sufficiently human. The Captain will eventually learn that Ben’s ingenuity far exceeds his own—a lesson that comes at a very steep price for them both. Whether our present-day and present-Earth supremacists will ever learn remains to be seen.

View this post on the web, subscribe to the newsletter, or reply via email.

0 views
A Working Library 2 months ago

Ammonite

Marguerite (“Marghe”) Taishan is about to set foot on the planet Jeep when she receives a warning: if she goes on, she will never come back. But she’s come too far, and worked too hard, and Jeep is too interesting for her to turn back now: across its continents lives a scattered human colony, forgotten for centuries, but apparently thriving. Which might be unremarkable except for the fact that all the people are women. Marghe’s job is to investigate how they have survived, and to test a vaccine against the virus that killed the men. But her own survival, and the planet’s, are more precarious, and more intertwined, than she predicts. Nicola Griffith’s first novel is about making a home, and remembering the past, and the impossible beauty and danger of knowing women are human. View this post on the web, subscribe to the newsletter, or reply via email.

0 views
Maurycy 3 months ago

Let's take a photo:

It’s been almost 200 years since the oldest surviving photograph was taken. This isn’t a description of reality, like a painting or a sculpture. This is a piece of reality caught in a trap and pinned up for viewing – even two centuries later.

To take our own pictures, we’ll need a light sensitive material. I recommend using silver chloride or iron citrate because they are relatively forgiving and don’t require any super nasty chemicals. One catch is that a lot of nicer paper has a base added, which can interfere with the chemistry: If your paper says something about being “buffered” or “archival”, add some citric acid to the first solution or soak the paper in vinegar before using it.

For the silver chloride process, brush some 10% (by weight) silver nitrate solution onto watercolor paper, let it dry, and apply a 3% table salt solution. The two salts react to form white-colored silver chloride, but when exposed to light, the precipitate turns black due to the formation of finely powdered silver. To end the exposure, wash the paper in water followed by a 5% sodium thiosulfate solution to remove the remaining silver chloride. I recommend doing a final wash with water to remove any residual chemicals.

Everything between applying the table salt and the thiosulfate should be done in a dimly lit environment to avoid unintended darkening. The sensitivity is quite low by photographic standards, so you don’t need a dark room, but having a dim room is a good idea. If washed to remove residual silver nitrate, and protected from light, the sensitized paper will stay usable for years. Just don’t let it directly touch any metals… and mark which side was treated, because the front and back look identical once dry.

For the iron based process, paint a solution containing 5 parts ferric ammonium citrate and 2 parts potassium ferricyanide onto paper. When exposed to light, the citrate reduces the iron from +3 to +2 ions, which react with the ferricyanide to form Prussian blue. Because the Prussian blue is insoluble, the residual chemicals can be removed by washing the paper in water. When overexposed, some of the Prussian blue can be reduced, bleaching the color. In this case, the blue can be restored with dilute hydrogen peroxide or by waiting a few days for the air to do its thing.

The ferric ammonium citrate must be protected from light, even while it’s still in the bottle. If kept in the dark, the chemicals should last for years, but the solutions can develop mold. The paper is highly variable, and can last anywhere from days to years depending on what’s in it.

The easiest way to record an object is to place it directly on the sensitive paper and shine a light on it. For the light source, I recommend the sun (fastest), UV lamps or bright white lights (slowest). You can also print out an inverted image onto a transparency sheet and expose through it (this also works with film negatives if you have any). Another option is to draw something onto clear glass or plastic, and use the paper to make copies. Doing this was actually quite popular before computers, and it’s why so many old technical diagrams are blue.

If you’ve played with a magnifying glass, you’ve probably seen a lens projecting an image – if not, you’re part of today’s lucky ten thousand: Hold a lens parallel to a piece of white paper, and adjust the distance until an image of what the lens is facing forms on the paper. I find this works best when pointing the lens out a window on a sunny day.
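Before building a camera around that lens, a quick note on quantities: percent by weight just means grams of solute per hundred grams of finished solution, so the mixes above work out as follows (a small helper of my own, using an arbitrary 100 g batch size):

```python
def percent_by_weight(total_grams, percent):
    """Split a batch into solute and water for a percent-by-weight solution."""
    solute = total_grams * percent / 100
    return solute, total_grams - solute

# 100 g batches of the three solutions used above (batch size is arbitrary):
for name, pct in [("silver nitrate", 10), ("table salt", 3), ("sodium thiosulfate", 5)]:
    solute, water = percent_by_weight(100, pct)
    print(f"{name}: {solute:.0f} g in {water:.0f} g of water")
```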
… so now we have a way to project an image of an object and a way to permanently record an image falling on paper. The key design parameter is the distance between the lens and the photographic paper: if it’s wrong, everything will be out of focus. You’ll need to measure a good distance for your lens, and leave some adjustability for focusing.

Before taking a photo, go somewhere dark, load in a piece of treated paper, close the camera and cover the lens. When you’re ready to take a picture, just uncover the lens and wait. Once the exposure is done, cover the lens and take the camera somewhere dark to process the paper. You really need a lot of light for this to work. Direct sunlight or bright long wave UV illumination is best.

Lenses with a longer focal length will produce a larger image, and can record more detail, but the light will be spread out. However, physically larger lenses will catch more light. Avoid lenses that are small and have a long focal length, because those will need very long exposures. For my lens, I used a 25mm Plössl telescope eyepiece, which is small, but has a correspondingly short focal length. Even so, I still had to leave the camera for 45 minutes in direct sunlight. On the bright side, I didn’t need a viewfinder, because I could just look at the photographic paper to check on focus and framing.

If your images have too much contrast, you can try pre-flashing, where the whole paper is exposed to a bit of light outside the camera. When done right, this gets the paper out of the flat region of the transfer function and prevents the darker areas of the image from being clipped.
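To put rough numbers on the lens advice above, here is a small sketch using the thin-lens equation and the f-number (my own illustration; the example lenses are made up, and real exposure times depend heavily on the paper’s sensitivity):

```python
def image_distance_mm(focal_mm, subject_mm):
    """Thin-lens equation: how far behind the lens the paper has to sit."""
    return 1 / (1 / focal_mm - 1 / subject_mm)

def relative_exposure(focal_mm, aperture_mm):
    """Image brightness scales with (aperture / focal length)^2, so exposure
    time scales with the square of the f-number."""
    return (focal_mm / aperture_mm) ** 2

# For a distant subject, the paper sits roughly one focal length behind the lens:
print(image_distance_mm(25, 10_000))   # ~25.06 mm for a 25 mm lens at 10 m

# Made-up comparison: a 25 mm lens with a 20 mm opening versus a small 50 mm
# lens with a 10 mm opening; the second needs about 16x the exposure time.
print(relative_exposure(50, 10) / relative_exposure(25, 20))
```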

0 views