Posts in Science (20 found)
DYNOMIGHT 3 days ago

Underrated reasons to be thankful V

That your dog, while she appears to love you only because she’s been adapted by evolution to appear to love you, really does love you. That if you’re a life form and you cook up a baby and copy your genes to them, you’ll find that the genes have been degraded due to oxidative stress et al., which isn’t cause for celebration, but if you find some other hopefully-hot person and randomly swap in half of their genes, your baby will still be somewhat less fit compared to you and your hopefully-hot friend on average, but now there is variance, so if you cook up several babies, one of them might be as fit or even fitter than you, and that one will likely have more babies than your other babies have, and thus complex life can persist in a universe with increasing entropy. That if we wanted to, we surely could figure out which of the 300-ish strains of rhinovirus are circulating in a given area at a given time and rapidly vaccinate people to stop it and thereby finally “cure” the common cold, and though this is too annoying to pursue right now, it seems like it’s just a matter of time. That if you look back at history, you see that plagues went from Europe to the Americas but not the other way, which suggests that urbanization and travel are great allies for infectious disease, and these both continue today but are held in check by sanitation and vaccines even while we have lots of tricks like UVC light and high-frequency sound and air filtration and waste monitoring and paying people to stay home that we’ve barely even put in play. 
That while engineered infectious diseases loom ever-larger as a potential very big problem, we also have lots of crazier tricks we could pull out like panopticon viral screening or toilet monitors or daily individualized saliva sampling or engineered microbe-resistant surfaces or even dividing society into cells with rotating interlocks or having people walk around in little personal spacesuits, and while admittedly most of this doesn’t sound awesome, I see no reason this shouldn’t be a battle that we would win. That clean water, unlimited, almost free. That dentistry. That tongues. That radioactive atoms either release a ton of energy but also quickly stop existing—a gram of Rubidium-90 scattered around your kitchen emits as much energy as ~200,000 incandescent lightbulbs but after an hour only 0.000000113g is left—or don’t put out very much energy but keep existing for a long time—a gram of Carbon-14 only puts out the equivalent of 0.0000212 light bulbs but if you start with a gram, you’ll still have 0.999879g after a year—so it isn’t actually that easy to permanently poison the environment with radiation although Cobalt-60 with its medium energy output and medium half-life is unfortunate, medical applications notwithstanding I still wish Cobalt-60 didn’t exist, screw you Cobalt-60. That while curing all cancer would only increase life expectancy by ~3 years and curing all heart disease would only increase life expectancy by ~3 years, and preventing all accidents would only increase life expectancy by ~1.5 years, if we did all of these at the same time and then a lot of other stuff too, eventually the effects would go nonlinear, so trying to cure cancer isn’t actually a waste of time, thankfully. That the peroxisome, while the mitochondria and their stupid Krebs cycle get all the attention, when a fatty-acid that’s too long for them to catabolize comes along, who you gonna call. 
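The half-life arithmetic above is easy to check yourself. A minimal sketch, assuming the standard decay law N(t) = N₀ · 2^(−t/T½) and textbook half-lives of roughly 5,730 years for Carbon-14 and roughly 158 seconds for Rubidium-90 (the post's 0.000000113 g figure implies a slightly shorter value, ~156 s):

```python
def remaining_fraction(t, half_life):
    """Fraction of a radioactive sample left after time t (same units as half_life)."""
    return 2.0 ** (-t / half_life)

# Carbon-14, half-life ~5730 years: after one year, almost all of it is left.
print(f"{remaining_fraction(1, 5730):.6f} g left from 1 g of C-14 after a year")

# Rubidium-90, half-life ~158 seconds: after an hour, essentially none is left.
print(f"{remaining_fraction(3600, 158):.3g} g left from 1 g of Rb-90 after an hour")
```

The same one-liner makes the trade-off vivid: short half-life means ferocious output but rapid disappearance, long half-life means persistence but a trickle of energy.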
That we have preferences, that there’s no agreed ordering of how good different things are, which is neat, and not something that would obviously be true for an alien species, and given our limited resources probably makes us happier on net. That cardamom, it is cheap but tastes expensive, if cardamom cost 1000× more, people would brag about how they flew to Sri Lanka so they could taste chai made with fresh cardamom and swear that it changed their whole life. That Gregory of Nyssa, he was right. That Grandma Moses, it’s not too late. That sleep, that probably evolution first made a low-energy mode so we don’t starve so fast and then layered on some maintenance processes, but the effect is that we live in a cycle and when things aren’t going your way it’s comforting that reality doesn’t stretch out before you indefinitely but instead you can look forward to a reset and a pause that’s somehow neither experienced nor skipped. That, glamorous or not, comfortable or not, cheap or not, carbon emitting or not, air travel is very safe. That, for most of the things you’re worried about, the markets are less worried than you and they have the better track record, though not the issue of your mortality. That sexual attraction to romantic love to economic unit to reproduction, it’s a strange bundle, but who are we to argue with success. That every symbolic expression recursively built from differentiable elementary functions has a derivative that can also be written as a recursive combination of elementary functions, although the latter expression may require vastly more terms. That every expression graph built from differentiable elementary functions and producing a scalar output has a gradient that can itself be written as an expression graph, and furthermore that the latter expression graph is always the same size as the first one and is easy to find, and thus that it’s possible to fit very large expression graphs to data. 
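The "gradient graph is the same size and easy to find" property is reverse-mode automatic differentiation, the machinery behind backpropagation. A minimal sketch, not any particular library's API — `Node`, `backward`, and the operator helpers are all invented here for illustration:

```python
import math

class Node:
    """One node in an expression graph: a value plus chain-rule bookkeeping."""
    def __init__(self, value, parents=()):
        self.value = value        # forward-pass value
        self.parents = parents    # (parent_node, local_derivative) pairs
        self.grad = 0.0           # accumulated d(output)/d(this node)

def add(a, b):
    return Node(a.value + b.value, [(a, 1.0), (b, 1.0)])

def mul(a, b):
    return Node(a.value * b.value, [(a, b.value), (b, a.value)])

def sin(a):
    return Node(math.sin(a.value), [(a, math.cos(a.value))])

def backward(output):
    """Push chain-rule contributions from the output back through the graph."""
    pending = [(output, 1.0)]
    while pending:
        node, upstream = pending.pop()
        node.grad += upstream
        for parent, local in node.parents:
            pending.append((parent, upstream * local))

# f(x, y) = x*y + sin(x), so df/dx = y + cos(x) and df/dy = x
x, y = Node(2.0), Node(3.0)
f = add(mul(x, y), sin(x))
backward(f)
print(x.grad, y.grad)
```

Each elementary operation records only its local derivatives, and one backward sweep combines them — which is why the gradient costs about as much as the function itself, and why fitting very large expression graphs to data is feasible at all.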
That, eerily, biological life and biological intelligence does not appear to make use of that property of expression graphs. That if you look at something and move your head around, you observe the entire light field, which is a five-dimensional function of three spatial coordinates and two angles, and yet if you do something fancy with lasers, somehow that entire light field can be stored on a single piece of normal two-dimensional film and then replayed later. That, as far as I can tell, the reason five-dimensional light fields can be stored on two-dimensional film simply cannot be explained without quite a lot of wave mechanics, a vivid example of the strangeness of this place and proof that all those physicists with their diffractions and phase conjugations really are up to something. That disposable plastic, littered or not, harmless when consumed as thousands of small particles or not, is popular for a reason. That disposable plastic, when disposed of correctly, is literally carbon sequestration, and that if/when air-derived plastic replaces dead-plankton-derived plastic, this might be incredibly convenient, although it must be said that currently the carbon in disposable plastic only represents a single-digit percentage of total carbon emissions. That rocks can be broken into pieces and then you can’t un-break the pieces but you can check that they came from the same rock, it’s basically cryptography. That the deal society has made is that if you have kids then everyone you encounter is obligated to chip in a bit to assist you, and this seems to mostly work without the need for constant grimy negotiated transactions as Econ 101 would suggest, although the exact contours of this deal seem to be a bit murky. 
That of all the humans that have ever lived, the majority lived under some kind of autocracy, with the rest distributed among tribal bands, chiefdoms, failed states, and flawed democracies, and only something like 1% enjoyed free elections and the rule of law and civil liberties and minimal corruption, yet we endured and today that number is closer to 10%, and so if you find yourself outside that set, do not lose heart. That if you were in two dimensions and you tried to eat something then maybe your body would split into two pieces since the whole path from mouth to anus would have to be disconnected, so be thankful you’re in three dimensions, although maybe you could have some kind of jigsaw-shaped digestive tract so your two pieces would only jiggle around or maybe you could use the same orifice for both purposes, remember that if you ever find yourself in two dimensions, I guess. (previously, previously, previously, previously)

0 views

Tech predictions for 2026 and beyond

We’ve caught glimpses of a future that values autonomy, empathy, and individual expertise. Where interdisciplinary cooperation influences discovery and creation at an unrelenting pace. In the coming year, we will begin the transition into a new era of AI in the human loop, not the other way around. This cycle will create massive opportunities to solve problems that truly matter.

0 views

Manifesto: AI (as a term and field) should subsume CS

In French the term “informatique” feels slightly better, as a label to describe the field, than “Computer Science” feels in English. But this is a rare occurrence for French, because most of the other terms, like “technologie de l’information”, and “science des données”, feel awkward and far from their “real” cultural counterpart, the thing in itself that we do, when we do it.

0 views
Christian Jauvin 1 week ago

We didn't get the AI failure modes that philosophy anticipated

The original idea of AI, that we got mostly through science-fiction, and also a little from the philosophy of mind and logic, imagined an entity that would implement idealized and mechanical notions of thoughts, reasoning and logic. Such an entity would of course know everything there is to know about such topics, and its behavior would thus be rooted in them. Although this would mean that the entity would generally behave in impressive and powerful ways, it was also implicitly understood that sometimes this “perfection” would lead to paradoxical behaviors and “errors”: the robot stuck in a circle in the Asimov story is the quintessential example.

0 views
Raph Koster 1 week ago

Looking back at a pandemic simulator

It’s been six years now since the early days of the Covid pandemic. People who were paying super close attention started hearing rumors about something going on in China towards the end of 2019 — my earliest posts about it on Facebook were from November that year. Even at the time, people were utterly clueless about the mathematics of how a highly infectious virus spread. I remember spending hours writing posts on various different social media sites explaining that the Infection Fatality Rates and the R value were showing that we could be looking at millions dead. People didn’t tend to believe me: “SEVERAL MILLION DEAD! Okay, I’m done. No one is predicting that. But you made me laugh. Thanks.” You can do the math yourself. Use a low average death estimate of 0.4%. Assume 60% of the population catches it and then we reach herd immunity (which is generous): But that’s with low assumptions… It was like typing to a wall. In fact, it’s pretty likely that it still is, since these days, the discourse is all about how bad the economic and educational impact of lockdowns was — and not about the fact that if the world had acted in concert and forcefully, we could have had a much better outcome than we did. The health response was too soft , the lockdown too lenient, and as a result, we took all the hits. Of course, these days people also forget just how deadly it was and how many died, and so on. We now know that the overall IFR was probably higher than 0.4%, but very strongly tilted towards older people and those with comorbidities. We also now know that herd immunity was a pipe dream — instead we managed to get vaccines out in record time and the ordinary course of viral evolution ended up reducing the death rate until now we behave as if Covid is just a deadlier flu (it isn’t, that thinking ignores long-term impact of the disease). 
The upshot: my math was not that far off — the estimated toll in the US ended up being 1.2 to 1.4 million souls, and worldwide it’s estimated as between 15 and 28.5 million dead. Plenty of denial of this, these days, and plenty of folks blaming the vaccines for what are most likely issues caused by the disease in the first place. Anyway, in the midst of it all, tired of running math in my spreadsheets (yeah, I was tracking it all in spreadsheets, what can I say?), I started thinking about why only a few sorts of people were wrapping their heads around the implications. The thing they all had in common was that they lived with exponential curves. Epidemiologists, Wall Street quants, statisticians… and game designers. Could we get more people to feel the challenges in their bones? So… I posted this to Facebook on March 24th, 2020: Three weeks ago I was idly thinking of how someone ought to make a little game that shows how the coronavirus spreads, how testing changes things, and how social distancing works. The sheer number of people who don’t get it — numerate people, who ought to be able to do math — is kind of shocking. I couldn’t help worrying at it, and have just about a whole design in my head. But I have to admit, I kinda figured someone would have made it by now. But they haven’t. It’s not even a hard game to make. Little circles on a plain field. Each circle simply bounces around. They are generated each with an age, a statistically real chance of having a co-morbid condition (diabetes, hypertension, immunosuppressed, pulmonary issues…), and crucially, a name out of a baby book. They can be in one of these states: In addition, there’s a diagnosed flag. We render asymptomatic the same as healthy. We render each of the other states differently, depending on whether the diagnosed flag is set. They show as healthy until dead, if not diagnosed. If diagnosed, you can see what stage they are in (icon or color change). The circles move and bounce. 
If an asymptomatic one touches a healthy one, they have a statistically valid chance of infecting. Circles progress through these states using simple stats. We track current counts on all of these, and show a bar graph. Yes, that means players can see that people are getting sick, but don’t know where. The player has the following buttons. The game ticks through days at an accelerated pace. It runs for 18 months worth of days. At the end of it, you have a vaccine, and the epidemic is over. Then we tell you what percentage of your little world died. Maybe with a splash screen listing every name and age of everyone who died. And we show how much money you spent. Remember, you can go negative, and it’s OK. That’s it. Ideally, it runs in a webpage. Itch.io maybe. Or maybe I have a friend with unlimited web hosting. Luxury features would be a little ini file or options screen that lets you input real world data for your town or country: percent hypertensive, age demographics, that sort of thing. Or maybe you could crowdsource it, so it’s a pulldown… Each weekend I think about building this. So far, I haven’t, and instead I try to focus on family and mental health and work. But maybe someone else has the energy. I suspect it might persuade and save lives. Some things about this that I want to point out in hindsight. Per the American Heart Association, among adults age 20 and older in the United States, the following have high blood pressure: Per the American Diabetes Association, Per studies in JAMA, Next, realize that because the disease spreads mostly inside households (where proximity means one case tends to infect others), this means that protecting the above extremely large slices of the population means either isolating them away from their families, or isolating the entire family and other regular contacts. People tend to think the at-risk population is small. It’s not. The response, for Facebook, was pretty surprising. 
The post was re-shared a lot, and designers from across the industry jumped in with tweaks to the rules. Some folks re-posted it to large groups about public initiatives, etc. There was also, of course, plenty of skepticism that something like this would make any difference at all. The first to take up the challenge was John Albano, who had his game Covid Ops up and running on itch.io a mere six days later . You can still play it there! Stuck in the house and looking for things to do. Soooo, when a fellow game dev suggested a game idea and basic ruleset along with “I wish someone would make a game like this,” I took that as a challenge to try. Tonight (this morning?), the first release of COVID OPS has been published. John’s game was pretty faithful to the sketch. You can see the comorbidities over on the left, and the way the player has clicked on 72 year old Rowan — who probably isn’t going to make it. As he updated it, he added in more detailed comorbidity data, and (unfortunately, as it turns out) made it so that people were immune after recovering from infection. And of course, like the next one I’ll talk about, John made a point of including real world resource links so that people could take action. By April 6th, another team led by Khail Santia had participated in Jamdemic 2020 and developed the first version of In the Time of Pandemia. He wrote, The compound I stay at is about to be cordoned. We’ve been contact-traced by the police, swabbed by medical personnel covered in protective gear. One of our housemates works at a government hospital and tested positive for antibodies against SARS-CoV-2. The pandemic closes in from all sides. What can a game-maker do in a time like this? I’ve been asking myself this question since the beginning of community quarantine. I’m based in Cebu City, now the top hotspot for COVID-19 in the Philippines in terms of incidence proportion. 
This game would go on to be completed by a fuller team including a mathematical epidemiologist, and In the Time of Pandemia eventually ended up topping charts on Newgrounds when it launched there in July of 2020. This game went viral and got a ton of press across the Pacific Rim . The team worked closely with universities and doctors in the Philippines and validated all the numbers. They added local flavor to their levels representing cities and neighborhoods that their local players would know. Gregg Victor Gabison, dean of the University of San Jose-Recoletos College of Information, Computer & Communications Technology, whose students play-tested the game, said, “This is the kind of game that mindful individuals would want to check out. It has substance and a storyline that connects with reality, especially during this time of pandemic.” Not only does the game have to work on a technical basis, it has to communicate how real a crisis the pandemic is in a simple, digestible manner. Dr. Mariane Faye Acma, resident physician at Medidas Medical Clinic in Valencia, Bukidnon, was consulted to assess the game’s medical plausibility. She enumerated critical thinking, analysis, and multitasking as skills developed through this game. “You decide who are the high risks, who needs to be tested and isolated, where to focus, [and] how much funds to allocate….The game will make players realize how challenging the work of the health sector is in this crisis.” “Ultimately, the game’s purpose is to give players a visceral understanding of what it takes to flatten the curve,” Santia said. I think most people have no idea that any of this happened or that I was associated with it. I only posted the design sketch on Facebook; it got reshared across a few thousand people. It wasn’t on social media, I didn’t talk about it elsewhere, and for whatever reason, I didn’t blog about it. I have had both these games listed on my CV for a while. 
Oh, I didn’t do any of the heavy lifting… all credit goes to the developers for that. There’s no question that way more than 95% of the work comes after the high-level design spec. But both games do credit me, and I count them as games I worked on. A while back, someone on Reddit said it was pathetic that I listed these. I never quite know what to make of comments like that (troll much?!?). No offense, but I’m proud of what a little design sketch turned into, and proud of the work that these teams did, and proud that one of the games got written up in the press so much; ended up being used in college classrooms; was vetted and validated by multiple experts in the field; and made a difference however slight. Peak Covid was a horrendous time. Horrendous enough that we have kind of blocked it from our memories. But I lost friends and colleagues. I still remember. Back then I wrote, This is the largest event in your lifetime. It is our World War, our Great Depression. We need to rise the occasion, and think about how we change. There is no retreat to how it used to be. There is only through. A year later, the vaccine gave us that path through, and here we are now. But as I write this, we have the first human case of H5N5 bird flu; it was only a matter of time. Maybe these games helped a few people get through it all. They were played by tens of thousands, after all. Maybe they will help next time. I know that the fact that they were made helped me get through, that making them helped John get through, helped Khail get through — in his own words: In the end, the attempt to articulate a game-maker’s perspective on COVID-19 has enabled me to somehow transcend the chaos outside and the turmoil within. It’s become a welcome respite from isolation, a thread connecting me to a diversity of talents who’ve been truly generous with their expertise and encouragement. 
As incidences continue to rise here and in many parts of the world, our hope is that the game will be of some use in showing what it takes to flatten the curve and in advocating for communities most in need. So… at minimum, they made a real difference to at least three people. And that’s not a bad thing for a game to aspire to. 328 million people in the US. 60% of that is 196 million catch it. 0.4% of that is 780,000 dead. asymptomatic but contagious symptomatic 70% of asymptomatic cases turn symptomatic after 1d10+5 days. The others stay sick for the full 21 days. Percent chance of moving from symptomatic to severe is based on comorbid conditions, but the base chance is 1 in 5 after some amount of days. Percent chance of moving from severe to critical is 1 in 4, modified by age and comorbidities, if in hospital. Otherwise, it’s double. Percent chance of moving from critical to dead is something like 1 in 5, modified by age and comorbidities, if in hospital. Otherwise, it’s double. Symptomatic, severe, and critical circles that do not progress to dead move to ‘recovered’ after 21 days since reaching symptomatic. Severe and critical circles stop moving. Hover on a circle, and you see the circle’s name and age and any comorbidities (“Alison, 64, hypertension.”) Test . This lets them click on a circle. If the circle is asymptomatic or worse, it gets the diagnosed flag. But it costs you one test. Isolate . This lets them click on a circle, and freezes them in place. Some visual indicator shows they are isolated. Note that isolated cases still progress. Hospitalize . This moves the circle to hospital. Hospital only has so many beds. Clicking on a circle already in hospital drops the circle back out in the world. Circles in hospital have half the chance or progressing to the next stage. Buy test . You only have so many tests. You have to click this button to buy more. Buy bed . You only have this many beds. You have to click this button to buy more. 
Money goes up when circles move. But you are allowed to go negative for money . Lockdown. Lastly, there is a global button that when pressed, freezes 80% of all circles. But it gradually ticks down and circles individually start to move again, and the button must be pressed again from time to time. While lockdown is running, it costs money as well as not generating it. If pressed again, it lifts the lockdown and all circles can move again. At the time that I posted, I could tell that people were desperately unwilling to enter lockdown for any extended period of time; but “The Hammer and the Dance” strategy of pulsed lockdown periods was still very much in our future. I wanted a mechanic that showed population non-compliance. There was also quite a lot of obsessing over case counts at the time, and one of the things that I really wanted to get across was that our testing was so incredibly inadequate that we really had little idea of how many cases we were dealing with and therefore what the IFR (infection fatality rate) actually was. That’s why tests are limited in the design sketch. I was also trying to get across that money was not a problem in dealing with this. You could take the money value negative because governments can choose to do that. I often pointed out in those days that if the government chose, it could send a few thousand dollars to every household every few weeks for the duration of lockdown. It would likely have been less impact to the GDP and the debt than what we actually did. I wanted names. I wanted players to understand the human cost, not just the statistics. Today, I might even suggest that an LLM generate a little biography for every fatality. Another thing that was constantly missed was the impact of comorbidities. To this day, I hear people say “ah, it only affected the old and the ill, so why not have stayed open?” To which I would reply with: For non-Hispanic whites, 33.4 percent of men and 30.7 percent of women. 
For non-Hispanic Blacks, 42.6 percent of men and 47.0 percent of women. For Mexican Americans, 30.1 percent of men and 28.8 percent of women. 34.2 million Americans, or 10.5% of the population, have diabetes. Nearly 1.6 million Americans have type 1 diabetes, including about 187,000 children and adolescents. 4.2% of the population of the USA has been diagnosed as immunocompromised by their doctor.
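The progression rules in the design sketch can be expressed as a tiny state-machine helper. This is my own illustration, not the author's code; I'm assuming "1d10+5" means one ten-sided die plus five, and that the listed base chances apply in hospital and double outside it, per the notes.

```python
import random

def onset_delay(rng: random.Random) -> int:
    """70% of asymptomatic circles turn symptomatic after 1d10+5 days."""
    return rng.randint(1, 10) + 5  # one d10 roll plus five

def progresses(base_chance: float, in_hospital: bool, rng: random.Random) -> bool:
    """Roll for progression to the next stage.

    The listed base chance (e.g. 1/4 for severe -> critical) applies in
    hospital; outside hospital it is doubled, per the design notes.
    """
    chance = base_chance if in_hospital else base_chance * 2
    return rng.random() < chance

# Example: a severe circle outside hospital rolls against 2 * (1/4) = 1/2.
rng = random.Random()
went_critical = progresses(0.25, in_hospital=False, rng=rng)
```

Age and comorbidity modifiers from the sketch would multiply `base_chance` further; they're left out here to keep the skeleton visible.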

1 views
Jim Nielsen 3 weeks ago

Down The Atomic Rabbit Hole

Over the years, I’ve been chewing on media related to nuclear weapons. This is my high-level, non-exhaustive documentation of my consumption — with links! If you’ve got recommendations I didn’t mention, send them my way. 📖 The Making of the Atomic Bomb by Richard Rhodes. This is one of those definitive histories (it’s close to 1,000 pages and won a Pulitzer Prize). It starts with the early discoveries in physics, like the splitting of the atom, and goes up to the end of WWII. I really enjoyed this one. A definite recommendation. 📖 Dark Sun: The Making of the Hydrogen Bomb by Richard Rhodes is the sequel. If you want to know how we went from atomic weapons to thermonuclear ones, I think this one will do it. It was a harder read for me, though. It got into a lot of the politics and espionage of the Cold War and I fizzled out on it (plus my library copy had to be returned; somebody else had it on hold). I’ll probably go pick it up again and finish it — eventually. 📖 The Bomb: A Life by Gerard J. DeGroot. This one piqued my interest because it covers more history of the bomb after its first use, including the testing that took place in Nevada not far from where I grew up. Having had a few different friends growing up whose parents died of cancer attributed to being “downwinders”, this part of the book hit close to home. Which reminds me of: 🎥 Downwinders & The Radioactive West from PBS. Again, growing up amongst locals who saw some of the flashes of light from the tests and experienced the fallout come down in their towns, this doc hit close to home. I had two childhood friends who lost their dads to cancer (and their families received financial compensation from the gov. for it). 📖 Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety by Eric Schlosser. Read this one years ago when it first came out.
It’s a fascinating look at humans bumbling around with terrible weapons. 🎥 Command and Control from PBS is the documentary version of the book. I suppose watch this first, and if you want to know more, there’s a whole book for you. 📖 Nuclear War: A Scenario by Annie Jacobsen. Terrifying. 🎥 House of Dynamite just came out on Netflix and is basically a dramatization of aspects of this book. 📖 The Button: The New Nuclear Arms Race and Presidential Power from Truman to Trump by William J. Perry and Tom Z. Collina. How did we get to a place where a single individual has sole authority to destroy humanity at a moment’s notice? Interesting because it’s written by former people in Washington, like the Sec. of Defense under Clinton, so you get a taste of the bureaucracy that surrounds the bomb. 🎧 Hardcore History 59 – The Destroyer of Worlds by Dan Carlin. First thing I’ve really listened to from Dan. It’s not exactly cutting-edge scholarship and doesn’t have academic-level historical rigor, but it’s a compelling story about how humans made something they’ve nearly destroyed themselves with several times. The part in here about the Cuban Missile Crisis is wild. It led me to: 📖 Nuclear Folly: A History of the Cuban Missile Crisis by Serhii Plokhy, a deep look at the crisis. This is a slow-burning audiobook I’m still chewing through. You know how you get excited about a topic and you’re like “I’m gonna learn all about that thing!” And then you start and it’s way more than you wanted to know, so you kinda back out? That’s where I am with this one. 🎥 The Bomb by PBS. A good, short primer on the bomb. It reminds me of: 🎥 Turning Point: The Bomb and the Cold War on Netflix, which is a longer, multi-episode look at the bomb during the Cold War. 📝 Last, but not least, I gotta include at least one blog! Alex Wellerstein, a historian of science and creator of the nukemap, blogs at Doomsday Machines if you want something for your RSS reader.

0 views
Jeff Geerling 3 weeks ago

Converting hot dog plasma video to sound with OpenCV

When you ground a hot dog to an AM radio tower, it generates plasma. While the hot dog's flesh is getting vaporized, a tiny plasma arc moves the air around it back and forth. And because this tower is an AM tower, it uses Amplitude Modulation , where a transmitter changes the amplitude of a carrier wave up and down. Just like a speaker cone moving up and down, the plasma arc from the hot dog turns that modulation into audible sound.
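As a rough sketch of how one might turn that into audio (my own illustration, not the actual script from the post): treat the arc's per-frame brightness as the demodulated AM envelope and rescale it to signed PCM samples. In the real pipeline, OpenCV's `cv2.VideoCapture` would supply the brightness values by averaging each frame; here a synthetic sine stands in for them.

```python
import math

def brightness_to_samples(brightness, bits=16):
    """Center a brightness series and scale it to signed PCM samples.

    The brightness of the arc tracks the AM envelope, so the series
    itself is the audio signal (at the video's frame rate).
    """
    mean = sum(brightness) / len(brightness)
    peak = max(abs(b - mean) for b in brightness) or 1.0
    full_scale = 2 ** (bits - 1) - 1
    return [round((b - mean) / peak * full_scale) for b in brightness]

# Fake "frame brightness": a 5 Hz tone sampled at 240 frames per second,
# standing in for cv2 frame means in the range 0-255.
frames = [128 + 60 * math.sin(2 * math.pi * 5 * t / 240) for t in range(240)]
samples = brightness_to_samples(frames)
```

The catch, of course, is the sample rate: at ordinary frame rates you only recover audio up to half the frame rate, which is why high-speed footage (or a photodiode instead of a camera) captures the plasma "speaker" so much better.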

1 views
Rik Huijzer 3 weeks ago

Some Thoughts on High Frequency Currents and Oxidative Stress

As I wrote before, there is much research that shows that electromagnetic radiation causes oxidative stress. This means that electromagnetic radiation can knock an electron out of a molecule, which is called ionizing radiation. Some people say that electromagnetic radiation from radio frequencies is non-ionizing, but this contradicts the fact that many studies have shown that radio frequencies also produce oxidative stress. The higher the frequency, the more ionizing (read: damaging) the radiation is. Similarly, it is known that x-rays at wavelengths below 10 nm will destroy cells. Electro...

0 views
Rik Huijzer 3 weeks ago

Cell and DNA Damage and Repair from Different Kinds of Radiation

A very interesting fact from the book _Light: The Medicine of the Future_ (1990) by Jacob Liberman is that far-ultraviolet radiation (UV-C), 100-280 nm, causes DNA damage and shortens cells' life spans, according to a study by Dr. Smith-Sonneborn. However, she found that near-ultraviolet radiation (UV-A), 320-400 nm, repaired cells and reversed aging. Even more, it repaired DNA, while damaged DNA is known to cause cancer. This suggests that sunlight can repair cells and reverse aging, since sunlight that reaches the earth is mostly above 270 nm, according to Wikipedia: !spectrum of solar...

0 views
Rik Huijzer 3 weeks ago

Solar Cell Spectral Sensitivity

I just came across the book _Solar Secrets_ (2014) by Peter Lindemann. It observes that most solar panels are optimized to perform on bright sunny days, whereas they barely perform on cloudy days. Meanwhile, there are panels that capture light at shorter wavelengths and are therefore much less affected by clouds. The author shows this in the following figure from page 17: ![Solar Cell Spectral Sensitivity a-Si solar versus c-Si solar](/files/e452ae9cbcc36dba) Here it can be seen that Amorphous Silicon (a-Si) solar panels produce energy from the 500 to 700 nm wavelength range while Crystalline...

0 views
The Coder Cafe 1 months ago

Horror Coding Stories: Therac-25

📅 Last updated: March 9, 2025 🎃 Welcome to The Coder Cafe! Today, we examine the Therac-25 accidents, where design and software failures resulted in multiple radiation overdoses and deaths. Make sure to check the Explore Further section to see if you’re able to reproduce the deadly issue. Get cozy, grab a pumpkin spice latte, and let’s begin! Therac-25 Treating cancers used to require a mix of machines, depending on tumor depth: shallow or deep. In the early 1980s, a new generation promised both from a single system. That was a big deal for hospitals: one machine instead of several meant lower maintenance and fewer systems to manage. That was the case with the Therac-25. The Therac-25 offered two therapies with selectable modes: Electron beam: Low-energy electrons for shallow tumors (e.g., skin cancer). X-ray photons: High-energy radiation for deep tumors (e.g., lung cancer). Earlier Therac models allowed switching modes with hardware circuits and physical interlocks. The new version was smaller, cheaper, and computer-controlled. Less hardware and fewer parts meant lower costs. However, no one realized soon enough that it also removed an independent safety net. On a routine day, a radiology technologist sat at the console and began entering a plan: By habit, she selected X-ray (deep mode). Then she immediately corrected it to Electron (shallow mode) and hit start. The machine halted with a message. The operator’s manual didn’t explain the code. Service materials listed the number but gave no useful guidance, so she resumed and triggered the radiation. The patient was receiving his ninth treatment. Immediately, he knew something was different. He reported a buzzing sound, later recognized as the accelerator pouring out radiation at maximum. The pain came fast; paralysis followed. He later died from radiation injury. Weeks later, a second patient endured the same incident on the same model.
Initially, the radiology technologist entered ‘ ’ for X-ray (‘▮’ is the cursor and ‘ ’ are other fields): She immediately hit Cursor Up to go back and correct the field to ‘ ’: After a rapid sequence of Return presses, she moved back down to the command area: From her perspective, the screen showed the corrected mode, so she hit return and started the treatment: Behind the scenes, the Therac-25 software ran several concurrent tasks: Data-entry task : Monitored operator inputs and edited a shared treatment-setup structure. Hardware-control task : On a periodic loop, snapshotted that same structure and positioned the turntable and magnets based on user input. Because both tasks read the same memory with no mutual exclusion, there was a short window (on the order of seconds) in which the hardware-control task used a different value than the one displayed on the screen. As a result: The UI showed Electron mode, which looked correct to the operator. The hardware-control task had snapshotted stale data and marked the system as ready even though critical elements (e.g., turntable position, scanning magnets/accessories) were not yet aligned with electron mode. When treatment was started, the machine delivered an effectively unscanned, high-intensity electron beam, causing a massive overdose. This is a race condition example: the outcome depends on the timing of events, here, the input cadence of the technologist. Depending on the timing, the system could enter a fatal state, with one process seeing ‘ ’ while another saw ‘ ’. The manufacturer later confirmed the error could not be reproduced reliably in testing. The timing had to line up just right, which made the bug elusive. They initially misdiagnosed it as a hardware fault and applied only minor fixes. Unfortunately, the speed of operator editing was the key trigger that exposed this software race. The problem could have stopped here, but it didn’t. 
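The shared-memory mistake is easy to reconstruct in miniature. Here is my own toy sketch (Python, not the Therac's actual code): two "tasks" share one setup record with no lock, and the outcome depends purely on whether the hardware's snapshot happens before or after the operator's correction lands.

```python
from dataclasses import dataclass

@dataclass
class Setup:
    mode: str = "xray"  # the shared treatment-setup record

def run(edit_before_snapshot: bool) -> tuple[str, str]:
    """Simulate one interleaving of the two unsynchronized tasks.

    Returns (what the screen shows, what the hardware acts on).
    """
    setup = Setup()
    displayed = "xray"

    def operator_edit():
        nonlocal displayed
        setup.mode = "electron"  # data-entry task writes the record...
        displayed = "electron"   # ...and the screen reflects the correction

    def hardware_snapshot() -> str:
        return setup.mode        # hardware task reads the same record

    if edit_before_snapshot:
        operator_edit()                  # slow editing: edit lands first
        acting_on = hardware_snapshot()
    else:
        acting_on = hardware_snapshot()  # fast editing: stale read wins
        operator_edit()
    return displayed, acting_on
```

With `run(True)` the screen and hardware agree on "electron"; with `run(False)` the screen says "electron" while the hardware configures for "xray", which is exactly the fatal mismatch, triggered by nothing more than how fast the operator typed.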
Months later, another fatal overdose occurred, this time caused by a different software defect. It wasn’t a timing race. This time, the issue was a counter overflow within the control program. The software used an internal counter to track how many times certain setup operations ran. After the counter exceeded its maximum value, it wrapped back to zero. That arithmetic overflow created a window where a critical safety check was bypassed, allowing the beam to turn on without the proper accessories in place. Again, the Therac-25 fired a high-intensity beam without the proper hardware configuration. Both the race condition and the counter overflow stemmed from the same design flaw: the belief that software alone could enforce safety. The Therac-25 showed, in tragic terms, that without independent safeguards, small coding errors can have catastrophic consequences. We should know that whether it’s software, hardware, or a human process, every single safeguard has inherent flaws. Therefore, in complex systems, safety should be layered, as illustrated by the Swiss cheese model.

Credits: In total, there were six known radiation overdoses involving the Therac-25, and at least three were fatal.

Missing direction in your tech career? At The Coder Cafe, we serve timeless concepts with your coffee to help you master the fundamentals. Written by a Google SWE and trusted by thousands of readers, we support your growth as an engineer, one coffee at a time.

Resources: The Worst Computer Bugs in History: Race conditions in Therac-25 · Killed By A Machine: The Therac-25 · An Investigation of the Therac-25 Accidents

More from the Reliability category: Adaptive LIFO · Resilient, Fault-tolerant, Robust, or Reliable? · Lurking Variables

I created a Docker image based on a C implementation from an MIT course simulating the operator console of the Therac-25 interface. You can run the UI using Docker. Simulator commands: Beam Type: ‘ ’ or ‘ ’; Command: ‘ ’ for beam on, or ‘ ’ to quit the simulator.
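The wraparound failure mode is also easy to demonstrate in miniature. A simplified sketch (my own illustration, not the Therac's code): treat the counter as the one-byte flag it was, where nonzero meant "run the safety check", and watch the check get silently skipped every 256th pass.

```python
def tick(counter: int) -> int:
    """Increment an 8-bit counter, wrapping at 256 like a single byte."""
    return (counter + 1) % 256

def safety_check_runs(counter: int) -> bool:
    """The interlock only fired when the flag byte was nonzero."""
    return counter != 0

counter = 0
skipped = 0
for _ in range(512):
    counter = tick(counter)
    if not safety_check_runs(counter):
        # In the real machine, this window let the beam turn on
        # without the proper accessories in place.
        skipped += 1
```

Over 512 passes the check is skipped twice, at passes 256 and 512. The fix is equally small: set the flag to a constant instead of incrementing it, so it can never wrap to zero.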
👉 Try to trigger the error based on the scenario discussed.

0 views
A Working Library 1 months ago

Undersense

James Hillman does not want you to interpret your dreams: Analytical tearing apart is one thing, and conceptual interpretation another. We can have analysis without interpretation. Interpretations turn dream into its meaning. Dream is replaced with translation. But dissection cuts into the flesh and bone of the image, examining the tissue of its internal connections, and moves around among its bits, though the body of the dream is still on the table. We haven’t asked what does it mean, but who and what and how it is. That is, to interpret the dream is to exploit it, as a capitalist exploits a vein of coal, transforming those fossilized remains into a commodity, something that can be measured, evaluated, bought and sold. Hillman is demanding that you not turn the dream into something else but that you let it be what it is, that you approach it as a keen and attentive observer, not trying to transform it but accepting it, acknowledging it, living with it. (As I read this, I had a sharp image of Rowan in The Lost Steersman , dissecting the body of a creature from the outer lands, finding organs and tissues whose purpose she could not fathom but could—and did—describe in intricate detail.) There’s an attitude here that I think can be expanded to any work in which observation, noticing, witnessing what is before us is privileged over trying to make it into something else. There is a fundamental humility to working in this way, to acknowledging that our understanding of the world around us is always incomplete. This is an incompleteness without judgment: not incomplete as inferior or flawed but incomplete as open-ended, infinite, wondrous. We can move in this direction by means of hermeneutics, following Plato’s idea of hyponoia , “undersense,” “deeper meaning,” which is an ancient way of putting Freud’s idea of “latent.” The search for undersense is what we express in common speech as the desire to understand.
We want to get below what is going on and see its basis, its fundamentals, how and where it is grounded. The need to understand more deeply, this search for deeper grounding, is like a call from Hades to move toward his deeper intelligence. All these movements of hyponoia, leading toward an understanding that gains ground and makes matter, are work. Work is the making of matter, the movement of energy from one system to another. The work of making sense, of digging for undersense, is work that matters. I take undersense to mean, in part, a kind of feeling or exploration, of reaching your hands into the dirt, of tearing apart the body of the dream with no preconceived notions of what you will find. And not only dreams. The search for undersense is worthy also of the waking world, the world of daylight. In a world in which the creation and persistence of knowledge is threatened and fragile, we need under sense more than under standing, the exploration and observation that gains ground and makes matter. There’s an argument here for the kind of knowledge that you feel in your bones, that gets under your fingernails, that can’t be lifted away and perverted by a thieving bot. Knowledge that is steady, solid, rooted in the way roots hold tightly to the earth, defended from rain and flood, from being washed away with each passing storm.

0 views
iDiallo 1 months ago

Galactic Timekeeping

Yes, I loved Andor. It was such a breath of fresh air in the Star Wars universe. The kind of storytelling that made me feel like a kid again, waiting impatiently for my father to bring home VHS tapes of Episodes 5 and 6. I wouldn't call myself a die-hard fan, but I've always appreciated the original trilogy. After binging both seasons of Andor, I immediately rewatched Rogue One , which of course meant I had to revisit A New Hope again. And through it all, one thing kept nagging at me. One question I had. What time is it? In A New Hope , Han Solo, piloting the Millennium Falcon through hyperspace, casually mentions: "We should be at Alderaan about 0200 hours." And they are onto the next scene with R2D2. Except I'm like, wait a minute. What does "0200 hours" actually mean in an intergalactic civilization? When you're travelling through hyperspace between star systems, each with their own planets spinning at different rates around different suns, what does "2:00 AM" even refer to? Bear with me, I'm serious. Time is fundamentally local. Here on Earth, we define a "day" by our planet's rotation relative to the Sun. One complete spin gives us 24 hours. A "year" is one orbit around our star. These measurements are essentially tied to our specific solar neighborhood. So how does time work when you're hopping between solar systems as casually as we hop between time zones? Before we go any further into a galaxy far, far away, let's look at how we're handling timekeeping right now as we begin exploring our own solar system. NASA mission controllers for the Curiosity rover famously lived on "Mars Time" during their missions . A Martian day, called a "sol", is around 24 hours and 40 minutes long. To stay synchronized with the rovers' daylight operations, mission control teams had their work shifts start 40 minutes later each Earth day. They wore special watches that displayed time in Mars sols instead of Earth hours. 
Engineers would arrive at work in California at what felt like 3:00 AM one week, then noon the next, then evening, then back to the middle of the night. All while technically working the "same" shift on Mars. Families were disrupted. Sleep schedules were destroyed. And of course, "Baby sitters don't work on Mars time." And this was just for one other planet in our own solar system. One team member described it as living "perpetually jet-lagged." After several months, NASA had to abandon pure Mars time because it was simply unsustainable for human biology. Our circadian rhythms can only be stretched so much. With the Artemis missions planning to establish a continuous human presence on the Moon, NASA and international space agencies are now trying to define an even more complicated system: Lunar Standard Time. A lunar "day", from one sunrise to the next, lasts about 29.5 Earth days. That's roughly 14 Earth days of continuous sunlight followed by 14 Earth days of darkness. You obviously can't work for two weeks straight and then hibernate for two more. But that's not all. On the Moon, time itself moves differently. Because of the Moon's weaker gravity and different velocity relative to Earth, clocks on the Moon tick at a slightly different rate than clocks on Earth. It's a microscopic difference (about 56 microseconds per day), but for precision navigation, communication satellites, and coordinated operations, it matters. NASA is actively working to create a unified timekeeping framework that accounts for these relativistic effects while still allowing coordination between lunar operations and Earth-based mission control. And again, this is all within our tiny Earth-Moon system, sharing the same star. If we're struggling to coordinate time between two bodies in the same gravitational system, how would an entire galaxy manage it?
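The arithmetic behind that slowly wandering shift is simple enough to sketch. A sol is about 24 h 39 m 35 s (the "around 24 hours and 40 minutes" above), so a "same time every sol" schedule slides roughly 40 minutes around the Earth clock per day:

```python
# How far a Mars-time work shift drifts around the Earth clock.
SOL_MINUTES = 24 * 60 + 39 + 35 / 60       # one Martian sol, in Earth minutes
DAY_MINUTES = 24 * 60                      # one Earth day, in minutes
drift_per_sol = SOL_MINUTES - DAY_MINUTES  # ~39.6 minutes of slip per day

def shift_start(day: int) -> float:
    """Earth-clock offset (in minutes) of a fixed Mars-time shift on a given day."""
    return (day * drift_per_sol) % DAY_MINUTES
```

After about 18 days the offset reaches roughly 712 minutes, nearly half an Earth day, which is why a shift that started at noon drifts through 3:00 AM and back again over a few weeks.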
In Star Wars the solution, according to the expanded universe lore , is this: "A standard year, also known more simply as a year or formally as Galactic Standard Year, was a standard measurement of time in the galaxy. The term year often referred to a single revolution of a planet around its star, the duration of which varied between planets; the standard year was specifically a Coruscant year, which was the galactic standard. The Coruscant solar cycle was 368 days long with a day consisting of 24 standard hours." So the galaxy has standardized on Coruscant, the political and cultural capital, as the reference point for time. We can think of it as Galactic Greenwich Mean Time, with Coruscant serving as the Prime Meridian of the galaxy. This makes a certain amount of political and practical sense. Just as we arbitrarily chose a line through Greenwich, England, as the zero point for our time zones, a galactic civilization would need to pick some reference frame. Coruscant, as the seat of government for millennia, is a logical choice. But I'm still not convinced that it is this simple. Are those "24 standard hours" actually standard everywhere, or just on Coruscant? Let's think through what Galactic Standard Time would actually require: Tatooine has a different rotation period than Coruscant. Hoth probably has a different day length than Bespin. Some planets might have extremely long days (like Venus, which takes 243 Earth days to rotate once). Some might rotate so fast that "days" are meaningless. Gas giants like Bespin might not have a clear surface to even define rotation against. For local populations who never leave their planet, this is fine. They just live by their star's rhythm. But the moment you have interplanetary travel, trade, and military coordination, you need a common reference frame. This was too complicated for me to fully grasp, but here is how I understood it. 
The theory of relativity tells us that time passes at different rates depending on your velocity and the strength of the gravitational field you're in:

- A clock on a planet with stronger gravity runs slower than one on a planet with weaker gravity
- A clock on a fast-moving ship runs slower than one on a stationary planet
- Hyperspace travel, which somehow exceeds the speed of light, would create all kinds of relativistic artifacts

We see this in our own GPS satellites. They experience time about 38 microseconds faster per day than clocks on Earth's surface because they're in a weaker gravitational field, even though they're also moving quickly (which slows time down). Both effects must be constantly corrected or GPS coordinates would drift by kilometers each day. Now imagine you're the Empire trying to coordinate an attack. One Star Destroyer has been orbiting a high-gravity planet. Another has been traveling at relativistic speeds through deep space. A third has been in hyperspace. When they all rendezvous, their clocks will have drifted. How much? Well, we don't really know the physics of hyperspace or the precise gravitational fields involved, so we can't say. But it wouldn't be trivial. Even if you had perfectly synchronized clocks, there's still the problem of knowing what time it is elsewhere. Light takes time to travel. A lot of time. Earth is about 8 light-minutes from the Sun, meaning if the Sun exploded right now, we wouldn't know for 8 minutes. Voyager 1, humanity's most distant spacecraft, is currently over 23 light-hours away. A signal from there takes nearly a full Earth day to reach us. The Star Wars galaxy is approximately 120,000 light-years in diameter (according to the lore, again). Even with the HoloNet (their faster-than-light communication system), there would still be transmission delays, signal degradation, and the fundamental question of "which moment in time are we synchronizing to?" If Coruscant sends out a time signal, and a planet on the Outer Rim receives it three days later, whose "now" are they synchronizing to? In relativity, there is no universal "now." Time is not an absolute, objective thing that ticks uniformly throughout the universe. It's relative to your frame of reference.

On Earth, we all roughly share the same frame of reference, so we can agree on UTC and time zones. But in a galaxy with millions of worlds, each moving at different velocities relative to each other, each in different gravitational fields, with ships constantly jumping through hyperspace, which frame of reference do you pick? You could arbitrarily say "Coruscant's reference frame is the standard," but that doesn't make the physics go away. A ship traveling at near-light-speed would still experience time differently. Any rebel operation requiring split-second timing would fall apart. Despite all this complexity, the characters in Star Wars behave as if time is simple and universal. They "seem" to use a dual-time system. Galactic Standard Time (GST) would be for official, galaxy-wide coordination:

- Military operations ("All fighters, attack formation at 0430 GST")
- Senate sessions and government business
- Hyperspace travel schedules
- Banking and financial markets
- HoloNet news broadcasts

When Mon Mothma coordinates with Rebel cells across the galaxy in Andor, they're almost certainly using GST. When an X-Wing pilot gets a mission briefing, the launch time is in GST so the entire fleet stays synchronized. Local Planetary Time (LPT) is for daily life:

- Work schedules
- Sleep cycles
- Business hours
- Social conventions ("let's meet for lunch")

The workday on Ferrix follows Ferrix's sun. A cantina on Tatooine opens when Tatooine's twin suns rise. A farmer on Aldhani plants crops according to Aldhani's seasons. A traveler would need to track both, like we carry smartphones with clocks showing both home time and local time. An X-Wing pilot might wake up at 0600 LPT (local dawn on Yavin 4) for a mission launching at 1430 GST (coordinated across the fleet). This is something I couldn't let go of when watching the show. In Andor, Cassian often references "night" and "day", saying things like "we'll leave in the morning" or "it's the middle of the night." When someone on a spaceship says "it's the middle of the night," or even "yesterday," what do they mean? There's no day-night cycle in space. They're not experiencing a sunset. The most logical explanation is that they've internalized the 24-hour Coruscant cycle as their personal rhythm.

"Night" means the GST clock reads 0200, and the ship's lights are probably dimmed to simulate a diurnal cycle, helping regulate circadian rhythms. "Morning" means 0800 GST, and the lights brighten. Space travelers have essentially become Coruscant-native in terms of their biological and cultural clock, regardless of where they actually are. It's an artificial rhythm, separate from any natural cycle, but necessary for maintaining order and sanity in an artificial environment. I really wanted to present this in a way that makes sense. But the truth is, realistic galactic timekeeping would be mind-numbingly complex. You'd somehow need:

- Relativistic corrections for every inhabited world's gravitational field
- Constant recalibration for ships entering and exiting hyperspace
- A faster-than-light communication network that somehow maintains causality
- Atomic clock networks distributed across the galaxy, all quantum-entangled or connected through some exotic physics
- Sophisticated algorithms running continuously to keep everything synchronized
- Probably a dedicated branch of the Imperial bureaucracy just to maintain the Galactic Time Standard

It would make our International Telecommunication Union's work on UTC look like child's play. But Star Wars isn't hard science fiction. It's a fairy tale set in space. A story about heroes, empires, and rebellions. The starfighters make noise in the vacuum of space. The ships bank and turn like WWII fighters despite having no air resistance. Gravity works the same everywhere regardless of planet size. So when Han Solo says "0200 hours," just pretend he is in Kansas. We accept that somewhere, somehow, the galaxy has solved this complex problem. Maybe some genius inventor in the Old Republic created a McGuffin that uses hyperspace itself as a universal reference frame, keeping every clock in the galaxy in perfect sync through some exotic quantum effect. Maybe the most impressive piece of technology in the Star Wars universe isn't the Death Star, which blows up. Or the hyperdrive, which seems to fail half the time. The true technological and bureaucratic marvel is the invisible, unbelievably complex clock network that must be running flawlessly, constantly, behind the scenes across 120,000 light-years. It suggests deep-seated control, stability, and sheer organizational power for the Empire. That might be the real foundation of galactic power, hidden right there in plain sight. ... or maybe the Force did it!

Maybe I took this a bit too seriously. But along the way, I was having too much fun reading about how NASA deals with time, and the deep lore behind Star Wars. I'm almost starting to understand why the Empire is trying to keep those pesky rebels at bay. I enjoyed watching Andor. Remember, Syril is a villain. Yes, you are on his side sometimes, they made him look human, but he is still a bad guy. There, I said it. They can't make a third season because Rogue One is what comes next. But I think I've earned the right to just enjoy watching Cassian Andor glance at his chrono and say "We leave at dawn", wherever and whenever that is.
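For what it's worth, the GPS figure of roughly 38 microseconds per day really does fall out of the two competing effects. A back-of-envelope sketch using textbook constants (and ignoring Earth's rotation and orbital eccentricity, so the numbers are approximate):

```python
# Net relativistic clock drift of a GPS satellite vs. the ground, per day.
MU = 3.986004e14        # Earth's gravitational parameter GM, m^3/s^2
C = 2.99792458e8        # speed of light, m/s
R_EARTH = 6.371e6       # mean Earth radius, m
R_GPS = 2.6561e7        # GPS orbital radius, m
SECONDS_PER_DAY = 86400

# Gravitational effect: the satellite sits higher in the potential well,
# so its clock runs FAST relative to a ground clock (~ +45.7 us/day).
grav = MU / C**2 * (1 / R_EARTH - 1 / R_GPS) * SECONDS_PER_DAY

# Velocity effect: orbital speed v = sqrt(GM/r) slows the satellite's
# clock (~ -7.2 us/day).
v_squared = MU / R_GPS
vel = v_squared / (2 * C**2) * SECONDS_PER_DAY

net_microseconds = (grav - vel) * 1e6   # roughly 38.5 us/day, clock fast
```

Both corrections are baked into the satellites before launch (their clocks are deliberately tuned slow on the ground), which is a nice preview of what any "Galactic Time Standard" would have to do for every ship and world, continuously.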

0 views
ava's blog 1 months ago

pain management

Allow me to crash out for a second. For roughly a month now, I’ve been experiencing a flareup in my spondyloarthritis (Ankylosing Spondylitis, or Bechterew’s disease…). This is a type of arthritis that primarily affects the spine and usually some other joints. I first noticed it in the base of my right thumb, which was painful and a bit stiff (this has now mostly resolved), and plantar fasciitis (the fascia in your foot arch, basically; my body loves attacking in this area for some reason, as I used to have frequent Achilles tendonitis as a teen). This first caused unexpected pain in some moments of walking and also resulted in issues using my phone, using a controller, and everyday stuff that needs thumb mobility and pressure on the thumb. I also noticed general aches, especially after resting and following some exercise. One example was having weirdly stiff elbows and shoulders after indoor cycling, which I hadn’t had in quite a while after treatment worked. This was followed by sacroiliitis (inflammation where hip and spine meet in the lower back), first on the right and now on both sides, and sharp pain in the upper thoracic spine (between the shoulder blades). That means while walking, sitting, and lying down, I have pain in the whole area of my lower back and hips, and as I breathe and my upper spine moves, I am in pain as well. Every time I breathe in, there’s a knife in my back. As nerves are affected too, I have shooting pains down my legs and into my shoulders and neck. My right leg occasionally randomly collapses away from under me due to this, but I haven’t fallen yet. Unfortunately, everything gets worse with rest (both sitting and lying down), but obviously, I can’t exercise 24/7. It’s generally difficult to hit the sweet spot each day where exercise helps and doesn’t further aggravate everything.
I recently had such a great workout (30 minutes treadmill, 20 minutes cycling, a 20-minute mix of yoga and pilates) that made me feel as if I had just gotten a dose of heavy painkillers, but that relief only lasted for about two hours max. I still need to sleep, study, and do an office job. I tried to go back to a low dose of Prednisone and it obviously helps a bit, but I don't wanna be on it - I was on 80mg last year, tapered down to 50mg, and then couldn't go lower for months until new treatment worked. I had the whole experience of side effects, even medically induced Cushing's Disease and issues with my blood sugar. When I recently tried between 2mg-4mg, I was immediately back with the constant thirst and peeing (= blood sugar issues). It was so disruptive I had to stop. It's sad seeing everything fall apart again. I see it in the way more stuff is lying around in the apartment than usual. Chores take longer or get procrastinated on. I am low energy. I barely go to the gym anymore and prefer to exercise at home. I heat up a heating pad for my back like 4 times a day, and it's not more than that only because I'm often too lazy and stubborn to do it more often. I try so hard not to take painkillers. You aren't supposed to take ibuprofen with Crohn's disease, but I have to sometimes. But when I max out my limit for it, I add paracetamol, which works less well but helps at least somewhat. I'm especially careful with that so I don't harm my liver. So it all becomes this big monster of trying to get the energy to exercise and making time for it in my day, then holding myself over with heating pads and stretches and distractions, before turning to painkillers as a last resort, and alternating/mixing them. I almost treat it like a luxury good, something to indulge in, because of weird shaming around it.
I remember this absolutely disrespectful interview with a doctor I read this year in which he was clutching his pearls about people taking ibuprofen and that it’s so dangerous and poisonous and that people should just stop. He talked about it as if people just take these for fun over a papercut. I wish I could shit on his doormat. Peak example of a healthy and non-menstruating person with zero empathy. So every couple days, I allow myself to take them, and my inner monologue is really like “Oh well, I deserve this. I’m splurging on it. It’s okay for today, I held out long enough. But it is kind of extra. Maybe I could have skipped this one too. Is it even bad enough?” And then they kick in and I truly realize how bad it was. You get used to it after a while, your brain kind of tuning out some of it, but it’s still this constant static sound in the background that ruins everything. Realistically, if I’m being honest, I would need painkillers every morning and evening every single day. And if we’re being even more real, they would not be the freely available pills, but the highly controlled patches. But that also opens up a whole lot of other possible issues. It sucks! It fucking sucks. I throw myself into my studies, into my volunteer work, into lengthy blog posts and anything like that so there is finally some focus away from my body. If I’m in a flow state, I don’t have to be in here, I don’t have to witness this. I love slowly getting tired on the sofa and falling asleep while doing something else (like watching something) and I love being busy with something (like studying late) until I’m dead tired and then crashing into bed, falling asleep quickly. Because the alternative is going to bed in a timely manner and lying awake, being hyperaware of everything that hurts, and it starts hurting more and more as time goes on, and I’m lying there wondering how I can possibly manage the next 30 years like this, wishing it was over. 
I don’t have to endure this forever, of course. This flareup just needs to pass, or I need to switch medications, or I finally try and get a proper pain management going for these phases, and then everything goes back to normal. But in these moments, none of that matters. I just want it to be over. Every morning I get teleported back into this hurtful mess, and everything that would help causes more issues. It makes me angry and close to tears all the time, and makes me worry that I’ve developed antibodies to infliximab. My injection this week changed nothing. Next week will be super busy with traveling and attending events, and I’m tired of portioning out the relief. I’ll take what I need to make it, and I hope the rheumatology appointment the week after will be helpful. If anyone takes anything away from this, it should be the obvious fact that not all pain can be successfully treated with lifestyle changes, and people aren’t necessarily taking “the easy way out” with painkillers. And if you look at people and think you know what causes their pain, you should consider that you never know what came first - the pain or the other things. With pain like that, it’s no wonder many people choose to avoid exercise, eat to feel happy, or self-medicate with drugs that are easier to get than a fent patch; and if people regularly get stuck on months of Prednisone, that does not help. My usually ~58kg self ballooned up to 75kg on ~6 months of Prednisone. After a year off, I’m 10kg down, 7 more to go. Published 26 Oct, 2025

Maurycy 1 month ago

Some hot rocks:

I recently went on a rock collecting trip, but apart from the usual — quartz, K feldspar crystals, garnet, etc — I found some slightly radioactive rocks: All of these were found using my prospecting scintillator, but I took measurements with a Radiacode 102 — a very common hobbyist detector — so that other people can compare readings. Despite being small, it is still a gamma scintillator, so the count rates are much higher than with any G-M tube. None of these are crazy hot, but they were all collected off the surface: I didn’t bring any good digging equipment on the trip. (I really should have, considering how my detector is able to pick up deeply buried specimens.) The biggest hazard with my rocks is dropping them on your toes. Even if you were to grind them up and inhale the dust, the host rock is much more of a danger than the radioactivity. I’ve personally been in multiple residential and office buildings that are more radioactive than my specimens because of the stone that was used to construct them. Also, if you have any “Anti-Radiation” or “Bio Energy” or “Quantum Energy” wellness products: they are quite the opposite. (And many are spicier than my rocks.) …or how about some nice decorative glass? It glows

iDiallo 1 month ago

Why We Don't Have Flying Cars

Imagine this: You walk up to your driveway where your car is parked. You reach for the handle that automatically senses your presence, confirms your identity, and opens to welcome you in. You sit down, the controls appear in front of you, and your seatbelt secures itself around your waist. Instead of driving forward onto the pavement, you take off. You soar into the skies like an eagle and fly to your destination. This is what technology promises: freedom, power, and something undeniably cool. The part we fail to imagine is what happens when your engine sputters before takeoff. What happens when you reach the sky and there are thousands of other vehicles in the air, all trying to remain in those artificial lanes? How do we deal with traffic? Which directions are we safely allowed to go? And how high? We have flying cars today. They're called helicopters. In understanding the helicopter, we understand why our dream remains a dream. There's nothing romantic about helicopters. They're deafeningly loud and incredibly expensive to buy and maintain. They require highly skilled pilots, are dangerously vulnerable to engine failure, and present a logistical nightmare of three-dimensional traffic control. I can't even picture what a million of them buzzing between skyscrapers would look like. Chaos, noise pollution, and a new form of gridlock in the sky. Even with smaller drones, as the technology evolves and becomes familiar, cities are creating regulations around them, sucking all the fun and freedom out in favor of safety and security. This leads me to believe that the whole idea of flying cars and drones is more about freedom than practicality. And unregulated freedom is impossible. This isn't limited to flying cars. The initial, pure idea is always intoxicating. But the moment we build a prototype, we're forced to confront the messy reality. In 1993, a Japanese man brought a video phone to demo for my father as a new tech to adopt in our embassy. 
I was only a child, but I remember the screen lighting up with a video feed of the man sitting right next to my father. I could only imagine the possibilities. It was something I thought only existed in sci-fi movies. If this was possible, teleportation couldn't be too far away. In my imagined future, we'd sit at a table with life-like projections of colleagues from across the globe, feeling as if we were in the same room. It would be the end of business travel, a world without borders. But now that the technology is ubiquitous, the term "Zoom fatigue" is trending. It's ironic when I get on a call and see that 95% of my colleagues have their cameras turned off. In movies, communication was spontaneous. You press a button, your colleague appears as a hologram, and you converse. In reality, there's a calendar invite, a link, and the awkward "you're on mute!" dance. It's a scheduled performance, not an organic interaction. And then there are people who have perfect lighting, high-speed internet, and a quiet home office. And those who don't. Video calls have made us realize the importance of physical space and connection. Facebook's metaverse didn't resolve this. Imagine having a device that holds all of human knowledge at the click of a button. For generations, this was the ultimate dream of librarians and educators. It would create a society of enlightened, informed citizens. And we got the smartphone. Despite being a marvel of technology, the library of the world at your fingertips, it hasn't ushered us into utopia. The attention economy it brought along has turned it into a slot machine designed to hijack our dopamine cycles. You may have Wikipedia open in one tab, but right next to it is TikTok. The medium has reshaped the message from "seek knowledge" to "consume content." While you have access to information, misinformation is just as rampant.
The constant stimulation kills moments of quiet reflection, which are often the birthplace of creativity and deep thought. In The Machine Stops by E.M. Forster, every desire can be delivered by pulling a lever on the machine. Whether it's food, a device, or toilet paper. The machine delivers everything. With Amazon, we've created a pretty similar scenario. I ordered replacement wheels for my trash bin one evening, expecting them to arrive after a couple of days. The very next morning, they were waiting at my doorstep. Amazing. But this isn't magical. Behind it are real human workers who labor without benefits, job security, or predictable income. They have an algorithmic boss that can be more demanding than a human one. That promise of instant delivery has created a shadow workforce of people dealing with traffic, poor weather, and difficult customers, all while racing against a timer. The convenience for the user is built on the stress of the driver. The dream of a meal from anywhere didn't account for the reality of our cities now being clogged with double-parked delivery scooters and a constant stream of gig workers. Every technological dream follows the same pattern. The initial vision is pure, focusing only on the benefit. The freedom, the convenience, the power. But reality is always a compromise, a negotiation with physics, economics, and most importantly, human psychology and society. We wanted flying cars. We understood the problems. And we got helicopters with a mountain of regulations instead. That's probably for the best. The lesson isn't to stop dreaming or stop innovating. It's to dream with our eyes open. When we imagine the future, we need to ask not just "what will this enable?" but also "what will this cost?" Not in dollars, but in human terms. In stress, inequality, unintended consequences, and the things we'll lose along the way. We're great at imagining benefits and terrible at predicting costs. 
And until we get better at the second part, every flying car we build will remain grounded by the weight of what we failed to consider.

iDiallo 1 month ago

5 Years Away

AGI has been "5 years away" for the past decade. The Tesla Roadster? Five years away since 2014. Tesla's Level 5 self-driving? Promised by 2017, then quietly pushed into the perpetual five-year window. If you've been paying attention, you've probably noticed this pattern extends far beyond Silicon Valley. Why do we keep landing on this specific timeframe? Psychologically, five years is close enough to feel relevant. We can easily imagine ourselves five years from now, still affected by these innovations. Yet it's distant enough to seem plausible for ambitious goals. More importantly, it's far enough away that by the time five years passes, people have often moved on or forgotten the original prediction. This isn't limited to consumer electronics. Medical breakthroughs regularly make headlines with the same promise: a revolutionary cancer treatment, five years away. Carbon nanotubes will transform renewable energy, five years. Solid-state batteries will solve range anxiety... five years. Andrew Ng, in his course "AI for Everyone," offered the most honest perspective on AGI I've ever read. He suggested it might be decades, hundreds, or even thousands of years away. Why? Before we can meaningfully predict a timeline, we need several fundamental technological breakthroughs that we haven't achieved yet. Without those foundational advances, there's nothing to build on top of. All of the "5 years away" predictions assume linear progress. But in reality, transformative technologies require non-linear leaps. They require discoveries we can't yet foresee, solving problems we don't yet fully understand. If we want our predictions to mean something, we need a clearer framework. At a minimum, a technology can be labeled "5 years away" only if we have a demonstrated proof of concept, even if it is at a small scale. We need to have identified the major engineering challenges remaining.
There should be reasonable pathways to overcome those challenges with existing knowledge. And finally, there needs to be a semblance of economic viability on the horizon. Anything less than this is speculation dressed up as prediction. If I say "we've built a prototype that works in the lab, now we need to scale manufacturing", this may in fact be five years away. But if I say "we need multiple fundamental breakthroughs in physics before this is even possible", I am on a science fiction timeline. Inflated predictions are not harmless. Government policy may be planned around them, they can distort investment decisions, and worse, they give the public false expectations. When we promise self-driving cars by 2017 and fail to deliver, it erodes trust not just in that company, but in the entire field. When every medical breakthrough is "5 years away," people become cynical about real advances. The "5 years away" framing can also make us complacent. If fusion power is always just around the corner, why invest heavily in less glamorous but available renewable technologies today? If AGI will solve everything soon, why worry about the limitations and harms of current AI systems? It's not the most pressing problem in the world, but wouldn't it be better to have more realistic predictions? When reading news articles about any technology, try to distinguish between engineering challenges and scientific unknowns. A realistic prediction will be explicit, saying things like "This will be ready in 5 years, assuming we solve X, Y, and Z." The public needs to learn to celebrate incremental progress as well. When all you read about is moonshots, you dismiss important work being done to improve our everyday lives. And of course, the public should also learn to ignore engagement bait. Real innovation is hard enough without pretending we can see further into the future than we actually can. Five years is a number.
What matters is the foundation beneath it. Without that foundation, we're not counting down to anything. We're just repeating a comfortable fiction that lets us feel like the future is closer than it really is. The most honest answer to "When will this technology arrive?" is often the least satisfying: "We don't know yet, but here's what needs to happen first." That answer respects both the complexity of innovation and the intelligence of the audience. Maybe it's time we used it more often.

Sean Goedecke 1 month ago

We are in the "gentleman scientist" era of AI research

Many scientific discoveries used to be made by amateurs. William Herschel, who discovered Uranus, was a composer and an organist. Antoine Lavoisier, who laid the foundation for modern chemistry, was a politician. In one sense, this is a truism. The job of “professional scientist” only really appeared in the 19th century, so all discoveries before then logically had to have come from amateurs, since only amateur scientists existed. But it also reflects that any field of knowledge gets more complicated over time. In the early days of a scientific field, discoveries are simple: “air has weight”, “white light can be dispersed through a prism into different colors”, “the mass of a burnt object is identical to its original mass”, and so on. The way you come up with those discoveries is also simple: observing mercury in a tall glass tube, holding a prism up to a light source, weighing a sealed jar before and after incinerating it, and so on. The 2025 Nobel prize in physics was just awarded “for the discovery of macroscopic quantum mechanical tunnelling and energy quantisation in an electric circuit”. The press release gallantly tries to make this discovery understandable to the layman, but it’s clearly much more complicated than the examples I listed above. Even understanding the terms involved would take years of serious study. If you want to win the 2026 Nobel prize in physics, you have to be a physicist: not a musician who dabbles in physics, or a politician with a physics hobby on the side. You have to be fully immersed in the world of physics. 1 AI research is not like this. We are very much in the “early days of science” category. At this point, a critical reader might have two questions. How can I say that when many AI papers look like this? 2 Alternatively, how can I say that when the field of AI research has been around for decades, and is actively pursued by many serious professional scientists?
First, because AI research discoveries are often simpler than they look. This dynamic is familiar to any software engineer who’s sat down and tried to read a paper or two: the fearsome-looking mathematics often contains an idea that would be trivial to express in five lines of code. It’s written this way because (a) researchers are more comfortable with mathematics, and so genuinely don’t find it intimidating, and (b) mathematics is the lingua franca of academic research, because researchers like to write for far-future readers for whom Python syntax may be as unfamiliar as COBOL is to us. Take group-relative policy optimization, or GRPO, introduced in a 2024 DeepSeek paper. This has been hugely influential for reinforcement learning (which in turn has been the driver behind much LLM capability improvement in the last year). Let me try to explain the general idea. When you’re training a model with reinforcement learning, you might naively reward success and punish failure (e.g. how close the model gets to the right answer in a math problem). The problem is that this signal breaks down on hard problems. You don’t know if the model is “doing well” without knowing how hard the math problem is, which is itself a difficult qualitative assessment. The previous state of the art was to train a “critic model” that makes this “is the model doing well” assessment for you. Of course, this brings a whole new set of problems: the critic model is hard to train and verify, costs much more compute to run inside the training loop, and so on. Enter GRPO. Instead of a critic model, you gauge how well the model is doing by letting it try the problem multiple times and computing how well it does on average. Then you reinforce the attempts that were above average and punish the ones that were below average. This gives you good signal even on very hard prompts, and is much faster than using a critic model.
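The core trick really does fit in a few lines. Here is a minimal sketch of the group-relative baseline (my own toy reconstruction, not DeepSeek's implementation; the real algorithm wraps this in a clipped policy-gradient objective):

```python
import statistics

def grpo_advantages(rewards: list[float]) -> list[float]:
    """Score each attempt relative to its own group: reward minus the
    group mean, scaled by the group's standard deviation."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # avoid divide-by-zero if all equal
    return [(r - mean) / std for r in rewards]

# Four sampled attempts at the same hard prompt, scored 0..1.
# The group average stands in for a learned critic:
advs = grpo_advantages([0.0, 0.2, 0.2, 1.0])
# Above-average attempts get positive advantage (reinforced);
# below-average attempts get negative advantage (discouraged).
```

Because the baseline comes from the group itself, a hard prompt where every attempt scores low still produces useful signal: whichever attempt did least badly gets reinforced.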
The mathematics in the paper looks pretty fearsome, but the idea itself is surprisingly simple. You don’t need to be a professional AI researcher to have had it. In fact, GRPO is not necessarily that new of an idea. There is discussion of normalizing the “baseline” for RL as early as 1992 (section 8.3), and the idea of using the model’s own outputs to set that baseline was successfully demonstrated in 2016. So what was really discovered in 2024? I don’t think it was just the idea of “averaging model outputs to determine a RL baseline”. I think it was that that idea works great on LLMs as well. As far as I can tell, this is a consistent pattern in AI research. Many of the big ideas are not brand new or even particularly complicated. They’re usually older ideas or simple tricks, applied to large language models for the first time. Why would that be the case? If deep learning wasn’t a good subject for the amateur scientist ten years ago, why would the advent of LLMs change that? Suppose someone discovered that a rubber-band-powered car - like the ones at science fair competitions - could output as much power as a real combustion engine, so long as you soaked the rubber bands in maple syrup beforehand. This would unsurprisingly produce a revolution in automotive (and many other) engineering fields. But I think it would also “reset” scientific progress back to something like the “gentleman scientist” days, where you could productively do it as a hobby. Of course, there’d be no shortage of real scientists doing real experiments on the new phenomenon. However, there’d also be about a million easy questions to answer. Does it work with all kinds of maple syrup? What if you soak it for longer? What if you mixed in some maple-syrup-like substances? You wouldn’t have to be a real scientist in a real lab to try your hand at some of those questions.
After a decade or so, I’d expect those easy questions to have been answered, and for rubber-band engine research to look more like traditional science. But that still leaves a long window for the hobbyist or dilettante scientist to ply their trade. The success of LLMs is like the rubber-band engine. A simple idea that anyone can try 3 - train a large transformer model on a ton of human-written text - produces a surprising and transformative technology. As a consequence, many easy questions have become interesting and accessible subjects of scientific inquiry, alongside the normal hard and complex questions that professional researchers typically tackle. I was inspired to write this by two recent pieces of research: Anthropic’s “skills” product and the Recursive Language Models paper. Both of these present new and useful ideas, but they’re also so simple as to be almost a joke. “Skills” are just markdown files and scripts on-disk that explain to the agent how to perform a task. Recursive language models are just agents with direct code access to the entire prompt via a Python REPL. There, now you can go and implement your own skills or RLM inference code. I don’t want to undersell these ideas. It is a genuinely useful piece of research for Anthropic to say “hey, you don’t really need actual tools if the LLM has shell access, because it can just call whatever scripts you’ve defined for it on disk”. Giving the LLM direct access to its entire prompt via code is also (as far as I can tell) a novel idea, and one with a lot of potential. We need more research like this! Strong LLMs are so new, and are changing so fast, that their capabilities are genuinely unknown. 4 For instance, at the start of this year, it was unclear whether LLMs could be “real agents” (i.e. whether running with tools in a loop would be useful for more than just toy applications). Now, with Codex and Claude Code, I think it’s pretty clear that they can.
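The "direct code access to the prompt" idea can itself be sketched in a few lines (my own toy reconstruction, not the paper's code): instead of stuffing a huge context into the model's window, bind it to a variable and execute whatever small program the model writes against it.

```python
import contextlib
import io

def run_repl_tool(prompt: str, code: str) -> str:
    """Run model-written code with the full prompt bound to a variable,
    capturing whatever the code prints as the tool result."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {"prompt": prompt})  # toy sandbox; real use needs isolation
    return buf.getvalue()

# A "prompt" far too large to fit in any context window:
huge_prompt = "\n".join(f"doc {i}: ..." for i in range(100_000))

# The model never reads it directly; it inspects it with code instead:
out = run_repl_tool(huge_prompt, "print(len(prompt.splitlines()))")
```

The model's "view" of a million-token prompt becomes whatever its own code chooses to print: counts, slices, grep hits, recursive sub-calls.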
Many of the things we learn about AI capabilities - like o3’s ability to geolocate photos - come from informal user experimentation. In other words, they come from the AI research equivalent of 17th century “gentleman science”.

1. Incidentally, my own field - analytic philosophy - is very much the same way. Two hundred years ago, you could publish a paper with your thoughts on “what makes a good act good”. Today, in order to publish on the same topic, you have to deeply engage with those two hundred years of scholarship, putting the conversation out of reach of all but professional philosophers. It is unclear to me whether that is a good thing or not. ↩
2. Randomly chosen from recent AI papers on arXiv. I’m sure you could find a more aggressively-technical paper with a bit more effort, but it suffices for my point. ↩
3. Okay, not anyone can train a 400B param model. But if you’re willing to spend a few hundred dollars - far less than Lavoisier spent on his research - you can train a pretty capable language model on your own. ↩
4. In particular, I’d love to see more informal research on making LLMs better at coming up with new ideas. Gwern wrote about this in LLM Daydreaming, and I tried my hand at it in Why can’t language models come up with new ideas?. ↩


Why formalize mathematics - more than catching errors

I read a good post by one of the authors of the Isabelle theorem prover. The author, Lawrence Paulson, observed that most math proofs are trivial, but writing them out (preferably with a proof assistant) is a worthwhile activity, for reasons similar to safety checklists: “Not every obvious statement is true.” As I have been a bit obsessed with doing formalized mathematics, this got me thinking about why I have been excited to spend many hours recently writing formalized proofs in Lean for exercises from Tao’s Real Analysis (along with this recent attempt to write a companion to Riehl’s Category Theory In Context). On a very personal level, I just like math, computers, and puzzles, and writing Lean proofs feels like doing all three at once. But I do believe formalization is important beyond nerd-sniping folks like me.
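For a taste of what such a checked "obvious statement" looks like, here is a minimal Lean 4 sketch (a toy example of mine, not one of Tao's exercises):

```lean
-- Addition on the natural numbers is commutative. Obvious - but here
-- the claim is verified by the kernel rather than taken on faith.
theorem obvious_but_checked (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The value is not the result, which everyone believes, but that the checker forces every step of the reasoning to actually exist.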
