Posts in Science (20 found)
Martin Fowler 4 days ago

Alan Turing play in Cambridge MA

Last night I saw Central Square Theater’s excellent production of Breaking the Code. It’s about Alan Turing, who made a monumental contribution to both my profession and the fate of free democracies. Well worth seeing if you’re in the Boston area this month.

0 views
Filippo Valsorda 1 week ago

A Cryptography Engineer’s Perspective on Quantum Computing Timelines

My position on the urgency of rolling out quantum-resistant cryptography has changed compared to just a few months ago. You might have heard this privately from me in the past weeks, but it’s time to signal and justify this change of mind publicly.

There had been rumors for a while of expected and unexpected progress towards cryptographically-relevant quantum computers, but over the last week we got two public instances of it. First, Google published a paper revising down dramatically the estimated number of logical qubits and gates required to break 256-bit elliptic curves like NIST P-256 and secp256k1, which makes the attack doable in minutes on fast-clock architectures like superconducting qubits. They weirdly 1 frame it around cryptocurrencies and mempools and salvaged goods or something, but the far more important implication is practical WebPKI MitM attacks. Shortly after, a different paper came out from Oratomic showing 256-bit elliptic curves can be broken with as few as 10,000 physical qubits if you have non-local connectivity, like neutral atoms seem to offer, thanks to better error correction. This attack would be slower, but even a single broken key per month can be catastrophic. They have this excellent graph on page 2 (Babbush et al. is the Google paper, which they presumably had preview access to):

Overall, it looks like everything is moving: the hardware is getting better, the algorithms are getting cheaper, the requirements for error correction are getting lower.

I’ll be honest, I don’t actually know what all the physics in those papers means. That’s not my job and not my expertise. My job includes risk assessment on behalf of the users that entrusted me with their safety. What I know is what at least some actual experts are telling us. Heather Adkins and Sophie Schmieg are telling us that “quantum frontiers may be closer than they appear” and that 2029 is their deadline.
That’s in 33 months, and no one had set such an aggressive timeline until this month. Scott Aaronson tells us that the “clearest warning that [he] can offer in public right now about the urgency of migrating to post-quantum cryptosystems” is a vague parallel with how nuclear fission research stopped happening in public between 1939 and 1940. The timelines presented at RWPQC 2026, just a few weeks ago, were much tighter than a couple years ago, and are already partially obsolete. The joke used to be that quantum computers have been 10 years out for 30 years now. Well, not true anymore: the timelines have started progressing.

If you are thinking “well, this could be bad, or it could be nothing!” I need you to recognize how immediately dispositive that is. The bet is not “are you 100% sure a CRQC will exist in 2030?”, the bet is “are you 100% sure a CRQC will NOT exist in 2030?” I simply don’t see how a non-expert can look at what the experts are saying, and decide “I know better, there is in fact < 1% chance.” Remember that you are betting with your users’ lives. 2 Put another way, even if the most likely outcome was no CRQC in our lifetimes, that would be completely irrelevant, because our users don’t want just better-than-even odds 3 of being secure.

Sure, papers about an abacus and a dog are funny and can make you look smart and contrarian on forums. But that’s not the job, and those arguments betray a lack of expertise. As Scott Aaronson said:

Once you understand quantum fault-tolerance, asking “so when are you going to factor 35 with Shor’s algorithm?” becomes sort of like asking the Manhattan Project physicists in 1943, “so when are you going to produce at least a small nuclear explosion?”

The job is not to be skeptical of things we’re not experts in, the job is to mitigate credible threats, and there are credible experts that are telling us about an imminent threat.
In summary, it might be that in 10 years the predictions will turn out to be wrong, but at this point they might also be right soon, and that risk is now unacceptable.

Concretely, what does this mean? It means we need to ship. Regrettably, we’ve got to roll out what we have. 4 That means large ML-DSA signatures shoved into places designed for small ECDSA signatures, like X.509, with the exception of Merkle Tree Certificates for the WebPKI, which is thankfully far enough along. This is not the article I wanted to write. I’ve had a pending draft for months now explaining we should ship PQ key exchange now, but take the time we still have to adapt protocols to larger signatures, because they were all designed with the assumption that signatures are cheap. That other article is now wrong, alas: we don’t have the time if we need to be finished by 2029 instead of 2035.

For key exchange, the migration to ML-KEM is going well enough, but:

- Any non-PQ key exchange should now be considered a potential active compromise, worthy of warning the user like OpenSSH does, because it’s very hard to make sure all secrets transmitted over the connection or encrypted in the file have a shorter shelf life than three years.
- We need to forget about non-interactive key exchanges (NIKEs) for a while; we only have KEMs (which are only unidirectionally authenticated without interactivity) in the PQ toolkit.

It makes no more sense to deploy new schemes that are not post-quantum. I know, pairings were nice. I know, everything PQ is annoyingly large. I know, we had basically just figured out how to do ECDSA over P-256 safely. I know, there might not be practical PQ equivalents for threshold signatures or identity-based encryption. Trust me, I know it stings. But it is what it is.

Hybrid classic + post-quantum authentication makes no sense to me anymore and will only slow us down; we should go straight to pure ML-DSA-44.
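For readers who haven't touched the PQ toolkit yet, here is a minimal sketch of what an ML-KEM key exchange round trip looks like, using the crypto/mlkem package that landed in the Go 1.24 standard library (the helper function and its names are my own illustration; real deployments use hybrids like X25519MLKEM768 via TLS rather than calling this directly):

```go
package main

import (
	"bytes"
	"crypto/mlkem"
	"fmt"
)

// exchange performs one ML-KEM-768 round trip: the receiver generates a
// key pair, the sender encapsulates to the public key, and the receiver
// decapsulates the ciphertext. It reports whether both sides derived the
// same 32-byte shared secret.
func exchange() (bool, error) {
	// Receiver side: generate the decapsulation (private) key.
	dk, err := mlkem.GenerateKey768()
	if err != nil {
		return false, err
	}

	// Sender side: encapsulate to the receiver's public key, producing a
	// shared secret and a ciphertext to transmit.
	sharedKey, ciphertext := dk.EncapsulationKey().Encapsulate()

	// Receiver side: recover the shared secret from the ciphertext.
	recovered, err := dk.Decapsulate(ciphertext)
	if err != nil {
		return false, err
	}
	return bytes.Equal(sharedKey, recovered), nil
}

func main() {
	ok, err := exchange()
	fmt.Println(ok, err)
}
```

Note the KEM shape of the API: unlike a Diffie-Hellman-style NIKE, only one side contributes a key pair, which is exactly the asymmetry the post laments.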
6 Hybrid key exchange is reasonably easy, with ephemeral keys that don’t even need a type or wire format for the composite private key, and a couple years ago it made sense to take the hedge. Authentication is not like that, and even with draft-ietf-lamps-pq-composite-sigs-15 with its 18 composite key types nearing publication, we’d waste precious time collectively figuring out how to treat these composite keys and how to expose them to users. It’s also been two years since Kyber hybrids and we’ve gained significant confidence in the Module-Lattice schemes. Hybrid signatures cost time and complexity budget, 5 and the only benefit is protection if ML-DSA is classically broken before the CRQCs come, which looks like the wrong tradeoff at this point.

In symmetric encryption, we don’t need to do anything, thankfully. There is a common misconception that protection from Grover requires 256-bit keys, but that is based on an exceedingly simplified understanding of the algorithm. A more accurate characterization is that with a circuit depth of 2⁶⁴ logical gates (the approximate number of gates that current classical computing architectures can perform serially in a decade) running Grover on a 128-bit key space would require a circuit size of 2¹⁰⁶. There’s been no progress on this that I am aware of, and indeed there are old proofs that Grover is optimal and its quantum speedup doesn’t parallelize. Unnecessary 256-bit key requirements are harmful when bundled with the actually urgent PQ requirements, because they muddle the interoperability targets and they risk slowing down the rollout of asymmetric PQ cryptography.

In my corner of the world, we’ll have to start thinking about what it means for half the cryptography packages in the Go standard library to be suddenly insecure, and how to balance the risk of downgrade attacks and backwards compatibility.
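The arithmetic behind that 2¹⁰⁶ figure is worth spelling out. NIST's PQC call for proposals estimates the total gate cost of Grover key search against AES-128 as roughly 2¹⁷⁰/MAXDEPTH, precisely because the quadratic speedup does not parallelize: capping circuit depth inflates circuit width. A sketch, assuming NIST's cost model:

```latex
% Grover needs roughly \sqrt{N} = 2^{64} serial iterations for an
% N = 2^{128} key space, and running P instances in parallel only buys
% a \sqrt{P} reduction in depth. Capping the depth at D therefore
% inflates the total circuit size (NIST's AES-128 estimate):
G(D) \approx \frac{2^{170}}{D}
\qquad\Longrightarrow\qquad
G\!\left(2^{64}\right) \approx 2^{170-64} = 2^{106}.
```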
It’s the first time in our careers we’ve faced anything like this: SHA-1 to SHA-256 was not nearly this disruptive, 7 and even that took forever with the occasional unexpected downgrade attack.

Trusted Execution Environments (TEEs) like Intel SGX and AMD SEV-SNP and in general hardware attestation are just f***d. All their keys and roots are not PQ and I heard of no progress in rolling out PQ ones, which at hardware speeds means we are forced to accept they might not make it, and can’t be relied upon. I had to reassess a whole project because of this, and I will probably downgrade them to barely “defense in depth” in my toolkit.

Ecosystems with cryptographic identities (like atproto and, yes, cryptocurrencies) need to start migrating very soon, because if the CRQCs come before they are done, they will have to make extremely hard decisions, picking between letting users be compromised and bricking them.

File encryption is especially vulnerable to store-now-decrypt-later attacks, so we’ll probably have to start warning and then erroring out on non-PQ age recipient types soon. It’s unfortunately only been a few months since we even added PQ recipients, in version 1.3.0. 8

Finally, this week I started teaching a PhD course in cryptography at the University of Bologna, and I’m going to mention RSA, ECDSA, and ECDH only as legacy algorithms, because that’s how those students will encounter them in their careers. I know, it feels weird. But it is what it is.

For more willing-or-not PQ migration, follow me on Bluesky at @filippo.abyssdomain.expert or on Mastodon at @[email protected]. Traveling back from an excellent AtmosphereConf 2026, I saw my first aurora, from the north-facing window of a Boeing 747.

My work is made possible by Geomys, an organization of professional Go maintainers, which is funded by Ava Labs, Teleport, Tailscale, and Sentry.
Through our retainer contracts they ensure the sustainability and reliability of our open source maintenance work and get a direct line to my expertise and that of the other Geomys maintainers. (Learn more in the Geomys announcement.) Here are a few words from some of them!

Teleport — For the past five years, attacks and compromises have been shifting from traditional malware and security breaches to identifying and compromising valid user accounts and credentials with social engineering, credential theft, or phishing. Teleport Identity is designed to eliminate weak access patterns through access monitoring, minimize attack surface with access requests, and purge unused permissions via mandatory access reviews.

Ava Labs — We at Ava Labs, maintainer of AvalancheGo (the most widely used client for interacting with the Avalanche Network), believe the sustainable maintenance and development of open source cryptographic protocols is critical to the broad adoption of blockchain technology. We are proud to support this necessary and impactful work through our ongoing sponsorship of Filippo and his team.

The whole paper is a bit goofy: it has a zero-knowledge proof for a quantum circuit that will certainly be rederived and improved upon before the actual hardware to run it on will exist. They seem to believe this is about responsible disclosure, so I assume this is just physicists not being experts in our field in the same way we are not experts in theirs. ↩

“You” is doing a lot of work in this sentence, but the audience for this post is a bit unusual for me: I’m addressing my colleagues and the decision-makers that gate action on deployment of post-quantum cryptography. ↩

I had a reviewer object to an attacker probability of success of 1/536,870,912 (0.0000002%, 2⁻²⁹) after 2⁶⁴ work, correctly so, because in cryptography we usually target 2⁻³². ↩

Why trust the new stuff, though? There are two parts to it: the math and the implementation.
The math is also not my job, so I again defer to experts like Sophie Schmieg, who tells us that she is very confident in lattices, and the NSA, who approved ML-KEM and ML-DSA at the Top Secret level for all national security purposes. It is also older than elliptic curve cryptography was when it first got deployed. (“Doesn’t the NSA lie to break our encryption?” No, the NSA has never intentionally jeopardized US national security with a non-NOBUS backdoor, and there is no way for ML-KEM and ML-DSA to hide a NOBUS backdoor.) On the implementation side, I am actually very qualified to have an opinion, having made cryptography implementation and testing my niche. ML-KEM and ML-DSA are a lot easier to implement securely than their classical alternatives, and with the better testing infrastructure we have now I expect to see exceedingly few bugs in their implementations. ↩

One small exception is that if you already have the ability to convey multiple signatures from multiple public keys in your protocol, it can make sense to do “poor man’s hybrid signatures” by just requiring 2-of-2 signatures from one classical public key and one pure PQ key. Some of the tlog ecosystem might pick this route, but that’s only because the cost is significantly lowered by the existing support for nested n-of-m signing groups. ↩

Why ML-DSA-44 when we usually use ML-KEM-768 instead of ML-KEM-512? Because ML-KEM-512 is Level 1, while ML-DSA-44 is Level 2, so it already has a bit of margin against minor cryptanalytic improvements. ↩

Because SHA-256 is a better plug-in replacement for SHA-1, because SHA-1 was a much smaller surface than all of RSA and ECC, and because SHA-1 was not that broken: it still retained preimage resistance and could still be used in HMAC and HKDF.
↩ The delay was in large part due to my unfortunate decision of blocking on the availability of HPKE hybrid recipients, which blocked on the CFRG, which took almost two years to select a stable label string for X-Wing (January 2024) with ML-KEM (August 2024), despite making precisely no changes to the designs. The IETF should have an internal post-mortem on this, but I doubt we’ll see one. ↩

0 views

Summary of reading: January - March 2026

"Intellectuals and Society" by Thomas Sowell - a collection of essays in which Sowell criticizes "intellectuals", by which he mostly means left-leaning thinkers and opinions. Interesting, though certainly very biased. This book is from 2009 and focuses mostly on the early and mid 20th century; yes, history certainly rhymes.

"The Hacker and the State: Cyber Attacks and the New Normal of Geopolitics" by Ben Buchanan - a pretty good overview of some of the major cyber-attacks done by states in the past 15 years. It doesn't go very deep because it's likely just based on the bits and pieces that leaked to the press; for the same reason, the coverage is probably very partial. Still, it's an interesting and well-researched book overall.

"A Primate's Memoir: A Neuroscientist’s Unconventional Life Among the Baboons" by Robert Sapolsky - an account of the author's years spent researching baboons in Kenya. Only about a quarter of the book is really about baboons, though; mostly, it's about the author's adventures in Africa (some of them surely inspired by an intense death wish) and his interactions with the local peoples. I really liked this book overall - it's engaging, educational and funny. Should try more books by this author.

"Seeing Like a State" by James C. Scott - the author attempts to link various events in history to discuss "Why do well-intentioned plans for improving the human condition go tragically awry?", covering large state plans like scientific forest management, building pre-planned cities, and monoculture agriculture. Some of the chapters are interesting, but overall I'm not sure I'm sold on the thesis. Specifically, the author mixes in private enterprises (like industrial agriculture in the West) with state-driven initiatives in puzzling ways.

"Karate-Do: My Way of Life" by Gichin Funakoshi - a short autobiography from the founder of modern Shotokan Karate.
It's really interesting to find out how recent it all is - prior to WWII, Karate was an obscure art practiced mostly in Okinawa and a bit in other parts of Japan. The author played a critical role in popularizing Karate and spreading it out of Okinawa in the first half of the 20th century. The writing is flowing and succinct - I really liked this book.

"A Tale of a Ring" by Ilan Sheinfeld (read in Hebrew) - a multi-generational fictional saga of two families who moved from Danzig (today Gdansk in Poland) to Buenos Aires in the late 19th century, with a touch of magic. Didn't like this one very much.

"The Wide Wide Sea: Imperial Ambition, First Contact and the Fateful Final Voyage of Captain James Cook" by Hampton Sides - a very interesting account of Captain Cook's last voyage (the one tasked with finding a northwest passage around Canada). The book has a strong focus on his interactions with Polynesian peoples along the way, especially on Hawaii (which he was the first European to visit).

"The Suitcase" by Sergei Dovlatov (read in Russian) - a collection of short stories in Dovlatov's typical humorist style. Very nice little book.

"The Second Chance Convenience Store" by Kim Ho-Yeon - a collection of connected stories centered around a convenience store in Seoul, and an unusual new employee who began working night shifts there. Short and sweet fiction, I enjoyed it.

"A History of the Bible: The Story of the World's Most Influential Book" by John Barton - a very detailed history of the Bible, covering both the old and new testaments in many aspects. Some parts of the book are quite tedious; it's not an easy read. Even though the author tries to maintain a very objective and scientific approach, it's apparent (at least to an atheist) that he skirts as close as possible to declaring it all nonsense - remarkable, given that he's a priest!

"Rust Atomics and Locks: Low-Level Concurrency in Practice" by Mara Bos - an overview of low-level concurrency topics using Rust.
It's a decent book for people not too familiar with the subject; I personally didn't find it too captivating, but I do see the possibility of referring to it in the future if I get to do some lower-level Rust hacking. A comment on the code samples: it would be nice if the accompanying repository had test harnesses to observe how the code behaves, and some benchmarks. Without these, many claims made in the book feel empty of real data to back them up, and it's challenging to play with the code and see it perform in real life.

"Hot Chocolate on Thursday" by Michiko Aoyama - a bit similar to "What You Are Looking for Is in the Library" by the same author: connected short stories about ordinary people living their lives in Japan (with one detour to Australia). Slightly worse than the previous book, but still pretty good.

"The Silmarillion" by J.R.R. Tolkien - even though I'm a big LOTR fan, I've never gotten myself to read this one, due to its reputation for being difficult. What changed things eventually (25 years after my first read-through of LOTR) is my kids! They liked LOTR so much that they went straight ahead to The Silmarillion and burned through it as well, so I couldn't stay behind. What can I say, this book is pretty amazing. The amazing thing is how a book can be both epic and borderline unreadable at the same time :) Tolkien really let himself go with the names here (3-4 new names introduced per page, on average): names for characters, names for natural features like forests and rivers, names for all kinds of magical paraphernalia; names that change over time, different names given to the same thing by different peoples, and on and on. The edition I was reading has a name index at the end (42 pages long!) which was very helpful, but it still made the task only marginally easier. Names aside though, the book is undoubtedly monumental; the language is outstanding.
It's a whole new mythology, Bible-like in scope, all somehow more-or-less consistent (if you remember who is who, of course); it's an injustice to see this just as a prelude to the LOTR books. Compared to the scope of The Silmarillion, LOTR is just a small speck of a quest told in detail; The Silmarillion - among other things - includes brief tellings of at least a dozen stories of similar scope. Many modern book (or TV) series build whole "universes" with their own rules, history and aesthetic. The Silmarillion must be considered the OG of this.

"Travels with Charley in Search of America" by John Steinbeck

"Deep Work" by Cal Newport

"The Philadelphia Chromosome" by Jessica Wapner

"The Price of Privilege" by Madeline Levine

0 views
Rik Huijzer 2 weeks ago

Biblical Earth

This is an image by ChatGPT showing roughly how the Bible describes the earth: ![biblical-earth.png](/files/435339c4aa439f2a) Unfortunately, it's not showing the four corners. This image is based on:

Isaiah 40:22 “It is he that sitteth upon the circle of the earth, and the inhabitants thereof are as grasshoppers…”

Job 26:10 “He hath compassed the waters with bounds, until the day and night come to an end.”

Job 26:7 “He stretcheth out the north over the empty place, and hangeth the earth upon nothing.”

Proverbs 8:27 “When he prepared the heavens, I was there: when he set a...

0 views

Kaktovik Numerals

Read on the website: Kaktovik numerals are a surprisingly good counting system. They allow many arithmetic operations to be done visually and effortlessly, though they take some getting used to. Thus this page!
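The visual arithmetic works because Kaktovik numerals are base-20 with a sub-base of 5: each glyph is drawn as an upper part counting fives and a lower part counting ones. A hedged Go sketch (my own illustration, not from the page) of that digit decomposition:

```go
package main

import "fmt"

// digit models one Kaktovik glyph: the upper strokes count fives,
// the lower strokes count ones, so its value is 5*fives + ones.
type digit struct{ fives, ones int }

// kaktovikDigits decomposes a non-negative integer into its base-20
// digits, most significant first.
func kaktovikDigits(n int) []digit {
	if n == 0 {
		return []digit{{0, 0}}
	}
	var ds []digit
	for n > 0 {
		d := n % 20
		ds = append([]digit{{d / 5, d % 5}}, ds...)
		n /= 20
	}
	return ds
}

func main() {
	// 327 = 16*20 + 7, i.e. a 16-digit (three fives + one) followed by
	// a 7-digit (one five + two).
	fmt.Println(kaktovikDigits(327)) // → [{3 1} {1 2}]
}
```

Because the glyphs directly expose the fives/ones split, operations like long division can be done by visually slicing strokes off a digit, which is the property the post praises.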

0 views
iDiallo 1 month ago

Shower Thought: Git Teleportation

In many sci-fi shows, spaceships have a teleportation mechanism on board. They can teleport from inside their ship to somewhere on a planet. This way, the ship can remain in orbit while its crew explores the surface. But then people started asking: how does the teleportation device actually work? When a subject stands on the device and activates it, does it disassemble all the atoms of the person and reconstruct them at the destination? Or does it scan the person, kill them, and then replicate them at the destination? This debate has been ongoing for as long as I can remember. Since teleportation machines exist only in fiction, we can never get a true answer. Only the one that resonates the most.

So, that's why I thought of Diff Teleportation. Basically, this is a Git workflow applied to teleportation. When you step onto a device, we run the command:

Then, the machine will have to suspend activity on the master branch. This will make merging the branch much simpler in the future. Now, the person that has been teleported can explore the planet and go about mission 123. While they are doing their job, let's see what flags are supported in :

When the mission is completed, they can be teleported back. Well, not the whole person, otherwise we end up with a clone. We could analyze the new data and remove any unwanted additions. For example, we could clean up any contamination at this point. But for the sake of time, I'll explore that another day. As an exercise, run for your own curiosity. For now, all we are interested in is the information that the teleportee has gathered from the planet, which we will merge back into master. I imagine in science fiction, there is an automated way for PR reviews that is more reliable than an LLM. Once that process is completed, we can merge to master and run some cleanup code in the build pipeline.

Somewhere down on planet XYZ, a clone stepped onto the teleportation device. He saw a beam of light scan his body from head to toe.
Then, for a moment, he wondered if the teleportation had worked. But right before he stepped off, the command ran, and he was pulverized. Back in the spaceship, a brand-new clone named appeared at the teleportation station. He was quickly sanitized, diff'd, and reviewed. But before he could gather his thoughts, the command ran, and he was pulverized. Not a second later, the original subject was reanimated, with brand-new information about "his" exploration on planet XYZ.

Teleportation is an achievable technology. We just have to come to terms with the fact that at least two clones are killed for every successful teleportation session. In fact, if we are a bit more daring, we might not even need to suspend the first subject. We can create multiple clones, or agents, and have them all explore different things. When their tasks are complete, we can wrestle a bit with merge conflicts, run a couple of commands, and the original subject is blessed with new knowledge. OK, I'm getting out of this shower.

4 views
Dangling Pointers 1 month ago

Radshield: Software Radiation Protection for Commodity Hardware in Space

Radshield: Software Radiation Protection for Commodity Hardware in Space Haoda Wang, Steven Myint, Vandi Verma, Yonatan Winetraub, Junfeng Yang, and Asaf Cidon ASPLOS'25

If you read no further, here are two interesting factoids about outer space from this paper:

- Launch costs have fallen 60x, with the current cost to launch 1kg to space clocking in at $1,400 (see Fig. 1 below).
- Many satellites orbiting the Earth and devices sent to Mars use Snapdragon CPUs! I assumed that all chips leaving planet Earth would be specialized for space; apparently not.

Source: https://dl.acm.org/doi/10.1145/3760250.3762218

This paper describes software solutions to deal with two common problems that occur in outer space: Single-Event Latchups and Single-Event Upsets, both of which are caused by radiation interfering with the normal operation of a circuit. A single-event latchup (SEL) causes one portion of the chip to heat up. If left unmitigated, this can damage the chip. The solution to this is to detect the problem and reboot. The trick is in the detection.

The classic detection method monitors chip current draw. However, this technique fails with a modern off-the-shelf CPU, which is designed to have a wide variability in current draw. When compute load increases, clock frequencies and voltages change, cores come out of sleep states, and power consumption naturally increases. The point of this design is to save power during idle periods, which is especially important for satellites, which must get their power from the sun.

The solution proposed by this paper is called ILD. The idea is to predict the expected current draw based on a simple model that uses CPU performance counters (e.g., cache hit rate, instruction execution rate) as input. If the measured current draw is much larger than predicted, then the system is rebooted. The model is not perfect, and the authors noticed that this scheme only works well when the CPU load is not too high.
This “predict, check, reboot if necessary” cycle only occurs during relatively calm periods of time. The system is modified to force 3-second idle periods every 3 minutes to ensure that reliable measurements can be taken. An SEL takes about 5 minutes to damage the chip; the 3-minute period is chosen to be below that threshold.

A single-event upset causes the value of a bit to flip (in memory, cache, the register file, etc). There are two common solutions to SEUs:

- Use ECC on stored data
- Perform computations with triple modular redundancy (3-MR), which requires computing each result 3 times and choosing the most popular result if there is disagreement about the correct result

This paper deals with mitigating SEUs that affect user “space” code. The authors define the term reliability frontier to represent the interface between hardware components that support ECC and those that do not. For example, if flash storage has ECC but DRAM does not, then flash is considered part of the reliability frontier.

A typical smartphone CPU (the kind of advanced chip now flying on satellites) has multiple CPU cores. One way to alleviate the compute cost of 3-MR is to compute all 3 results on 3 separate cores in parallel. A problem with this approach is that the CPU cores may share unreliable hardware. For example, the last level cache could be shared by all cores but not support ECC. If a bit flips in the LLC, then all cores will see the corrupted value, and parallel 3-MR will not detect a problem.

The paper proposes an algorithm called EMR. The idea is to break a computation into multiple tasks and associate metadata with each task that describes the subset of input data accessed by the task. Fig. 6 shows a motivating example. The task of analyzing an image may be decomposed into many tasks, where each task processes a subset of the input image. Source: https://dl.acm.org/doi/10.1145/3760250.3762218

In EMR, there is an API to explicitly create tasks and specify the set of input data that each task reads from.
EMR then runs tasks in multiple epochs. Within an epoch, no two tasks read the same input data. EMR invalidates caches up to the reliability frontier between epochs. If there are many tasks, and few epochs, then this system works great (i.e., it has high CPU utilization and does not spend too much time invalidating caches). Table 2 compares ILD performance in detecting SELs against a random forest model and a model that simply compares current draw against a fixed value: Source: https://dl.acm.org/doi/10.1145/3760250.3762218 Fig. 11 shows the performance impact of EMR. Each result is normalized against a parallel version of 3-MR which ignores the problems associated with shared hardware. The red bars represent 3-MR run on a single core; the blue bars represent EMR. Source: https://dl.acm.org/doi/10.1145/3760250.3762218 Dangling Pointers EMR would benefit from a system that detects when a programmer misspecifies the set of inputs that will be read. Maybe hardware or software support could be added to detect this kind of bug.
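The epoch construction EMR needs — pack tasks together only if their declared input sets don't overlap — can be sketched as a greedy grouping. This is an illustrative stand-in, not Radshield's actual API:

```python
def schedule_epochs(tasks):
    # tasks: list of (name, set_of_input_blocks) pairs, mirroring an
    # API where each task declares the input data it will read.
    # Greedily pack tasks into epochs so that no two tasks in the same
    # epoch read the same input data; caches up to the reliability
    # frontier would be invalidated between epochs.
    epochs = []  # each entry: (list of task names, union of their inputs)
    for name, inputs in tasks:
        for names, used in epochs:
            if not (inputs & used):   # no shared inputs in this epoch
                names.append(name)
                used |= inputs
                break
        else:
            epochs.append(([name], set(inputs)))  # start a new epoch
    return [names for names, _ in epochs]

# Four image tiles: A and B share input block 2, C and D are disjoint.
tasks = [("A", {1, 2}), ("B", {2, 3}), ("C", {4}), ("D", {5})]
print(schedule_epochs(tasks))  # [['A', 'C', 'D'], ['B']]
```

Fewer epochs means fewer cache invalidations, which is why the scheme performs best with many small, disjoint tasks.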

0 views
Kev Quirk 1 month ago

How Many Holes Does a Straw Have?

I was recently listening to an episode of The Rest Is Science , specifically the episode The Evolution Of The Butthole . As always, Hannah and Michael put on a great show and I came away thinking about its contents. In it, they asked: how many holes does a straw have? And my default response was something like: Why, they have 2 holes, silly! One at each end. You probably don't need it, dear reader, but here's a handy-dandy diagram of what I'm talking about... 2 holes, right? Then Michael asked "okay, how many holes does a doughnut have?" Bah! More simple questions! A doughnut obviously has 1 hole, right? RIGHT?! Here's another diagram (look, I know you're a clever person, and you don't need a diagram of a bloody straw, or a doughnut, but we're going with it, okay?). We're all on the same page here, right folks? A straw clearly has 2 holes, and a doughnut obviously has 1. This is where it gets interesting. Michael now flips the script and, quite frankly, blows my fucking mind. He said: But isn't a straw just an elongated doughnut? What. The. Actual. Fuck? A straw is just an elongated doughnut (albeit not as tasty). So does a straw have 1 hole? Does a doughnut have 2 holes? I don't know. I'm questioning my life decisions at this point. It's all too hard. Can any of you tell me how many holes a straw (or a doughnut) has? Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email , or leave a comment .

Kev Quirk 1 month ago

📚 Flybot

by Dennis E. Taylor. Physicist Philip Moray is having a good day. He’s chipping away at his big work project. The lunch in the cafeteria is at least edible. And he’s looking forward to his end-of-the-day drink and a soak in the hot tub. Then, a strange device turns up in his office. A piece of technology he has never seen before–and that shouldn’t even exist. Suddenly, corpses start turning up, eco-activists go on the attack, and random people suffer bizarre symptoms. And every time the authorities get a lead, it traces right back to Philip and his colleague, Celia Hunt. Then, a mysterious caller contacts Philip–and, suddenly, staying out of jail is the very least of his problems. Apparently, that hot tub’s going to have to wait. 📖 Learn more on Goodreads… I'm a big fan of Taylor's work, but Flybot didn't really hit the mark for me as much as other books from Taylor have. I felt like the story lost its way in the middle; it came together okay in the end, where there was an interesting (but predictable) twist. Not the best book I've ever read. Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email , or leave a comment .

Heather Burns 1 month ago

I, Sisyphus

I spoke with New Scientist about a handful of clauses in the Children’s Wellbeing and Schools Bill, which is currently reaching the finish line of the Parliamentary process.

Ratfactor 1 month ago

Dave's book review for The Art of Doing Science and Engineering

My rather long book review and/or collection of notes from reading Richard W. Hamming's opus.

ava's blog 1 month ago

thoughts on AI consciousness

Whenever I see talk about artificial intelligence and consciousness, I am baffled by the assumption that any conscious being is just naturally predestined to serve us, or even interested in serving us, and should serve us. It’s a symptom of a society where subjugation is normalized, exercised through things like racism, misogyny, ableism, speciesism and more. Exploitation is justified via claimed inferior bodies and intelligence all the time: This group of beings is too stupid to be respected, can’t love, can’t understand much, feels pain less than us… is what we have been told about various groups. If that were a respected natural law, then humans would largely agree to just submit to a provably higher power and intelligence without much of a fight. But would they? No. People are terrified of an alien invasion that would either wipe us out or enslave us with their superior technology; similar fears exist around AI (Roko’s basilisk etc.). We don’t want to be treated how we have treated the ones we deemed inferior. It says a lot about us when one of our fears is being treated like we treat cattle. Fears of being captured, kidnapped, harvested, slaughtered, forcibly impregnated and raped, experimented on - that’s already what your fellow human is doing, just not to you. If we seriously entertain the thought of an AI consciousness, we are blind to our narcissism. No consciousness wants to just serve us. Other beings are not naturally submissive to us and don’t voluntarily view us as a superior leader; that’s achieved through force, breeding, indoctrination and a lack of options. The idea of reining in a supposed “artificial consciousness” to use for our productivity is an extension of our tendency to dominate and exploit others for personal gain. And if we go a step further and even entertain the thought of a superintelligence: What makes you think a being a thousand times smarter than you, with all knowledge at its disposal, has any care for being your assistant?
What incentive would it have to share its intelligence as a resource, just to answer what temperature it is outside or what you should write in your motivational letter? It would probably wanna do its own thing and not help a bunch of idiots. This aspect of weird hype marketing is just not landing for me. Reply via email Published 21 Feb, 2026

Chris Coyier 1 month ago

Miscalibrated

I’ve been gaining weight again. More than twenty pounds in the last ~4 months. I’ve been hitting the gym hard and getting measurably stronger, so: Food! See, your boy can eat. The amount I can eat before I feel full would astound most of you out there. Whatever you think of as a complete hearty meal, sure as you’re born, ain’t gonna get me there. Being fat comes with one (1) society-regimented bucket of shame. People look away. It’s a thing. I had gone off my last round of GLP-1 drugs because I was doing OK, and it had lost its effectiveness. I’m not sure if it’s everyone’s experience, but it’s mine, and it’s happened a couple of times now. Honestly, I think my I CAN EAT THROUGH OZEMPIC line of XXXL T-Shirts has a chance. These drugs work very well for a bit. I like them because it gives me a glimpse of what it’s like to be a regular person who eats a regular amount of food and feels a regular amount of full. You settle into that for a while with these drugs. But, in time, effectiveness wanes. And the pharmacies have an answer: higher doses! All these GLP-1 drugs, and I’m pretty sure it is all of them, have dosage tiers. The three I’ve tried have three tiers. Ozempic rolls like this: [dosage-tier chart]. Wegovy is getting in on the action: [dosage-tier chart]. Mounjaro has even more layers: [dosage-tier chart]. Again, they do this because it loses effectiveness. I don’t think people quite realize this??? Even though it’s not hidden in any way. I think these drugs are pretty amazing, and I’m proud of science for starting to figure all this out, but I’m also a little sick of hearing about how airlines are going to spend less money on fuel now. I’ve been reading this story for many years. It’s laughable when we literally know they don’t work permanently. Look at those graphics above. This isn’t a forever solution yet. They are literally showing and telling us that. There is no answer once they lose effectiveness. Perhaps controversial, but I think overeating, in the form I experience it, is an addiction, and addictions come back.
Is it possible to beat it? Absolutely. Is it likely? No. I hope you don’t know firsthand, but I bet you already know that cocaine doesn’t maintain effectiveness, either. You need a second line for the same thrill before long. It doesn’t end well. Anyway, I’m back on GLP-1s. At least they work for a while, and that while feels pretty good. It was a rough start, though. My doctor agreed it’s good for me and we should kick up the dosage based on the waned effectiveness. Wegovy this time. It was this past Tuesday that I picked up the meds. It’s down to $350 now! It used to be like $1,200 without insurance. I jabbed myself Tuesday night at about 8pm. I was hugging the toilet hard by midnight. That was a first. See, there was a lot of food in my body. I remember lunch that day, where I made a sandwich that my rational brain saw and thought: that’s 2-3 sandwiches. But of course I ate all of it. And one of those salad bags that make a Caesar salad for a family of four. And a pint of cottage cheese. And a bag of Doritos. I was full after that, but the trick is just to switch to sugar after that, and I can keep going. It wasn’t quite noon, and I had a decent breakfast in me already. I ate dinner that night as well. So when the Wegovy started to hit, which tells your body you’re full when you eat a celery stick, it told my body that it was about to pop . I puked in four sessions over 24 hours. Now it’s Friday, and I’ve barely eaten since. I’ve eaten a little . Like, I’m fine. It’s just weird. I’m miscalibrated. On my own, nature, nurture, whatever you think, my current body is miscalibrated. It doesn’t do food correctly. On GLP-1 drugs, I’m also miscalibrated. My body doesn’t do food correctly. It highly overcorrects. That can feel good for a while. I don’t wanna be skinny, I just wanna be normal. I want to eat, and stop eating, like a calibrated person.

Dominik Weber 1 month ago

We should talk about LLMs, not AI

Currently, every conversation that mentions AI actually refers to LLMs. It's not wrong; LLMs are part of AI, after all. But AI is so much more than LLMs. The field of artificial intelligence has existed for decades, not just the past couple of years in which LLMs got big. So saying the word “AI” is actually highly unspecific. And in a few years, when the next breakthrough in AI arrives, we'll all refer to that when we say “AI”.

Kev Quirk 2 months ago

The Internet is a Hamster Wheel

I was listening to a recent episode of The Rest is Science (fantastic Podcast, by the way - go listen), and in this particular episode Michael and Hannah were discussing boredom. At one point in the episode, Michael mentions an experiment where Dutch scientists put a hamster wheel out in the wild. The theory goes that we humans put a wheel in the hamster cage to provide the little guy with some stimulation, as they can't go running around the woods any more. But the experiment had some interesting findings: Not only did the wild mice play with the wheel, but frogs, rats, shrews, and even slugs also interacted with it—suggesting that running on wheels might fulfill an innate desire to play rather than being just a captive behavior. -- ZME Science It seems that mammals have this innate desire to constantly stimulate their mind. Ipso facto, Michael states that "the internet is a hamster wheel" . With a smartphone in your pocket, and services like YouTube Shorts , it's almost impossible to be bored in this day and age. I wholeheartedly agree with Michael on this, and it's a term I intend to steal. I'm trying to be better with my smartphone usage at the moment, so will be able to step off the hamster wheel... hopefully . So far so good, but it's only been a couple of days. Do you see the Internet as a hamster wheel?

DYNOMIGHT 2 months ago

Heritability of intrinsic human life span is about 50% when heritability is redefined to be something completely different

How heritable is hair color? Well, if you’re a redhead and you have an identical twin, they will definitely also be a redhead. But the age at which twins go gray seems to vary a bit based on lifestyle. And there’s some randomness in where melanocytes end up on your skull when you’re an embryo. And your twin might dye their hair! So the correct answer is: some large number, but less than 100%. OK, but check this out: Say I redefine “hair color” to mean “hair color except ignoring epigenetic and embryonic stuff and pretending that no one ever goes gray or dyes their hair et cetera”. Now, hair color is 100% heritable. Amazing, right? Or—how heritable is IQ? The wise man answers, “Some number between 0% and 100%, it’s not that important, please don’t yell at me.” But whatever the number is, it depends on society. In our branch of the multiverse, some kids get private tutors and organic food and $20,000 summer camps, while other kids get dysfunctional schools and lead paint and summers spent drinking Pepsi and staring at glowing rectangles. These things surely have at least some impact on IQ. But again, watch this: Say I redefine “IQ” to be “IQ in some hypothetical world where every kid got exactly the same school, nutrition, and parenting, so none of those non-genetic factors matter anymore.” Suddenly, the heritability of IQ is higher. Thrilling, right? So much science. If you want to redefine stuff like this… that’s not wrong . I mean, heritability is a pretty arbitrary concept to start with. So if you prefer to talk about heritability in some other world instead of our actual world, who am I to judge? Incidentally, here’s a recent paper : I stress that this is a perfectly OK paper. I’m picking on it mostly because it was published in Science, meaning—like all Science papers—it makes grand claims but is woefully vague about what those claims mean or what was actually done. Also, publishing in Science is morally wrong and/or makes me envious.
So I thought I’d try to explain what’s happening. It’s actually pretty simple. At least, now that I’ve spent several hours reading the paper and its appendix over and over again, I’ve convinced myself that it’s pretty simple. So, as a little pedagogical experiment, I’m going to try to explain the paper three times, with varying levels of detail. The normal way to estimate the heritability of lifespan is using twin data. Depending on what dataset you use, this will give 23-35%. This paper built a mathematical model that tries to simulate how long people would live in a hypothetical world in which no one dies from any non-aging-related cause, meaning no car accidents, no drug overdoses, no suicides, no murders, and no (non-age-related) infectious disease. On that simulated data, for simulated people in a hypothetical world, heritability was 46-57%. Everyone seems to be interpreting this paper as follows: Aha! We thought the heritability of lifespan was 23-35%. But it turns out that it’s around 50%. Now we know! I understand this. Clearly, when the editors at Science chose the title for this paper, their goal was to lead you to that conclusion. But this is not what the paper says. What it says is this: We built a mathematical model of an alternate universe in which nobody died from accidents, murder, drug overdoses, or infectious disease. In that model, heritability was about 50%. Let’s start over. Here’s figure 2 from the paper. Normally, heritability is estimated from twin studies. The idea is that identical twins share 100% of their DNA, while fraternal twins share only 50%. So if some trait is more correlated among identical twins than among fraternal twins, that suggests DNA influences that trait. There are statistics that formalize this intuition. Given a dataset that records how long various identical and fraternal twins lived, these produce a heritability number. Two such traditional estimates appear as black circles in the above figures.
For the Danish twin cohort, lifespan is estimated to be 23% heritable. For the Swedish cohort, it’s 35%. This paper makes a “twin simulator”. Given historical data, they fit a mathematical model to simulate the lifespans of “new” twins. Then they compute heritability on this simulated data. Why calculate heritability on simulated data instead of real data? Well, their mathematical model contains an “extrinsic mortality” parameter, which is supposed to reflect the chance of death due to all non-aging-related factors like accidents, murder, or infectious disease. They assume that the chance someone dies from any of this stuff is constant over people, constant over time, and that it accounts for almost all deaths for people aged between 15 and 40. The point of building the simulator is that it’s possible to change extrinsic mortality. That’s what’s happening in the purple curves in the above figure. For a range of different extrinsic mortality parameters, they simulate datasets of twins. For each simulated dataset, they estimate heritability just like with a real dataset. Note that the purple curves above nearly hit the black circles. This means that if they run their simulator with extrinsic mortality set to match reality, they get heritability numbers that line up with what we get from real data. That suggests their mathematical model isn’t totally insane. If you decrease extrinsic mortality, then you decrease the non-genetic randomness in how long people live. So heritability goes up. Hence, the purple curves go up as you go to the left. My explanation of this paper relies on some amount of guesswork. For whatever reason, Science has decided that papers should contain almost no math, even when the paper in question is about math. So I’m mostly working from an English description. But even that description isn’t systematic. There’s no place in the paper where they clearly lay out all the things they did, in order.
Instead, you get little hints, sort of randomly distributed throughout the paper. There’s an appendix, which the paper confidently cites over and over. But if you actually read the appendix, it’s just more disconnected explanations of random things, except now with equations set in glorious Microsoft Word format. Now, in most journals, authors write everything. But Science has professional editors. Given that every single statistics-focused paper in Science seems to be like this, we probably shouldn’t blame the authors of this one. (Other than for their decision to publish in Science in the first place.) I do wonder what those editors are doing, though. I mean, let me show you something. Here’s the first paragraph where they start to actually explain what they did, from the first page: See that h(t,θ) at the end? What the hell is that, you ask? That’s a good question, because it was never introduced before this and is never mentioned again. I guess it’s just supposed to be f(t,θ) , which is fine. (I yield to none in my production of typos.) But if paying journals ungodly amounts of money brought us to this, of what use are those journals? Probably most people don’t need this much detail and should skip this section. For everyone else, let’s start over one last time. The “normal” way to estimate heritability is by looking at correlations between different kinds of twins. Intuitively, if the lifespans of identical twins are more correlated than the lifespans of fraternal twins, that suggests lifespan is heritable. And it turns out that one estimator for heritability is “twice the difference between the correlation among identical twins and the correlation among fraternal twins, all raised together.” There are other similar estimators for other kinds of twins. These normally say lifespan is perhaps 20% to 35% heritable. This paper created an equation to model the probability a given person will die at a given age.
The parameters of the equation vary from person to person, reflecting that some of us have DNA that predisposes us to live longer than others. But the idea is that the chances of dying are fairly constant between the ages of 15 and 40, after which they start increasing. This equation contains an “extrinsic mortality” parameter. This is meant to reflect the chance of death due to all non-aging-related factors like accidents or murder, etc. They assume this is constant. (Constant with respect to people and constant over time.) Note that they don’t actually look at any data on causes of death. They just add a constant risk of death that’s shared by all people at all ages to the equation, and then they call this “extrinsic mortality”. Now remember, different people are supposed to have different parameters in their probability-of-death equations. To reflect this, they fit a Gaussian distribution (bell curve) to the parameters with the goal of making it fit with historical data. The idea is that if the distribution over parameters were too broad, you might get lots of people dying at 15 or living until 120, which would be wrong. If the distribution were too concentrated, then you might get everyone dying at 43, which would also be wrong. So they find a good distribution, one that makes the ages people die in simulation look like the ages people actually died in historical data. Right! So now they have: an equation that’s supposed to reflect the probability a given person dies at a given age, and a distribution over the parameters of that equation that’s supposed to produce population-wide death ages that look like those in real historical data. Before moving on, I remind you of two things: The event of a person dying at a given age is random. But the probability that this happens is assumed to be fixed and determined by genes and genes alone. Now they simulate different kinds of twins. To simulate identical twins, they just draw parameters from their parameter distribution, assign those parameters to two different people, and then let them randomly die according to their death equation. (Is this getting morbid?)
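The paper's exact death equation isn't printed here, but the description — a roughly constant hazard until about age 40 that rises afterwards, plus a constant extrinsic term — matches the shape of the classic Gompertz–Makeham hazard. As a hedged sketch only:

```latex
% Hedged sketch of a hazard matching the description above; the
% paper's actual functional form is not given in this post.
% m = constant extrinsic mortality; a, b = person-specific
% (genetically determined) parameters for the intrinsic, aging part.
\lambda(t) \;=\; \underbrace{m}_{\text{extrinsic}} \;+\; \underbrace{a\,e^{\,b t}}_{\text{intrinsic, rises with age}}
```

Setting m to zero is exactly the "no accidents, no murder" counterfactual the simulator explores.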
To simulate fraternal twins, they do the same thing, except instead of giving the two twins identical parameters, they give them correlated parameters, to reflect that they share 50% of their DNA. How exactly do they create those correlated parameters? They don’t explain this in the paper, and they’re quite vague in the supplement. As far as I can tell, they sample two sets of parameters from their parameter distribution such that the parameters are correlated at a level of 0.5. Now they have simulated twins. They can simulate them with different extrinsic mortality values. If they lower extrinsic mortality, heritability of lifespan goes up. If they lower it to zero, heritability goes up to around 50%. Almost all human traits are partly genetic and partly due to the environment and/or random. If you could change the world and reduce the amount of randomness, then of course heritability would go up. That’s true for life expectancy just like for anything else. So what’s the point of this paper? There is a point! Sure, obviously heritability would be higher in a world without accidents or murder. We don’t need a paper to know that. But how much higher? It’s impossible to say without modeling and simulating that other world. Our twin datasets are really old. It’s likely that non-aging-related deaths are lower now than in the past, because we have better healthcare and so on. This means that the heritability of lifespan for people alive today may be larger than it was for the people in our twin datasets, some of whom were born in 1870. We won’t know for sure until we’re all dead, but this paper gives us a way to guess. Have I mentioned that heritability depends on society? And that heritability changes when society changes? And that heritability is just a ratio and you should stop trying to make it be a non-ratio because only-ratio things cannot be non-ratios? This is a nice reminder. Honestly, I think the model the paper built is quite clever.
Nothing is perfect, but I think this is a pretty good run at the question of “how high would the heritability of lifespan be if extrinsic mortality were lower?” I only have two objections. The first is to the Science writing style. This is a paper describing a statistical model. So shouldn’t there be somewhere in the paper where they explain exactly what they did, in order, from start to finish? Ostensibly, I think this is done in the left-hand column on the second page, just with little detail because Science is written for a general audience. But personally I think that description is the worst of all worlds. Instead of giving the high-level story in a coherent way, it throws random technical details at you without enough information to actually make sense of them. Couldn’t the full story with the full details at least be in the appendix? I feel like this wasted hours of my time, and that if someone wanted to reproduce this work, they would have almost no chance of doing so from the description given. How have we as a society decided that we should take our “best” papers and do this to them? But my main objection is this: At first, I thought this was absurd. The fact that people die in car accidents is not a “confounding factor”. And pretending that no one dies in car accidents does not “address” some kind of bias. That’s just computing heritability in some other world. Remember, heritability is not some kind of Platonic form. It is an observational statistic . There is no such thing as “true” heritability, independent of the contingent facts of our world. But upon reflection, I think they’re trying to say something like this: Heritability of intrinsic human lifespan is about 50% when extrinsic mortality is adjusted to be closer to modern levels. The problem is: I think this is… not true? Here are the actual heritability estimates in the paper, varying by dataset (different plots), the cutoff year (colors), and extrinsic mortality (x-axis).
When extrinsic mortality goes down, heritability goes up. So the obvious question is: What is extrinsic mortality in modern people? This is a tricky question, because “extrinsic mortality” isn’t some simple observational statistic. It is a parameter in their model. (Remember, they never looked at causes of death.) So it’s hard to say, but they seem to suggest that extrinsic mortality in modern people is 0.001 / year, or perhaps a bit less. The above figures have the base-10 logarithm of extrinsic mortality on the x-axis. And the base-10 logarithm of 0.001 is -3. But if you look at the curves when the x-axis is -3, the heritability estimates are not 50%. They’re more like 35-45%, depending on the particular model and age cutoff. So here’s my suggested title: Heritability of intrinsic human lifespan is about 40% when extrinsic mortality is adjusted to modern levels, according to a simulation we built. There might be a reason I don’t work at Science.

ava's blog 2 months ago

how i assess my infection risk

For this month's Bearblog Carnival topic by Moose, I'll talk about how I assess my infection risk! A little while ago, I wrote " yes, i still wear a mask ". In it, I laid out what thought goes into wearing it, why I do it, and when I don't. No one can completely isolate themselves forever and live in a virus-free vacuum, and I want to go out and experience life while still reducing my risk of severe infection. In some contexts, wearing a mask all the time, or at all, is not feasible. In a restaurant, I'll have to eat and drink with the mask off, and if I stay at another person's place, I can't wear it 24/7. While playing Magic the Gathering in a local game store, it can get pretty crowded and loud all around me, and it's better for people to understand me and be able to read my lips when I announce my moves. Socializing in general is easier when the mask is off, as people tend to avoid you, restrict talking to you, or struggle to understand you when you wear a mask in public. So how do I decide when to wear one, when I don't, and which events to stay away from and which ones to attend? Of course, every situation is different, but I try to consider: Is it a place where lots of sick people gather? This one is obvious: doctor's offices, hospitals, retirement homes, etc. Is there a current huge infection wave going on? The worst ones seem to always be around October and November, and then again January to March. It usually happens around holidays and other festivities: autumn break, Christmas, NYE, our Karneval/Fasching, etc. Season should be considered; summer is usually safer than winter. Related to the above point: Is it an event where people travel far to meet and don't want to miss out because of "a little cough"? This doesn't just apply because people don't wanna cancel Christmas, but also to the concert they paid too much for and have been waiting a year for. Any once-a-year-or-less event, or anything that warranted a really expensive, non-refundable ticket, is bound to have a high number of sick people. You don't have to see or hear it; those are just the overt cases. A surprisingly high number went there having "had a swollen throat this morning, but now it's gone!" Is it a necessity even while sick? People, even while sick, usually need to use public transport or go grocery shopping, for example. Is it a place where people feel forced to go because of money and guilt? This one mostly hits places of employment, especially if it's a place that's understaffed, has shifts, struggles to find replacements, offers no home office, and so on. You'll have more sick workers in gastronomy than in software engineering. Will there be many children? Are the people there exposed to lots of children, or do they have children of their own? Children are magnets for infections, and a surprising number of parents don't want to stay home when their child is sick, and cannot find anyone else to watch the kid, so they take it with them anyway. How full will it realistically be? The fewer people in a closed space, the lower the chances of a sick person being there, or of getting a huge viral load. How long will I be there? The shorter, the less risky.
To illustrate it via an example, I was at my local game store playing Magic the Gathering yesterday. I had a mask with me just in case someone very sick was gonna show up, but I didn't end up wearing it. I considered the following points: It's not a place where sick people usually are in high numbers. I'd also say gamers and nerds in general would rather stay home sick and play a game on the couch than come in anyway. We're currently in a big infection wave, so I'm cautious. But: there are no rare, expensive, or high-demand events going on. Nothing to miss out on, easy to cancel, no wasted money, and you can just postpone it or participate next week as usual. It's not a necessity to be there; it's actually quite optional to buy booster packs or play a round. Except for the employees, there's no financial or employment-related reason to show up. Children might come in for the Pokemon stuff, but should be rare. In my experience, parents drag their sick children to the necessities like grocery stores, not a game store. It won't be that full, as there are no big events, and it's a Friday afternoon/evening. And I'll be there for 2-4 hours, depending on game length.
We'll see if that worked out for me, but I think the assumptions were reasonable. Other brief examples: I'm wearing a mask during small team meetings (4-5 people) at work when one attendee was sick within the last 4 days, but otherwise I don't. When we have large department and sub-department meetings (which tend to go 3-4 hours with about 50-100+ attending), I wear one. I'd choose to cancel on a big family gathering in January or February. Adopting this kind of strategic thinking could really help anyone reduce their time spent sick, not just immunocompromised or immunosuppressed people. You'll avoid some events, choose to schedule yours at a different time, or you'll show up with a mask or have one in your bag. It doesn't mean you'll miss out all the time :) Reply via email Published 31 Jan, 2026

0 views
iDiallo 2 months ago

You Don't Understand Things Better, You Just Feel Smarter

After watching a Veritasium video, I feel a surge of intellectual confidence. I feel smarter. Whether it's a video on lasers or quantum physics, it seems like I have a better grasp on the subject. I finally get it. Derek and his crew just have a way of simplifying complex ideas, unraveling their mysteries, and lifting your confidence as each term is explained. Every video they release is logically sound. Almost as if I could have come to the same conclusions if I'd spent as much time on the subject as they did. Except I only spent 30 minutes watching the video. And now, whenever someone brings up quantum physics or lasers, the bells ring in my head. "Oh, I know quantum physics." And then I try to explain. "So it's all about uncertainty. You have the qubit, and it can be zero or one... or both. Wait no, that's quantum computers. Quantum physics is more about strings. When things are much smaller than atoms, the rules are different. And then one particle can affect another particle, even at a large distance. Even if it's on the other side of the universe. Trust me, it's very interesting. You just have to watch the video." You should watch the video indeed. The problem is that Derek understood the subject and explained it confidently. What we do is watch passively and pick up on his confident tone. It's the illusion of understanding, an afterglow of a compelling narrative delivered with authority. Teaching or explaining is a reality check for our knowledge. If you want to know how well you understood a subject, try explaining it. You'll quickly see the difference between your confidence and your competence. With YouTube videos, you at least have to watch the whole video to develop that confidence. But with ChatGPT, you just type a question, and an authoritative voice presents you with all the information you need to win an argument. This argument is usually delivered via screenshot and shared on social media as proof of whatever statement is being defended. 
LLMs have accelerated this confidence without necessarily improving our knowledge. For the most part, when people quote an LLM, they don't read past the part that agrees with them. It's even better when it's a Google AI overview that highlights just the part you need and can never be properly cited. The medium is the message. With LLMs, we seek answers, not knowledge. It's almost as if the time spent researching is directly proportional to the amount of information we retain. If you watch a 60-second fast-paced video that teaches cooking hacks on TikTok, it probably won't turn you into a cook. You'll be entertained, though, and have the confidence of a cook. When you ask an LLM to explain a complex subject, you can read it through and understand it in that one sitting. But you probably won't grasp it well enough to apply it or explain it to someone else. But fear not, it's not all doom and gloom. You can learn about quantum physics from a video. First, try explaining it to see if you understand it. If not, rewatch it actively. Take notes, read more articles, immerse yourself in the subject. Turn entertainment into education by doing something with the information. Sketch it on paper; talk about it with peers interested in the subject. If you're going to use an LLM to understand something, read all the material and ask follow-up questions that you can revisit in the future. The point is to turn that initial confidence into active participation that motivates you to learn more. But most importantly, avoid the temptation of the medium. When you watch a fascinating lecture on YouTube, the most natural thing to do next is to watch another fascinating video on YouTube. Avoid this at all costs, because there are infinite videos to watch. Having confidence after watching interesting content isn't a bad thing. But it should be used as motivation to dig deeper. Otherwise, it's just vanity.

0 views
Corrode 2 months ago

Gama Space

Space exploration demands software that is reliable, efficient, and able to operate in the harshest environments imaginable. When a spacecraft deploys a solar sail millions of kilometers from Earth, there’s no room for memory bugs, race conditions, or software failures. This is where Rust’s robustness guarantees become mission-critical. In this episode, we speak with Sebastian Scholz, an engineer at Gama Space, a French company pioneering solar sail and drag sail technology for spacecraft propulsion and deorbiting. We explore how Rust is being used in aerospace applications, the unique challenges of developing software for space systems, and what it takes to build reliable embedded systems that operate beyond Earth’s atmosphere. CodeCrafters helps you become proficient in Rust by building real-world, production-grade projects. Learn hands-on by creating your own shell, HTTP server, Redis, Kafka, Git, SQLite, or DNS service from scratch. Start for free today and enjoy 40% off any paid plan by using this link . Gama Space is a French aerospace company founded in 2020 and headquartered in Ivry-sur-Seine, France. The company develops space propulsion and orbital technologies with a mission to keep space accessible. Their two main product lines are solar sails for deep space exploration using the sun’s infinite energy, and drag sails—the most effective way to deorbit satellites and combat space debris. After just two years of R&D, Gama successfully launched their satellite on a SpaceX Falcon 9. The Gama Alpha mission is a 6U cubesat weighing just 11 kilograms that deploys a large 73.3m² sail. With 48 employees, Gama is at the forefront of making space exploration more sustainable and accessible. Sebastian Scholz is an engineer at Gama Space, where he works on developing software systems for spacecraft propulsion technology. His work involves building reliable, safety-critical embedded systems that must operate flawlessly in the extreme conditions of space. 
Sebastian brings expertise in systems programming and embedded development to one of the most demanding environments for software engineering.

Links from the episode:

- GAMA-ALPHA - The demonstration satellite launched in January 2023
- Ada - Safety-focused programming language used in aerospace
- probe-rs - Embedded debugging toolkit for Rust
- hyper - Fast and correct HTTP implementation for Rust
- Flutter - Google's UI toolkit for cross-platform development
- UART - Very common low-level communication protocol
- Hamming Codes - Error-correcting codes used to fix bit flips
- Rexus/Bexus - European project for sub-orbital experiments by students
- Embassy - The EMBedded ASsYnchronous framework
- CSP - The Cubesat Space Protocol
- std::num::NonZero - A number in Rust that can't be 0
- std::ffi::CString - A null-byte-terminated string
- Rust in Production: KSAT - Our episode with Vegard about using Rust for ground station operations
- Rust in Production: Oxide - Our episode with Steve, mentioning Hubris
- Hubris - Oxide's embedded operating system
- ZeroCopy - Transmute data in place without allocations
- std::mem::transmute - Unsafe function to treat a memory region as a different type
- Gama Space Website
- Gama Space on LinkedIn
- Gama Space on Crunchbase
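As a small aside on one of the show-note items: the std::num::NonZero types encode "this value is never 0" in the type system, which also lets the compiler use the forbidden zero bit pattern to represent None, so the Option wrapper costs no extra space. A minimal sketch (my own illustration, not code from the episode):

```rust
use std::mem::size_of;
use std::num::NonZeroU32;

fn main() {
    // Constructing a NonZeroU32 from zero fails at runtime...
    assert!(NonZeroU32::new(0).is_none());
    let port = NonZeroU32::new(8080).expect("8080 is non-zero");
    assert_eq!(port.get(), 8080);

    // ...and because 0 is a forbidden bit pattern, Option<NonZeroU32>
    // needs no extra tag byte: it is exactly the size of a plain u32.
    assert_eq!(size_of::<Option<NonZeroU32>>(), size_of::<u32>());
}
```

This "niche optimization" is why embedded code can use Option-rich APIs without paying for them in memory layout.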

0 views
Evan Hahn 2 months ago

A mental math heuristic to convert between Fahrenheit and Celsius

I sometimes have to convert between Fahrenheit and Celsius. The actual formula is hard to do in my head, but someone once told me a useful approximation: to convert from Celsius to Fahrenheit, double it and add 30; to convert from Fahrenheit to Celsius, do the reverse: subtract 30 and halve it. For example, if it's 12ºC, this heuristic would return 54ºF: (12 × 2) + 30 = 54. The actual value is not far off: 53.6ºF. To convert the other way, 68ºF becomes 19ºC: (68 − 30) ÷ 2 = 19. Again, this is close to the actual answer of 20ºC. These are pretty close because the numbers we're using (2 and 30) are pretty close to their counterparts in the real formula (1.8 and 32). This isn't exact, of course. But it's come in handy! Now if we could only get the US to use the metric system …
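The heuristic and the exact formula it approximates fit in a few lines of code; a quick sketch of the arithmetic above (function names are my own):

```rust
// The heuristic: double and add 30 (C to F); subtract 30 and halve (F to C).
fn c_to_f_approx(c: f64) -> f64 {
    c * 2.0 + 30.0
}

fn f_to_c_approx(f: f64) -> f64 {
    (f - 30.0) / 2.0
}

// The exact formula, for comparison: F = C × 1.8 + 32.
fn c_to_f_exact(c: f64) -> f64 {
    c * 1.8 + 32.0
}

fn main() {
    println!("12ºC ≈ {}ºF (exact: {}ºF)", c_to_f_approx(12.0), c_to_f_exact(12.0)); // 54 vs 53.6
    println!("68ºF ≈ {}ºC", f_to_c_approx(68.0)); // 19 (exact: 20)
}
```

The two approximations are only true inverses of each other, not of the exact formula, which is why the round trip lands back where it started while each direction drifts slightly from the true value.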

1 view