Posts in Crypto (20 found)

WhatsApp Encryption, a Lawsuit, and a Lot of Noise

It’s not every day that we see mainstream media get excited about encryption apps! For that reason, the past several days have been fascinating, since we’ve been given not one but several unusual stories about the encryption used in WhatsApp. Or more accurately, if you read the story, a pretty wild allegation that the widely-used app lacks encryption. This is a nice departure from our ordinary encryption-app fare on this blog, which mainly deals with people (governments, usually) claiming that WhatsApp is too encrypted. Since there have now been several stories on the topic, and even folks like Elon Musk have gotten into the action, I figured it might be good to write a bit of an explainer about it.

Our story begins with a new class action lawsuit filed by the esteemed law firm Quinn Emanuel on behalf of several plaintiffs. The lawsuit notes that WhatsApp claims to use end-to-end encryption to protect its users, but alleges that all WhatsApp users’ private data is secretly available through a special terminal on Mark Zuckerberg’s desk. Ok, the lawsuit does not say precisely that — but it comes pretty darn close:

The complaint isn’t very satisfying, nor does it offer any solid evidence for any of these claims. Nonetheless, the claims have been heavily amplified online by various predictable figures, such as Elon Musk and Pavel Durov, both of whom (coincidentally) operate competing messaging apps. Making things a bit more exciting, Bloomberg reports that US authorities are now investigating Meta, the owner of WhatsApp, based on these same allegations. (How much weight you assign to this really depends on what you think of the current Justice Department.)

If you’re really looking to understand what’s being claimed here, the best way to do it is to read the complaint yourself: you can find it here (PDF).
Alternatively, you can save yourself a lot of time and read the next five sentences, which contain pretty much the same amount of factual information. Here’s the nut of it:

The Internet has mostly divided itself into people who already know these allegations are true, because they don’t trust Meta and of course Meta can read your messages — and a second set of people who also don’t trust Meta but mostly think this is unsupported nonsense. Since I’ve worked on end-to-end encryption for the last 15+ years, and I’ve specifically focused on the kinds of systems that drive apps like WhatsApp, iMessage and Signal, I tend to fall into the latter group. But that doesn’t mean there’s nothing to pay attention to here.

Hence: in this post I’m going to talk a little bit about the specifics of WhatsApp encryption; what an allegation like this would imply (technically); and how we can verify whether things like this are true (or fail to verify them, as the case may be). More generally I’ll try to add some signal to the noise.

Full disclosure: back in 2016 I consulted for Facebook (now Meta) for about two weeks, helping them with the rollout of encryption in Facebook Messenger. From time to time I also talk to WhatsApp engineers about new features they’re considering rolling out. I don’t get paid for doing this; they once asked me if I’d consider signing an NDA and I told them I’d rather not.

Instant messaging apps are pretty ancient technology. Modern IM dates from the 1990s, but the basic ideas go back to the days of time sharing. Only two major things have really changed in messaging apps since the days of AOL Instant Messenger: the scale, and the security of these systems. In terms of scale, modern messaging apps are unbelievably huge. At the start of the period covered by the lawsuit, WhatsApp already had more than one billion monthly active users. Today that number sits closer to three billion. This is almost half the planet. In many countries, WhatsApp is more popular than phone calls.
The downside of vast scale is that apps like this can also collect data at similarly large scale. Every time you send a message through an app like WhatsApp, you’re sending that data first to a server run by WhatsApp’s parent company, Meta. That server then stores it and eventually delivers it to your intended recipients. Without great care, this can result in enormous amounts of real-time message collection and long-term storage. The risks here are obvious. Even if you trust your provider, that data can potentially be accessed by hackers, state-sponsored attackers, governments, and anyone who can compel or gain access to Meta’s platforms.

To combat this, WhatsApp’s founders Jan Koum and Brian Acton took a very opinionated approach to the design of their app. Beginning in 2014 (around the time they were acquired by Facebook), the app began rolling out end-to-end (E2E) encryption based on the Signal protocol. This design ensures that all messages sent through Meta/WhatsApp infrastructure are encrypted, both in transit and on Meta’s servers. By design, the keys required to decrypt messages exist only on a user’s device (the “end” in E2E), ensuring that even a malicious platform provider (or a hacker of Meta’s servers) should never be able to read the content of your messages.

Due to WhatsApp’s huge scale, the adoption of end-to-end encryption on the platform was a very big deal. Not only does WhatsApp’s encryption prevent Meta from mining your chat content for advertising or AI training, the deployment of this feature made many governments frantic with worry. The main reason was that even law enforcement can’t access encrypted messages sent through WhatsApp (at least, not through Meta itself). To the surprise of many, Koum and Acton made a convert of Facebook’s CEO, Mark Zuckerberg, who decided to lean into new encryption features across many of the company’s products, including Facebook Messenger and (optionally) Instagram DMs.
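To make the “keys live only on the endpoints” idea concrete, here is a deliberately toy sketch. It uses a one-time pad rather than the Signal protocol’s actual X3DH/Double Ratchet machinery, and every name in it is illustrative — the point is only that the relay in the middle handles ciphertext it cannot read:

```python
import secrets

def xor_bytes(data, pad):
    """Toy one-time-pad encryption/decryption (XOR with a shared pad)."""
    return bytes(a ^ b for a, b in zip(data, pad))

# key material generated and held on the two devices, never on the server
shared_pad = secrets.token_bytes(64)

message = b"meet at noon"
ciphertext = xor_bytes(message, shared_pad)   # the only thing the relay sees

assert ciphertext != message                  # server-side copy is just noise
assert xor_bytes(ciphertext, shared_pad) == message  # recipient decrypts locally
```

A real deployment replaces the pre-shared pad with an authenticated key exchange between the two phones, but the trust boundary is the same: the server relays and stores ciphertext, and decryption keys exist only on the endpoints.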
The decision to expand encryption has been controversial, and it has not been cost-free for Meta/Facebook. The deployment of encryption in Meta’s products has created enormous political friction with the governments of the US, UK, Australia, India and the EU. Each government is concerned about the possibility that Meta will maintain large numbers of messages they cannot access, even with a warrant. For example, in 2019 a multi-government “open letter” signed by US AG William Barr urged Facebook not to expand end-to-end encryption without the addition of “lawful access” mechanisms:

So that’s the background. Today WhatsApp describes itself as serving on the order of three billion users worldwide, and end-to-end encryption is on by default for personal messaging. They haven’t once been ambiguous about what they claim to offer. That means that if the allegations in the lawsuit proved to be true, this would be one of the largest corporate cover-ups since DuPont.

The best thing about end-to-end encryption — when it works correctly — is that the encryption is performed in an app on your own phone. In principle, this means that only you and your communication partner have the keys, and all of those keys are under your control. While this sounds perfect, there’s an obvious caveat: while the app runs on your phone, it’s a piece of software. And the problem with most software is that you probably didn’t write it.

In the case of WhatsApp, the application software is written by a team inside of Meta. This wouldn’t necessarily be a bad thing if the code was open source, and outside experts could review the implementation. Unfortunately WhatsApp is closed-source, which means that you cannot easily download the source code to see if encryption is performed correctly, or performed at all. Nor can you compile your own copy of the WhatsApp app and compare it to the version you download from the Play or App Store.
(This is not a crazy thing to hope for: you actually can do those things with open-source apps like Signal.) While the company claims to share its code with outside security reviewers, they don’t publish routine security reviews. None of this is really unusual — in fact, it’s extremely normal for most commercial apps! But it means that as a user, you are to some extent trusting that WhatsApp is not running a long con on its three billion users. If you’re a distrustful, paranoid person (or if you’re a security engineer) you’d probably find this need for trust deeply unappealing.

Given the closed-source nature of WhatsApp, how do we know that WhatsApp is actually encrypting its data? The company is very clear in its claims that it does encrypt. But if we accept the possibility that they’re lying: is it at least possible that WhatsApp contains a secret “backdoor” that causes it to secretly exfiltrate a second copy of each message (or perhaps just the encryption keys) to a special server at Meta?

I cannot definitively tell you that this is not the case. I can, however, tell you that if WhatsApp did this: (1) they would get caught, (2) the evidence would almost certainly be visible in WhatsApp’s application code, and (3) it would expose WhatsApp and Meta to exciting new forms of ruin.

The most important thing to keep in mind here is that Meta’s encryption happens on the client application, the one you run on your phone. If the claims in this lawsuit are true, then Meta would have to alter the WhatsApp application so that plaintext (unencrypted) data would be uploaded from your app’s message database to some infrastructure at Meta, or else the keys would. And this should not be some rare, occasional glitch. The allegations in the lawsuit state that this applied to nearly all users, and for every message ever sent by those users since they signed up. Those constraints would tend to make this a very detectable problem.
Even if WhatsApp’s app source code is not public, many historical versions of the compiled app are available for download. You can pull one down right now and decompile it using various tools, to see if your data or keys are being exfiltrated. I freely acknowledge that this is a big project that requires specialized expertise — you will not finish it by yourself in a weekend (as commenters on HN have politely pointed out to me). Still, reverse-engineering WhatsApp’s client code is entirely possible, and various parts of the app have indeed been reversed several times by various security researchers. The answer really is knowable, and if there is a crime, then the evidence is almost certainly* right there in the code that we’re all running on our phones. If you’re going to (metaphorically) commit a crime, doing it in a forensically-detectable manner is very stupid.

Several online commenters have pointed out that there are loopholes in WhatsApp’s end-to-end encryption guarantees. These include certain types of data that are explicitly shared with WhatsApp, such as business communications (when you chat with a company on WhatsApp, for example). In fairness, both WhatsApp and the lawsuit are very clear about these exceptions.

These exceptions are real and important. WhatsApp’s encryption protects the content of your messages; it does not necessarily protect information about who you’re talking to, when messages were sent, and how your social graph is structured. WhatsApp’s own privacy materials distinguish protected personal message content from these other categories of data.

Another big question for any E2E encrypted messaging app is what happens after the encrypted message arrives at your phone and is decrypted. For example, if you choose to back up your phone to a cloud service, this often involves sending plaintext copies of your messages to a server that is not under your control.
Users really like this, since it means they can re-download their chat history if they lose a phone. But it also presents a security vulnerability, since those cloud backups are not always encrypted. Unfortunately, WhatsApp’s backup situation is complex. Truthfully, it’s more of a Choose Your Own Adventure novel:

Finally, WhatsApp has recently been adding AI features. If you opt into certain AI tools (like message summaries or writing help), some content may be sent off-device for processing by a system WhatsApp calls “Private Processing,” which is built around Trusted Execution Environments (TEEs). WhatsApp’s user-facing overview is here, Meta’s technical whitepaper is here, and Meta’s engineering post is here. This capability should not reveal plaintext data to Meta, either; more importantly, it’s brand new and much more recent than the allegations in the lawsuit.

As a technologist, I love to write about the weaknesses and limitations of end-to-end encryption in practice. But it’s important to be clear: none of these loopholes can account for what’s being alleged in this lawsuit. This lawsuit is claiming something much more deliberate and ugly.

When I’m speaking to laypeople, I like to keep things simple. I tell them that cryptography allows us to trust our machines. But this isn’t really an accurate statement of what cryptography does for us. At the end of the day, all cryptography can really do is extend trust. Encryption protocols like Signal allow us to take some anchor-point we trust — a machine, a moment in time, a network, a piece of software — and then spread that trust across time and space. Done well, cryptography allows us to treat hostile networks as safe places; to be confident that our data is secure when we lose our phones; or even to communicate privately in the presence of the most data-hungry corporation on the planet. But for this vision of cryptography to make sense, there has to be trust in the first place.
It’s been more than forty years since Ken Thompson delivered his famous talk, “Reflections on Trusting Trust,” which pointed out how there is no avoiding some level of trust. Hence the question here is not “should we trust someone?” That decision is already taken. It’s “should we trust that WhatsApp is not running the biggest fraud in technology history?” The decision to trust WhatsApp on this point seems perfectly reasonable to me, in the absence of any concrete evidence to the contrary. In return for making that assumption, you get to communicate with the three billion people who use WhatsApp. But this is not the only choice you can make! If you don’t trust WhatsApp (and there are reasonable non-conspiratorial arguments not to), then the correct answer is to move to another application; I recommend Signal.

* Without leaving evidence in the code, WhatsApp could try to compromise the crypto purely on the server side, e.g., by running man-in-the-middle attacks against users’ key exchanges. This has even been proposed by various government agencies, as a way to attack targeted messaging app users. The main problem with this approach is the need to “target”. Performing mass-scale MITM against WhatsApp users in the manner described by this complaint would require (1) disabling the security code system within the app, and (2) hoping that nobody ever notices that WhatsApp servers are distributing the wrong keys. This seems very unlikely to me.

The plaintiffs (users of WhatsApp) have all used WhatsApp for years. Through this entire period, WhatsApp has advertised that it uses end-to-end encryption to protect message content, specifically, through the use of the Signal encryption protocol. According to unspecified “whistleblowers”, since April 2016, WhatsApp (owned by Meta) has been able to read the messages of every single user on its platform, except for some celebrities.
If you use native device backup on iOS or Android devices (for example, iCloud device backup or the standard Android/Google backup), your WhatsApp message database may be included in a device backup sent to Apple or Google. Whether that backup is end-to-end encrypted depends on what your provider supports and what you’ve enabled. On Apple platforms, for example, iCloud backups can be end-to-end encrypted if you enable Apple’s Advanced Data Protection feature, but won’t be otherwise. Note that in both cases, the backup data ends up with Apple or Google and not with Meta as the lawsuit alleges. But this still sucks.

WhatsApp has its own backup feature (actually, it has more than one way to do it). WhatsApp supports end-to-end encrypted backups that can be protected with a password, a 64-digit key, and (more recently) passkeys. WhatsApp’s public docs are here and WhatsApp’s engineering writeup of the key-vault design is here. Conceptually, this is an interesting compromise: it reduces what cloud providers can read, but it introduces new key-management and recovery assumptions (and, depending on configuration, new places to attack). Importantly, even if you think backups are a mess — and they often are — this is still a far cry from the effortless, universal access alleged in this lawsuit.
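The footnote’s point about security codes is worth making concrete: when two users compare the “security code” their apps display, a server silently substituting keys becomes visible. Here is a toy version of that idea — this is not Signal’s actual safety-number algorithm (which iteratively hashes each identity key separately), just an illustration of why key comparison defeats silent MITM:

```python
import hashlib

def security_code(pub_a, pub_b):
    """Derive a short displayable code from two identity public keys.
    Sorting the keys makes the code order-independent, so both
    phones render the same string of digits."""
    digest = hashlib.sha256(b"|".join(sorted([pub_a, pub_b]))).digest()
    # render 60 decimal digits in groups of five, like a messenger UI
    digits = str(int.from_bytes(digest, "big")).zfill(60)[:60]
    return " ".join(digits[i:i + 5] for i in range(0, 60, 5))

alice, bob, mallory = b"alice-pk", b"bob-pk", b"mallory-pk"

# honest key exchange: both phones display the same code
assert security_code(alice, bob) == security_code(bob, alice)

# MITM: each victim is actually paired with Mallory's key, so the codes
# the two phones display no longer match each other
assert security_code(alice, mallory) != security_code(bob, mallory)
```

Doing this at mass scale would require either shipping an app that lies about the code (back to detectable client-side evidence) or hoping no pair of users ever compares codes.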

Sean Goedecke 1 month ago

Crypto grifters are recruiting open-source AI developers

Two recently-hyped developments in AI engineering have been Geoff Huntley’s “Ralph Wiggum loop” and Steve Yegge’s “Gas Town”. Huntley and Yegge are both respected software engineers with a long pedigree of actual projects. The Ralph loop is a sensible idea: force infinite test-time compute by automatically restarting Claude Code whenever it runs out of steam. Gas Town is a platform for an idea that’s been popular for a while (though in my view has never really worked): running a whole village of LLM agents that collaborate with each other to accomplish a task.

So far, so good. But Huntley and Yegge have also been posting about $RALPH and $GAS, which are cryptocurrency coins built on top of the longstanding Solana cryptocurrency and the Bags tool, which allows people to easily create their own crypto coins. What does $RALPH have to do with the Ralph Wiggum loop? What does $GAS have to do with Gas Town? From reading Huntley and Yegge’s posts, it seems like what happened was this:

So what does $GAS have to do with Gas Town (or $RALPH with Ralph Wiggum)? From a technical perspective, the answer is nothing. Gas Town is an open-source GitHub repository that you can clone, edit and run without ever interacting with the $GAS coin. Likewise for Ralph. Buying $GAS or $RALPH does not unlock any new capabilities in the tools. All it does is siphon a little bit of money to Yegge and Huntley, and increase the value of the $GAS or $RALPH coins.

Of course, that’s why these coins exist in the first place. This is a new variant of an old “airdropping” cryptocurrency tactic. The classic problem with “memecoins” is that it’s hard to give people a reason to buy them, even at very low prices, because they famously have no staying power. That’s why many successful memecoins rely on celebrity power, like Eric Adams’ “NYC Token” or the $TRUMP coin. But how do you convince a celebrity to get involved in your grift business venture? This is where Bags comes in.
Bags allows you to nominate a Twitter account as the beneficiary (or “fee earner”) of your coin. The person behind that Twitter account doesn’t have to agree, or even know that you’re doing it. Once you accumulate a nominal market cap (for instance, by moving a bunch of your own money onto the coin), you can then message the owner of that Twitter account and say “hey, all these people are supporting you via crypto, and you can collect your money right now if you want!” Then you either subtly hint that promoting the coin would cause that person to make more money, or you wait for them to realize it themselves 1 . Once they start posting about it, you’ve bootstrapped your own celebrity coin.

This system relies on your celebrity target being dazzled by receiving a large sum of free money. If you came to them before the money was there, they might ask questions like “why wouldn’t people just directly donate to me?”, or “are these people who think they’re supporting me going to lose all their money?”. But in the warm glow of a few hundred thousand dollars, it’s easy to think that it’s all working out excellently.

Incidentally, this is why AI open-source software engineers make such great targets. The fact that they’re open-source software engineers means that (a) a few hundred thousand dollars is enough to dazzle them 2 , and (b) their fans are technically-engaged enough to be able to figure out how to buy cryptocurrency. Working in AI also means that there’s a fresh pool of hype to draw from (the general hype around cryptocurrency being somewhat dry by now). On top of that, the open-source AI community is fairly small. Yegge mentions in his post that he wouldn’t have taken the offer seriously if Huntley hadn’t already accepted it.

If you couldn’t tell, I think this whole thing is largely predatory. Bags seems to me to be offering crypto-airdrop-pump-and-dumps-as-a-service, where niche celebrities can turn their status as respected community figures into cold hard cash.
The people who pay into this are either taken in by the pretense that they’re sponsoring open-source work (in a way orders of magnitude less efficient than just donating money directly), or by the hope that they’re going to win big when the coin goes “to the moon” (which effectively never happens). The celebrities will make a little bit of money, for their part in it, but the lion’s share of the reward will go to the actual grifters: the insiders who primed the coin and can sell off into the flood of community members who are convinced to buy.

1. Some crypto trader created a “$GAS” coin via Bags, configuring it to pay a portion of the trading fees to Steve Yegge (via his Twitter account)
2. That trader, or others with the same idea, messaged Yegge on LinkedIn to tell him about his “earnings” (currently $238,000), framing it as support for the Gas Town project
3. Yegge took the free money and started posting about how exciting $GAS is as a way to fund open-source software creators

1. Bags even offers a “Did You Get Bagged? 💰🫵” section in their docs, encouraging the celebrity targets to share the coin, and framing the whole thing as coming from “your community”. ↩
2. This isn’t a dig - that amount of money would dazzle me too! I only mean that you wouldn’t be able to get Tom Cruise or MrBeast to promote your coin with that amount of money. ↩


Crypto scammers are using my name. Don't fall for it.

I woke up this morning to discover that, without my knowledge or consent, someone had created an ICO using the Superpowers name and my Twitter username. They emailed me excitedly to tell me that I'd already made $800 in royalties. I asked them to take it down and to take my name off of it. They said they'd done that. But...it's still there. I have nothing to do with this ICO. You should not put money into it. It is a scam.

flak 2 months ago

using lava lamps to break RSA

It’s well known, in some circles, that the security of the internet depends on a wall of lava lamps to protect us from hackers. It is perhaps less well known that hackers can turn around and use this technology to augment their attacks.

background

Trillions of dollars in transactions are protected by the RSA algorithm. The security of this algorithm depends on the difficulty of factoring a large number, assumed to be infeasible when the prime numbers are selected randomly. To this end, Cloudflare has a wall of lava lamps to keep us all safe. The chaotic perturbations in the lava flow can be used to generate random numbers. However, Cloudflare has fallen victim to a regrettably common SEV-CRIT random number generator vulnerability. They have exposed the internal state of their system to the internet. Countless puff pieces include pictures of the lava wall, allowing attackers to recreate its internal state. There are even tubefluencers with videos of the wall in action. In this paper, we present a novel technique that uses pictures of a wall of lava lamps to calculate prime factors. As the chaotic behavior of lava lamps depends on quantum effects, it is not possible to replicate these results with solely conventional computing techniques.

method

We download a picture of lava lamps. (Optionally using a local image.) We reduce the entropy of the image using the SHA-512 algorithm to 512 bits. We further reduce it to 128 bits using the MD5 algorithm. A further reduction to 32 bits is performed with the CRC32 algorithm. This concludes stage one, entropy compaction and stabilization. We next proceed to stage two, factor extraction. We use a three bit extractor mask (hexadecimal: 0x00000007) to examine the low three bits, looking for either of the prime numbers 3 or 5. If found, that’s our result. Otherwise we right shift by one position and repeat.

results

We achieve a success rate exceeding 99.9% when factoring 15. Larger values such as 21 are also factored 66% of the time. Even more challenging targets such as 35 can be factored with a 33% success rate. Ongoing experimentation suggests this technique is capable of factoring 46% of all positive integers. We hope to improve on this result with further refinement to the factor extraction stage. Theoretical calculations suggest a three bit extractor may be sufficient to achieve a 77% success rate. Refer to figure 1.

figure 1

The author’s lava factor tool factoring 15.

acknowledgements

The author is indebted to Peter Gutmann’s pioneering work in dog factorization. No lava lamps were permanently damaged in the conduct of this experiment.

source

Source code is provided in accordance with the principles of knowledge sharing. Commercial use prohibited.
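In the spirit of knowledge sharing, the two-stage pipeline from the method section can be sketched in a few lines. (This is a reconstruction faithful to the described method, not necessarily the author’s exact tool — and, to be clear, it factors nothing in any cryptographically meaningful sense; that is the joke.)

```python
import hashlib
import zlib

def lava_factor(image_bytes):
    """Run the two-stage lava factorization pipeline on a lamp picture.
    Returns 3 or 5 if a "factor" is extracted, else None."""
    # stage one: entropy compaction and stabilization
    h = hashlib.sha512(image_bytes).digest()   # 512 bits
    h = hashlib.md5(h).digest()                # 128 bits
    state = zlib.crc32(h)                      # 32 bits
    # stage two: factor extraction with the three-bit mask 0x00000007
    for _ in range(30):
        candidate = state & 0x7
        if candidate in (3, 5):
            return candidate
        state >>= 1                            # shift and try again
    return None
```

Since nearly every 32-bit value contains a 3 or a 5 in some three-bit window, the tool “factors” 15 with the advertised 99.9%+ success rate.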

Danny McClelland 2 months ago

ZeroNet: The Web Without Servers

I’ve been exploring ZeroNet recently, a peer-to-peer web platform that’s been around since 2015 but still feels like a glimpse of what the internet could be. It’s not mainstream, and it’s not trying to be. But for anyone who cares about decentralisation and censorship-resistance, it’s worth understanding.

ZeroNet is a decentralised network where websites exist without traditional servers. Instead of requesting a page from a server somewhere, your browser downloads it from other users who already have it. Think BitTorrent, but for websites. Once you’ve visited a site, you become a host for it too. The more people visit, the more resilient the site becomes. There’s no company to take to court. No single point of failure. No domain registrar that can be pressured into pulling the plug.

The technical bits are surprisingly elegant. ZeroNet uses Bitcoin cryptography for identity. Each site has a unique address derived from a public/private key pair. The site owner signs updates with their private key, and everyone can verify those signatures. This means content can be updated, but only by whoever holds the key. No passwords, no accounts, no centralised authentication.

Content is distributed using BitTorrent’s protocol. When you visit a ZeroNet site, you’re downloading it from peers and simultaneously seeding it to others. Sites are essentially signed archives that propagate across the network.

For privacy, ZeroNet can route traffic through Tor. It’s optional, but turning it on means your IP address isn’t visible to other peers. Combined with the fact that there’s no central server logging requests, the privacy properties are genuinely interesting.

My interest in ZeroNet ties directly into my broader views on privacy. I’m not naive about the limitations of decentralised systems, or the fact that censorship resistance can protect content that probably shouldn’t be protected. But there’s something valuable in understanding how these networks function.
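The “signed archive” model above can be sketched in a few lines: a manifest lists a digest for every file, the site owner signs the manifest, and peers verify files against it. The field names below are illustrative rather than ZeroNet’s exact content.json schema, and the signature check over the manifest itself (a Bitcoin-style ECDSA signature in ZeroNet) is elided:

```python
import hashlib

def make_manifest(files):
    """Build a content.json-style manifest mapping filename -> SHA-512 digest.
    (The site owner would then sign this manifest with the site's key.)"""
    return {"files": {name: hashlib.sha512(data).hexdigest()
                      for name, data in files.items()}}

def verify_files(manifest, files):
    """Check that every file received from peers matches the signed manifest."""
    for name, digest in manifest["files"].items():
        data = files.get(name)
        if data is None or hashlib.sha512(data).hexdigest() != digest:
            return False
    return True

site = {"index.html": b"<h1>hello</h1>", "style.css": b"body {}"}
manifest = make_manifest(site)

assert verify_files(manifest, site)            # honest peer
tampered = dict(site, **{"index.html": b"<h1>evil</h1>"})
assert not verify_files(manifest, tampered)    # modified content is rejected
```

This is why peers can serve each other’s content without trusting each other: only the manifest signature needs to trace back to the site’s key, and everything else is verified by hash.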
The centralised web has become remarkably fragile. A handful of companies control most of the infrastructure, and they’re increasingly subject to political and legal pressure. That’s sometimes appropriate. Nobody wants to defend genuinely harmful content. But the tools of control, once built, don’t stay confined to their intended purpose.

ZeroNet represents a different architecture entirely. It’s not about evading accountability, it’s about distributing it. Instead of trusting a company to host your content and hoping they don’t change their terms of service, you trust mathematics. The trade-offs are real: slower access, no search engines worth mentioning, and a user experience that assumes technical competence. But those are engineering problems, not fundamental limitations.

I’m not suggesting everyone should abandon the normal web for ZeroNet. That would be impractical and unnecessary. But understanding how decentralised alternatives work feels increasingly important. The architecture of the tools we use shapes what’s possible, and diversity in that architecture is probably healthy.

For now, I’m treating ZeroNet as an experiment. Something to explore and learn from rather than rely on. But in a world where digital infrastructure is more contested than ever, it’s useful to know that alternatives exist.

Thanks to ポテト for pointing me towards ZeroNet.

ptrchm 4 months ago

How to Accept Crypto Payments in Rails

I wanted to see what it would take to implement crypto payments in PixelPeeper. With Coinbase Commerce, it’s surprisingly easy. To accept crypto, you’ll need an ETH wallet address and a Coinbase Commerce account. Coinbase provides a nice hosted checkout page, where users have multiple options to pay (the funds will be converted to USDC). Upon successful payment, Coinbase will collect a small fee and the funds will be sent to your ETH address (there’s a caveat, see the last section).

How does it work?
Create a Charge
Rails Integration
The Woes of Crypto UX
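The core of the integration is a single charge-creation call against the Commerce API, after which you redirect the user to the hosted checkout page the response points at. As a sketch (in Python rather than the post’s Rails/Ruby, with the endpoint and field names taken from Coinbase Commerce’s public docs — verify them against the current API reference before relying on this):

```python
import json
import urllib.request

COMMERCE_CHARGES_URL = "https://api.commerce.coinbase.com/charges"

def build_charge_request(api_key, name, description, amount, currency="USD"):
    """Build (but don't send) a charge-creation request for the
    Coinbase Commerce API. `fixed_price` pins the charge to a
    specific local-currency amount."""
    payload = {
        "name": name,
        "description": description,
        "pricing_type": "fixed_price",
        "local_price": {"amount": amount, "currency": currency},
    }
    return urllib.request.Request(
        COMMERCE_CHARGES_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "X-CC-Api-Key": api_key,   # your Commerce API key
        },
        method="POST",
    )

# Sending the request (urllib.request.urlopen) returns JSON whose data
# includes a hosted checkout URL to redirect the buyer to; payment
# confirmation then arrives asynchronously via a signed webhook.
```

In a Rails app the same POST would live in a controller or service object, with the webhook handler marking the order paid.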

Jim Nielsen 4 months ago

Research Alt

Jeremy imagines a scenario where you’re trying to understand how someone cut themselves with a blade. It’d be hard to know how they cut themselves just by looking at the wound. But if you talk to the person, not only will you find out the reason, you’ll also understand their pain. But what if, hear me out here, instead we manufactured tiny microchips with sensors and embedded them in all blades? Then we program them such that if they break human flesh, we send data — time, location, puncture depth, current blade sharpness, etc. — back to our servers for processing with AI. This data will help us understand — without bias, because humans can’t be trusted — how people cut themselves. Thus our research scales much more dramatically than talking to individual humans, widening our impact on humanity whilst simultaneously improving our product (and bottom line)! I am accepting venture funds for this research. You can send funds to this bitcoin address: .

flowtwo.io 6 months ago

Broken Money

In some sense, the circularity of the financial system is almost poetic; it represents how dependent we all are on one another. However, it’s also very fragile. Everything is a claim of a claim of a claim, reliant on perpetual motion and continual growth to not collapse. — Lyn Alden, Broken Money, Ch. 23, para. 24

When I started paying more attention to Bitcoin, I felt a desire to develop a better understanding of money in general. It seemed necessary if I wanted to really "get" Bitcoin and why it was so important. I looked for a book that could explain how money really worked, in an accessible format. I couldn't find anything at the time. That was around 2019, before Broken Money was written.

Broken Money was published by Lyn Alden in 2023. It's a really approachable text that describes the history of money, different forms of money, and the underlying qualities that make money useful.

In his article “On the Origin of Money,” Menger described that an ideal money transports value across both space and time, meaning that it can be transported across distances efficiently or saved for spending in the future. — Alden, Ch. 8, para. 20

Money is a system that efficiently transports value across space and time. I'd add "people" as a 3rd dimension of transport. I think that's a good definition to start with.

There's no way to argue that the author isn't biased to some extent. She's both personally and professionally invested in the success of Bitcoin and therefore is going to make the problems with the current financial system as pronounced as possible. With that said, I don't think she's making this stuff up. Most of the explanations and theories presented were believable to me, and the evidence is hard to ignore. But we have to accept that macroeconomics is incredibly complex and the best we can do is have theories.
Modern Monetary Theory, Austrian economics, the Chicago School of Economics, girl math... these are all popular schools of thought that attempt to explain how money and the economy work. But the economy is a complex, dynamic system of forces, and no one theory can perfectly explain it.

Alden spends a good deal of time in the book writing about the history of money and how we arrived at our present-day financial framework. Bitcoin actually isn't mentioned until chapter 20. This dissection of money really highlighted many of the flaws and limitations in our global monetary system. I'm going to focus on the parts I found most interesting and, frankly, concerning.

Today, every fiat currency on Earth is inflationary. This just means the value of a "dollar" (in the general sense) decreases over time. It's highly debatable whether this is good for a society, and who it's good for. Alden makes the case that inflation is counter-intuitive to how prices should work—but it's a necessary evil that the government enforces so our highly leveraged financial system doesn't collapse.

A 2% inflation target means that prices on average will double every 35 years. This is interesting, because ongoing productivity gains should make prices lower over time, not higher. Central bankers do everything in their power to make sure prices keep going up. — Alden, Ch. 25, para. 69

The problem is that constant change to the "price" of a dollar makes it hard to make long-term financial plans. Prices are the only mechanism for communicating information about value, so if these prices change over time in unpredictable ways, then we can't properly reason about long-term saving and spending decisions. In general, inflationary money rewards debtors (people who owe money) and incentivizes spending. At first that sounds fine, since the most financially vulnerable people in society are usually those in debt.
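Alden's doubling-time figure is easy to check. A quick back-of-the-envelope sketch in Python, assuming nothing beyond the 2% target quoted above:

```python
import math

def doubling_time(annual_inflation: float) -> float:
    """Years until prices double at a constant annual inflation rate."""
    return math.log(2) / math.log(1 + annual_inflation)

# At the common 2% target, prices double in roughly 35 years,
# matching the figure Alden quotes.
print(round(doubling_time(0.02)))  # → 35
```

The same function shows how sensitive this is to the rate: at 4% inflation, prices double in under 18 years.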
But the total amount of debt owed by the lowest earners in society doesn't even scratch the surface compared to the debt owed by the largest corporations, and even the government itself. So really, inflation rewards those at the top of the economy, it debases people's savings, and it incentivizes consumption and spending. It's a roller-coaster we have no choice but to ride.

The breakdown of the modern banking system was eye-opening for me. There's a distinction between the base money supply, which is all the money that actually exists, and the broad money supply, which is the total amount of dollars in circulation in the economy. Maybe you are as surprised as I was to find out these aren't the same thing. In essence, base money is all the dollars that have been created by the government's central bank, either by printing money or by issuing treasury reserves. Broad money is what you get if you added up every individual and corporate bank account balance in the country.

For both base money and broad money, most countries currently work the same way as the United States. A country’s central bank manages the base money of the system, and the commercial banking system operates the larger amount of broad money that represents an indirect and fractionally reserved claim to this base money. — Alden, Ch. 24, para. 28

What I took from this is that every dollar you see in your bank account does not represent a whole "dollar loan" that you'd be able to go claim anywhere. It's a fraction of a fraction of a claim on a real dollar somewhere in a huge system of hierarchical ledgers.

Money lent from one institution can be deposited at another institution and immediately (and fractionally) lent from there, resulting in the double-counting, triple-counting, quadruple counting, and so forth, of deposits relative to base money. At that point, people have far more claims for gold than the amount of gold that really exists in the system, and so in some sense, their wealth is illusory.
— Alden, Ch. 13, para. 21

Although this is exactly how fractional reserve banking is designed to work, it still makes me feel uneasy. Everyone is just loaning assets they don't own, buying and selling these loans, and in general just creating money out of thin air based on false promises. It's a shaky foundation that our entire society depends on.

The biggest flaw I see with modern economic systems is how much power is centralized—a small group of individuals makes all the decisions on how much money to print, what the cost of borrowing should be, and other monetary policies that influence millions of people. Although this is mainly a consequence of democratically elected leadership, the fact that humans make these macroeconomic decisions on behalf of everyone seems fallible at best, and downright corruptible at worst. It only takes one unethical or despotic leader to destroy a national currency:

To a less extreme extent — as I describe later in this book — this is sadly what happens throughout many developing countries today: people constantly save in their local fiat currency that, every generation or so, gets dramatically debased, with their savings being siphoned off to the rulers and wealthy class. — Alden, Ch. 8, para. 83

It's seen time and time again in developing countries, sadly. Even in non-developing countries, economic policy tends to favour those who already have money and, by extension, political power. That means big corporations and their wealthy owners.

Over a 2-year period from the start of 2020 to the start of 2022, the broad money supply increased by approximately 40%. Printing money in this way devalued savers, bondholders, and in general people who didn’t receive much aid, and rewarded debtors and those who received large amounts of aid (keeping in mind that the biggest recipients of aid were corporations and business owners). — Alden, Ch. 27, para. 18

This sort of hair-trigger, reactionary decision-making is kind of unavoidable with the system of government we've devised. Democracy works in extremes and pushes those at the top to make rash decisions to appease voters and to maintain the appearance of leadership by making change for the sake of change. In essence, what I'm saying is that human decision-making is too flawed and influenced by emotion to be the way we make these decisions.

People being at the centre of national fiscal policy is bad enough, but in the case of the United States, it's even worse. Because the U.S. Dollar is the world's reserve currency, the decisions made by members of the Federal Reserve and Treasury Department affect the entire world. The buying power of every other currency is measured relative to USD, so if the U.S. government decided to print a ton of money and give it to themselves, they would effectively be stealing from the rest of the world. This seems like an unfair advantage for one country to have. I know life isn't fair, but I believe we, as a global society, could come to a consensus on a way to transact across borders that doesn't depend on any specific country's economy.

The most shocking part about this system is that it's not actually beneficial for America long-term!

It artificially increases the purchasing power of the U.S. dollar. The extra monetary premium reduces the United States’ export competitiveness and gradually hollows out the United States’ industrial base. To supply the world with the dollars it needs, the United States runs a persistent trade deficit. The very power granted to the reserve currency issuer is also what, over the course of decades, begins to poison it and render it unfit to maintain its status. — Alden, Ch. 21, para. 8

This is certainly debatable, but it makes sense intuitively.
If one country is allowed to issue currency which is globally accepted, and it's the only country with this ability, then its currency will carry an extra monetary premium above all others. This "built-in" economic premium granted to the American people allows them, collectively as a society, to rest on their laurels and not have to work as hard. In other words, America has the option to "buy instead of build" because it is so wealthy. This is the fundamental reason for the trade deficit it has with almost every other country.

Learning about this was highly relevant in 2025, in the midst of the trade war the current U.S. President has launched. You could view the tariffs he's introduced as a way to neutralize this monetary premium and force a stagnant economy to start building and manufacturing in a way it hasn't needed to since the Bretton Woods system was established.

After a lengthy explanation of the history of money, and then several chapters bashing the current monetary system, Alden finally introduces Bitcoin to the reader. I won't go into much detail here, as there are plenty of resources better than me that will explain Bitcoin's core concepts, if you're interested. I'd also recommend reading this book. It explains Bitcoin really well.

I'll briefly summarize how Bitcoin attempts to solve the problems I discussed above. Bitcoin is a deflationary currency. It has a fixed supply of 21 million total coins, which means its purchasing power will trend upwards over time. In the best-case scenario, this means everyone will continuously get richer as we all equally benefit from improvements in production efficiency and technological innovation. In the worst case, it means society comes to a halt as people delay purchases indefinitely, waiting for their savings to be worth more. Either way, I believe a globally recognized, alternative currency model would be a healthy counter-balance to our existing fiat currency systems.
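The 21-million cap falls out of Bitcoin's issuance schedule: a 50-coin block subsidy that halves every 210,000 blocks. A small sketch (ignoring satoshi rounding, which makes the real cap very slightly lower):

```python
def total_supply(halvings: int = 33) -> float:
    """Sum the block subsidies across halving eras (50 BTC, halved each era)."""
    subsidy, total = 50.0, 0.0
    for _ in range(halvings):
        total += 210_000 * subsidy  # blocks per era times per-block subsidy
        subsidy /= 2
    return total

# The geometric series converges to just under 21,000,000 coins.
print(total_supply())
```

The cap is not a decree but a sum: 210,000 × 50 × (1 + 1/2 + 1/4 + …) approaches 21 million and never reaches it.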
At its core, Bitcoin is a bearer asset. Ownership of Bitcoin is instantly verifiable via the blockchain ledger. Money, in its physical form, is similar in that it's a bearer asset. But the dollars in your bank account don't represent ownership at all. They're a promise by your bank to give you that amount of dollars if you ask for it. This promise can't always be fulfilled. Bitcoin is unique in that it's purely digital, yet it has the same qualities as physical dollars.

Finally, Bitcoin—as a monetary system—is completely decentralized. No single entity or government has any control over its rules. And its rules are decided algorithmically and are predictable for the rest of time, in theory. Nothing can change about Bitcoin unless the change is accepted by a majority of participants in the system. Couple that with the fact that Bitcoin's value is directly tied to the network size and its popularity as an accepted form of currency, and its incentive structure is designed to ensure the network will remain fair and accessible to everyone. Otherwise, no one will want to use it. Is it perfect? No... but it's fairer than how monetary policy is defined today.

Broken Money is a sobering look at the state of money today. It traces the origins of money throughout human history—from the rai stones of Yap Island to the post-COVID global inflation surge of the 2020s. It was well researched and well written. I don't think Bitcoin is going to overtake the fiat currency systems of the world. But I believe it's going to be around for a long time, acting as a hedge against governments' centralized control of money. In the worst case, it will act as a store of value akin to digital gold. In the best case, we will continue to innovate and build technology on top of Bitcoin that expands its utility in both familiar and novel ways.
In a recent post on Nostr, Alden makes the case that Bitcoin is something like an open-source, decentralized Fedwire, the settlement system that underpins the entire U.S. banking industry. This feels like an apt comparison to me—mainly because the Bitcoin network can support basically the same transaction throughput as Fedwire can. Maybe one day Bitcoin will become the global settlement system for an entirely new class of banks and financial service providers.

In my view, open-source decentralized money that empowers individuals, that is permissionless to use, and that allows for a more borderless flow of value, is both powerful and ethical. The concept presents an improvement to the current financial system in many ways and provides a check on excessive power, which makes it worth exploring and supporting. — Alden, Ch. 41, para. 79

Filippo Valsorda 7 months ago

Encrypting Files with Passkeys and age

Typage (on npm) is a TypeScript 1 implementation of the age file encryption format. It runs with Node.js, Deno, Bun, and browsers, and implements native age recipients, passphrase encryption, ASCII armoring, and supports custom recipient interfaces, like the Go implementation. However, running in the browser affords us some special capabilities, such as access to the WebAuthn API. Since version 0.2.3, Typage supports symmetric encryption with passkeys and other WebAuthn credentials, and a companion age CLI plugin allows reusing credentials on hardware FIDO2 security keys outside the browser. Let’s have a look at how encrypting files with passkeys works, and how it’s implemented in Typage.

Passkeys are synced, discoverable WebAuthn credentials. They’re a phishing-resistant, standards-based authentication mechanism. Credentials can be stored in platform authenticators (such as the end-to-end encrypted iCloud Keychain), in password managers (such as 1Password), or on hardware FIDO2 tokens (such as YubiKeys, although these are not synced). I am a strong believer in passkeys, especially when paired with email magic links, as a strict improvement over passwords for average users and websites. If you want to learn more about passkeys and WebAuthn, I can’t recommend Adam Langley’s A Tour of WebAuthn enough.

The primary functionality of a WebAuthn credential is to cryptographically sign an origin-bound challenge. That’s not very useful for encryption. However, credentials with the prf extension can also compute a Pseudo-Random Function while producing an “assertion” (i.e. while logging in). You can think of a PRF as a keyed hash (and indeed for security keys it’s backed by the FIDO2 hmac-secret extension): a given input always maps to the same output, without the secret there’s no way to compute the mapping, and there’s no way to extract the secret. Specifically, the WebAuthn PRF takes one or two inputs and returns a 32-byte output for each of them.
That lets “relying parties” implement symmetric encryption by treating the PRF output as a key that’s only available when the credential is available. Using the PRF extension requires User Verification (i.e. PIN or biometrics). You can read more about the extension in Adam’s book. Note that there’s no secure way to do asymmetric encryption: we could use the PRF extension to encrypt a private key, but then an attacker that observes that private key once can decrypt anything encrypted to its public key in the future, without needing access to the credential. Support for the PRF extension landed in Chrome 132, macOS 15, iOS 18, and 1Password versions from July 2024.

To encrypt an age file to a new type of recipient, we need to define how the random file key is encrypted and encoded into a header stanza. Here’s a stanza that wraps the file key with an ephemeral FIDO2 PRF output. The first argument is a fixed string to recognize the stanza type. The second argument is a 128-bit nonce 2 that’s used as the PRF input. The stanza body is the ChaCha20Poly1305 encryption of the file key using a wrapping key derived from the PRF output.

Each credential assertion (which requires a single User Presence check, e.g. a YubiKey touch) can compute two PRFs. This is meant for key rotation, but in our use case it’s actually a minor security issue: an attacker who compromised your system but not your credential could surreptitiously decrypt an “extra” file every time you intentionally decrypt or encrypt one. We mitigate this by using both PRF outputs to derive the wrapping key.

The WebAuthn PRF inputs are composed of a domain separation prefix, a counter, and the nonce. The two 32-byte PRF outputs are concatenated and passed to HKDF-Extract-SHA-256 with a fixed salt to derive the ChaCha20Poly1305 wrapping key. That key is used with a zero nonce (since it’s used only once) to encrypt the file key.
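A sketch of the derivation just described, with a plain HMAC standing in for the authenticator's PRF. The domain-separation prefix and HKDF salt below are placeholders, not typage's actual constants, and the real scheme goes on to wrap the file key with ChaCha20Poly1305 rather than stopping at the key:

```python
import hashlib
import hmac
import os

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract (RFC 5869) is just HMAC with the salt as the HMAC key.
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def authenticator_prf(credential_secret: bytes, prf_input: bytes) -> bytes:
    # Stand-in for the WebAuthn PRF (hmac-secret on security keys);
    # in reality the secret never leaves the credential.
    return hmac.new(credential_secret, prf_input, hashlib.sha256).digest()

def wrapping_key(credential_secret: bytes, nonce: bytes) -> bytes:
    # Two PRF inputs: domain-separation prefix || counter || per-file nonce.
    # The prefix and salt are illustrative placeholders.
    inputs = [b"example/prefix" + bytes([counter]) + nonce for counter in (1, 2)]
    out1, out2 = (authenticator_prf(credential_secret, i) for i in inputs)
    # Concatenate both 32-byte outputs and extract the 32-byte wrapping key.
    return hkdf_extract(b"example-salt", out1 + out2)

secret, nonce = os.urandom(32), os.urandom(16)
key = wrapping_key(secret, nonce)
assert len(key) == 32 and key == wrapping_key(secret, nonce)  # deterministic
```

Because the per-file nonce feeds the PRF inputs, two different files derive unrelated wrapping keys even under the same credential.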
This age recipient format has two important properties: per-file hardware binding and unlinkability (detailed in the notes below).

Now that we have a format, we need an implementation. Enter Typage 0.2.3. The WebAuthn API is pretty complex, at least in part because it started as a way to expose U2F security keys before passkeys were a thing, and grew organically over the years. However, Typage’s passkey support amounts to less than 300 lines, including a simple implementation of CTAP2’s CBOR subset.

Before any encryption or decryption operation, a new passkey must be created with a credential-creation call. It invokes the WebAuthn API with a random user handle to avoid overwriting existing keys, asks the authenticator to store a discoverable passkey, and of course enables the PRF extension. Passkeys not generated by Typage can also be used if they have the extension enabled.

To encrypt or decrypt a file, you instantiate a WebAuthn recipient or identity, which implement the new recipient and identity interfaces. The recipient and identity implementations call the WebAuthn API with the PRF inputs to obtain the wrapping key and then parse or serialize the format we described above. Aside from the key name, the only option you might want to set is the relying party ID. This defaults to the origin of the web page but can also be set to a parent domain. Credentials are available to subdomains of the RP ID, but not to parents.

Since passkeys are usually synced, it means you can e.g. encrypt a file on macOS and then pick up your iPhone and decrypt it there, which is pretty cool. Also, you can use passkeys stored on your phone with a desktop browser thanks to the hybrid BLE protocol. It should even be possible to use the AirDrop passkey sharing mechanism to let other people decrypt files!

You can store passkeys (discoverable or “resident” credentials) on recent enough FIDO2 hardware tokens (e.g. YubiKey 5). However, storage is limited and support still not universal. The alternative is for the hardware token to return all the credential’s state encrypted in the credential ID, which the client will need to give back to the token when using the credential.
This is limiting for web logins because you need to know who the user is (to look up the credential ID in the database) before you invoke the WebAuthn API. It can also be desirable for encryption, though: decrypting files this way requires both the hardware token and the credential ID, which can serve as an additional secret key, or a second factor if you’re into factors.

Rather than exposing all the layered WebAuthn nuances through the typage API, or precluding one flow, I decided to offer two profiles: by default, we’ll generate and expect discoverable passkeys, but if the security-key option is passed, we’ll request that the credential not be stored on the authenticator and ask the browser to show UI for hardware tokens.

Credential creation returns an age identity string that encodes the credential ID, relying party ID, and transports as CTAP2 CBOR. 4 This identity string is required for the security key flow, but can also be used as an optional hint when encrypting or decrypting using passkeys. More specifically, the data encoded in the age identity string is a CBOR Sequence of a handful of fields (listed in the notes below).

One more thing… since FIDO2 hardware tokens are easily accessible outside the browser, too, we were able to build an age CLI plugin that interoperates with typage security key identity strings: age-plugin-fido2prf. Since FIDO2 PRF only supports symmetric encryption, the identity string is used both for decryption and for encryption. This was an opportunity to dogfood the age Go plugin framework, which easily turns an implementation of the Go Identity interface into a CLI plugin usable from age or rage, abstracting away all the details of the plugin protocol. The scaffolding turning the importable Identity implementation into a plugin is just 50 lines.

For more details, refer to the typage README and JSDoc annotations. To stay up to date on the development of age and its ecosystem, follow me on Bluesky at @filippo.abyssdomain.expert or on Mastodon at @[email protected].
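To get a feel for how small a "CTAP2 CBOR subset" can be, here is a hypothetical encoder for the identity-string fields, sketched in Python rather than taken from typage; CTAP2's canonical form just requires shortest-length headers:

```python
def head(major: int, n: int) -> bytes:
    # CBOR item header: 3-bit major type plus shortest-form length.
    if n < 24:
        return bytes([(major << 5) | n])
    for ai, size in ((24, 1), (25, 2), (26, 4), (27, 8)):
        if n < 1 << (8 * size):
            return bytes([(major << 5) | ai]) + n.to_bytes(size, "big")
    raise ValueError("length too large")

def encode(item) -> bytes:
    if isinstance(item, int):    # major type 0: unsigned integer
        return head(0, item)
    if isinstance(item, bytes):  # major type 2: byte string
        return head(2, len(item)) + item
    if isinstance(item, str):    # major type 3: text string
        data = item.encode()
        return head(3, len(data)) + data
    if isinstance(item, list):   # major type 4: array
        return head(4, len(item)) + b"".join(encode(x) for x in item)
    raise TypeError(f"unsupported type: {type(item)}")

# A CBOR Sequence is simply the concatenation of the encoded items:
# version, credential ID, RP ID, transports (values here are made up).
identity = encode(1) + encode(b"\x01\x02\x03") + encode("example.com") + encode(["usb", "nfc"])
```

Decoding is symmetric, which is why the whole round trip fits in a couple hundred lines.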
On the last day of this year’s amazing CENTOPASSI motorcycle rallye, we watched the sun set over the plain below Castelluccio, and then rushed to find a place to sleep before the “engines out” time. Found an amazing residence where three cats kept us company while planning the next day.

Geomys, my Go open source maintenance organization, is funded by Smallstep, Ava Labs, Teleport, Tailscale, and Sentry. Through our retainer contracts they ensure the sustainability and reliability of our open source maintenance work and get a direct line to my expertise and that of the other Geomys maintainers. (Learn more in the Geomys announcement.) Here are a few words from some of them!

Teleport — For the past five years, attacks and compromises have been shifting from traditional malware and security breaches to identifying and compromising valid user accounts and credentials with social engineering, credential theft, or phishing. Teleport Identity is designed to eliminate weak access patterns through access monitoring, minimize attack surface with access requests, and purge unused permissions via mandatory access reviews.

Ava Labs — We at Ava Labs, maintainer of AvalancheGo (the most widely used client for interacting with the Avalanche Network), believe the sustainable maintenance and development of open source cryptographic protocols is critical to the broad adoption of blockchain technology. We are proud to support this necessary and impactful work through our ongoing sponsorship of Filippo and his team.

It started as a way for me to experiment with the JavaScript ecosystem, and the amount of time I spent setting up things that we can take for granted in Go such as testing, benchmarks, formatting, linting, and API documentation is… incredible. It took even longer because I insisted on understanding what tools were doing and using defaults rather than copying dozens of config files. The language is nice, but the tooling for library authors is maddening.
I also have opinions on the Web Crypto APIs now. But all this is for another post. ↩

128 bits would usually be a little tight for avoiding random collisions, but in this case we care only about never using the same PRF input with the same credential and, well, I doubt you’re getting any credential to compute more than 2⁴⁸ PRFs. ↩

This is actually a tradeoff: it means we can’t tell the user a decryption is not going to work before asking them the PIN of the credential. I considered adding a recognition tag like the ones being considered for other stanzas. The problem is that the WebAuthn API only lets us specify acceptable credential IDs upfront; there is no “is this credential ID acceptable” callback, so we’d have to put the whole credential ID in the stanza. This is undesirable both for privacy reasons, and because the credential ID (encoded in the identity string) can otherwise function as a “second factor” with security keys. ↩

Selected mostly for ecosystem consistency and because it’s a couple hundred lines to handroll. ↩

The two important properties of the recipient format:

Per-file hardware binding: each file has its own PRF input(s), so you strictly need both the encrypted file and access to the credential to decrypt a file. You can’t precompute some intermediate value and use it later to decrypt arbitrary files.

Unlinkability: there is no way to tell that two files are encrypted to the same credential, or to link a file to a credential ID without being able to decrypt the file. 3

The fields of the identity string’s CBOR Sequence: the version (a fixed value); the credential ID as a byte string; the RP ID as a text string; the transports as an array of text strings.

Neil Madden 8 months ago

Streaming public key authenticated encryption with insider auth security

Note: this post will probably only really make sense to cryptography geeks.

In “When a KEM is not enough”, I described how to construct multi-recipient (public key) authenticated encryption. A naïve approach to this is vulnerable to insider forgeries: any recipient can construct a new message (to the same recipients) that appears to come from the original sender. For some applications this is fine, but for many it is not. Consider, for example, using such a scheme to create auth tokens for use at multiple endpoints: A and B. Alice gets an auth token for accessing endpoints A and B, and it is encrypted and authenticated using the scheme. The problem is, as soon as Alice presents this auth token to endpoint A, that endpoint (if compromised or malicious) can use it to construct a new auth token to access endpoint B, with any permissions it likes. This is a big problem IMO.

I presented a couple of solutions to this problem in the original blog post. The most straightforward is to sign the entire message, providing non-repudiation. This works, but as I pointed out in “Digital signatures and how to avoid them”, signature schemes have lots of downsides and unintended consequences. So I developed a weaker notion of “insider non-repudiation”, and a scheme that achieves it: we use a compactly-committing symmetric authenticated encryption scheme to encrypt the message body, and then include the authentication tag as additional authenticated data when wrapping the data encryption key for each recipient. This prevents insider forgeries, but without the hammer of full-blown outsider non-repudiation, with the problems it brings.

I recently got involved in a discussion on Mastodon about adding authenticated encryption to Age (a topic I’ve previously written about), where abacabadabacaba pointed out that my scheme seems incompatible with streaming encryption and decryption, which is important in Age use-cases as it is often used to encrypt large files.
Age supports streaming for unauthenticated encryption, so it would be useful to preserve this for authenticated encryption too. Doing this with signatures is fairly straightforward: just sign each “chunk” individually. A subtlety is that you also need to sign a chunk counter and a “last chunk” bit to prevent reordering and truncation, but as abacabadabacaba points out, these bits are already in Age, so it’s not too hard. But can you do the same without signatures? Yes, you can, and efficiently too. In this post I’ll show how.

One way of thinking about the scheme I described in my previous blog post is as a kind of designated-verifier signature scheme. (I don’t hugely like this term, but it’s useful here.) That is, we can view the combination of the committing MAC and authenticated KEM as a kind of signature scheme where the signature can only be verified by recipients chosen by the sender, not by third parties. If we take that perspective, then it becomes clear that we can do exactly the same as for a normal signature scheme: simply sign each chunk of the message separately, and include a chunk counter and last-chunk marker in the signature.

How does this work in practice? Firstly, we generate a fresh random data encryption key (DEK) for the message. This is shared between all recipients. We then use this DEK to encrypt each chunk of the message separately using our compactly-committing AEAD. To prevent chunk reordering or truncation we can use the same method as Age: Rogaway’s STREAM construction, which effectively just encodes the chunk counter and last-chunk bit into the AEAD nonce. (Personally, I prefer using a symmetric ratchet instead, but that’s for another post.) This will produce a compactly committing tag for each chunk—typically 16 bytes per chunk (or 32 bytes if we care about malicious senders).
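In Age, the STREAM nonce is an 11-byte big-endian chunk counter plus a final last-chunk byte, filling the 12-byte ChaCha20Poly1305 nonce. A sketch of that layout (per my reading of the age spec, so treat the exact byte positions as illustrative):

```python
def stream_nonce(counter: int, last: bool) -> bytes:
    # 12-byte AEAD nonce: 11-byte big-endian chunk counter plus a
    # last-chunk flag byte. A reordered or truncated chunk gets decrypted
    # under the wrong nonce, so its authentication tag check fails.
    return counter.to_bytes(11, "big") + (b"\x01" if last else b"\x00")

assert len(stream_nonce(0, False)) == 12
assert stream_nonce(0, False) != stream_nonce(1, False)  # reordering detected
assert stream_nonce(2, False) != stream_nonce(2, True)   # early termination detected
```

The key point is that position and finality are bound into every chunk's tag for free, with no extra ciphertext expansion.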
The original scheme I proposed then fed this tag (of which there was only 1) as associated data when wrapping the DEK for each recipient using an authenticated key-wrap algorithm and a per-recipient wrapping-key derived from an authenticated KEM. If the DEK is 32 bytes, and the key-wrapping algorithm produces a 16-byte tag then this outputs 48 bytes per recipient. We can do exactly the same thing for the new scheme, but we only feed the tag from the first chunk as the associated data, producing wrapped keys that commit to the first chunk only. We then simply repeat the process for each subsequent chunk, but as the DEK is unchanged we can leave it empty: effectively just computing a MAC over the commitment for each chunk in turn. In our example, this will produce just a 16-byte output per recipient for each chunk. If we compare this to typical signature schemes that would be used for signing chunks otherwise, we can fit 4 recipient commitments in the same space as a single Ed25519 signature (64 bytes), or 16 recipients in the same space as an RSA-2048 signature. To support such a scheme, the interface of our KEM would need to change to include a new operation that produces an intermediate commitment to a particular chunk tag, with an indication of whether it is the last tag or not. The KEM is then free to reuse the shared secrets derived for each recipient, avoiding the overhead of computing new ones for each chunk. This is an efficiency gain over using a normal digital signature for each chunk. Here is a sketch of what the overall process would look like, to hopefully clarify the ideas presented. Alice is sending a message to Bob and Charlie. The message consists of three “chunks” and is using an authenticated KEM based on X25519. The total space taken is then 128 + 32 + 32 = 192 bytes, and we can remove the 3 original 16-byte AEAD tags, giving a net overhead of just 144 bytes. 
Compare this to signing each chunk with Ed25519, which would need 192 bytes, or RSA-2048, which needs 768 bytes. Decryption then performs the obvious operations: decrypting and recomputing the MAC tag for each chunk using the decapsulated DEK and then verifying the commitment blocks match at the end of each subsequent chunk. This is still very much a sketch, and needs to be reviewed and fleshed out more. But I believe this is quite a neat scheme that achieves streaming authenticated encryption without the need for tricksy little signatureses, and it is potentially much more efficient too.

First, Alice generates a random 32-byte DEK and uses it to encrypt the message, producing tags t1, t2, and t3.

Alice generates a random ephemeral X25519 keypair: (esk, epk). She computes a shared secret with Bob, ss_b, via something like HPKE’s DH-AKEM. Likewise, she computes a shared secret with Charlie, ss_c.

Alice then wraps the DEK from the first step for Bob and Charlie, using a key-wrap algorithm like AES in SIV mode (keyed with ss_b and then ss_c), including t1 as additional authenticated data. She outputs the two wrapped keys plus the ephemeral public key (epk) as the encapsulated key blob. This will be 32 bytes for the epk, plus 48 bytes for each key blob (one for Bob, another for Charlie), giving 128 bytes total.

She then calls a new “commit” operation on the KEM for each subsequent chunk tag: i.e., t2 and t3. This commit operation performs the same as the wrapping step, but with a blank DEK, outputting just a 16-byte SIV for each recipient for a total of 32 bytes per chunk. These commitment blocks can then be appended to each chunk. (In fact, they can replace the normal AEAD tag, saving even more space.)
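Alice's steps can be sketched end to end. In this toy version, HMAC truncated to 16 bytes stands in for both the committing AEAD tags and the SIV key-wrap, and random byte strings stand in for the DH-AKEM shared secrets; it demonstrates the structure and the byte counts, not real security:

```python
import hashlib
import hmac
import os

def mac16(key: bytes, *parts: bytes) -> bytes:
    return hmac.new(key, b"".join(parts), hashlib.sha256).digest()[:16]

def wrap(key: bytes, dek: bytes, aad: bytes) -> bytes:
    # SIV-style sketch: a 16-byte tag over the DEK and AAD, then the DEK
    # XORed with an HMAC-derived pad. Use a real key-wrap (AES-SIV) in practice.
    siv = mac16(key, dek, aad)
    pad = hmac.new(key, b"pad" + siv, hashlib.sha256).digest()
    return siv + bytes(a ^ b for a, b in zip(dek, pad))  # 16 + 32 = 48 bytes

dek = os.urandom(32)                                      # shared data encryption key
ss = {"bob": os.urandom(32), "charlie": os.urandom(32)}   # stand-in AKEM shared secrets
t1, t2, t3 = (os.urandom(16) for _ in range(3))           # committing AEAD chunk tags

# Wrap the DEK for each recipient, binding t1 as additional authenticated data.
key_blob = os.urandom(32) + b"".join(wrap(k, dek, t1) for k in ss.values())

# "Commit" to each later chunk tag: a MAC only, no key material.
commit2 = b"".join(mac16(k, b"\x02", t2) for k in ss.values())
commit3 = b"".join(mac16(k, b"\x03\x01", t3) for k in ss.values())  # \x01 marks last chunk

assert len(key_blob) == 128                # epk (32) + two 48-byte wrapped keys
assert len(commit2) == len(commit3) == 32  # 16 bytes per recipient per chunk
# Total: 128 + 32 + 32 = 192 bytes, matching the byte count above.
```

Each later chunk costs only one short MAC per recipient, which is where the size win over per-chunk signatures comes from.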

Neil Madden 8 months ago

Are we overthinking post-quantum cryptography?

tl;dr: yes, contra thingamajig's law of wotsits. Before the final nail has even been hammered into the coffin of AI, I hear the next big marketing wave is "quantum". Quantum computing promises to speed up various useful calculations, but is also potentially catastrophic to widely-deployed public key cryptography. Shor's algorithm for a quantum computer, if realised, will break the hard problems underlying RSA, Diffie-Hellman, and elliptic curve cryptography—i.e., most crypto used for TLS, SSH and so on. Although "cryptographically-relevant" quantum computers (CRQCs) still seem a long way off (optimistic roadmap announcements and re-runs of previously announced "breakthroughs" notwithstanding), for some applications the risk is already real. In particular, if you are worried about nation-states or those with deep pockets, the threat of "store-now, decrypt-later" attacks must be considered. It is therefore sensible to start thinking about deploying some form of post-quantum cryptography that protects against these threats. But what, exactly? If you are following NIST's post-quantum crypto standardisation efforts, you might be tempted to think the answer is "everything". NIST has now selected multiple post-quantum signature schemes and public key encryption algorithms ("KEMs"), and is evaluating more. Many application and protocol standards are building on top of these with the assumption that post-quantum crypto should either replace all the existing crypto, or else at least ride alongside it everywhere in a "hybrid" configuration. We have proposals for post-quantum certificates, post-quantum ciphersuites for TLS, for SSH, for Signal, and so on. From my view on the sidelines, it feels like many cryptographers are pushing to entirely replace existing "classical" cryptographic algorithms with post-quantum replacements, with the idea that we would turn off the classical algorithms somewhere down the line.
Unfortunately, many of the proposed post-quantum cryptographic primitives have significant drawbacks compared to existing mechanisms, in particular producing outputs that are much larger. For signatures, a state-of-the-art classical signature scheme is Ed25519, which produces 64-byte signatures and 32-byte public keys, while for widely-used RSA-2048 the values are around 256 bytes for both. Compare this to the lowest security strength ML-DSA post-quantum signature scheme, which has signatures of 2,420 bytes (i.e., over 2kB!) and public keys that are also over a kB in size (1,312 bytes). For encryption, the equivalent would be comparing X25519 as a KEM (32-byte public keys and ciphertexts) with ML-KEM-512 (800-byte public keys, 768-byte ciphertexts). What is the practical impact of this? For some protocols, like TLS, the impact is a bit painful but doable, and post-quantum hybrid ciphersuites are already being rolled out. But there is a long tail of other technologies that have yet to make the transition. For example, consider JWTs, with which I am intimately familiar (in a Stockholm Syndrome way). The signature of a JWT is base64-encoded, which adds an extra 33% in size compared to raw binary. So, for Ed25519 signatures, we end up with 86 bytes after encoding. For ML-DSA, the result is 3,227 bytes. If you consider that browsers typically impose a 4kB maximum cookie size per domain, that doesn't leave a lot of room for actual data. And if you wanted to also encrypt that JWT, the base64-encoded content (including the signature) is encrypted and then base64-encoded again, at which point the ML-DSA signature alone is over the 4kB limit. None of this should be taken as an endorsement of JWTs for cookies, or of the design decisions in the JWT specs; it's just an example of a case where replacing classical algorithms with post-quantum equivalents looks extremely hard.
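The size arithmetic above is easy to check. A quick sketch (the helper name is my own; JWT segments use unpadded base64url):

```python
import base64

def b64url_len(n_bytes: int) -> int:
    # Length of the unpadded base64url encoding of n_bytes of data,
    # as used for JWT segments.
    return len(base64.urlsafe_b64encode(bytes(n_bytes)).rstrip(b"="))

print(b64url_len(64))    # Ed25519 signature -> 86 characters
print(b64url_len(2420))  # ML-DSA-44 signature -> 3,227 characters
print(b64url_len(3227))  # encoding the encoded signature again -> 4,303 (> 4kB)
```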
In my own view, given that the threat from quantum computers is at best uncertain and has potentially already stalled, the level of effort we invest in replacing already-deployed crypto with something new needs to be proportional to the risk. In a list of things that keep me up at night as a security engineer, quantum computing would be somewhere on the second or third page. There is, IMO, a not-insignificant chance that CRQCs never materialise, and that after a few years we roll back entirely to pre-quantum cryptographic algorithms because they are just better. For some applications (such as Signal) the risk profile is quite different, and it is right that they have already invested effort into moving to PQC, but I think for most organisations this is not the case. What would a Minimum Viable Post-Quantum Cryptography transition look like? One that protects against the most pressing threats in a way that is minimally disruptive. I believe such a solution would involve making two trade-offs:

Firstly, to commit only to post-quantum confidentiality, and to ignore any threats to authenticity/integrity from quantum computers until it is much more certain that CRQCs are imminent. That is, we transition to (hybrid) post-quantum encryption to tackle the store-now, decrypt-later threat, but ignore post-quantum signature schemes. We will still have classical authentication mechanisms.

Secondly, we implement the absolute bare minimum needed to protect against that store-now, decrypt-later threat: that is, simple encryption with static keys. Any stronger properties, such as forward secrecy or post-compromise recovery, are left to existing pre-quantum algorithms such as elliptic curve crypto. This largely eliminates the need to transfer bulky PQ public keys over the network, as we can share them once (potentially out-of-band) and then reuse them for a long time.

To my eyes, this is the obvious first step to take and is potentially the only step we need to take. But this approach seems at odds with where PQC standardisation is currently heading. For example, if we adopt this approach then code-based schemes such as Classic McEliece seem much more attractive than the alternatives currently being standardised by NIST. The main drawback of McEliece is its massive public keys (261kB for the lowest security parameters, over 1MB for more conservative choices). But in exchange you get much smaller ciphertexts: between 96 and 208 bytes. This is much less than the lattice-based KEMs, and somewhere between elliptic curves and RSA in size. For many applications of JWTs, which already use static or rarely-updated keys, not to mention OAuth, SAML, Age, PGP, etc., this seems like an entirely sensible choice and essentially a low-risk drop-in replacement: continue using pre-quantum signatures or a Diffie-Hellman-based authentication mechanism, and layer Classic McEliece on top. A hybrid X25519-McEliece KEM could use as little as 128 bytes for ciphertext—roughly half the size of a typical RSA equivalent and way less than ML-KEM, and could also support (pre-quantum) authentication as an Authenticated KEM by hybridisation with an existing DH-AKEM, avoiding the need for signatures at all. This is the approach I am taking to PQC in my own upcoming open source project, and it's an approach I'd like to see in JWT-land too (perhaps by resurrecting my ECDH-1PU proposal, which avoids many JWT-specific pitfalls associated with alternative schemes). If there's enough interest perhaps I'll find time to get a draft together.
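To sketch what "layering McEliece on top" might mean mechanically, here is a toy hybrid-KEM combiner. The component KEMs are abstracted away as opaque shared secrets and ciphertexts, and the length-prefixed SHA3 framing is my own illustrative choice, not a standardised construction; a real design should follow a vetted combiner.

```python
import hashlib

def hybrid_shared_secret(ss_x25519: bytes, ss_mceliece: bytes,
                         ct_x25519: bytes, ct_mceliece: bytes) -> bytes:
    # Derive a single 32-byte key from both component shared secrets,
    # also binding both ciphertexts so the output commits to the full
    # transcript. Each field is length-prefixed to avoid ambiguous
    # concatenation. The intent: the combined key stays safe as long as
    # either component KEM remains unbroken.
    h = hashlib.sha3_256()
    for part in (ss_x25519, ss_mceliece, ct_x25519, ct_mceliece):
        h.update(len(part).to_bytes(4, "big"))
        h.update(part)
    return h.digest()
```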

flowtwo.io 10 months ago

Is Solo Bitcoin Mining a Good Investment?

I recently saw a headline about a "solo miner" successfully mining a Bitcoin block and receiving the coveted 3.125₿ mining reward. Actually, there have been a few of these stories in the past couple of months. It always generates a bit of buzz in the Bitcoin community; it's proof that the blockchain is truly decentralized and anyone can participate in it. All you need is an internet connection and a computer. It's also sort of analogous to seeing a headline about someone winning the lottery. But does seeing a story like that ever entice you to go buy a lottery ticket? For most people, the answer is no, because it's common knowledge that the lottery is a bad investment and the odds are stacked against you. This made me wonder: is that the case for Bitcoin mining too? And if so, what variables would turn it from a bad investment into a good one? The reality of Bitcoin mining today is that it's become extremely centralized. Just 4 companies provide over 75% of the current global hashrate, and one company, Foundry USA, owns about 1/3 of the total. This is not ideal—a distributed network is fundamental for ensuring transactions in the ledger are authentic, censorship resistant, and correct. Too much centralization of mining makes the network susceptible to a 51% attack. Therefore, it's important that Bitcoin mining continues to be accessible to the general public. And since the only real motivation for participating is financial, it's crucial for the investment to be profitable, somehow. How does one determine the profitability of mining bitcoin? Mining the next block involves finding a nonce that makes the block's hash fall below a target value, so it's probabilistic in nature. A simple approximation for a probabilistic outcome is to calculate the expected value (EV) of participating in the network, and then compare that with the cost of doing so. This is straightforward to calculate with a few assumptions, which I will explain below.
I'm going to start with calculating the EV of running a single consumer-grade bitcoin miner. This can be considered the "base case" that we can extrapolate from afterwards. There are several companies producing these mining units; I'm going to choose one popular and somewhat recent model: the Bitaxe Gamma 600 Series (specifications below; Source 1, Source 2). I'm based in Canada, so I'm going to use the average cost of electricity in Canada for this calculation. According to energyhub.org, the national average residential cost of power is $0.192 CAD/kWh. I'm also going to assume you run the miner 24 hours a day for 3 years straight. This provides the time frame to amortize the purchase cost of the miner. It also makes the math easier because we don't need to consider the next bitcoin halving event in 2028. Putting it all together, the monthly cost is the amortized hardware ($220 / 36 months ≈ $6.11) plus electricity (0.017 kW × 24 h × 30 days × $0.192/kWh ≈ $2.35). So purchasing and running a Bitaxe Gamma 600 Series will cost you roughly $8.46 a month. Now let's calculate the amount of money you can expect to make. Let's start with the current global hashrate for bitcoin mining. According to CoinWarz, the current hashrate is sitting just below 1 ZettaHash per second (ZH/s). It's not often I get to use a unit prefix like zetta, so that's exciting. Zetta is a trillion billion. The hashrate fluctuates quite a bit and will likely keep increasing in the future, but for this calculation I will assume it's 1 ZH/s. The current mining reward, until 2028, is 3.125₿. At current prices, this equates to $367,900 CAD for every successfully mined block. The Bitcoin network is designed to produce a block every 10 minutes: it algorithmically adjusts the mining difficulty based on the recently seen hashrate to maintain a steady rate of block production. Therefore, we can expect roughly 4320 blocks to be mined every month.
The expected value calculation is simply the chance you successfully mine a given block, multiplied by the value of the block reward, multiplied by the number of blocks mined per month. We can approximate that chance by dividing the hashrate of our Bitaxe by the total hashrate of the network. So, according to the math, you can expect to make $1.91 a month, while spending $8.46 a month, mining bitcoin with a single Bitaxe Gamma 600 Series in Canada. This equates to a return on investment of -77%. Now, just for fun, let's compare this to buying a lottery ticket. Fortunately, someone on GitHub already did the necessary calculations for me (thank you, keitchchhh!). This analysis is specifically for the lotteries available in Ontario, Canada, where I live. The expected values there are based on a single $3 play and depend on the jackpot value. Even at the low end, an EV of $1.40 on a $3 play, your expected return on investment is only -53%, so it's already a better investment than solo bitcoin mining. So, in conclusion, buying a single consumer-grade Bitcoin miner is probably not going to make you rich. You'd be better off playing the LottoMAX. But this raises the question: what do you need to do to make mining profitable? I asked an LLM to make a calculator based on all these variables so you can play around with the numbers and see what sort of changes can turn the ROI positive. Next, I made some graphs showing how all these different variables affect the overall ROI. The data visualization code is available on my GitHub here. For all these graphs, I kept all the other variables the same as our "single-Bitaxe-miner-running-for-3-years-straight-with-average-energy-prices-in-Canada" example above, and varied one input at a time to see the effect on profitability. First, here's the effect of the global hashrate.
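The whole back-of-envelope model fits in a few lines of Python. This is my own sketch of the article's arithmetic, assuming 30-day months:

```python
# Inputs from the article (CAD throughout).
MINER_COST = 220.00          # Bitaxe Gamma 600 Series purchase price
POWER_W = 17                 # power draw in watts
HASHRATE_THS = 1.2           # miner hashrate in TH/s
ELEC_PER_KWH = 0.192         # Canadian residential average
MONTHS = 36                  # 3-year amortisation period
NETWORK_THS = 1e9            # ~1 ZH/s expressed in TH/s
BLOCK_REWARD = 367_900       # 3.125 BTC at current prices
BLOCKS_PER_MONTH = 4320      # one block every 10 minutes, 30-day month

# Monthly cost: amortised hardware plus electricity.
monthly_cost = MINER_COST / MONTHS + (POWER_W / 1000) * 24 * 30 * ELEC_PER_KWH

# Monthly EV: share of network hashrate times blocks times reward value.
monthly_ev = (HASHRATE_THS / NETWORK_THS) * BLOCKS_PER_MONTH * BLOCK_REWARD

roi = (monthly_ev - monthly_cost) / monthly_cost
print(f"cost ${monthly_cost:.2f}/mo, EV ${monthly_ev:.2f}/mo, ROI {roi:.0%}")
# -> cost $8.46/mo, EV $1.91/mo, ROI -77%
```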
At the current levels of ~1000 EH/s, the profitability of your single Bitaxe is basically bottomed out. But the ROI climbs rapidly as the total hashrate decreases (it scales inversely with it), reaching profitability around the 225 EH/s mark. Next, let's look at how the cost of electricity affects the ROI. It doesn't affect it much. Even if electricity were free, the ROI would still be -68%, which suggests that, in this Bitaxe example, most of the cost is coming from the hardware itself and not the power it consumes. Next, what if we suddenly time travelled back to 2009 when the mining reward was 50₿? And let's pretend it still had the same exchange rate it does today. Would we be profitable then? Yes, we would. In fact, we'd only have to travel back to 2012 to receive an awesome 84% return on our investment. Of course, this example is pretty baseless since the mining reward and the overall price of Bitcoin are highly correlated. Speaking of the Bitcoin price, how do changes in its exchange rate with fiat currencies (like the Canadian dollar, in my calculation) affect our ROI? Let's see. It has a positive linear relationship, which might have been obvious to the more statistically inclined readers. If we entered a bull market where the price of 1₿ exceeded $500,000 CAD, our single Bitaxe miner would suddenly be looking like a good investment. But yet again, the price of Bitcoin is not entirely independent of the other variables in our equation: the total hashrate of the network would likely increase as Bitcoin's price rises. That's just how a free market works. Here's the ROI compared against the number of years you keep your Bitaxe miner running (let's just pretend the mining reward stays the same). Even if you kept your single Bitaxe running for 20 years straight, and were thus able to amortize the cost of the hardware over that entire period, you'd still be sitting at a -40% ROI. I guess it's not about how long you play the game either. So what is in our control?
Obviously, it's how big of a mining setup you have! Instead of 1 Bitaxe, let's buy 100! Huh... I guess that isn't going to help much. In fact, it wouldn't help our ROI at all: even if you owned a million Bitaxe Gamma 600 Series units and ran them 24/7 for 3 years, your return would still be -77%. What I've learned is that what really determines your mining profitability are 3 things:

- How much your energy costs
- How much the mining hardware costs
- How energy efficient the miners are, measured in Joules per Terahash computed (J/TH)

The first one is entirely dependent on how you source your electricity for mining. The last two are based on the miner you choose and can be compared nicely using this site: https://www.asicminervalue.com/efficiency For example, the #1 rated miner for efficiency on that site, the Bitmain Antminer S21 XP+ Hyd, boasts a hashrate of 500 TH/s while consuming 5.5 kW of power, which results in an efficiency of 11 J/TH. Compare that with the Bitaxe Gamma, which has an efficiency of 14 J/TH. Admittedly, that doesn't sound like much of a difference, but when you're computing thousands of Terahashes per second, all that energy saving adds up. For instance, if we could lower our energy cost from the national average of $0.192/kWh down to $0.05/kWh, then mining with the Bitmain Antminer S21 XP+ Hyd would result in an ROI of 9%. If we bought a lot of them and got a bulk discount of 15% on the hardware too, suddenly our ROI jumps to 23%. And if we could keep them running for 5 years instead of 3, our expected return on investment is now 71%. So there's an example of a way to make mining profitable. This was a highly simplified analysis. There were a bunch of things I didn't consider:

- Hardware maintenance cost
- Cooling cost (lots of miners generate an insane amount of heat)
- Bulk discounts or buying used hardware
- All the correlations between these variables

So don't take this as financial advice to go buy 100 Antminers and put them in your basement. But I hope you learned something. Let me know in the comments if I made any egregious statistical errors in my analysis, or didn't account for something else important! Feedback is always appreciated. Thanks for reading.

For reference, the Bitaxe Gamma 600 Series specifications used throughout: energy consumption 17 W, hashrate 1.2 TH/s, cost $220 CAD.

Binary Igor 11 months ago

Bitcoin P2P Network: peer discovery, reachability and resilience

Peer-to-Peer (P2P) networks introduce a completely new set of challenges. In the traditional client-server architecture, there is a server and client ... Things work completely differently in P2P networks. These networks consist of equal peers that communicate directly with each other. Their goal is to be as decentralized as possible and not to have any single point of control or failure.

Filippo Valsorda 1 year ago

Benchmarking RSA Key Generation

RSA key generation is both conceptually simple, and one of the worst implementation tasks in the field of cryptography engineering. Even benchmarking it is tricky, and involves some math: here's how we generated a stable but representative "average case" instead of using the ordinary statistical approach. Say you want to generate a 2048-bit RSA key. The idea is that you generate random 1024-bit numbers until you find two that are prime, you call them p and q, and compute N = p × q and d = 65537⁻¹ mod φ(N) 1 (and then some more stuff to make operations faster, but you could stop there). The computation of d is where the RSA magic lies, but today we are focusing on the first part. There is almost nothing special about selecting prime candidates. You draw an appropriately sized random number from a CSPRNG, and to avoid wasting time, you set the least significant bit and the two most significant bits: large even numbers are not prime, and setting the top two guarantees N won't come out too small. Checking if a number x is prime is generally done with the Miller-Rabin test 2, a probabilistic algorithm where you pick a "base" and use it to run some computations on x. It will either conclusively prove x is composite (i.e. not prime), or fail to do so. Figuring out how many Miller-Rabin tests you need to run is surprisingly difficult: initially you will learn the probability of a test failing for a composite is 1/4, which suggests you need 40 rounds to reach 2⁻⁸⁰; then you learn that's only the upper bound for worst-case values of x, 3 while random values have a much, much lower chance of failure; eventually you also realize that it doesn't matter that much because you only run all the iterations on the prime, while most composites are rejected in the first iteration. Anyway, BoringSSL has a table and we'll want 5 Miller-Rabin tests with random bases for a 1024-bit prime.
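The candidate-selection step described above can be sketched in a few lines (illustrative Python, not the code from any particular library):

```python
import secrets

def prime_candidate(bits: int = 1024) -> int:
    # Draw a random number from a CSPRNG, then set the least significant
    # bit (large even numbers are not prime) and the two most significant
    # bits (so N = p * q is guaranteed to be a full 2*bits-long number).
    x = secrets.randbits(bits)
    x |= 1                    # force odd
    x |= 1 << (bits - 1)      # top bit
    x |= 1 << (bits - 2)      # second-from-top bit
    return x
```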
Miller-Rabin is kinda slow though, and most numbers have small divisors, so it's usually more efficient to quickly reject those by doing "trial divisions" or a GCD with the first handful of primes. The first few dozen are usually a major win, but using more and more primes has diminishing returns. There are a million and one things that can go wrong, but interestingly enough you have to go out of your way to get them wrong: if generating large candidates fully at random, all those cases have cryptographically negligible chance. To recap, to generate an RSA key you generate two primes. To generate a prime you pick random numbers, try to rule them out with trial divisions, and then do a few Miller-Rabin tests on them. Now, how are we supposed to benchmark that? Luck will drastically affect runtime: you're essentially benchmarking a lottery. While debugging a performance regression Russ Cox ran hundreds of measurements and still got some noisy and in some places suspect results. It's also not fast enough that you can run millions of measurements and let things average out. One might be tempted to normalize the measurements by dividing the runtime by the number of candidates tested, but this unevenly dilutes all the final computations, and is still perturbed by how many candidates are caught by trial division and how many proceed to the Miller-Rabin tests. Similarly, benchmarking Miller-Rabin in isolation ignores the final computations, and doesn't measure the impact of trial divisions. What we can do is use math to figure out what an average representative sequence of candidates looks like and benchmark that. Since the key generation process is repeatable, 4 we can pre-generate a golden sequence of candidates, and even share it across implementations to benchmark apples to apples. First, we need to figure out how many composites we should expect on average before each prime.
The prime-counting function approximation tells us there are Li(x) primes less than x , which works out 5 to one prime every 354 odd integers of 1024 bits. Then, we normalize the small divisors of the composites. A random number has a 1/p chance of being divisible by p , and based on that we can calculate how many composites divisible by the first n primes we’d expect to encounter before a prime. For example, we’d expect 33% of numbers to be divisible by 3, 46% to be divisible by 3 or 5, 69% of numbers to be divisible by one of the first 10 primes, 80% to be divisible by one of the first 50 primes, and so on . Flipping that around, we make 118 of our 353 composites divisible by 3, 47 divisible by 5 but not by 3, 27 divisible by 7 but not by 3 or 5, and so on. This will make the number of successful trial divisions representative, and will even let us do comparative benchmarks between different trial division thresholds without regenerating the inputs. Beyond setting the top and bottom bits like keygen will , we also unset the second-least significant bit and set the third-least significant bit of each candidate to normalize the number of iterations of the inner loop of Miller-Rabin, which depends on the trailing zeroes of x-1 . We don’t need to worry about composites failing Miller-Rabin tests: if 5 tests are enough to get to 2⁻¹¹² then one test fails with at most 2⁻²² chance, which is not cryptographically negligible but will not show up in benchmarks. Similarly, we don’t need to worry about e not being invertible modulo φ(N) : we use 65537 as e , which is prime, so only 1/65537 numbers aren’t coprime with it. The result is remarkably stable and should be representative both in terms of absolute runtime and in terms of CPU time spent in different functions, allowing meaningful profiling. Generating 20 random average traces and benchmarking them yields variance of less than 1%. You can see it in use in this Go standard library code review . 
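Both calculations above are easy to reproduce. A sketch (function names are mine; the prime gap uses the simpler 1/ln(x) density rather than Li(x), which is why it lands on 355 instead of the post's 354):

```python
import math

# Expected gap between primes among odd 1024-bit integers: prime density
# near x is ~1/ln(x), and restricting to odd candidates doubles it.
ODD_GAP = 1024 * math.log(2) / 2
print(round(ODD_GAP))  # ~355 odd candidates per prime

def composite_counts(primes: list[int], total: int = 353) -> dict[int, int]:
    # Expected number of the `total` composites preceding a prime whose
    # smallest prime factor is p, for each p -- the normalization step
    # described above. A random number is divisible by p with chance 1/p,
    # conditioned on not being divisible by any earlier prime.
    counts = {}
    remaining = 1.0
    for p in primes:
        counts[p] = round(total * remaining / p)
        remaining *= 1 - 1 / p
    return counts

print(composite_counts([3, 5, 7]))  # {3: 118, 5: 47, 7: 27}
```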
The script to generate the traces, as well as ten ready to use traces are available in CCTV and you’re welcome to use them to benchmark your implementations! If you got this far, you might also want to follow me on Bluesky at @filippo.abyssdomain.expert or on Mastodon at @[email protected] . One day a friend was driving me to the SFO airport from Redwood Park and we were late. Like, flight begins boarding in a few minutes late. But then we came up to this view, and had to stop to take it in. The people in the other car had set out a little camping chair to watch the sun set over the clouds below. I have an incredible video of driving down into the clouds. Made the flight! My maintenance work is funded by the awesome Geomys clients: Interchain , Smallstep , Ava Labs , Teleport , SandboxAQ , Charm , Tailscale , and Sentry . Through our retainer contracts they ensure the sustainability and reliability of our open source maintenance work and get a direct line to my expertise and that of the other Geomys maintainers. (Learn more in the Geomys announcement .) Here are a few words from some of them! Teleport — For the past five years, attacks and compromises have been shifting from traditional malware and security breaches to identifying and compromising valid user accounts and credentials with social engineering, credential theft, or phishing. Teleport Identity is designed to eliminate weak access patterns through access monitoring, minimize attack surface with access requests, and purge unused permissions via mandatory access reviews. Ava Labs — We at Ava Labs , maintainer of AvalancheGo (the most widely used client for interacting with the Avalanche Network ), believe the sustainable maintenance and development of open source cryptographic protocols is critical to the broad adoption of blockchain technology. We are proud to support this necessary and impactful work through our ongoing sponsorship of Filippo and his team. 
SandboxAQ — SandboxAQ ’s AQtive Guard is a unified cryptographic management software platform that helps protect sensitive data and ensures compliance with authorities and customers. It provides a full range of capabilities to achieve cryptographic agility, acting as an essential cryptography inventory and data aggregation platform that applies current and future standardization organizations mandates. AQtive Guard automatically analyzes and reports on your cryptographic security posture and policy management, enabling your team to deploy and enforce new protocols, including quantum-resistant cryptography, without re-writing code or modifying your IT infrastructure. Charm — If you’re a terminal lover, join the club. Charm builds tools and libraries for the command line. Everything from styling terminal apps with Lip Gloss to making your shell scripts interactive with Gum . Charm builds libraries in Go to enhance CLI applications while building with these libraries to deliver CLI and TUI-based apps. Or, if you want to make your life harder and your code more complex for no practical benefit, you can use λ(N) instead of φ(N) , but that’s for a different rant.  ↩ There is also the Lucas test, and doing both a round of Miller-Rabin with base 2 and a Lucas test is called a Baillie–PSW. There are no known composites that pass the Baillie–PSW test, which sounds great, but the Lucas test is a major pain to implement.  ↩ In an adversarial setting, you also need to worry about the attacker forcing or adapting to your selection of bases. The amazingly-named Prime and Prejudice: Primality Testing Under Adversarial Conditions by Albrecht et al. pulls a number of fun tricks, but the main one boils down to the observation that if you hardcode the bases or generate them from x , they are not random.  
↩ It’s not strictly speaking deterministic, because the tests are randomized, but the chance of coming to a different conclusion is cryptographically negligible, and even the chance of major deviations in runtime is very small, as we will see.  ↩ I did a quick Monte Carlo simulation to check this was correct, and it was really fun to see the value swing and converge to the expected value. Math!  ↩
