Latest Posts

Permissive vs Copyleft Open Source

The premise of copyleft licenses is attractive: Create more open source! With permissive licenses, someone can take the code and make proprietary modifications to it and sell it to other people without releasing the modifications. We want people to publish their improvements, right? With copyleft, we can force people to publish their improvements to copyleft code. Businesses will want to use our code because creating it was so much work in the first place. We need copyleft if we want more contributors, more open source, more code re-use, more freedom. In this post, I break down all the ways copyleft licenses fail to achieve their stated goals, and explain why permissive licenses succeed where copyleft fails. (Skip this if you already know the difference between permissive and copyleft licenses.) Open source licenses were created for important reasons: To protect the author, to protect the contributors, and to protect the consumers. This is achieved through codifying rights regarding redistribution, derived works, discrimination, and disclaimer of liability/warranty. Perhaps most importantly, an open source license is designed around allowing us to “fork” a project while retaining all the rights in the license. 1 When picking an open source license, there’s two camps: Permissive licenses (like BSD, MIT, Apache-2.0) and copyleft licenses (like MPL, GPL, AGPL). Permissive licenses give the author the same rights as the contributors and consumers. Everyone can do anything as long as the license is not removed from where it is. If you want to use permissively-licensed code in a proprietary program, go wild. Copyleft licenses are different–they were created as an exploit of our intellectual property and copyright system, a kind of “virus” to combat our proprietary rights-reserved-by-default legal system. With copyleft licenses, derivative work is required to perpetuate the same license. The boundary of derivation varies between licenses. For example: MPL constrains at the “file” level, GPL is at the binary distribution level which can be bypassed with dynamic linking, AGPL is at the “network” level specifically to target SaaS usage. There are many downstream repercussions of these details that I’ll discuss below. (Please bear with me as I take a brief detour to explain this not-very-intuitive point that comes up again and again. It turns out the asymmetry of copyleft is a critical flaw relevant in many scenarios.) In the construct of our society’s intellectual property law 2 , the author retains copyright by default. If I write some code and don’t explicitly release my rights to it, then it is considered proprietary and owned by me by default. I don’t even need to write “all rights reserved”! Having copyright over my code gives me the right to license it however I want. I can keep it proprietary, or I can make a custom contract to sell to a customer or another publisher, or I can release it under a common license template like an open source license. When I commit code to a code repository with some open source license, I am consenting to include my code under the project license (which usually lives in the root of the repository and applies to the whole thing unless said otherwise). But because I still own the copyright to my work, I can also release the same code under a different license in a different repository. This means as an author of some copyleft code, I have special rights that my users don’t have: I am allowed to use my code for proprietary purposes, but my users are not. 
3 Permissive licenses don't have this asymmetry, by virtue of giving everyone the right to use the code for proprietary purposes.

Okay, why is this important? Some projects choose copyleft licenses as a way to convince businesses to pay for a custom license. On paper, this sounds rational: The code is available to review and even prototype against, but deploying to customers may cross the viral copyleft boundary, so a business would choose to pay for a custom license from the maintainer to avoid relicensing their entire product. For every dependency that a business licenses this way, it adopts a liability and creates another point of failure. Each dependency has the power to rug the business: The maintainer can choose not to renew the license next year, or pivot the project in an incompatible direction, or refactor in a way that expands the copyleft boundary. Most importantly, the business has no recourse! 4

Recall that with permissive licenses, we all have the same rights: If a maintainer does something that doesn't work for me, I can simply fork and continue using the code as I was before. I don't lose any rights even if I was depending on a custom license before. It is safer for a business to do a custom license deal with a permissively licensed project! With copyleft and a custom license, I don't have the protection of retaining my rights if something goes awry. If my business model or infrastructure architecture is incompatible with the copyleft nature of the dependency, then my entire business is at risk. Now imagine how much worse this could be at scale: Naively, "if all dependencies were copyleft and we automated licensing them" sounds like a great idea, but then these liabilities would compound exponentially.

There is a common assumption from copyleft-licensed projects that the consumer will use the project as a whole. This is incorrect on two fronts, which I'll unpack below.

The stated goal of copyleft is to create more copyleft source code, but in practice it does the opposite: Copyleft creates more proprietary code. When a business finds a dependency that it would like to use but notices that it's copyleft-licensed, the business has three choices: pay the maintainer for a custom license, publish and relicense its own code to comply, or avoid the dependency entirely (usually via a clean-room rewrite). I've been working as an open source maintainer and contributor for 20+ years and not a single time in my life have I seen a business willingly 6 make public and relicense their internal code to comply with a dependency they were considering using. Not once! On the other hand, I've personally witnessed many cases where a business would do a clean room rewrite of copyleft-encumbered code to avoid touching it. In fact, I've done work like this myself on several occasions and have been lucky to convince my employers to allow me to do it under a permissive MIT license. By default, rewrites stay locked up in an internal repository that no one else gets to benefit from. 7

Why rewrite the code? Because it's wildly easier to rewrite code after the design boundaries have been established. If you have a mature codebase that your team spent a person-year building, ask them: "Given everything we've learned, how long would it take for us to rewrite this in a different language?" The answer is often around 10× faster than the original effort. Much of the time spent building the first version is finding and avoiding all of the "wrong decisions", iterating with user feedback, reframing features to make more holistic sense, testing them, polishing them, arguing over button colours, etc. 8

Second, consider the "replacement value" of the project (from the previous section).
If we only need 20% of the functionality, we don't need to rewrite 100% of the code. If you spent 52 weeks building 100% of the functionality and supporting infrastructure, and I just need 20% of it that I can rewrite 10× faster… that's just a couple weeks of work! 9 Why would a business pay for an existential liability when they could just have a half-decent programmer do an in-house rewrite of the relevant component in two weeks?

While copyleft claims (but fails) to create more open source code, permissive code actually achieves this by default. When a business uses permissively licensed code and runs into a bug or needs to patch an improvement, they have two choices: maintain the patch in an internal fork, or contribute it upstream. No one wants to maintain and rebase an internal fork: it's frustrating and grueling work with negligible benefits unless this one modification is the very core differentiator of your business–and this is extremely rare, especially considering most software businesses have hundreds or thousands of dependencies. By default, businesses tend to contribute more to permissive open source code in their own self-interest.

Most software businesses outright ban their employees from touching strong copyleft licenses like AGPL. For example, see Google's document on allowable licenses for third party dependencies. 10 Some proponents of copyleft claim that this is a feature, not a bug: Why should we benefit businesses at all, when we could benefit only individuals? Unfortunately, this exclusion is counterproductive because nearly every programmer in our capitalist society is employed by such a business. At minimum, these projects miss out on contributions from people who would upstream fixes and improvements during work hours; beyond that, many employment agreements have a lot of influence over what people do in their personal time (if not practically, then at least emotionally).

And it's not just businesses! I am an independent open source developer who is passionate about permissively licensed code (could you have guessed?) and I create a lot of it. I will not even read AGPL code because I don't want to risk contaminating some of my many other projects.

I can write Python that has a copyleft GPL dependency without relicensing my project to match, but I can't write Go that has a GPL dependency without relicensing. Why? Python is interpreted and the code is linked dynamically, whereas Go produces a statically linked binary, so the copyleft "infects" the rest of the code. Unless, of course, I want to ship my Python program as a py2exe bundle with a GPL dependency; then that changes everything. Does anyone actually understand these implications and how they vary across languages? Of course not! But wait, what if I split out the GPL dependency into a shared library that is dynamically linked? Oh, that's fine. Unless you ask the FSF, who disagrees with almost everyone else. What about other copyleft licenses like LGPL? MPL? AGPL? What if I wrap an AGPL dependency in a network-isolated container which batch-processes input from a proprietary component in my system? That's fine. Did we really improve anything, or are we just asking people to create complex infrastructure and deployment workarounds?

I liked this quote from David Chisnall on lobste.rs (the rest is worth reading too): […] And this is reason #3742648 why I don't contribute to AGPL things: They place a compliance burden on good-faith actors that are trivial to bypass for bad-faith actors.

While copyleft licenses have several exclusionary clauses (e.g.
can’t use this code if you’re statically linking against other code that has a different license), permissive licenses do not. Copyleft licenses give special rights to the author, permissive licenses do not. Strong copyleft licenses are practically banned from large subsets of consumers and contributors (we’ll debate if this is justifiable), permissive licenses are not. Permissively licensed code is a better fit under the “both non-excludable and non-rivalrous” definition of public goods. A common complaint is that Amazon AWS exploits open source by profiting from it without sufficiently contributing back, and that the only solution is strong copyleft like AGPL. Until 2018, MongoDB was copyleft AGPL licensed, and Amazon AWS happily provided a hosted service for MongoDB. MongoDB didn’t like that Amazon was profiting from their work, so on October 2018, MongoDB changed their license to a commercial source-available license called SSPL–specifically to exclude Amazon being able to use it this way. By January 2019, just 2.5 months later, Amazon built and released a proprietary API-compatible version called DocumentDB. MongoDB was being developed since 2009, for 9 years. Did it take Amazon 9 years to rewrite the “replacement value” of their service? No, it took 2.5 months. Did AGPL save MongoDB? No. Did copyleft AGPL create more open source code? No, Amazon did a proprietary internal full rewrite and never published the code as open source. Did relicensing to an even more restrictive license force Amazon to give MongoDB more money? No. This has happened again and again. Elasticsearch relicensed from permissive Apache-2.0 to commercial SSPL in 2021, which resulted in the previous Apache-2.0 version being forked and maintained as OpenSearch. Amazon contributed code to the permissively licensed version, but Elasticsearch did not succeed at extracting more money from Amazon by using a more restrictive license. In 2024, Elasticsearch changed their mind and relicensed to copyleft AGPL , but AWS continues to use and contribute to OpenSearch. Did permissive Apache-2.0 create more open source code? YES! Unlike the copyleft AGPL example, Amazon forked the permissively-licensed project and continued to maintain it in public, allowing everyone else to benefit too. In 2024, Redis (permissively licensed under BSD) relicensed to a Source Available license, same story: Redis failed to extract more money from customers, instead a permissive community fork continued under Valkey. In 2025, Redis changed their mind and relicensed to copyleft AGPL ! What happened? It’s still early, but so far Valkey (BSD) continues to thrive, Redis continues to stagnate. I get this question a lot, and my general answer is: They already do, just not exactly in the way we might want. When we imagine businesses contributing to open source, we imagine them sending money to maintainers of dependencies they use. While this does happen, it’s very rare. What’s less rare is businesses increasingly choose to release parts of their code as open source code, and contribute improvements to other projects. While this doesn’t help overburdened maintainers, it is otherwise a good thing! I believe the lowest hanging fruit is to encourage businesses to get more involved with creating and maintaining open source code by empowering their employees to do so. The best way for maintainers to grow funding from businesses who use their software is to establish relationship with their “customers” and find services that are valuable for these customers. 
For example, funding to work on features that are important for the customers, or funding for an annual support retainer. If you’re optimizing for adoption and impact, I suggest the most permissive license that is popular in your community. Usually that’s MIT or Apache-2.0. If you’re trying to create a self-sustaining silo of code that doesn’t get reused outside of a comparatively small ecosystem, then copyleft licenses achieve this. If you’re trying to exclude groups from using your code, like for-profit businesses or the military industrial complex or people with disagreeable political alignments, then open source is probably not for you because that would violate the Open Source Definition . In the past couple of decades, permissive licenses (like BSD, MIT, Apache-2.0) have succeeded at creating more open source code, more collaboration and mutual participation from businesses, more code-reuse, and more public goods. This wasn’t always true: In the 90s, copyleft was king. Businesses were very skeptical of open source and security through obscurity was a popular mentality. In many ways, businesses considered themselves at war with open source. Recall this 2001 interview quote from Steve Ballmer (Microsoft CEO at the time) : Q: Do you view Linux and the open-source movement as a threat to Microsoft? A: Yeah. It’s good competition. It will force us to be innovative. It will force us to justify the prices and value that we deliver. And that’s only healthy. The only thing we have a problem with is when the government funds open-source work. Government funding should be for work that is available to everybody. Open source is not available to commercial companies. The way the license is written, if you use any open-source software, you have to make the rest of your software open source. If the government wants to put something in the public domain, it should. Linux is not in the public domain. Linux is a cancer that attaches itself in an intellectual property sense to everything it touches. That’s the way that the license works. Of course this was a misinformed and naive view, but the seed of the concern was real as discussed in the Custom-licensed copyleft code is a liability section. In 2025, the landscape is very different. Businesses have found a mutually-aligned interest with permissive open source, and the ecosystem has exploded thanks to that. Is this the best we can do? I hope not! Let’s continue to find more Schelling points of collaboration and aligned interests, so that the public can benefit from a larger portion of private effort. The Open Source Initiative maintains The Open Source Definition and the sets of licenses which comply with it .  ↩︎ Our intellectual property construct goes back all the way to the Berne Convention , first adopted in 1886. Today, 181 countries have ratified it.  ↩︎ The special right of the copyright holder is made even more dangerous when a project requires an IP assignment agreement (also known as a Contributor License Agreement). These agreements assign all the copyright to a single owner (often the maintainer or a holding entity), so that they can unilaterally relicense the work without requiring individual consent from every contributor. OSS licenses were designed to leverage the logistic hurdle of requiring consent of all contributors.  ↩︎ We can imagine a very specific contract that outlines all the ways the project codebase must continue to be compatible with the business’s needs for a long duration, but is this actually plausible? 
We don’t know what we don’t know, both businesses and open source projects continually evolve. Plus, it’s incredibly hard to negotiate bespoke agreements between every combinatorial pair of business and dependency.  ↩︎ I cannot understate how expensive it is to take a private legacy codebase and make it public. When a codebase is private, every choice is made with that context: Secrets are littered in the commit history, assumptions are hardcoded, proprietary business details are exposed. The longer we wait, the more expensive it gets. All of those micro-decisions add up, sometimes to the point where it’s cheaper to just rewrite the whole thing in public from scratch.  ↩︎ There were several cases where businesses “accidentally” used copyleft code and were later forced to relicense parts of their proprietary code to comply retroactively, but this was a mistake insofar that they would not have made this choice if they knew of the impending outcome. For example: Linksys   ↩︎ Example: When Amazon AWS rewrote their MongoDB (AGPL) service as DocumentDB (internal/proprietary). Discussed more below in Copyleft fails to prevent corporate capture .  ↩︎ It was 10× faster before AI-assisted tooling, who knows how much faster it will be in the future! It’s certainly not getting slower. Again, AWS DocumentDB was created as a replacement for MongoDB in just 2.5 months, and this was in 2019.  ↩︎ Yes these numbers are made up and sound a little wild, but they’re closer to the truth in the vast majority of cases than is comfortable to admit. There are of course examples of copyleft projects like the Linux kernel which contain tens of thousands of person-years of effort, such that even a 100× improvement is still very costly–perhaps this was a lesson that Google Fuchsia learned.  ↩︎ Apple’s App Store prohibits apps which include copyleft licensed code due to EULA contention pointed out by the FSF . This restricts potential contributors who are building mobile apps.  ↩︎


urllib3 Origin Story

This post was featured on opensource.org/maintainers/shazow My first big open source project was urllib3 . Today it’s used by almost every Python user and receives about a billion downloads each month, but it started in 2007 out of necessity. I was working at TinEye (formerly known as Idée Inc.) as my first “real” job out of university, and we needed to upload billions of images to Amazon S3. I wrote a script to get processing and estimated how long it would take to finish… two months! Turns out in 2007, HTTP libraries weren’t reusing sockets or connection pooling, weren’t thread-safe, didn’t fully support multipart encoding, didn’t know about resuming or retries or redirecting, and much more that we take for granted today. It took me about a week to write the first version of what ultimately became urllib3, along with workerpool for managing concurrent jobs in Python, and roughly one more week to do the entire S3 upload using these new tools. A month and a half ahead of schedule, and we became one of Amazon’s biggest S3 customers at the time. I was pleased with my work. I was just months into my role at TinEye and I already had a material impact. Reflecting on this time almost two decades later, I realized that doing a good job at work was not what created the real impact. There are many people out there who are smarter and work harder who move the needle further at their jobs than I ever did. The real impact of my work was realized when I asked my boss and co-founder of TinEye, Paul Bloore, if I could open source urllib3 under my own name with a permissive MIT license, and Paul said yes. I did not realize at the time how generous and rare this was, but I learned later after having worked with many companies who fought tooth and nail to retain and control every morsel of intellectual property they could get their hands on. It’s one thing to write high impact code that helps ourselves or our employer, but it’s another thing to unlock it so that it can help millions of other people and organizations too. Choosing a permissive open source license like MIT made Paul’s decision easy: There was no liability or threat to the company. TinEye had all the same rights to the code as I did or any other contributor did. In fact, Paul allowed me to continue improving it while I worked there because it benefited TinEye as much as it benefited everyone else. Releasing urllib3 under my own name allowed me to continue maintaining and improving the project even after leaving, because it was not locked under my employer’s namespace and I felt more ownership over the project. Hundreds of contributors started streaming in, too. Nobody loves maintaining a fork if they don’t have to, so it’s rational to report bugs upstream and supply improvements if we have them. The growth of urllib3 since the first release in 2008 has been a complicated journey. Today, my role is more of a meta-maintainer where I support our active maintainers (thank you Seth M. Larson, Quentin Pradet, Illia Volochii!) while allowing people to transition into alumni maintainers over time as life circumstances change. It’s important to remember that while funding open source is very important and impactful ( please consider supporting urllib3 ), it’s not always about money. People don’t want to work on one thing their whole life, so we have to allow for transition and succession. I learned many lessons from my first big open source project, and I continue to apply them to all of my projects since then with great success. 
I hope you’ll join along!


Hanlon-Clarke Corollary

I went into a deep research hole trying to discover the origin of the quote: "Any sufficiently advanced incompetence is indistinguishable from malice."

tl;dr: The earliest online record of this quote is by Vernon Schryver on May 1, 2002. Wikipedia, LLMs, and various random blog posts cite it as "Grey's Law", which I haven't been able to corroborate. Wikipedia references a book from 2015 which refers to the quote as "Grey's Law" but attributes it to "unknown author". I couldn't find an actual source for someone named Grey, and the closest "Grey" reference I've found is a YouTuber named CGP Grey from 2010+.

Meanwhile, RationalWiki's Talk section on Grey's Law has an anecdotal story from someone who claims to have made that quote in the 90's under a handle related to Grey: […] I have a distinct memory that some point in the very late 90's or early 00's was the first time (for me) that I posited the maxim "any sufficiently advanced incompetence is indistinguishable from malice," with explicit crediting of Clarke and Hanlon. […] Unfortunately, I haven't been able to find actual archives of posts with this quote from the 90s.

The earliest online record of this quote is by Vernon Schryver on May 1, 2002 in the news.admin.net-abuse.email newsgroup, which has sadly been purged by Google Groups but luckily archive.org still has a snapshot: https://web.archive.org/web/20070221155843/http://groups.google.com/group/news.admin.net-abuse.email/msg/f9f67dca7591a860

It was later dubbed the "Napoleon-Clarke Law" by Paul Ciszek on October 11, 2004 in a usenet group. Paul explains later in the same thread: Well, Napoleon said something about not attributing to malice that which is adequately explained by incomptence, and Clarke said that any sufficiently advanced technology is indistinguishable from magic. So far as I know I am the first one to put them together and call it the Napoleon-Clarke law.

After finding some of the pieces pointing to Vernon Schryver and Paul Ciszek, I stumbled on this excellent overview thread by Kevin J. Maroney on Sept 25, 2005 in the rec.arts.sf.fandom newsgroup that is worth a read: https://groups.google.com/g/rec.arts.sf.fandom/c/1JaqxckrAOc/m/xO5jzyejK1UJ

We later reattributed Napoleon's quote to Hanlon's Razor. Coming back full circle to Grey's anecdote, which correctly attributed the mashup to Hanlon and Clarke, I'm going to take this opportunity to further rebrand Vernon's quote as the Hanlon-Clarke Corollary. Until we can find better corroboration of Grey's anecdote (please reach out if you have a lead!), I say we attribute this quote mashup to Vernon Schryver.

Hanlon-Clarke Corollary: "Any sufficiently advanced incompetence is indistinguishable from malice." ~ Vernon Schryver, 2002


How can open social protocols fail us in 2025

Let’s compare the possible failure modes of various open social protocols: Some scenarios I’d like to consider: Some specific failure modes I’m concerned about: Let’s begin: There are two scenarios to consider here: One is for self-hosting and second is accounts that are hosted on a shared instance. While ActivityPub has many uses outside of social networks and the social network aspects like Mastodon are capable of self-hosting, they are practically designed for shared hosting 1 : It is practically impossible for everyone to self-host an instance today. Even if I had a magic wand and made every Mastodon account self-hosted, the service as designed today would be unusable. With that in mind, I will primarily focus on the shared-hosted Mastodon-compatible social network instance case specifically: Can my identity be taken away? The owner of the instance has full control over instance accounts and can do with them what they please. Can my audience be taken away? In fact, this happened to me personally. I was on an instance with a few hundred followers, then the admin burned out and abruptly shut down the server and now both the identity and audience are forever gone, impossible for me to recover. Can my ability to extend it and build on the protocol be taken away? ActivityPub is an open and generously permissive protocol built around implicit trust, almost to a fault with some security implications here such as private accounts’ posts getting leaked across federated boundaries 2 . What happens when the most popular app or instance becomes malicious? 🚨 Worst case scenario is if the largest instance disappears (or gets purchased by a malicious billionaire) then all of those identities are gone, all that audience is gone, the content is gone (some may be recoverable through caches on federated instances, but not robustly attributable). But other instances survive and are largely unaffected except for the reduction in audience, and this is an improvement over something like Twitter. Is there something the maintainer can do that would prompt me to abandon the service/protocol altogether? 💚 I don’t think so. There is a huge ecosystem with diverse implementations by varieties of people. There are already apps built on ActivityPub/Mastodon that span every conceivable ideology, including TruthSocial that is owned by the president of the United States and Threads owned by Meta. I don’t think any of them can say or do much which would severely change my perspective on ActivityPub or Mastodon as a technology. I found at least 80 independent Mastodon instances with over 10,000 users and 10 with over 100,000 users. Protocol challenges aside, this level of instance diversity is very impressive. Bluesky (and the ATProto protocol it’s built on) has many optional provisions for improving the sovereignty of users, such as managing signing keys with the Public Ledger of Credentials ( PLC ) 3 and running my own Personal Data Server ( PDS ) 4 that can serve as a source of truth for my data. While this is not the default, and it’s unrealistic for everyone to self-host a PDS, the protocol is designed for a world where everyone has ownership over their signing key and it does support alternate identity schemes in the future (potentially even fully sovereign and programmable onchain signatures). Can my identity be taken away? 
The default onboarding identity (a handle under Bluesky's bsky.social domain) is controlled by Bluesky and can be taken away, but the protocol facilitates using my own domain for my identity, so I can be a handle under my own domain instead of one under bsky.social, and that can't be taken away by Bluesky BUT it can be taken away by ICANN and my registrar. Note that this namespace is mostly cosmetic: the actual protocol interactions are mapped to a long-lived public key (a DID, managed by the PLC), and the "human readable" identity is simply an alias to it which can be changed without losing interaction integrity. This mapping is currently stored in a verifiable but centralized database (the PLC Directory) which can misbehave by preventing updates, limiting access to the ledger logs, and removing/reordering the history of operations. 5 Overall, if my handle is taken away, then someone else will show up as me on Bluesky and I will need to register another handle, but I will not lose any of my social graph. If the PLC Directory disappears, I could lose my ability to write to my account.

Can my audience be taken away? ATProto messages are designed to be robustly replicated with cryptographic signatures, so there can be arbitrary paths to getting messages to my audience. Unlike ActivityPub, if one replica loses all of my posts then another replica can robustly re-introduce them to the network without risk of forgery. On the other hand, a core part of Bluesky is their approach to composable and comprehensive moderation tooling that is available at many layers (the client via labels, the AppView, the relay, etc). On multiple occasions, Bluesky chose to deplatform users who effectively lost all access to their audience. It would help if Bluesky leaned more heavily on their excellent community moderation tools and eventually removed global moderation altogether. Additional popular clients and AppViews with their own policies would help, too.

Can my ability to extend it and build on the protocol be taken away? I can extend ATProto with my own "lexicons" and create whole parallel functionality within the same infrastructure. In fact, there are already many interesting use cases in the wild, like: WhiteWind for blogging, Smoke Signal for coordinating events, Flashes App for photo sharing, and many other experiments. That said, I suspect some of this will change in the future as more griefing and other attacks on the protocol are explored.

What happens when the most popular app or instance becomes malicious? 🚨 What happens if Bluesky Social, PBC (the company behind Bluesky) disappears tomorrow? Let's say bsky.app and the associated infrastructure they run is gone (the bsky.social PDS, AppView, relays, etc), now what? The good: The Bluesky App and infrastructure tools are all permissively-licensed open source, and someone else can fork them and release alternate versions. There are some minor players already trying this. The historic state can be replayed from any archives people have (there are some services who maintain snapshots). The bad: It's extremely expensive to operate for the current userbase (33M users today); it was estimated to cost over $500,000/mo in hardware costs when Bluesky was half this size. Assuming double that after bandwidth, human, and other costs, it's prohibitively expensive for a random individual to take on that cost and it's too much to quickly coordinate a grassroots cooperative effort. Frankly, both the protocol (relays feeding an AppView monolith) and the culture ("signups should be free") are not designed for a collectively owned/operated ecosystem at scale.
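As a concrete aside on the identity point above: handle-to-DID resolution in atproto is just DNS or HTTPS, which is why the handle is cosmetic while the DID (and its PLC entry) is the durable identifier. Here is a minimal sketch of the two documented resolution paths, assuming the dnspython and urllib3 packages are available; the handle below is a placeholder:

```python
import dns.exception
import dns.resolver
import urllib3

def resolve_handle_to_did(handle: str):
    """Resolve an atproto handle to its DID via DNS TXT or the well-known endpoint."""
    # Path 1: a TXT record at _atproto.<handle> containing "did=did:..."
    try:
        for record in dns.resolver.resolve(f"_atproto.{handle}", "TXT"):
            text = b"".join(record.strings).decode()
            if text.startswith("did="):
                return text[len("did="):]
    except dns.exception.DNSException:
        pass
    # Path 2: the domain serves its DID as plain text over HTTPS
    response = urllib3.PoolManager().request(
        "GET", f"https://{handle}/.well-known/atproto-did"
    )
    return response.data.decode().strip() if response.status == 200 else None

print(resolve_handle_to_did("example.com"))  # placeholder handle
```

Either way, losing the handle only breaks the alias; the DID and the signed data it anchors stay intact, which is the property described above.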
Is there something the maintainer can do that would prompt me to abandon the service/protocol altogether? 🚨 This is a tricky one. If the Bluesky team said tomorrow "you know what? we're tired, this isn't working out, we're going to stop working on it" then I'm not confident that someone else would pick up the baton. That could very well spell the end of the social network as it is today. It's not as if there are a bunch of companies making good money from it who would be compelled to take it over to preserve their business. If there were more independent and institutionally motivated companies hosting their own instances of the whole stack, so that users could trivially switch endpoints in the app, I'd feel more confident.

Warpcast (and the Farcaster protocol) takes a portable, cryptographically signed approach similar to ATProto's, except with several important differences: full identity sovereignty, native payments, and a culture around building an active, self-reinforcing economy within the protocol ecosystem. This includes both third-party services and unique features like mini-apps that take advantage of payments being a native mechanism on Farcaster. Keep this in mind for the final point.

Can my identity be taken away? While the default onboarding flow uses a custodied identity similar to Bluesky (a Farcaster ID), anyone can set an ENS identity which is fully self-custodied and can't be taken away, not even by ICANN. So instead of a handle custodied by the app, I am my ENS name. This also allows for arbitrary programmable, collectively owned namespaces, since the ENS name can be owned and managed by any smart contract. Additionally, Farcaster's key management is onchain in a smart contract by default, so every user has full control over their permissions in a censorship-resistant way 6. There are several supported signing formats, including passkeys. There is also optional social recovery available: by default it allows Warpcast to help recover my account if I lose access to all my signers (passkey, mobile app, etc.), but I can change that.

Can my audience be taken away? 💚 Not entirely, but there is still only one main app (Warpcast) that is used by the vast majority of people. The dominant app can effectively moderate people out of the timeline, indistinguishable from being categorized as spam. This will change very soon as large companies like Coinbase integrate Farcaster into their apps, which would compete in scale with the active audience on the network today. A few more large participants with a variety of timeline/moderation approaches will make audience ownership relatively safe.

Can my ability to extend it and build on the protocol be taken away? While the apps do a great job hiding the implementation details, the key management is non-custodial in a smart contract, so I can fully control who gets write access to my namespace and that can't be taken away from me. The Farcaster Hub protocol is similarly cryptographically signed, so my messages can be routed in arbitrary ways even with one node censoring. While a lot of development today is happening around Mini-Apps (formerly known as Frames), which run within the timeline with native access to payment and social functions, there is also a healthy community building on the Farcaster protocol itself.

What happens when the most popular app or instance becomes malicious? 🚨 What happens if Merkle Manufactory (the company behind Farcaster) disappears tomorrow, along with the app and all the infrastructure they run?
While the protocol and hubs to run it are permissively licensed open source , the main app is closed source and much of the auxiliary infrastructure required to operate the app is also closed source (algorithmic timeline service, spam moderation, etc). There are some alternate implementations, but it’s still early. On the other hand: It’s not wildly expensive to operate the necessary infrastructure ($X00/mo operate a hub), largely in part because the Farcaster network is still fairly small (3% of Bluesky right now, at 870K users and 45K daily actives), so it’s quite plausible that individuals, small groups, or another company could take on this burden. Plus there is a thriving culture of economic activity on Farcaster. Coinbase is integrating a client, Neynar provides paid API services for developers, a bunch of people are almost making a living through various mini-app activities. This scenario would be painful, but the proportion of Farcaster’s size to the economic activity on the protocol is still promising. This may change as Farcaster grows. Is there something the maintainer can do that would prompt me to abandon the service/protocol altogether? 🚨 This one is also tricky. I suspect there is not yet enough momentum if the founders decided to wind down. I ask myself if I would care if a megacorp acquired the team along with the most popular app but committed to maintaining the same design properties… I’m not sure! More independent/open source, and economically motivated apps platforming large audiences would make me feel more confident that moderation won’t get abused. If it’s far too costly to bootstrap infrastructure in a black swan event (censorship, evil billionaire, physical infrastructure failure, whatever) then the efficacy of a robust protocol is lost. For a given player, the balance is between the cost of correcting a failure and value that is gained from recovering. If a network has a thriving economy and it is comparatively inexpensive to operate it, then we can be more assured that it will persevere. Some protocols have desirable properties when the network is small, but lose them at scale: A few blogs federating ActivityPub remain perfectly robust from capture, but a million user Mastodon instance is a juicy point of failure. Even smaller instances are brittle and risky: perhaps the sysadmin gets burned out and wants to do something else or a security update isn’t installed quickly enough. Similarly, Farcaster seems almost sustainable at today’s scale, but will it be off-balance like Bluesky if it grows to tens of millions of users? We can’t expect a single dominant player to resist exercising powers like threats of censorship, even if the underlying protocol is resistant. If one player is platforming 99% of the network, there is nothing stopping them from abandoning the underlying protocol altogether and just replacing it with a private database instead. We must have a multi-polar plurality of interdependent players to protect us from the effects of capture. I’d like to see each protocol describe what their ideal evolution and adoption looks like a year from today. If everything about the current roadmap goes right and all the relevant partners enthusiastically join forces, what does the world look like for this protocol in 2026? Thanks to Boris , Boscolo , and Leeward for reviewing early drafts of this post. Also big thanks to everyone who asked great questions after this was published. 
The Mastodon fediverse has around 9.5M registered and almost 1M monthly active users across 8700 instances . The number of single-user instances out there must be a rounding error over the total number of users across Mastodon. I was only able to find a few dozen instances with 3 or fewer users.  ↩︎ There is a fundamental assumption in ActivityPub based social networks that federated instances are trustworthy sources and custodians of data. If another instance misbehaves, such as accidentally leaking private account posts , then the expected recourse is that instances will notice this and defederate from them. Unfortunately, by that point, the damage has already been done. It’s a system that relies on constant vigilance and good behaviour by administrators of instances.  ↩︎ Public Ledger of Credentials, the directory for mapping these credentials, is currently a service operated by the Bluesky team. For people familiar with rollups, it’s like having a centralized data availability and centralized sequencer, but with (self-certifying) verifiable state transitions. If the Bluesky team becomes malicious, another team can fork the PLC state at an earlier checkpoint and manage it differently, but the AppView would still need to decide which PLC to obey. I hope this design is improved in the future.  ↩︎ One of my favourite parts about Bluesky’s architecture design is that it’s very layered with conceptually optional optimizations. At the very bottom sits the Personal Data Server which is itself enough to bootstrap a basic social network from other such people. Above it are the relays, which aggregate many PDS into a composite firehose. Above the relays is the AppView, which creates the timelines that are served to the app’s end users. Right now the PDS implementation is coupled with the signing key, but conceptually they could be separate.  ↩︎ If the PLC Directory disappears today, this would be quite catastrophic if a replica of the ledger’s logs is not available. This is the authoritative source for who is allowed to update what, and without it the cryptographic permission system is broken. Another great usecase for an onchain smart contract, which would also help with update censorship and limiting access to the latest verifiable state.  ↩︎ Censorship-resistance here means that I can’t be stopped from submitting an update to my settings, and no one can be prevented from accessing the latest verified state of my settings.  ↩︎


Post-Growth

Imagine thousands of years in the future, when humanity is dispersed across the stars. We are fully actualized and have reached the final leaves of the universe's tech tree. We are happy and in control of our destiny. Are there trillions of us spanning many solar systems? What if there are only millions, or even thousands of us?

It's tempting to take our history of swarming through continents on Earth, consuming all available resources to scale and grow, and extrapolate it into space. Many planets across many solar systems, each with many billions of people. A lot of science fiction paints this version of the future, but what would it take for us to imagine a different future? What if we defeated death and could choose to live forever? What if we become cybernetic, melding with the machines we build? What if we tame our minds and upload replicas into digital storage, onto digital simulations, and back into artificial physical sleeves? What if we transcend life, creating artificial beings beyond imagination? What if we finally achieve alchemy: transmuting matter into energy and back again, transporting at the speed of light and reconstructing into whatever physical bodies we want whenever we want?

Imagine a distant species, once human but now anything, which can exist and scale to any shape or quantity on a whim. A species that is confident of its mastery over reality, unneeding of redundancy and excess. Would there be thousands of us one day, travelling in groups as we wish? Materializing peers and family on another day into billions, only to pack up later into few again. Would some choose to join their minds into a collective consciousness, blurring the line of enumeration? Will there be one?

For a long time, we have been in an Age of Growth as a society. Growth has been our main metric for success, one we optimized obsessively, sometimes to toxic extremes. Can we imagine what it would take for us to transcend into a post-growth society?

Thank you to John for sending this related audio short story after reading this: Outlasting the Universe


Conditioning

I think of conditioning as a long-term conversation between decentralized components. After breaking my foot, I’ve been working on conditioning all of the tiny ligaments to properly support movements and impacts I haven’t practiced in months. I can’t just say “hey foot, you’re not broken anymore, go back to normal.” I have to slowly and incrementally reinforce each change. Similar to learning a musical instrument by training our fingers to move a specific way, over and over – we’re telling all of the related “muscle memory” that this is an important motion we need to be good at. Like lifting weights, we’re telling the muscle “you’re going to continually exceed your capacity, so get bigger and stronger.” It’s not just a one-and-done: Conditioning takes time and repetition, because the channel is low-bandwidth and lossy, and the micro-changes required are many and sequential. When we train our dog to not be reactive about visitors (or to salivate when the bell rings), we can use positive reinforcement to rewire the brain with the help of oxytocin signals. Treats, “good dog”, or even hugs can all be forms of communicating that the precursor to some stressor is actually okay. Just like humans working with a therapist, we try to condition the sources of our anxieties to take on different frames (or acclimate to our fears). Similar to positive reinforcement with oxytocin, there is cortisol and dopamine and other hormones/neurotransmitters which communicate desirable/undesirable conditioning between different aspects of our minds and bodies. I really like the model that we’re not monoliths, but rather a bunch of disparate components who are stuck together. Our brain can’t simply tell our body to shed fat and build muscle, our tic disorder to chill out, our ligaments to support wider range of motion… but our brain can slowly and methodically facilitate these changes by communicating them through conditioning! What does this mean for society? If we break out of the monolith mindset of our bodies, we can also think of communities, corporations, nations, species as “mega organisms” who suffer from inefficient/indirect communication. We may condition our societies towards democratic practices or into accepting authoritarian rule. To justice or inequity. It can happen slowly through reinforcement, or it can snap like a bone breaking from ever-increasing pressure. Those suffering may ache like a festering infection, but we can acclimate (or amputate). What’s the difference between Jeff Bezos and a rando who woke up with a hundred billion dollars? Bezos has preconditioned infrastructure to achieve his endeavours. He can take on logistics, media, space exploration, investment, and who knows what else. What can rando do without spending months/years “conditioning” their assets to support their endeavours? Just think of how long it would take to even hire a sizeable team of good people! This is conditioning. In many ways, the peripersonal space of powerful individuals expands far beyond their wealth. It extends to that which they have conditioned to advance their pursuits. I want us to condition ourselves out of thinking like monoliths. How do we do that? We should condition ourselves to think in terms of conditioning, rather than quick drastic swings that bring injury after injury. 
(“We need to be good at lifting heavy weights, so we should lift the heavy weights immediately.”) We can set ambitious goals that are beyond the ability of a single component, or even a single person or nation, and we need to figure out how to communicate them properly to all participants–repeatedly, consistently! All participants need to practice them, repeatedly and consistently! Let’s not ignore our pains, but take time to listen and understand where the pain is really coming from and why, and how to address it. But most importantly: Recognize when we’re being conditioned into something we don’t want to be.


1231

I wrote down the names of 5 friends, and I'm going to send each of them a gratitude email by the end of the year. Want to join me? It can be just a couple of sentences about how they positively impacted your life, or a brief story of a positive shared experience, or what you admire about them.


Why Blockchain?

Traditionally, we have identity cards issued by our local government, or domains managed by ICANN, or a profile managed by LinkedIn. These are all empowered intermediaries: They can choose to deny changes, or outright ban us from their systems. Ethereum Name Service (ENS) is an example of a collectively owned identity-resolving system that is also fully programmable. We can set up a name that routes one way on weekdays and other ways on weekends. There is no centrally managed "API" that can be sunset and prevent us from using it like before. By using programmable security, we can abstract the ownership of our ENS name to be managed by an individual, or a team, or a democratic collective, or anything else that can be expressed as a program.

Traditionally, banks require us to pay bills from a Checking Account, but encourage us to hold money in a Savings Account through modest interest rates. Autopay Bills can only be paid out of Checking Accounts. If the Checking Account drops below zero, we get charged an Overdraft Fee (even if money is available in the Savings Account), and we'll also get hit by a Late Fee from the bill provider. Not to mention many things being limited to business hours, out-of-network ATM fees, excessive foreign exchange fees hidden in the price spread, and over a billion people blocked out of the banking system at large. Why do we tolerate these anti-consumer constraints? Banking in a collectively owned programmable substrate is quite different: There are no limits on how accounts must behave. We can automatically split incoming payments among our collaborators (e.g. splits.org); we can set up a cascading system that pays for our services however we want–even complex vesting or payroll payments streamed down to the second (e.g. superfluid.finance, sablier); we can completely abstract multitudes of currencies and assets (e.g. cowswap), allowing us to pay with whatever we prefer while being confident that we're receiving the most competitive rate that anyone else would get, too.

If you're a web developer, how many times have you built the same signup/login flow? What would we build if we didn't have to re-implement standardized "smart" components that are collectively owned? What if our consumers could bring their own… What if all of those components kept getting better and better, while our application could stay the same? The question is not "why can't we build this without a blockchain?" The question is: Why would we want to build things without a collectively owned programmable substrate?

I work on projects in the Ethereum ecosystem and hold ETH. There are many blockchains today with varying credibility and decentralization properties. Very few of them prioritize decentralization in the same way that Ethereum does, so that is my primary example for talking about what's possible.  ↩︎ The fee for executing a transaction on Ethereum is referred to as the "gas fee", and it is denominated in Ether. It is designed to closely proxy the real cost of the transaction to the Ethereum consensus and network, in order to prevent denial-of-service attacks. Any time something costs more on one side than the other, this can be an attack vulnerability or a potential externality which can be profitable to extract. Spam is a great example of this: To be effective, spam must cost far less to produce than the profit that can be extracted from the externality it creates.  ↩︎
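To make the "no centrally managed API" point above concrete: resolving an ENS name only requires reading public chain state through any Ethereum endpoint, so no single provider can sunset it. A minimal sketch, assuming web3.py and access to some mainnet RPC endpoint (the endpoint URL and name below are placeholders):

```python
from web3 import Web3

# Any mainnet RPC endpoint works; this URL is a placeholder, not an endorsement.
w3 = Web3(Web3.HTTPProvider("https://mainnet-rpc.example.invalid"))

# Resolution walks the public ENS registry to whatever resolver contract the
# name's owner configured (which is where weekday/weekend routing logic could
# live); there is no privileged intermediary in the path.
print(w3.ens.address("somename.eth"))  # placeholder name
```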


Trustless

“There are no such thing as zero trust assumptions.” Trustless is a confusing word: We don’t use it in common speech; it means “not trust worthy” in the dictionary; decentralized technologies sometimes use it as “not needing to trust”; and the uninitiated might interpret it to mean that it’s advocating for less trust? Despite the confusion, there is a very useful insight behind the concept of “trustless” (in the decentralization sense). To understand it, let’s suspend our immediate reaction and break down two interesting properties behind this concept: Explicit trust and immutable trust. Our goal is not to reduce the amount of trust in the world, but rather to create more and better trust that we can use to create more powerful and interesting systems. Consider a typical intimate relationship: We can create more and better trust through communication–setting expectations and being clear about our boundaries, allowing us to thrive even in more complicated and non-traditional arrangements. Let’s try to build a framework for doing this when designing other kinds of systems. To trust something mutable is to understand that it can change. To trust something immutable is to understand that it cannot change. We’ll explore how we can plan for and benefit from both scenarios. When the CEO of a company makes a promise, we need to remember that it’s not the individual making a lifelong commitment. The Chief Executive Officer is a role within a company. One day it can be Dick, next day it can be Parag, then Jack, then suddenly Elon. A leadership role within a company is mutable, which means that the direction of a company can change. When we put our trust in a verbal commitment of a company, perhaps drafted by a PR firm, we must remember that the role which drives that promise can be changed–and inevitably will be, as we don’t live forever. Individuals are mutable, too. People age, change careers, lose interest, get sick, and ultimately die. It’s okay to be mutable. Even when we commit to a contractual obligation, the meaning of the contract can still vary within the interpretation of the law. Legal experts can sometimes help us to better understand all of the possible outcomes and how likely they are. When we add a trust assumption in a specific protocol or a theorem, we can be confident that the nature of that trust is not going to change. Our understanding of it might change–perhaps we discover an undesirable property that was encoded within it. No worries, we can stop using the old undesirable protocol and switch to a new one. Imagine a terrible law that was written before we knew any better, that is ultimately challenged and abandoned. The old immutable law still says what it said before, but our legal system decides to stop using it. Being immutable is okay, too. To trust something explicitly is to fully consent to how it can behave and how it can change. To trust a complex system with many components of varying mutability as a single opaque agent is to accept its complexity implicitly. When a couple starts dating, they start out trusting that their relationship is operating within the default expectations of our society or their social group. Sometimes that’s as far as relationships get, they rely entirely on these implicit assumptions. Other relationships will slowly and explicitly map out important assumptions and boundaries: Are we exclusive? Have we been tested for STI’s recently? What’s our safe word? Are we always putting the toilet seat down? 
Relationships start being based on implicit trust, but can become increasingly anchored by explicit trust. When the CEO of a company makes a promise, they’re making a commitment on behalf of all of the participants of the organization: The employees, the board, the suppliers and contractors, sometimes even the shareholders. How many times has a company promised us that our data is safe, only to leak their data a few months later because an employee was careless? When we choose to trust a company to behave in some way, we are implicitly consenting to the behaviour of all of the components within that system. Some trust assumptions are simple and explicit (“I trust you to always put the toilet seat down”) and we can have clear expectations of what happens if that trust is violated (toilet seat stays up). Other trust assumptions are just the tip of a complex iceberg of assumptions (“I trust Apple to care about my privacy”) and it can be extremely hard to enumerate all of the possible failure modes and fully consent, so we can choose to trust them implicitly. Let’s say that implicit trust assumptions are ones that contain an arrangement of additional trust assumptions behind it, with varying mutability and implicitness. The challenge with mutable and implicit trust points is that it’s impossible to chain them while maintaining strong confidence. The complexity of assumptions spirals astronomically with each additional chain. Sometimes the legal system can help us mitigate this by enforcing assumptions from the outside: If we have a contractual agreement with all participating parties, then anyone deviating from the expectations can be later sued and perhaps damages can be recovered (but even that is merely a mitigation, the original desired state cannot be guaranteed). What if we had a bunch of mechanisms that had explicit and immutable trust properties? We could chain them to arbitrary arrangements, while still having full confidence of the outcome’s variability. We can take several mathematical axioms, and use them to build complex mathematical theorems that maintain the same hard trust expectations we established from the axioms. In a world with more explicit and immutable trust mechanisms, we can create more interesting and complicated trusted relationships within our society. Trustless does not mean we must trust each other less, but rather it means that we can trust each other more! Even in our intimate relationships, ample communication and setting explicit boundaries allows us to create stronger and more complex intimate relationships. Relationships that are entirely bootstrapped from the implicit assumptions of our society’s defaults are the ones that are limited within that same framework–to the point that we’ve had to build an institution of marriage to enforce it with the help of our legal system. With more explicit and immutable trust mechanisms, we can create more complicated cooperative collectives, we can create more interesting governance structures, we can create a larger solution space for our future. Let’s create a world with more better trust.


Decentralization

As technological advancements march on, we’re having a lot of very important conversations about what we want out of the platforms we use day to day. We often talk about “decentralization” since it carries so much meaning, but it’s also easy to talk past each other because we may be thinking about different components of decentralization in different contexts. If we have two systems where one is more trustless and the second is more permissionless, can we say that one is decentralized but the other is not? Can we even say that one is more decentralized than the other? Or does each simply have different points of centralization across different dimensions? Perhaps we need a clearer vocabulary to use when we discuss decentralization.

Peer-to-peer, trustless, permissionless, non-custodial, credibly-neutral… What do we mean by these words? Let’s break down some of the properties that are found in some of our fantasy decentralized systems, and imagine that each of these properties exists on a scale of 0 to 10. This is not a comprehensive list, but it helps to discuss concrete properties that fall under this umbrella. We must be mindful that changing the scale of one of these properties does not make a system entirely centralized or decentralized! By changing the design of our system, we can add or remove points of centralization across specific dimensions.

We can acknowledge the obvious reasons: When we give disproportional power to specific individuals, it can lead to abuse of that power. But there is a more interesting property: It’s easy to build centralized systems on top of decentralized systems, but it’s hard to build decentralized systems on top of centralized systems. This asymmetry means that there is a larger possible solution space in decentralized systems than in centralized systems. One is a superset of the other. Let’s explore each component within this context. This is similar to how we can easily build an insecure system within a secure system, but it’s extremely hard to build a secure system within an insecure system. Secure systems have a larger and more interesting solution space within them.

It’s clear that we can take something completely decentralized in every respect, and easily build on top of it something that defeats every desirable property. Many “scams” take this shape: They exploit the intuition that a system with desirable properties would only be a host to other things with desirable properties. Our lives are composed of a mishmash of systems that are centralized or decentralized in different ways. It can be convenient to point at one piece like TCP/IP and exclaim “the internet is decentralized!” Or we can point at ICANN (or Amazon AWS or Facebook) and exclaim “the internet is centralized!” When we realize how easy it is to build centralized systems on top of decentralized systems, it’s unsurprising that we can find so many conflicting examples.

In many ways, our political systems fall on these axes, too: We value legitimate democracies, and eschew corrupted authoritarianism. We fear that people in positions of power will abuse them without “transparency” and “checks and balances”. We want to have private and personal relationships with people. We like the idea of living in a society that allows us to do creative things without asking for permission, and we would prefer a future where a small class of people doesn’t have disproportional power and wealth over everyone else.
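To make the 0-to-10 framing concrete, here is a toy sketch. The property names come from this post, but the two example systems and their scores are made up for illustration; the point is only that each system scores differently along different dimensions, rather than being simply “centralized” or “decentralized”:

```python
# Toy scorecard on the 0-10 scale described above; the scores are illustrative guesses.
PROPERTIES = ["peer-to-peer", "trustless", "permissionless", "non-custodial", "credibly-neutral"]

SCORES = {
    "email (SMTP)":   [7, 3, 6, 4, 6],  # a fairly decentralized protocol...
    "hosted webmail": [1, 2, 3, 1, 3],  # ...with a centralized service built on top of it
}

for system, values in SCORES.items():
    summary = ", ".join(f"{prop} {score}/10" for prop, score in zip(PROPERTIES, values))
    print(f"{system:>14}: {summary}")
```

The second row is the “centralized system built on top of a decentralized one” case: the protocol underneath doesn’t automatically make the service on top inherit its properties.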
Other interesting examples to contemplate within this framework: In what properties and scales is our legal system decentralized? What about real estate ownership? Modern medicine? Ask not what decentralization can do for you, but what you can do for decentralization.

Decentralization can remove the surface area for problems that are created by centralization, such as those caused by added intermediaries and custodians. Let’s set up a fantasy strawman scenario to illustrate this: Imagine Twitter starts requiring a new kind of intermediary for posting on their service: Moving forward, everyone has to tweet through another Twitter employee. That person will get your username and password, and you’ll need to phone them up and dictate your tweet for them to post on your behalf. What are some things that can go wrong here? For one, we’re going to have all kinds of communication issues–literally a game of broken telephone! Tweets are going to get miswritten due to bad phone reception, or misunderstanding of accents, or the intermediary having a bad day, or maybe they’re working with a political figure that they despise. We’ll need to hire thousands or maybe millions of intermediaries to do this job, and this also opens up an opportunity for the intermediary to leak our password, or outright hijack our account and post unauthorized tweets. Perhaps we can discourage it with a licensing scheme and harsh penalties.

Admittedly Twitter was not very decentralized to begin with, but let’s work through our framework: Now imagine we remove this added intermediary, and suddenly all of these new problems disappear from that particular point. Other points of centralization still exist as they did before, and perhaps we can work towards removing them too someday. Clearly this is not a serious example, but it illustrates important points. This is again similar to security: Removing components that don’t have the necessary guarantees is the best way to improve security, but it can be hard to remove components without sacrificing features that we’ve become dependent on. On the other hand, how many times have we found fragile dependencies in our systems that we don’t need anymore?

We have many challenges in modern society that are orthogonal to whether they’re expressed by a centralized system or a decentralized system. For example, consider ownership: We can have a centralized physical ledger on our Mayor’s desk, with entries of who owns what. Or we can have a decentralized digital ledger, with the very same entries. In either case, it does not change what policy on ownership we enact as a society. It does not change what taxes we choose to collect, or what kind of construction zoning we allow, or whether we decide to disregard that ledger and start anew. Some other social challenges worth noting: Decentralization does not solve wealth inequality, it does not solve local law enforcement, and it does not solve the ability for people to create centralized systems within it (and all of the problems that are created from them).

Most properties of centralization are about concentrating control or power in a specific component, which can make it very easy to make quick unilateral changes in policy. To decentralize is to give up power or control. When we imagined the fantasy Twitter scenario, the added intermediaries acquired a powerful position: They gained control of what tweets were being posted. To remove them, they would need to give up that power.
We can imagine scenarios where having ultimate power and control can be vital, such as a military chain of command. When facing a dire threat, where every moment is a life-or-death decision, having competent leadership with full control over their domain is crucial. Centralized properties can be very convenient and powerful, and some of their harm can be mitigated by placing them within a structured decentralized process–like giving a representative partial control over an organization, while still being able to vote them out. The more harmful parts of centralization are amplified when they approach permanent capture, when we become too dependent on them and we’re no longer able to exercise any mitigations for misbehaviour.

It is up to us to choose whether we allow more and more parts of our lives to become captured in centralization, or if we want to pursue the dream of decentralizing things that were not possible to decentralize before and see where that takes us.

Appendix with additional notes and links. Thanks for feedback on early drafts to Benjamin, Ezzeri, Harper, Jenny, Max, Phill.


How does DeFi yield?

Decentralized Finance (DeFi) is a technology that exploded in popularity in 2020, taking place almost entirely on the Ethereum network. There are many things about it that are outright fascinating: Algorithmic stablecoins, automated market makers, flash loans, flash mints! Every month there is a wild new invention that changes the landscape, often increasingly difficult to understand, which is both frustrating and intimidating. I want to talk about one thin slice of this technology that I’m seeing a lot of people have trouble grasping: Where does the yield come from?

A big part of the DeFi movement is easy access to pools of high-yielding assets. The highest-yielding assets are leveraged derivatives of other liquidity pools, but at the foundation we have access to an investment vehicle that has impressive returns with zero economic risk. How is this possible?

Traditionally, returns on investments come from inflation, inefficiency, speculation/risk, or violation of trust. We have government bonds that push the interest rate floor that private institutions offer. We have arbitrage opportunities which increase liquidity between markets. We have investors making claims that something is going to be worth more or less in the future, arguably contributing to price discovery. We have institutions giving out debt in hopes that they’ll get it back with some interest. We have Ponzi schemes where unscrupulous fund custodians provide some clients returns taken directly from the reserves of other clients until it all falls apart. Sure, all the same old tricks exist. There’s no shortage of scams, there’s volatility between markets, there’s speculation, there’s inflation and disinflation. None of this is new—no one is claiming it is.

Here is what’s new: If you’ve ever typed BEGIN and COMMIT into a database, then you’re familiar with atomic transactions. ACID-compliant database transactions have been a thing since the 70s, but they have never made it to the global financial system. A single institution might promise that a list of instructions sent to them would be executed as a single unit, but the reality is that anything that spans institutions will have multi-day settlement periods and often involve humans signing off on parts of it. Even within a single institution, operations are often batched and batches are executed periodically. Very little is actually real time in traditional finance.

Let’s say we see two marketplaces which value apples differently. We can buy a bunch of apples from one, haul them over to the other market, and sell them for a profit. That sounds great in theory, but by the time we arrange the purchase from one market, load them onto our cart, and haul them over to the other market, a few thousand high-frequency trading bots from Wall Street have moved the price of apples a million times and we’re probably not going to make minimum wage on our effort. To make it worth our while, the difference has to be substantial and our capital has to be huge to really take advantage of it. This is why arbitrage hedge funds are typically managing upwards of hundreds of millions of dollars in assets, and a single avid WSB reader wouldn’t dare compete… Usually.

Imagine we could pull out cash from our bank, buy apples, move apples, sell apples, and get cash back into our bank account in a single atomic transaction. Either the whole thing succeeds, or none of it happens. It’s zero risk arbitrage (though there are fixed fees for the privilege). That’s great!
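To make the all-or-nothing property concrete, here is a minimal sketch using Python’s built-in sqlite3 module. The “apple arbitrage” account and prices are invented for illustration; the takeaway is that either every step commits, or the rollback leaves everything untouched:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE balances (account TEXT PRIMARY KEY, amount REAL)")
conn.execute("INSERT INTO balances VALUES ('cash', 100.0)")
conn.commit()

def buy_low_sell_high(conn, buy_price, sell_price, qty):
    try:
        with conn:  # one transaction: commits on success, rolls back on any exception
            # Buy apples on market A, then sell them on market B.
            conn.execute("UPDATE balances SET amount = amount - ? WHERE account = 'cash'",
                         (buy_price * qty,))
            conn.execute("UPDATE balances SET amount = amount + ? WHERE account = 'cash'",
                         (sell_price * qty,))
            (cash,) = conn.execute(
                "SELECT amount FROM balances WHERE account = 'cash'").fetchone()
            if cash < 100.0:
                raise ValueError("not profitable; undo everything")
    except ValueError:
        pass  # the rollback already restored the original balance

buy_low_sell_high(conn, buy_price=1.00, sell_price=0.90, qty=50)  # unprofitable: no change
buy_low_sell_high(conn, buy_price=1.00, sell_price=1.10, qty=50)  # profitable: balance becomes 105.0
print(conn.execute("SELECT amount FROM balances").fetchone())     # (105.0,)
```

An Ethereum contract call works like that `with conn:` block for the whole chain of instructions: any revert unwinds every state change in the transaction.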
We can compete with high frequency traders now, but we still can’t compete with their capital. But wait, what else can we do with atomic transactions? Imagine we could take out a loan for a million dollars, buy a lot of apples, sell the apples, pay back the loan plus a fee, and keep the difference in a single atomic transaction. Either the whole thing succeeds, or none of it happens. These are zero risk loans for zero time (except we pay a fee to the liquidity pools whose locked capital we borrow). These are called flash loans. Now we can compete with wealthy hedge funds, as long as we can afford the modest flash loan fees (these vary, but roughly 0.0X% in fees) and transaction gas fees. There are other even wilder possibilities with atomic transactions, like flash mints (instead of loaning money, creating money for zero time), but let’s talk about zero time.

The longer a debt is held, the more unknowns are added to the probability that the debt will be paid out. A mortgage holder could lose their job, a bank could buy debt that represents a bunch of people who lost their jobs, a global pandemic could ravage our local economy. If my friend wants twenty bucks for a second, I’m not too worried about not getting it back unless they’re an illusionist. If my friend wants to borrow twenty bucks for ten years, I doubt I’ll see it again. So much can change with time; I’ll probably forget about it.

A flash loan does not exist outside of the scope of the transaction. In effect, a flash loan is debt that exists for precisely zero time. Aside from the real-but-tractable likelihood that there is a software bug in the system, there is no risk of an atomic zero-time loan not being paid back. The idea is sound, even if there have been times when a particular implementation was not. It seems the only people who have trouble grokking zero-time loans are the ones expecting software to be perfect in zero time. 🙃 In fact, DeFi exploits are some of the most fascinating users of flash loans, since they’re able to amplify market imbalances in unexpected ways with incredible effect. These are not fundamental flaws, and they’ve all been mitigated in modern DeFi contracts, but I’m sure there will be more before things become completely safe and stable. That’s the big difference with traditional finance: Even after eons of custodial trust violations, from the very first primate to borrow a tool and not return it as promised, we are still struggling with the same fundamental shortcomings. Meanwhile, DeFi on Ethereum has only been alive for a couple of years, and there is a very achievable end point of stability and safety.

Is this investment advice? 😳🫴🦋

Could we implement this on traditional finance infrastructure? It’s possible. Meanwhile, we’re still seeing banks sweating to maintain billions of lines of COBOL running in production that was written 50+ years ago. Ask again in a hundred years? Even if we could wave a magic wand and retrofit all of the world’s banking today, the custodial nature of traditional banking is what introduces a lot of the risk. Can a loan across multi-national banking institutions truly be risk-free if each institution has to trust the other side to keep up its end of the deal? Once we move to a trustless non-custodial system that has Turing-complete atomic instructions, we end up where we are today with Ethereum’s DeFi ecosystem.

“No risk” is never truly no risk! Of course. Software bugs happen, human error happens, black swan events happen.
The goal of these mechanisms is to dramatically reduce the number of intermediaries and cut down on the surface area for risk, in ways we haven’t been able to do before.

Huh, free money with no risk? Wouldn’t everyone put their money in this and the returns go to 0? That’s not impossible! But keep in mind, flash loans are just one of a multitude of yield-earning mechanisms, and already one with the lowest returns. There are plenty of other mechanisms that pay far more but involve more hands-on babysitting (like adjusting collateral ratios as prices change) or have more risk (less mature smart contracts that could be exploitable) or are more speculative. Furthermore, as more money enters the ecosystem, more opportunity becomes available for flash-loan trading bots to do their thing.

How much yield could I get right now? Asking for a friend… Depends on your risk tolerance and how hands-on you want to be. Many of these rates vary wildly from week to week.

What are flash loan bots actually doing? Here’s a well-documented smart contract bot that takes out a loan from Aave and performs arbitrage between two markets on ETH-DAI: https://github.com/fifikobayashi/Flash-Arb-Trader More sophisticated bots in production today traverse much more complex paths across many lenders, exchanges, and trading pairs before completing a profitable cycle. All atomically, of course.
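To show the shape of that cycle without reading Solidity, here is a conceptual Python sketch. Real bots are smart contracts (like the Flash-Arb-Trader repo linked above), and the lender and market objects here are hypothetical stand-ins; the point is that borrowing, trading, and repaying all live and die inside one atomic transaction:

```python
class Revert(Exception):
    """Stands in for the EVM reverting: every step is undone, as if nothing happened."""

def flash_arbitrage(lender, market_a, market_b, amount):
    # Borrow with no collateral; the debt only exists inside this one transaction.
    funds = lender.flash_borrow(amount)
    # Buy where the asset is cheap, sell where it is expensive.
    tokens = market_a.buy(funds)
    proceeds = market_b.sell(tokens)
    owed = amount + lender.fee(amount)  # principal plus the flash loan fee
    if proceeds < owed:
        # Not profitable: abort, and the whole transaction (loan included) unwinds.
        raise Revert("unprofitable cycle")
    lender.repay(owed)
    return proceeds - owed  # profit, before gas costs
```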


I tried the Oculus Quest 2 for a couple of hours and wrote a review

For context, I’m comparing this with my Valve Index, which is not a very apples-to-apples comparison. This is specific to the wireless embedded mode, not the tethered Quest Link mode. I tested it with the Elite Strap, which ought to be included by default.

Tracking is “surprisingly good” as everyone says, especially superficially. It’s hard to visually detect glitches when the system is operating smoothly. On the other hand, I did feel mild nausea after some casual use. Normally I can use my Index (or old Vive) for hours without nausea. It’s hard to tell if this is due to some more subtle tracking issues, or because of the limited lens controls, or purely from processing glitches. It’s not hard to make the embedded computing system overload and jitter. Using it while apps are installing, or using insufficiently “optimized” apps like VR Chat, causes tracking failures. The tracking will freeze occasionally or swing aggressively. It’s not very common under more “Made for Quest” experiences. I was particularly concerned about the Quest’s inside-out controller tracking, but again I was surprised at how well the controllers worked. Not perfect (there were positions where they jittered or disappeared), but better than expected. Under good operating conditions, it was more than tolerable.

Text clarity is very very good, but the graphics fidelity on everything else is noticeably sad. Granted, it’s an underpowered platform compared to an expensive gaming computer, but this comes with consequences. Obviously “optimized” scenes are very low-polygon, but that can still work under the right context. The thing that is more noticeable is that the textures are very low quality too. The end result feels kind of like playing a game on Low Quality settings in 800x600 resolution but on a 6K high-DPI display. The vector-rendered elements are very crisp, but everything else is from a different lo-fi universe. The field of view is noticeably tighter than the Valve Index’s, but I am also personally less bothered by this than most. No complaints with any screen door effect or godrays.

It is very pleasant to be wireless. Wired headsets never bothered me as much as they bother some people, but I do wish the Valve Index came out with a wireless module already. At minimum, it makes the “grab and play” experience a bit better. On the other hand, I suspect the Quest 2 is a much better device when tethered to a gaming box via Quest Link. I haven’t tried it in this mode, since it’s hard to imagine it being better than the Index in any metric, so why bother… But if it’s a question of budget, then it seems like a viable option.

While wearing the Quest 2, the built-in audio is fine. Not nearly as good as the Index, which is better than a lot of dedicated headphones! On the other hand, the sound is more audible outside of the headset than inside. Anyone else in the room gets to experience everything you experience at 1.25x the volume. One of the cool parts of audio inside of a head-tracked VR environment is that it’s easy to simulate 3D audio very well; this is great for the wearer but terrible for everyone else. I could feel a bit of nausea just sitting in the same room as someone playing the Oculus Quest 2, just from all of the simulated audio positioning shifting. Thankfully there is an AUX port on the headset, so it’s easy to put on your own headphones.

There are only two IPD presets; luckily one was pretty close to my head shape. My eyelashes hit the lens a bit, and luckily I do not need to wear glasses.
With the Elite Strap, the device is a good weight and comfortable to wear for extended periods, no complaints. Not quite as comfortable and adjustable as the Valve Index, but a fair bit lighter. If the Valve Index is a Herman Miller Aeron Chair, the Oculus Quest 2 with Elite Strap is a budget look-alike from Staples that gets you most of the way there. Certainly more than good enough to sit back and enjoy a full-feature 3D film, or dance around for a couple hours.

No complaints about the controllers, but they do have a “budget” feel to them coming from the Index Knuckles, or even the Vive Wands. They’re small, but reasonably comfortable. Not certain they would win in a fight against a TV, and certainly not against some drywall (which my old Vive wands have more than enjoyed taking on).

Compared to Steam, the Oculus Store is a sad experience. It was difficult to find a hierarchical listing of games or experiences. The search worked well, but you need to know what to search for. Overall it did not feel like the people who designed it live on the same planet as I do.

The device and all of your purchases are locked to your Facebook account. Fuck Facebook. This is a deal breaker for me and should be for everyone; I would never spend my own money on anything that profits Facebook. My only hope is that they’re selling the Quest 2 below cost and that it’s thoroughly jailbroken soon.

I really wish there were another competitor in the “embedded wireless with optional tethered” tier that I could recommend as an alternative, but there isn’t. For people who can’t commit to a gaming box yet, there’s simply no other option. I can’t fault Facebook for cannibalizing and betraying the Rift team with the Quest line; it’s clearly the best move in our current VR ecosystem, especially at the price point they’re selling. Overall it feels like a $500-600 device being sold for $370 USD (with the Elite Strap), with clear intent and inevitable success of dominating the entry-level tier of VR. I’m sure Facebook will more than make up any losses through the walled garden they’ve assembled.


Using Github Issues as a Hugo frontend with Github Actions and Netlify

I got into the habit of dumping quick blog post ideas into issues on my blog’s repo. It’s a convenient place to iterate on them and share with friends for feedback before actually publishing on my blog. The drafts keep accumulating, so how do I trick myself into publishing more? Perhaps by reducing the effort required for the next step? Let’s do it!

My blog is statically generated using Hugo, the code is hosted on Github, then when a pull request comes in it is built, previewed, and published on merge by Netlify. The blog post drafts are posted as Github issues, so there is a clear gap: How do we convert issues into pull requests for Netlify? Enter Github Actions! My full workflow lives here if we want to jump ahead, but let’s break down the broad strokes.

I decided to trigger the publishing process once an issue is labelled with ‘publish’, so let’s start with that: Next up we want to specify the steps; the first thing is to check out the repository into the action’s environment: Once the source code is available, we want to generate the blog post from the issue metadata. Here is a very basic version of this, though I ended up doing more tweaking in the end: This shoves the body of the issue, which is already markdown, into a markdown file named based on the title of the issue. This is a good place to add frontmatter, or slugify the title, or whatever else your blog setup requires. Running the payload through environment variables helps with not needing to escape various characters like `. And finally, we make the pull request using Peter Evans’ create-pull-request action, which makes this super easy: This is the minimum of what we need, but we can specify all kinds of additional options here, like auto-deleting the branch, setting a custom title, body, and whatever else. Here’s an example of what I’m doing:

When my blog post draft is ready, I add the tag and the Github action takes it away, creating a pull request: The pull request automatically pings me as a reviewer, and includes a “Closes #X” line which will close the draft issue once the PR is merged. Very convenient! Once the pull request is ready, Netlify takes it away, builds everything, and generates a handy preview: I can make sure everything looks right, and even apply edits directly inside the pull request. This is also a great stage for sending a long blog post out for feedback, using all of the wonderful Pull Request Review features! When all is said and done, merging the pull request triggers Netlify to publish my changes to my domain, closes the original issue, and I’m done!

Drag n’ drop images work in Github Issues, so it’s super easy to write a quick post with a bunch of screenshots or what have you. It’s important to me that I’m not too tightly coupled to third-party services, so the pull request and code merge flow makes sure that all of the published state continues to live inside of my Git repository. I can still make blog posts the way I used to: Pull the latest repo, write some markdown, and push to publish. I added a little frontmatterify script to process the incoming markdown and convert the remote Github Issue uploaded images into local images that are included in the pull request. The script also generates frontmatter that I use for Hugo. It’s a bit clunky but works for now. Alright, let’s do this.
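The workflow snippets themselves are YAML and live in the linked repository, so they aren’t reproduced here. But as a rough, hypothetical sketch of the “issue metadata to markdown file” step, something like this Python would do the job; the environment variable names, output path, and frontmatter fields are placeholders rather than my actual script:

```python
#!/usr/bin/env python3
"""Hypothetical sketch: turn a labelled Github issue into a Hugo post.

Assumes the workflow step exports ISSUE_TITLE and ISSUE_BODY as environment
variables, which avoids shell-escaping characters like backticks.
"""
import datetime
import os
import pathlib
import re

title = os.environ["ISSUE_TITLE"]
body = os.environ["ISSUE_BODY"]  # already markdown, straight from the issue

# Slugify the title into a filename-friendly form.
slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

frontmatter = "\n".join([
    "---",
    f'title: "{title}"',
    f"date: {datetime.date.today().isoformat()}",
    "draft: false",
    "---",
    "",
])

out = pathlib.Path("content/posts") / f"{slug}.md"
out.parent.mkdir(parents=True, exist_ok=True)
out.write_text(frontmatter + body)
print(f"wrote {out}")
```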


Beginner Sourdough: Does anything really matter?

There are bazillions of sourdough recipes in the wild, foaming with traffic from eager bakers trapped in quarantine, ready for their sourdough starter baby with a punny name to graduate into a tube of sustenance and distraction during dark times. Sourdough recipes are more involved than regular bread recipes, that’s for sure. There are many steps, and bakers can’t seem to agree on many of them. That’s no accident: a great sourdough bake needs to be adjusted to our kitchen and our starter.

This article is not a sourdough recipe or guide. It’s more of a guide for reading sourdough recipes. It’s everything I wish I knew after reading my first sourdough recipe. When we get that fateful link from our baker friend with the only caption saying “just do what it says,” read through it, take it in, then come back here for some added context. (For some decent recipes, check the links at the end of the article.) Let’s break down a standard sourdough bread recipe, look at what varies between them, and how to adjust the recipe to fit our kitchen.

A typical sourdough recipe comes down to four things. Whether our recipe measures in grams or ounces or cups, the thing that really matters is the ratio between ingredients: Starter to Water to Flour. For example, 50 grams of starter, 50 grams of water, and 100 grams of flour is 1:1:2 in ratio short-hand. It can also be expressed as a Baker’s Percentage: 100% flour, 50% starter, 50% water. When using Baker’s Percentages, everything is relative to the weight of the flour, so flour is always 100%. Once we have the ratio, we can scale the recipe to whatever total quantity we’d like, whether it’s a small 300g bun or a big ol’ 1.5kg absolute unit. The ratio also tells us the hydration of the bread (the baker’s percentage of total water content). Higher hydration can make the bread’s crumb more open and fluffy, but it can also make the dough stickier and more difficult to work with. This is one of the common things that vary between recipes, whether explicitly or indirectly. We can use a Bread Calculator to figure all of this out ahead of time, and tweak the components to meet our goal. Tried a 60% hydration loaf last time but want to bring it up to 75% this time? Plug in our measurements and tweak the water or flour components until our hydration ratio is where we want it. Other components are usually expressed in baker’s percentages too; 2% salt is standard.

Okay, so we have our ratio of flour and water, but what kind of flour? Most beginner sourdough recipes recommend using Bread Flour. It has more protein than All Purpose Flour, which gives the starter yeast more nutrition to work with. There are many other kinds of flours out there, like Whole Wheat Flour and Rye Flour, which have even more protein. If all you have is All Purpose Flour and Whole Wheat or Rye, then consider mixing them to give your starter yeast more to work with. Too much Whole Wheat or Rye can be difficult to work with, so start with a majority of your base flour (Bread Flour ideally, All Purpose otherwise). Maybe something like 5:1 base flour to other flour, and tweak from there. Avoid flours designed to be low in protein, like pastry flour. Once we’re ready to experiment more, we can toss in some herbs, cheese, seeds; the possibilities are endless!

Any sourdough bread recipe will have a bunch of steps with specific timelines they recommend. A half hour to rest, a few hours to bulk ferment, another period to proof.
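Before digging into timing, here is a quick sketch of the ratio math above. It assumes the starter is at 100% hydration (equal parts flour and water by weight), which is a common convention but an assumption on my part:

```python
# Baker's percentages and overall hydration for a given starter/water/flour ratio.
# Assumes a 100%-hydration starter: half its weight is flour, half is water.

def recipe_stats(starter_g, water_g, flour_g, salt_g=0.0):
    total_flour = flour_g + starter_g / 2
    total_water = water_g + starter_g / 2
    return {
        "starter %": round(100 * starter_g / flour_g, 1),
        "water %": round(100 * water_g / flour_g, 1),
        "salt %": round(100 * salt_g / flour_g, 1),
        "hydration %": round(100 * total_water / total_flour, 1),
        "total g": starter_g + water_g + flour_g + salt_g,
    }

# The 1:1:2 example from above, with 2% salt:
print(recipe_stats(starter_g=50, water_g=50, flour_g=100, salt_g=2))
# {'starter %': 50.0, 'water %': 50.0, 'salt %': 2.0, 'hydration %': 60.0, 'total g': 202}
```

Scaling up to a bigger loaf is just multiplying every ingredient by the same factor; the percentages stay put.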
The problem with most recipes is that they’re written for the author’s specific kitchen and starter. We’ll stumble on a professional baker on one end of the world heatedly debating another baker on the other side, both with thousands of loaves under their belt, but they just can’t agree on The Correct Way To Make Sourdough! “I’ve always proofed mine for no more than 3 hours and it comes out perfect every time,” one will insist. “No way, you need to proof for minimum 5 hours, until a flock of pigeons fly by the window,” the other will protest. While I’m just someone who read too many recipes and discussion boards on the topic, here are some of my notes on what I discovered truly matters:

A cold kitchen will take a lot longer for a dough to rise than a warm kitchen. It can be the difference between a 3 hour bulk fermentation and an 8 hour one. Or between 10% sourdough starter and 30% sourdough starter. If we have a really warm kitchen, we can scale down the amount of starter we use to slow down the fermentation. A longer fermentation can yield a richer sour flavour, so we want that. It will also take a lot longer for a new sourdough starter to mature in a cold kitchen; one week can become two weeks. It can help to create a warm space for the dough to rise more efficiently: An oven with the oven light on, or a warming mat typically used for seedlings. Be careful to not go too warm; the yeast starts to die at around 100F. 80–85F is the recommended range for a good efficient rise. Periodically checking the dough’s core temperature is a good way to control this variable better.

A less mature starter is less active, possibly not even ready to make bread with. If we used a low-protein flour for our starter, it can take longer to ramp up. When a starter is matured, it regularly doubles or triples in volume after a feeding. With consistent feedings, it develops a kind of metabolism where you can rely on it reaching peak activity at a specific time during the day. It can help to use the starter as it approaches its peak activity, especially if we’re working with a cold kitchen. Watch out for the infamous starter false start: Around day 3 or 4, the starter will spike with massive growth and beginners will rejoice about how ready their starter is, but it is all for nought: This happens because an adjacent bacteria took hold of the starter and consumed all of the nutrients until it burned itself out. With consistent feedings, the desired yeast sourced from our flour grains will ultimately prevail and develop a healthy cadence.

Aside from changing the temperature, and the quantity and activity of our starter, we’ll need to adjust the timing of our steps. If the kitchen is cold, ferment and proof longer. If the yeast is not too active, extend our timeline even further. It can help a lot to learn what the quality of the dough should be at each step, to know whether to proceed to the next step or wait longer. There are various tests we can do, such as the float test to see if the starter is ready to be used; the windowpane test to see if we have sufficient gluten development; the poke test to see if the dough is sufficiently proofed for baking.
We’ll go more into it in the Steps section. Our dough can be underproofed or overproofed, and we might not even realize it until we pull it out of the oven, wait the requisite hour, cut into it, share our joy on Instagram, and then receive a DM from a professional baker that just says “it’s underproofed.” An underproofed bread won’t have enough structure to fully rise during the bake, all of the gasses from the yeast will accumulate into a few large bubbles, and parts of it will be very dense and maybe even taste raw. It’s much easier to underproof as a beginner, especially with an immature starter and a cold kitchen. An overproofed bread won’t have an ideal crumb, such as very small holes, or it could collapse under its own rise because the gluten relaxed too much. If we discover the dough is overproofed before baking, we can fold it a few more times to reactivate the gluten and give it another short proof before baking. I’ve found it hard to find consensus on what qualities an overproofed loaf has, so it’s safer to err on the side of overproofing than underproofing if we must.

For other kinds of baking, when we just mix some ingredients and toss them into a temperature-controlled oven, it’s appropriate to follow the recipe to the letter. For sourdough bread baking, we need to be ready to adjust the recipe to fit our environment and our needs. Many recipes have a lot of steps in common. Some of them are necessary, others can be substituted or adjusted, others are purely ritual. It’s useful to know which is which, and what we can do with them. Maybe we started our bread too late in the day and can’t stay up until 2AM to do 6 additional folds, or maybe we don’t have time to bake in the morning and would prefer to avoid refrigerating overnight.

Levain: This is a process for taking some of your existing starter and making a separate sister starter that we can use for making our loaf while keeping our original starter. 🤔 If we have a large enough discard from our original starter, we can use that instead.

Autolyse: Typically done while the levain is prepared, it’s for mixing the bulk of the recipe’s flour and water separately from the starter and letting it rest until the levain is ready to be mixed. 🤔 If we’re skipping the levain step, skipping the autolyse step is not catastrophic for beginners. We can mix our discard all together, or I prefer to mix most of the water with the discard first and then add the bulk of the flour.

Mixing Techniques: As a beginner, do whatever it takes to get everything combined into a shaggy doughy mess. Minimize how much flour spills over the counter and makes a mess, and try to scrape what we can from our hands, but it’s not a big deal. We can slap, we can fold, we can slam, we can even yell profanities at it while we pinch it a bunch.

Rest and Add Salt: Many recipes are very serious about leaving the salt and a bit of water until after mixing the bulk of the dough and water. The theory is that salt leeches moisture from the flour, so it won’t be absorbed as evenly if we do it all at once. Once we mix in the salt, the feel of the dough instantly changes into something much less shaggy and less sticky; it’s delightful. 🤔 Foodgeek did an A/B test on this, and it doesn’t seem to matter if we mix in the salt with the Autolyse step. It seems to be more ritual than anything, so no need to panic if we mixed in the salt prematurely. It is convenient to save a bit of our water reserve for mixing the salt, though.
Stretch and Fold: During the “bulk fermentation” step, we’re usually instructed to perform a number of folds to help develop the gluten in the dough, so that the webby network of the bread will have enough structure to support itself around all of the lovely bubbles produced by the yeast as it rises and bakes. Different recipes have different techniques here, with different timing. Some recipes suggest using oiled hands for working with the dough at this point, but water-wet hands are even better: less messy, and they don’t introduce oil to the dough, which can affect the recipe composition. 🤔 Another Foodgeek contribution on the topic: it certainly seems worth doing at least a couple of good dough perturbations over a few hours. With a sufficiently mature yeast, even doing nothing produces decent sourdough, but not quite as good, and we’re taking a gamble on the activity of our yeast to compensate. If we’re busy, do at least two nice gentle lift and coil sets spaced out by 30–60 min, and let it rest until it rises 50–100%. 🤔 Do the windowpane test if we’re unsure whether it needs to rest more. Colder environments or less mature yeast could require more time than prescribed here.

Fold, Roll, and Stitch: By shaping our dough at this point, we create a tense form that will present the rough shape of the final loaf. With a good tight roll and stitch, the loaf will have a better chance to rise vertically rather than sprawl into a blob. 🤔 I’ve unintentionally totally for science skipped this step once and ended up with a very flat loaf that sprawled the circumference of the dutch oven. As far as technique goes, this is one of the harder steps, but it pays off to get better at it.

Banneton, Seam Side Up: The shape of the banneton helps reinforce the final form of the loaf as we give it additional time to rise. If we don’t have a banneton, we can use any form that is not too much bigger than the desired loaf size. I use a colander lined with a linen towel. 🤔 This is the companion step to the Fold, Roll, and Stitch. It’s similarly important, but the seam side up is not a huge deal if we messed it up. The loaf won’t have that perfect smooth top to work with on the surface, but it will be just as good inside. Remember to dust the lining with flour so it won’t stick, or some corn meal if we’re feeling fancy.

Overnight in the Fridge: Proofing in the fridge is a recurring step from all bakers who love to bake first thing in the morning. What about the hungry night owls? What about all the sourdough bakers who have been making this lovely bread for far longer than refrigerators have existed in our society? 🤔 We can replace the fridge proofing with a 3–5 hour proof at room temperature, on the longer end if the temperature is low or the starter is less active. Do the poke test if we’re unsure whether it’s ready: The recess in the dough from the poke shouldn’t spring back immediately when it’s ready. Cover in plastic or wrap in a bag to prevent the dough from drying out while it’s proofing, especially if we’re refrigerating overnight.

Pre-heat and Bake From Cold: Pre-heat the oven to maximum heat, typically 500F, then quickly take the dough out of the fridge, flip it onto parchment (since we did seam-side up), dust with some flour, spread the flour, score with a lame (a fancy razor holder), and commit it to the flames. 🤔 If we don’t have a dutch oven, or a covered cast iron pot, we can use a baking steel or even a baking sheet.
We’ll need another pan placed at a lower rack full of boiling water, to add the moisture to the oven necessary for a nice crispy crust. The lidded dutch oven captures and retains the moisture from the dough, so it’s a bit simpler. 🤔 What if the dough is looking a little sad after an overnight in the fridge? This can happen if the yeast isn’t as active as it could be. Take it out from the fridge and give it one more room-temperature proof for 2–4 hours, bonus points if we can proof it somewhere a bit warmer (80–85F), like a seedling warming mat. 🤔 No lame? No problem. If we use a safety razor, carefully use a clean razor by hand. Worst case, just grab a nice sharp knife and use that to score. End to end, leaning 45 degrees, half-inch deep cut. Not a huge deal if the scoring step is botched or skipped, the bread will still form just fine but it won’t expand along a specific gorgeous seam of your making. Wait For The Hardest Hour Of Your Life: When we pull the loaf out of the oven, it’s not quite done baking. A rookie mistake is to huddle around the glorious steaming orb-treasure and tear into it like a pack of post-apocalyptic savages. A true intellectual will place an ear near the crust as it cools and hear the crackling transformation within. 🤔 If we don’t wait, it will still be edible but: The inside will be extra stretchy or gummy, it might have an “undercooked” vibe to it, some of the final notes of flavour won’t quite develop. Seriously, just wait at least an hour, or until it fully reaches room temperature. If it’s steamy when we cut into it, we didn’t wait long enough. Let’s be real, we’re all in this for the sourdough puns. Here’s a list of ideas we can try if it’s just not working out: Even if mistakes are made along the way, there’s a good chance we’ll end up with a decent tasting bread product in the end. We must eat our shame as we will our inevitable victory.


Open sourcing urllib3’s finances

Proud to be part of the team who built this chonk of vital internet infrastructure. ❤️ Big sincere thank you to our maintainers: @theavalkyrie @sethmlarson @haikuginger @Lukasaoz @sigmavirus24 , and to all the other contributors for this very important work. https://t.co/ea3iFClKD5 Since urllib3 was created in 2008, we have gained several amazing maintainers and hundreds of contributors. Any time you do anything that touches HTTP in Python, you’re probably using urllib3 behind the scenes. It is the 2nd most downloaded third-party Python package, after pip. Tens of thousands of engineering hours were donated by people working on their own time, unpaid. The whole world benefited from these important donations. Generous companies have helped the urllib3 contributors and maintainers financially over the years. In the spirit of transparency, I’m going to take a moment to talk about some of our recent funding and how that money was spent. Above all, I want to applaud employers who are actively encouraging their team to contribute back to open source as part of their job. You’re already part of our README , but once more for the active supporters: And thank you to all of the employers who have contributed your company’s resources to our project in the past: Akamai , Hewlett Packard Enterprise , and others. If your work on urllib3 is actively being funded by your employer, please send a pull request to be added to the README. We have received a variety of support in various shapes and sizes over the years. Most recently: Additionally, urllib3 joined Tidelift Subscription to provide better support guarantees for their paying customers. Our current revenue from the subscription is $417.00 per month ($10,000 over 2 years). This funding is going to Seth for handling support requests and releases. Previously, I was taking $208.50 per month for work as backup maintainer. Thank you Tidelift ! With over 50 million installs per month, urllib3 is a core part of the world’s infrastructure. There is a never ending stream of improvements and fixes that need to be done, especially as HTTP continues to evolve as a technology. If you can convince your management to allocate a budget to support the improvement of urllib3, please get in touch with me or Thea . We will find the right urllib3 project and contributor to make good use of the funding, and the whole world will benefit from it.
