Posts in Ui (15 found)
codedge 4 days ago

Random wallpaper with swaybg

Setting a wallpaper in Sway with swaybg is easy. Unfortunately, there is no way to set a random wallpaper automatically out of the box. Here is a little helper script to do that. The script is based on a post from Sylvain Durand 1 with some slight modifications: I just launch the script from my sway config instead of setting a background there (the config line is included in the sketch below). The script spawns a new swaybg instance, changes the wallpaper, and kills the old instance. With this approach there is no flickering of the background when changing. An always up-to-date version can be found in my dotfiles. Original script from Sylvain Durand: https://sylvaindurand.org/dynamic-wallpapers-with-sway/ ↩︎
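A minimal sketch of the approach (the wallpaper directory and script path are placeholders; see the dotfiles for the real version):

```sh
#!/bin/sh
# Remember the currently running swaybg, then start a fresh instance
# with a randomly chosen wallpaper.
old_pid=$(pidof swaybg)
wallpaper=$(find "$HOME/pictures/wallpapers" -type f | shuf -n 1)
swaybg -i "$wallpaper" -m fill &
# Give the new instance a moment to paint before killing the old one,
# so the background never flickers.
sleep 1
[ -n "$old_pid" ] && kill $old_pid
```

And the corresponding sway config line (the path is an assumption):

```
exec_always ~/.config/sway/random-wallpaper.sh
```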

0 views

LLMs Eat Scaffolding for Breakfast

We just deleted thousands of lines of code. Again. Each time a new LLM comes out, it's the same story. LLMs have limitations, so we build scaffolding around them. Each model introduces new capabilities, so old scaffolding must be deleted and new scaffolding added. But as we move closer to superintelligence, less scaffolding is needed. This post is about what it takes to build successfully in AI today.

Every line of scaffolding is a confession: the model wasn't good enough.

- LLMs can't read PDFs? Let's build a complex system to convert PDF to markdown.
- LLMs can't do math? Let's build a compute engine to return accurate numbers.
- LLMs can't handle structured output? Let's build complex JSON validators and regex parsers.
- LLMs can't read images? Let's use a specialized image-to-text model to describe the image to the LLM.
- LLMs can't read more than 3 pages? Let's build a complex retrieval pipeline with a search engine to feed the best content to the LLM.
- LLMs can't reason? Let's build chain-of-thought logic with forced step-by-step breakdowns, verification loops, and self-consistency checks.

Etc., etc. Millions of lines of code to add external capabilities to the model. But look at models today: GPT-5 is solving frontier mathematics, Grok-4 Fast can read 3000+ pages with its 2M context window, Claude Sonnet 4.5 can ingest images or PDFs, and all models have native reasoning capabilities and support structured outputs. The once-essential scaffolding is now obsolete. Those tools are baked into the models' capabilities.

It's nearly impossible to predict what scaffolding will become obsolete and when. What appears to be essential infrastructure and industry best practice today can turn into legacy technical debt within months. The best way to grasp how fast LLMs are eating scaffolding is to look at their system prompts (the top-level instructions that tell the AI how to behave). Comparing the prompts used in Codex, OpenAI's coding agent, across models is mind-blowing:

- o3 prompt: 310 lines
- GPT-5 prompt: 104 lines

The new prompt removed 206 lines, a 66% reduction. GPT-5 needs far less handholding. The old prompt had complex instructions on how to behave as a coding agent (personality, preambles, when to plan, how to validate). The new prompt assumes GPT-5 already knows this and only specifies the Codex-specific technical requirements (sandboxing, tool usage, output formatting). It removed all the detailed guidance about autonomously resolving queries, coding guidelines, and git usage. It's also less prescriptive: instead of "do this and this," it says "here are the tools at your disposal." As we move closer to superintelligence, the models require more freedom and leeway (scary, lol!).

Advanced models require simple instructions and tooling. Claude Code, the most sophisticated agent today, relies on a simple filesystem instead of a complex index and uses bash commands (find, read, grep, glob) instead of complex tools. It moves so fast. Each model introduces a new paradigm shift. If you miss a paradigm shift, you're dead. Having an edge in building AI applications requires deep technical understanding, insatiable curiosity, and low ego. By the way, because everything changes, it's good to focus on what won't change.

Context window is how much text you can feed the model in a single conversation. Early models could only handle a couple of pages. Now it's thousands of pages, and it's growing fast.
Dario Amodei, the founder of Anthropic, expects 100M+ context windows, while Sam Altman has hinted at billions of context tokens. As LLMs can see more context, you need less scaffolding like retrieval-augmented generation.

- November 2022: GPT-3.5 could handle 4K context
- November 2023: GPT-4 Turbo with 128K context
- June 2024: Claude 3.5 Sonnet with 200K context
- June 2025: Gemini 2.5 Pro with 1M context
- September 2025: Grok-4 Fast with 2M context

Models used to stream at 30-40 tokens per second. Today's fastest models, like Gemini 2.5 Flash and Grok-4 Fast, hit 200+ tokens per second: a 5x improvement. On specialized AI chips (LPUs), providers like Cerebras push open-source models to 2,000 tokens per second. We're approaching real-time LLMs: full responses on complex tasks in under a second.

LLMs are becoming exponentially smarter. With every new model, benchmarks get saturated. On the path to AGI, every benchmark will get saturated. Every job can be done and will be done by AI. As with humans, a key factor in intelligence is the ability to use tools to accomplish an objective. That is the current frontier: how well a model can use tools such as reading, writing, and searching to accomplish a task over a long period of time. This is important to grasp. Models will not improve their language translation skills (they are already at 100%), but they will improve how they chain translation tasks over time to accomplish a goal. For example, you can say, "Translate this blog post into every language on Earth," and the model will work for a couple of hours on its own to make it happen. Tool use and long-horizon tasks are the new frontier.

The uncomfortable truth: most engineers are maintaining infrastructure that shouldn't exist. Models will make it obsolete, and the survival of AI apps depends on how fast you can adapt to the new paradigm. That's where startups have an edge over big companies: big companies are late by at least two paradigms. Some examples of scaffolding on the decline:

- Vector databases: companies paying thousands per month for vector search when they could now just put docs in the prompt or use agentic search instead of RAG (my article on the topic; a sketch follows this list).
- LLM frameworks: these frameworks solved real problems in 2023. In 2025? They're abstraction layers that slow you down. The best practice is now to use the model API directly.
- Prompt engineering teams: companies hiring "prompt engineers" to craft perfect prompts when current models just need clear instructions and open tools.
- Model fine-tuning: teams spending months fine-tuning models, only for the next generation of out-of-the-box models to outperform their fine-tune (cf. my 2024 article on that).
- Custom caching layers: building Redis-backed semantic caches that add latency and complexity when prompt caching is built into the API.

This cycle accelerates with every model release.
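To make the first item concrete, here is a minimal sketch of "docs in the prompt" replacing a retrieval pipeline (the model name, paths, and question are placeholders; uses the OpenAI Python client):

```python
# Instead of chunking, embedding, and vector search: read the documents
# and put them directly into one long-context prompt.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

docs = "\n\n".join(
    f"# {p.name}\n{p.read_text()}" for p in Path("docs").glob("*.md")
)

response = client.chat.completions.create(
    model="gpt-5",  # placeholder: any long-context model
    messages=[
        {"role": "system", "content": "Answer using only the documents provided."},
        {"role": "user", "content": f"{docs}\n\nQuestion: What changed in v2?"},
    ],
)
print(response.choices[0].message.content)
```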
The best AI teams have four critical skills:

- Deep model awareness: they understand exactly what today's models can and cannot do, building only the minimal scaffolding needed to bridge capability gaps.
- Strategic foresight: they distinguish between infrastructure that solves today's problems and infrastructure that will survive the next model generation.
- Frontier vigilance: they treat model releases like breaking news. Missing a single capability announcement from OpenAI, Anthropic, or Google can render months of work obsolete.
- Ruthless iteration: they celebrate deleting code. When a new model makes their infrastructure redundant, they pivot in days, not months.

It's not easy. Teams are fighting powerful forces:

- Lack of awareness: teams don't realize models have improved enough to eliminate scaffolding (this is massive, btw).
- Sunk cost fallacy: "We spent 3 years building this RAG pipeline!"
- Fear of regression: "What if the new approach is simpler but doesn't work as well on certain edge cases?"
- Organizational inertia: getting approval to delete infrastructure is harder than building it.
- Resume-driven development: "RAG pipeline with vector DB and reranking" looks better on a resume than "put files in prompt."

In AI, the best teams build for fast obsolescence and stay at the edge. Software engineering sits on top of a complex stack: more layers, more abstractions, more frameworks. Complexity passed for sophistication. A simple web form in 2024? React for UI, Redux for state, TypeScript for types, Webpack for bundling, Jest for testing, ESLint for linting, Prettier for formatting, Docker for deployment...

AI is inverting this. The best AI code is simple and close to the model. Experienced engineers look at modern AI codebases and think: "This can't be right. Where's the architecture? Where's the abstraction? Where's the framework?" The answer: the model ate it, bro, get over it. The worst AI codebases are the ones that were best practices 12 months ago. As models improve, the scaffolding becomes technical debt. The sophisticated architecture becomes the liability. The framework becomes the bottleneck. LLMs eat scaffolding for breakfast, and the trend is accelerating.

Thanks for reading! Subscribe for free to receive new posts and support my work.

0 views
ptrchm 1 week ago

Event-driven Modular Monolith

The main Rails app I currently work on has just turned eight. It's not a huge app. It doesn't deal with web-scale traffic or large volumes of data. Only six people work on it now. But eight years of pushing new code adds up. This is a quick overview of some of the strategies we use to keep the codebase maintainable. After the first few years, our codebase suffered from typical ailments: tight coupling between domains, complex database queries spread across various parts of the app, overgrown models, a maze of side effects triggered by ActiveRecord callbacks, and endlessly chained associations, with an all-encompassing model sitting on top of the pile. The post walks through:

- Modular Monolith
- Pub/Sub (Events)
- Patterns
- Service Objects
- Repositories for Database Queries
- Slim and Dumb Models
- Bonus: A Separate Frontend App
- How Do I Start?
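A minimal sketch of the pub/sub idea (the excerpt doesn't name an event library, so this assumes plain ActiveSupport::Notifications as the bus; names are illustrative):

```ruby
require "active_support/notifications"

# Subscriber, e.g. registered by a Notifications module at boot time.
# The modules stay decoupled: neither calls into the other directly.
ActiveSupport::Notifications.subscribe("billing.invoice_paid") do |event|
  invoice_id = event.payload[:invoice_id]
  # enqueue a mailer job here instead of an inline ActiveRecord callback
end

# Publisher, e.g. somewhere in a Billing module: announce the fact
# without knowing who cares about it.
ActiveSupport::Notifications.instrument("billing.invoice_paid", invoice_id: 42)
```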

0 views
Uros Popovic 1 month ago

RTL generation for custom CPU Mrav

Overview of how SystemVerilog RTL code is generated in the build flow for the Mrav custom CPU core.

0 views
underlap 2 months ago

Arch linux take two

After an SSD failure [1], I have the pleasure of installing arch linux for the second time. [2] Last time was over two years ago (in other words, I remember almost nothing of what was involved) and since then I've been enjoying frequent rolling upgrades (only a couple of which wouldn't boot and needed repairing). While waiting for the new SSD to be delivered, I burned a USB stick with the latest arch iso in readiness. I followed the instructions to check the ISO signature using gpg (the commands are sketched at the end of this entry). The result looked plausible, but to be on the safe side, I also checked that the sha256 sum of the ISO matched the one on the arch website.

My previous arch installation ran out of space in the boot partition, so I ended up fiddling with the configuration to avoid keeping a backup copy of the kernel. This time, I have double the size of SSD, so I could (at least) double the size of the boot partition. But what is a reasonable default size for the boot partition? According to the installation guide, a boot partition isn't necessary. In fact, I only really need a root (/) partition since my machine has a BIOS (rather than UEFI). Since there seem to be no particular downsides to using a single partition, I'll probably go with that. Then I don't need to choose the size of a boot partition. The partitioning guide states:

- If you are installing on older hardware, especially on old laptops, consider choosing MBR because its BIOS might not support GPT.
- If you are partitioning a disk that is larger than 2 TiB (≈2.2 TB), you need to use GPT.

My system BIOS was dated 2011 [3] and the new SSD has 2 TB capacity, so I decided to use a BIOS/MBR layout, especially since this worked fine last time. Here are the steps I took after installing the new SSD:

- Boot from the USB stick containing the arch ISO.
- Check ethernet is connected using ping.
- Launch the guided installer, archinstall (it was already up to date), and set the various options. I then chose the Install option. It complained that there was no boot partition, so I went back and added a 2 GB fat32 boot partition, then chose the Install option again. The installation began by formatting and partitioning the SSD. Twelve minutes later, I took the option to reboot the system after installation completed.
- After Linux booted (with the slow-painting grub menu, which I'll need to switch to text), I was presented with a graphical login for i3. After I logged in, it offered to create an i3 config for me, which I accepted.
- Reconfigured i3 based on the contents of my dotfiles git repository.
- Installed my cloud provider CLI in order to access restic/rclone backups from the previous arch installation.

At this point I feel I have a usable arch installation and it's simply a matter of setting up the tools I need and restoring data from backups. I wanted to start dropbox automatically on startup, and Dropbox as a systemd service was just the ticket.

[1] The failed SSD had an endurance of 180 TBW and lasted 5 years. The new SSD has an endurance of 720 TBW, so I hope it will last longer, although 20 years (5 × 720/180) seems unlikely. ↩︎

[2] I was being ironic: it was quite painful the first time around. But this time I know how great arch is, so I'll be more patient installing it. Also, I have a backup and a git repo containing my dot files, so I won't be starting from scratch. ↩︎

[3] There was a BIOS update available to fix an Intel advisory about a side-channel attack. However, I couldn't confirm that my specific hardware was compatible with the update, so it seemed too risky to apply. Also, browsers now mitigate the side-channel attack.
In addition, creating a bootable DOS USB drive seems to involve either downloading an untrusted DOS ISO or attempting to create a bootable Windows drive (for Windows 10 or 11, which may require a license key), neither of which I relish. ↩︎
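The verification steps alluded to above look roughly like this (a sketch following the arch download page; file names depend on the release you downloaded):

```sh
# Verify the ISO's PGP signature, fetching the signing key if needed.
gpg --keyserver-options auto-key-retrieve \
    --verify archlinux-x86_64.iso.sig archlinux-x86_64.iso

# Cross-check the checksum against the value published on archlinux.org.
sha256sum archlinux-x86_64.iso
```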

0 views
Jason Fried 2 months ago

A fly and luck

There was a tiny fly right by the drain, and I was about to wash my hands. Turning on the water would have sent it right down the hole. A quick end, or an eventual struggled drowning, hard to know. But that would be that, there was no getting out. Somehow, for a moment, I slipped into contemplation. I could just turn on the water, I could rescue it, I could use a different sink

0 views
Cassidy Williams 4 months ago

Generating open graph images in Astro

Something that always bugged me about this blog is that the open graph/social sharing images used the same image for every single post. I had made myself a blank SVG template (of just the rainbow-colored pattern) for each post literally years ago, but didn't want to manually create an image per blog post. There are different solutions out there for this, like the Satori library, or using a service like Cloudinary, but they didn't fit exactly how I wanted to build the images, and I clearly have a problem with control. So, I built myself my own solution! Last year, I made a small demo for Cosynd with Puppeteer that screenshotted websites and put them into a PDF for our website copyright offering, aptly named screenshot-demo. I liked how simple that script was, and thought I could follow a similar strategy for generating images. My idea was to render each card as an HTML page and screenshot it with Puppeteer, and then from there, I'd do this for every blog title I've written. Seemed simple enough? Reader, it was not. BUT it worked out in the end!

Initially, I set up a fairly simple Astro page with HTML and CSS. With this, I was able to work out what size and positioning I wanted my text to be, and how I wanted it to adjust based on the length of the blog post title (both in spacing and in size). I used some dummy strings to do this pretty manually (like how I wanted it to change ever so slightly for titles that were 4 lines tall, etc.). Amusing note: this kind of particular design work is really fun for me, and basically impossible for AI tools to get right. They do not have my eyes nor my opinions! I liked feeling artistic as I scooted each individual pixel around (for probably too much time) and made it feel "perfect" to me (and moved things in a way that probably 0 other people will ever notice).

Once I was happy with the dummy design I had going, I added a function to generate an HTML page for every post, so that Puppeteer could make a screenshot for each of them. With that strategy, everything worked well. But my build times were somewhat long, because altogether the build was generating an HTML page per post (for people to read), a second HTML page per post (to be screenshotted), and then a screenshot image from that second HTML page. It was a bit too much. So, before I get into the Puppeteer script part with you, I'll skip to the part where I changed up my strategy (as the kids say) to use a single page template that accepted the blog post title as a query parameter. The Astro page I showed you before is almost exactly the same, except the title now comes in via the query string. The new script on the page, which I put at the bottom of the page in a <script> tag so it would run client-side, reads the title from the URL and drops it into the template. (Part of it is an interesting trick I learned a while back: <textarea> tags treat content as plaintext to avoid accidental or dangerous script execution, and their value property gives you decoded text without any HTML tags. I had some blog post titles that had quotes and other special characters in them, and this small function fixed them from breaking in the rendered image!) Now, if you want to see a blog post image pre-screenshot, you can go to the open graph route here on my website and see the rendered card!

I also have a script that looks mostly like the sketch below. It takes the template, launches a browser, navigates to the template page, loops through each post, sizes the viewport to the standard Open Graph size (1200x630px), and saves each screenshot to my designated output folder. From there, I added the script to my package.json, so I can run it on its own to render the images, or have them render right after a build!
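A sketch of that screenshot loop (the dev-server URL, route, output path, and posts list are my assumptions, not the actual code; that's in the Gist below):

```js
// og-images.mjs
import puppeteer from "puppeteer";

// In the real script this list comes from the blog's posts.
const posts = [
  { slug: "generating-open-graph-images-in-astro",
    title: "Generating open graph images in Astro" },
];

const browser = await puppeteer.launch();
const page = await browser.newPage();
// Standard Open Graph dimensions mentioned in the post.
await page.setViewport({ width: 1200, height: 630 });

for (const post of posts) {
  // The template page reads the title from a query parameter.
  await page.goto(
    `http://localhost:4321/og?title=${encodeURIComponent(post.title)}`,
    { waitUntil: "networkidle0" }
  );
  await page.screenshot({ path: `./public/og/${post.slug}.png` });
}

await browser.close();
```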
There's a GitHub Gist with the actual full code for both the script and the template! There was a lot of trial and error with this method, but I'm happy with it. I learned a bunch, and I can finally share my own blog posts without thinking, "gosh, I should eventually make those open graph images" (which I did literally every time I shared a post). I hope this is helpful for ya!

0 views
Rafael Camargo 4 months ago

Customizing checkboxes and radio buttons without hacks

For the longest time, I thought it was impossible to style native checkboxes and radio buttons without pulling off some kind of creative stunt — like what the big component libraries, such as Material UI, Ant Design and VuetifyJS, do. But guess what: it's totally possible. Checkboxes and radio buttons can be customized easily, without hacks.
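The excerpt stops before the how, but the hack-free options in modern CSS are accent-color (recolor the native control in one line) and appearance: none (strip the native rendering so regular CSS applies). A sketch of both, with my own example values:

```css
/* One-liner: keep native rendering, just recolor it. */
input[type="checkbox"],
input[type="radio"] {
  accent-color: rebeccapurple;
}

/* Full control: remove the native look and style the input itself. */
input[type="checkbox"].fancy {
  appearance: none;
  width: 1.1rem;
  height: 1.1rem;
  border: 2px solid #888;
  border-radius: 0.25rem;
}

input[type="checkbox"].fancy:checked {
  background-color: rebeccapurple;
  border-color: rebeccapurple;
}
```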

0 views
Michael Lynch 5 months ago

Refactoring English: Month 5

At the start of each month, I declare what I’d like to accomplish. Here’s how I did against those goals: I originally set out to write a guide that focused on Kickstarter, but the more I wrote, the less I felt like Kickstarter was the interesting part. I was more excited about crowdfunding as a path for self-published authors, and Kickstarter is just one way of crowdfunding

0 views
Evan Schwartz 6 months ago

Building a fast website with the MASH stack in Rust

I'm building Scour, a personalized content feed that sifts through noisy feeds like Hacker News Newest, subreddits, and blogs to find great content for you. It works pretty well -- and it's fast. Scour is written in Rust, and if you're building a website or service in Rust, you should consider using this "stack". After evaluating various frameworks and libraries, I settled on a couple of key ones and then discovered that someone had written it up as a stack. Shantanu Mishra described the same set of libraries I landed on as the "mash 🥔 stack" and gave it the tagline "as simple as potatoes". This stack is fast and nice to work with, so I wanted to write up my experience building with it to help spread the word. TL;DR: The stack is made up of Maud, Axum, SQLx, and HTMX and, if you want, you can skip down to where I talk about synergies between these libraries. (Also, Scour is free to use and I'd love it if you tried it out and posted feedback on the suggestions board!)

Scour uses server-side rendered HTML, as opposed to a Javascript or WebAssembly frontend framework. Why? First, browsers are fast at rendering HTML. Really fast. Second, Scour doesn't need a ton of fancy interactivity, and I've tried to apply the "You aren't gonna need it" principle while building it. Holding off on adding new tools helps me understand the tools I do use better. I've also tried to take some inspiration from Herman from BearBlog's approach to "Building software to last forever". HTML templating is simple, reliable, and fast. Since I wanted server-side rendered HTML, I needed a templating library, and Rust has plenty to choose from. The main two decisions to make were whether templates are checked at compile time or evaluated at runtime, and whether they live in separate template files or inline in Rust code; popular template engines fall at various points along these two axes. I initially picked Askama because of its popularity, performance, and type safety. (I quickly passed on all of the runtime-evaluated options because I couldn't imagine going back to a world of runtime type errors. Part of the reason I'm writing Rust in the first place is compile-time type safety!) After two months of using Askama, however, I got frustrated with its developer experience. Every addition to a page required editing both the Rust struct and the corresponding HTML template. Furthermore, extending a base template for the page header and footer was surprisingly tedious. Askama templates can inherit from other templates. However, any values passed to the base template (such as whether a user is logged in) must be included in every page's Rust struct, which led to a lot of duplication. This experience sent me looking for alternatives.

Maud is a macro for writing fast, type-safe HTML templates right in your Rust source code. The format is concise and makes it easy to include values from Rust code: you can write HTML tags, classes, and attributes without the visual noise of angle brackets and closing tags; Rust values can be easily spliced into templates (HTML special characters are automatically escaped); control structures like @if, @else, @for, @match, and @let are very straightforward; and partial templates are easy to reuse by turning them into small functions that return Markup. All in all, Maud provides a pleasant way to write HTML components and pages. It also ties in nicely with the rest of the stack (more on that later).
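The excerpt above describes Maud's examples without the code itself, so here is a small sketch of those patterns (the names and values are mine, not the post's):

```rust
use maud::{html, Markup};

// A reusable partial: just a function that returns Markup.
fn nav(logged_in: bool) -> Markup {
    html! {
        nav {
            a href="/" { "Home" }
            @if logged_in {
                a href="/logout" { "Log out" }
            } @else {
                a href="/login" { "Log in" }
            }
        }
    }
}

fn page(title: &str, logged_in: bool) -> Markup {
    html! {
        // Tags, classes, and attributes without angle-bracket noise;
        // spliced values like (title) are HTML-escaped automatically.
        h1.page-title { (title) }
        (nav(logged_in))
    }
}
```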
Axum is a popular web framework built by the Tokio team. The framework uses functions with extractors to declaratively parse HTTP requests; its Hello World example builds a router with multiple routes, including one that handles a POST request with a JSON body and returns a JSON response. Axum extractors make it easy to parse values from HTTP bodies, paths, and query parameters and turn them into well-defined Rust structs. And, as we'll see later, it plays nicely with the rest of this stack.

Every named stack needs a persistence layer. SQLx is a library for working with SQLite, Postgres, and MySQL from async Rust. SQLx offers a number of different ways of working with it, but here is a flavor of how I use it: you can derive the FromRow trait for structs to map between database rows and your Rust types. Note that you can derive both sqlx's FromRow and serde's Serialize and Deserialize on the same structs to use them all the way from your database to the Axum layer. However, in practice I've often found that it is useful to separate the database types from those used in the server API -- but it's easy to define From implementations to map between them.

The last part of the stack is HTMX. It is a library that enables you to build fairly interactive websites using a handful of HTML attributes that control sending HTTP requests and handling their responses. While HTMX itself is a Javascript library, websites built with it often avoid needing to use custom Javascript directly. For example, a button like the one below means "When a user clicks on this button, issue an AJAX request to /clicked, and replace the entire button with the HTML response".
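A sketch of that button (the /clicked endpoint is the post's example; the exact choice of attributes is my assumption):

```html
<!-- On click, request /clicked and swap this element for the response. -->
<button hx-post="/clicked" hx-swap="outerHTML">
  Click me
</button>
```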
Notably, this snippet will replace just this button with the HTML returned from /clicked, rather than the whole page like a plain HTML form would. HTMX has been having a moment, in part due to essays like The future of HTMX, where they talked about "Stability as a Feature" and "No New Features as a Feature". This obviously stands in stark contrast to the churn that the world of frontend Javascript frameworks is known for. There is a lot that can and has been written about HTMX, but the logic clicked for me after watching this interview with the creator of it. The elegance of HTMX -- and the part that makes its promise of stability credible -- is that it was built from first principles to generalize the behavior already present in HTML forms and links. Specifically, (1) HTML forms and links (2) submit GET or POST HTTP requests (3) when you click a Submit button and (4) replace the entire screen with the response. HTMX asks and answers the obvious follow-up questions: why only forms and links? Why only GET and POST? Why only on click or submit? And why replace the entire screen? By generalizing these behaviors, HTMX makes it possible to build more interactive websites without writing custom Javascript -- and it plays nicely with backends written in other languages like Rust.

Since we're talking about Rust and building fast websites, it's worth emphasizing that while HTMX is a Javascript library, it only needs to be loaded once. Updating your code or website behavior has no effect on the HTMX libraries, so you can use a long-lived Cache-Control directive to tell browsers or other caches to indefinitely store the specific versions of HTMX and any extensions you're using. The first visit downloads the scripts, but subsequent visits only need to load the HTML. This makes for even faster page loads for return users.

Overall, I've had a good experience building with this stack, but I wanted to highlight a couple of places where the various components complemented one another in nice ways. Earlier, I mentioned my frustration with Askama, specifically around reusing a base template that includes different top navigation bar items based on whether a user is logged in or not. I was wondering how to do this with Maud when I came across this Reddit question: "Users of maud (and axum): how do you handle partials/layouting?" David Pedersen, the developer of Axum, had responded with this gist. In short, you can make a page layout struct that is an Axum extractor and provides a method that returns Markup. When you use the extractor in your page handler functions, the base template automatically has access to the components it needs from the request. This approach makes it easy to reuse the base page template without needing to explicitly pass it any request data it might need. (Thanks David Pedersen for the write-up -- and for your work on Axum!)

This is somewhat table stakes for HTML templating libraries, but it is a nice convenience that Maud has an Axum integration that enables returning Markup directly from Axum routes. HTMX has a number of very useful extensions, including the Preload extension. It preloads HTML pages and fragments into the browser's cache when users hover or start clicking on elements, so that transitions happen nearly instantly. The Preload extension sends an identifying header with every request it initiates, which pairs nicely with middleware that sets the cache response headers. (Of course, this same approach can be implemented with any HTTP framework, not just Axum.) Update: after writing this post, u/PwnMasterGeno on Reddit pointed out the axum-htmx crate to me. This library includes Axum extractors and responders for all of the headers that HTMX uses. For example, you can use the HX-Request header to determine whether you need to send the full page or just the body content. It also has a nice feature for cache management: a layer that automatically sets the Vary component of the HTTP cache headers based on the request headers you use, which ensures the browser correctly resends the request when it changes in a meaningful way.
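To make the synergies concrete, here is a compact sketch of the libraries wired together: an Axum handler that loads rows with SQLx and returns Maud Markup directly (the schema, route, and names are illustrative, not Scour's; assumes Maud's axum feature and SQLx's SQLite driver are enabled):

```rust
use axum::{extract::State, routing::get, Router};
use maud::{html, Markup};
use sqlx::SqlitePool;

// Derive FromRow to map database rows onto the struct.
#[derive(sqlx::FromRow)]
struct Feed {
    id: i64,
    name: String,
}

// Maud's Axum integration lets handlers return Markup directly.
async fn feeds(State(pool): State<SqlitePool>) -> Markup {
    let feeds: Vec<Feed> = sqlx::query_as("SELECT id, name FROM feeds")
        .fetch_all(&pool)
        .await
        .unwrap_or_default();

    html! {
        ul {
            @for feed in &feeds {
                li { (feed.name) " (#" (feed.id) ")" }
            }
        }
    }
}

#[tokio::main]
async fn main() {
    let pool = SqlitePool::connect("sqlite::memory:").await.unwrap();
    let app = Router::new().route("/feeds", get(feeds)).with_state(pool);
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```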
While I've overall been happy building with the MASH stack, here are the things that I've found to be less than ideal. I would be remiss talking up this stack without mentioning one of the top complaints about most Rust development: compile times. When building purely backend services, I've generally found that Rust Analyzer does the trick well enough that I don't need to recompile in my normal development flow. However, with frontend changes, you want to see the effects of your edits right away. During development, I use Bacon for recompiling and rerunning my code, and a live-reload setup to have the frontend automatically refresh. Using some of Corrode's Tips For Faster Rust Compile Times, I've gotten it down to around 2.5 seconds from save to page reload. I'd love it to be faster, but it's not a deal-breaker for me. For anyone building with the MASH stack, I would highly recommend splitting your code into smaller crates so that the compiler only has to recompile the code you actually changed. Also, there's an unmerged PR for Maud to enable updating templates without recompiling, but I'm not sure if that will end up being merged. If you have any other suggestions for bringing down compile times, I'd love to hear them!

HTMX's focus on building interactivity through swapping HTML chunks sent from the backend sometimes feels overly clunky. For example, the Click To Edit example is a common pattern involving replacing an Edit button with a form to update some information, such as a user's contact details. The stock HTMX way of doing this is fetching the form component from the backend when the user clicks the button and swapping out the button for the form. This feels inelegant because all of the necessary information is already present on the page, save for the actual form layout. It seems like some users of HTMX combine it with Alpine.js, Web Components, or a little custom Javascript to handle this. For the moment, I've opted for the pattern lifted from the HTMX docs, but I don't love it.

If you're building a website and using Rust, give the MASH stack a try! Maud is a pleasure to use. Axum and SQLx are excellent. And HTMX provides a refreshing rethink of web frontends. That said, I'm not yet sure if I would recommend this stack to everyone doing web development. If I were building a startup making a normal web app, there's a good chance that TypeScript is still your best bet. But if you are working on a solo project or have other reasons to already be using Rust, give this stack a shot! If you're already building with these libraries, what do you think? I'd love to hear about others' experiences. Thanks to Alex Kesling for feedback on a draft of this post! Discuss on r/rust, r/htmx or Hacker News. If you haven't already signed up for Scour, give it a try and let me know what you think!

0 views
Nicolas Bustamante 12 months ago

How to build a shitty product

Everyone wants the recipe for building a great product. But if you take Charlie Munger's advice to "always invert," you might ask: how do you build a truly shitty product? One that's confusing, frustrating, hard to understand, and makes you want to throw your computer out the window. Every organization sets out with the intent to build a good product. So why do so many of them end up creating something average? The answer lies in the structure and approach of the product team.

A typical product team is composed of product managers, designers, and developers. Product managers (PMs) are the main touchpoint with users; they gather feedback, create specifications, and organize the roadmap. Designers create what they believe is a user-friendly UI/UX based on the PM specs and their interpretation of user needs. Developers, who may include data engineers, backend, frontend, and full-stack specialists, take these specifications and implement them as a product.

Product teams often fall into the trap of designing and building based on assumptions or abstract user personas rather than real user interaction. PMs become gatekeepers of feedback, filtering and interpreting user needs before they ever reach designers or developers. By the time insights get translated into product decisions, they've lost touch with what users actually experience. This lack of direct feedback leads to products that don't solve real problems because the team is too insulated from the people they're building for.

Too often, product specifications are shaped by internal company constraints—usually engineering limitations—rather than customer needs. As Steve Jobs famously said, "You've got to start with the customer experience and work backwards to the technology." Inverting this process, where the tech defines what's possible instead of the customer's needs, is a fast track to building something nobody wants. Over-specifying also kills innovation because developers are reduced to coders implementing someone else's vision, without any flexibility to improve or innovate.

The typical product team works sequentially: PMs specify, designers design, and developers build. This waterfall mentality feels efficient on paper but is inherently rigid. When each step is done in isolation, the process becomes fragile and slow to adapt to new information. The longer each team works in their silo without iteration, the more likely the end product will miss the mark.

Who's ultimately responsible for the product's success or failure? PMs? Designers? Developers? Bureaucracy tends to dilute responsibility, and when everything is driven by consensus, mediocrity often follows. Consensus avoids disasters, but it also avoids greatness. True ownership, where someone is accountable for both success and failure, is missing. Some teams get so caught up in Agile, Scrum, or other project management frameworks that they forget the ultimate goal is building something users love. Meetings, standups, and sprint planning become bureaucratic rituals that distract from the real work.

To build something truly great, you need craftsmen. Product builders who are deeply invested, who care about every detail, and who take responsibility from beginning to end. The builder has to be as close as possible to the customer. Talk to them, visit them in person, answer support queries, watch them use the product, and demo it to them. This kind of empathy—truly putting yourself in the customer's shoes—is rare.
Builders also need to understand the customer’s underlying problem, not just the feature requests they articulate. Customers may ask for specific features, but often they don't know the best solution; they just know their pain points. The job of a great product builder is to uncover the real issue. As Paul Graham once said, " Empathy is probably the single most important difference between a good hacker and a great one. " You need to understand how little users understand, and if you assume they’ll figure it out on their own, you’re setting yourself up for failure. Builders need to use the product they create. That’s why B2C products are often better than B2B ones—builders use what they build and feel the pain of its shortcomings. Most great B2B products, like Vercel or GitHub, are made for developers by developers. It’s much harder to eat your own dog food when building vertical applications for niche users, like lawyers or doctors, but the best craftsmen find a way. The best products come from small, tight-knit teams with clear responsibility. When it’s easy to identify who’s responsible, it’s easier to make great things. Small teams can iterate quickly, and greatness comes through iterations. The boldest approach is to have the same person design, build, and refine the product. With AI coding tools, it's now possible to have a good engineer with taste and empathy that goes from listening to users to implementing a solution, without the need for PMs or designers. Instead of trying to launch a complete, polished product out of the gate, focus on building something small and functional. Once you have that, get it into the hands of users and iterate quickly based on their feedback. The magic happens in iteration, not in perfectionism. Real users will help you refine your ideas and identify what’s actually valuable. The faster you can cycle through feedback loops, the better your product becomes. Building a delightful product for a few core users is often better than trying to build something for everyone. By focusing on a specific audience, you can deeply understand their needs and create something truly valuable. A product that solves real problems for a small, dedicated group is more likely to gain traction and eventually appeal to a wider audience. When you build for core users, you create passionate advocates who can help drive growth organically. Paul Graham's "taste" metaphor from Hackers and Painters applies here: you should always strive for good taste in both code and design, removing unnecessary complexity. Simplicity doesn’t mean lacking features; it means that every feature has a purpose, and every line of code serves the user. Good taste in design and code means prioritizing what truly matters to users and avoiding bloat. A simple, elegant product is not only easier to maintain but also more delightful to use. It's also essential to kill features over time—removing what is no longer needed or valuable ensures the product remains focused and effective. You create great products with small teams, but it is also the pitfall of most companies. Big teams introduce layers of complexity, miscommunication, and slow decision-making. Small teams are nimble, communicate better, and move faster. When a team is small, it’s easier to stay aligned on the mission, and everyone has a clear stake in the product’s success. It also prevents diffusion of responsibility—everyone is accountable. This sounds ideal, but it's not the default approach—especially in large companies. Why? 
Because big companies prefer reducing the standard deviation of outcomes. Only a small percentage of developers can design great software independently, and it’s difficult for management to hire them - often they don't like to work for bureaucratic organizations. Instead of trusting one brilliant craftsman, most companies opt for a system where design is done by committee and developers just implement the designs. This approach reduces uncertainty. Great results are traded for predictably average ones. If a brilliant craftsman leaves, the company could be in trouble, but if a committee member leaves, it doesn't matter. There’s redundancy in every role. Take Google—you could fire half the workforce, and it would barely affect product quality. But if you fired someone like Jony Ive from Apple’s small design team, there would be no iPhone. Similarly, look at Telegram Messenger—one of the best digital products ever. They have close to 1 billion active users and yet a small team of just 30 engineers. Pavel Durov takes all the customer-facing decisions while his brother and co-founder, Nikolai, handles decisions regarding infrastructure, cryptography, and backend. They've created amazing results, but if Pavel, Nikolai, or key programmers were to leave, the product would stagnate. Big companies dampen oscillations; they avoid disaster, but they also miss the high points. And that’s fine, because their goal isn’t to make great products—it's to be slightly better than their competition. As a reminder, my new startup is called Fintool . We are building Warren Buffett as a service, leveraging large language models (LLMs) to perform the tasks of institutional investors. We follow an approach that emphasizes small teams with clear responsibilities, a lack of rigid roles like product managers, and a relentless focus on speed and iteration. We keep our team extremely lean, with each member responsible for a specific section of the product. For example, we have one team member focused on data engineering to ingest terabytes of financial documents, another on machine learning for search, retrieval, and LLMs, and a full-stack engineer working on the product interface. By assigning clear ownership to each team member, we ensure accountability and expertise in every aspect of our product. Our accountability is customer-first, with engineers often emailing and interacting directly with customers. This approach means customers know exactly who to blame if something doesn't work. We believe high-performing teams do their best work and have the most fun in person. Remote work is highly inefficient, requiring the whole team to jump on Zoom meetings, write notes to share information, and lacking serendipity. Serendipity is the lifeblood of startups—one good idea shared spontaneously at the coffee machine can change the destiny of the company. Additionally, we value each other's company too much to spend our days in boring Zoom calls. We encourage every craftsman on our team to talk directly with customers, visit them in person, and implement the best solutions. We value discussions and brainstorming, but we minimize meetings to maintain fast iterations and provide high freedom for team members to choose their approach. We follow the "Maker's Schedule," as described by Paul Graham: Makers need long, uninterrupted blocks of time to focus on deep work. A typical maker’s day is structured around productivity and creativity, where interruptions or frequent meetings can be disruptive (I hate meetings.) 
We value speed and push to production every day. One of our core values is to "Release early, release often, and listen to your customers." Speed matters in business, so we push better-than-perfect updates to customers as soon as possible. We believe mastery comes from repeated experiments and learning from mistakes—it's about 10,000 iterations, not 10,000 hours.

Another company value is "Clone and improve the best." We don't reinvent the wheel; we enhance proven successes. We are shameless cloners standing on the shoulders of giants. If a design or an existing pattern works well for our use case, we will copy it.

Using AI tools, like Cursor, the AI code editor, is mandatory at Fintool. We believe AI provides a massive productivity advantage. Most people prefer sticking to their old ways of working, but that's not how we operate. We won't hire or retain team members who aren't AI-first. With the speed of AI-assisted front-end coding, we believe that traditional design tools like Figma are becoming less necessary. Anyone can create a nice-looking Figma until they start implementing and discover UX challenges. By leveraging a standard component library like Shadcn UI and using tools that convert prompts directly into interfaces, we can iterate faster and achieve better outcomes. A skilled engineer with good taste can design efficient and visually pleasing interfaces without the need for a designer. It keeps the team smaller and increases the speed.

Our approach at Fintool focuses on leveraging the strengths of a small, empowered team, with each member deeply connected to the product's success. This method allows for rapid iteration, close customer relationships, and the ability to deliver a product that truly meets user needs. However, the main drawback is the high dependency on our people. If a key team member is on holiday or leaves the company, progress slows down significantly. We also rely heavily on hiring exceptional individuals—those who are not only talented but also open-minded, who like to interact with customers, and who have a craftsman's mindset and the discipline to work hard. Finding such people is extremely challenging, but it's essential for building something truly great. It's hard but worth it. We are hiring.

"There is no easy way. There is only hard work, late nights, early mornings, practice, rehearsal, repetition, study, sweat, blood, toil, frustration, and discipline." - Jocko Willink

Thanks for reading! You can subscribe for free to receive new posts.

0 views
fasterthanli.me 1 year ago

Face cams: the missing guide

I try to avoid doing “meta” / “behind the scenes” stuff, because I usually feel like it has to be “earned”. How many YouTube channels are channels about making YouTube videos? Too many. Regardless, because I’ve had the opportunity to make my own mistakes now for a few years (I started doing the video thing in earnest in 2019), and because I’ve recently made a few leaps in quality-of-life re: shooting and editing video, I thought I’d publish a few notes, if only for reference for my future self.

0 views
Alex Molas 1 years ago

You need more than p-values

Explore the critical considerations and potential pitfalls of relying solely on A/B testing to make business decisions. While A/B testing is a valuable tool, this post challenges the assumption of exchangeability between past and future data, emphasizing the dynamic nature of the business environment. I propose two solutions: (1) validating the exchangeability assumption through holdback groups, and (2) adopting a holistic decision-making approach that goes beyond statistical tests. Executives should not blindly trust p-values; intuition, market understanding, and forecasting all matter in shaping successful business strategies. In conclusion, the post encourages a balanced approach, combining statistical rigor with real-world insights for effective decision-making.

0 views
codedge 4 years ago

Render images in Statamic's Bard

The Bard fieldtype is a beautiful idea for creating long texts containing images, code samples - basically any sort of content. While I was creating my blog I was not sure how to extract images from the Bard field. Thanks to the Glide tag you can simply pass the field of your image and it automatically outputs the proper URL. My image field is a set called image; the first snippet below shows the Antlers tag to write. For generating responsive images you can use the excellent Statamic Responsive Images addon provided by Spatie; with it, the snippet changes to the second variant below, which generates a picture tag with srcset attributes to render images for different breakpoints.
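A sketch of both variants (the field name image comes from the post; the width value is my own example):

```
{{# Plain Glide tag: outputs a resized URL for the Bard set's image field. #}}
<img src="{{ glide:image width='800' }}" alt="">

{{# With the Spatie Responsive Images addon instead: #}}
{{ responsive:image }}
```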

0 views