Latest Posts (20 found)
devansh Today

Reflections on my 5 years at HackerOne

Today marks 5 years at HackerOne for me. I joined in 2020 as a Product Security Analyst while I was still an undergrad student. I’m grateful to now be serving as a Team Lead (Technical Services). A few reflections:

- Grateful for the people at HackerOne who took chances on me, challenged my thinking, and trusted me with more responsibility than I thought I was ready for. An even bigger thanks to the hackers whose reports I’ve had the chance to read over all these years. Five years in, still learning, still a work in progress :)
- None of this is solo. Good managers, patient teammates, and sharp hackers did more for my growth than any “self-made” narrative.
- Title changes are visible; real growth is not. It’s in how you listen, decide, and own mistakes.
- Luck is underrated. Being in a high-trust, high-talent environment at the right time matters more than we admit.
- "I don’t know" is not a weakness. It’s usually the start of the right conversation.
- As an individual contributor, you optimize for being right. As a lead, you optimize for the team being effective. Very different jobs.
- Escalations and incidents expose culture fast. Blame travels down; responsibility travels up.
- Saying "no" clearly is kinder than saying "yes" and disappearing.
- Tools change every year. Principles - ownership, clarity, curiosity - don’t. If you stop learning, your experience is just one year repeated five times.
- Constraints are not excuses; they are design inputs for how you grow.
- Reading reports from hackers is a privilege: a free, continuous education from some of the sharpest minds on the internet.
- The hardest shift is from “How do I prove myself?” to “How do I make others successful?”
- Calm execution during chaos beats a heroic last-minute rescue every single time.
- Depth compounds. Understanding one concept end-to-end teaches you more than skimming ten.
- Feedback that makes you uncomfortable is usually the feedback you needed two months ago.
- High standards without empathy create fear. Empathy without standards creates mediocrity. You need both.
- You outgrow roles faster than you outgrow habits. Updating your habits is the real promotion.
- If everything is urgent, nothing is important. Prioritization is a leadership skill, not a calendar trick.
- Writing forces clarity. If you can’t explain it simply, you probably don’t understand it yet.
- Most “communication issues” are unasked questions and unspoken assumptions.
- Systems outlive heroes. Fix the system; don’t search for a savior.
- Being technically right and practically useless is still a miss.
- A 1% better process, repeated daily, beats a once-a-year “big transformation”.
- You can borrow context, but you can’t outsource judgment. That part you have to earn.
- Your manager sees some of the picture. Customers see another part. Hackers see yet another. Listen to all three.
- Imposter syndrome never fully leaves. You just learn to move with it instead of freezing because of it.
- Generosity with knowledge is not optional. Someone did it for you when you had nothing to trade.
- Gratitude is a strategy, not just a feeling. It keeps you curious, grounded, and willing to start at zero again.
- Stay hungry, very very hungry. Real hunger for growth can’t be fully satisfied; the moment it feels like “enough,” it was never true hunger. The goalpost should keep moving, not out of insecurity, but out of a genuine desire to keep stretching what you can learn, build, and contribute.

0 views

Double opt-in PSA

As of today, I run three different newsletters, all powered by Buttondown: there’s my recently announced Dealgorithmed, my outdoors-focused From the Summit, and the People and Blogs series. I also send my blog posts via email, if you prefer to consume content that way. They all require double opt-in, which means that if you signed up for one of them, you should have received a second email asking you to click a link to confirm your subscription. Sometimes those land in the spam folder, sometimes they don’t arrive at all. That’s just the unfortunate reality of email in 2025. I just checked, and a solid 10% of the people who have signed up for Dealgorithmed have not confirmed their address. This is a reminder to check your inbox and click the confirmation link; otherwise, you will not receive the first edition when it goes out on January 1st. Thank you for keeping RSS alive. You're awesome. Email me :: Sign my guestbook :: Support for $1/month :: See my generous supporters :: Subscribe to People and Blogs

0 views

Great device, wrong problem: Two months with the Ultrahuman Air

I've been wearing an Apple Watch daily for the last 7-ish years now. It's kinda become part of my personality -- like, something feels off when I'm not wearing it. But lately, I thought I wanted a change. Maybe it’d be nice to wear a proper watch every now and then, or even go bare-wristed for a bit. So, a couple of months ago, I started hunting for an alternative device that could keep track of my health and stats -- which I figured was the main reason I wore my watch. After tons of research, I settled on the Ultrahuman Air. Some reviews mentioned that the Oura Ring seems generally more accurate, but the Ultrahuman does not require a subscription to fully utilize -- and the subscription was a total dealbreaker for me with the Oura Ring. I was stoked to try something new.

It’s a fantastic device, no question. I was impressed:

- Its app is fantastic. I love the design language they chose and how the stats are presented. Honestly, I didn’t know much about Heart Rate Variability (HRV) or VO2max until I wore the Ultrahuman Air, and now those are two stats I keep a close eye on. Handy features like Stress Rhythm and Caffeine Window sound a bit gimmicky, but they’ve got a ton of utility when you actually use them.
- The ring itself is super lightweight -- even lighter than a couple of carbide rings I like to wear. I barely notice it’s on, and its texturing makes it really scratch- and grime-resistant. Its reported stats are pretty close to what my Apple Watch shows, so without proper scientific gear to test it, I’m inclined to think they’re accurate enough.
- Its battery life is killer. I get about four and a half days on average -- compared to my Apple Watch, which I routinely charge before bed at night (thanks, low-battery anxiety).

But as the weeks went on, I started to notice what it couldn’t do -- and that’s when I realized it's not replacing my watch. Health tracking isn’t even the main thing I use my Apple Watch for -- it’s the alarms and notifications that keep my life together:

- I wear my watch to bed so I can wake up at 5 AM without risking disturbing my wife and kid (who co-sleep with us). A phone alarm is a no-go since they’re both fairly light sleepers.
- I rely on reminders and message notifications to function. Seriously. With the amount of stuff I forget unless I write it down and set a reminder, the entire system I’ve built to manage my ADHD just breaks down without them.
- I use random Apple Watch features more than I realized: the handy flashlight that helps me navigate to the bathroom at midnight, the camera app that lets me take better group pics, and even the walkie-talkie that lets my wife and me ping each other quickly and directly.

This all means that while the Ultrahuman Air can definitely handle the health-tracking side of things, it can’t touch everything else I rely on the Apple Watch for. And now that both devices do a solid job at tracking stats, wearing two smart gadgets -- both needing charging and occasionally shining bright green lights at night -- feels redundant. I really love the Ultrahuman Air. It’s sleek, it’s smart, and it taught me a lot about my body. But it’s not the change I needed. So, it’ll probably be up on Facebook Marketplace soon. Maybe I’ll stick with my Apple Watch for now -- or who knows, maybe I’ll finally try going watch-free for a bit. We’ll see.

1 view

Bending Emacs - Episode 7: Eshell built-in commands

With my recent rinku post and Bending Emacs episode 6 both fresh in mind, I figured I may as well make another Bending Emacs episode, so here we are: Bending Emacs Episode 7: Eshell built-in commands. Check out the rinku post for a rundown of things covered in the video. Liked the video? Please let me know. Got feedback? Leave me some comments. Please go like my video, share it with others, and subscribe to my channel. If there's enough interest, I'll continue making more videos! Enjoying this content or my projects? I am an indie dev. Help make it sustainable by ✨ sponsoring ✨. Need a blog? I can help with that. Maybe buy my iOS apps too ;)

0 views
ava's blog Yesterday

my data should not be your cookie jar

It’s 1970. You walk into the store, grab a bunch of apples, go to the cash register, pay with cash, and walk out. What kinds of data have been automatically processed about you while doing that? Very little. Most likely, none, as CCTV footage relied on the development of VHS to be viable, and IP cameras transmitting video over networks only took off in the 90s. Fast forward to today. Depending on where you live, your supermarket has good cameras everywhere; some, like the super fancy new experiments, have recognition technology that detects what items you grab so that you can just pay without scanning, or even just walk out, having it subtracted automatically from your account. This isn’t just Amazon stores; German store Rewe is trying to get into that too, as I know someone personally who works in their sub-company Lekkerland’s “Smart Store Rollout” department. A more mundane but very common thing for the big stores is tracking you with RFID technology: They track where you are and how long you stay at specific spots by using a network of fixed RFID readers via the RFID tag on the loyalty card or shopping cart (or the individual scanners Rewe offers nowadays!). By noting the time and location of each tag read, the system can create a map of your path and duration of stay within the store. Your supermarket might also have an app to get specific sales and offers. Mine, for a little period, even made it seem as if you could only buy specific products if you pay through their app instead. They dropped that after a while, but I’m sure it got many to download it and make an account - as, of course, you could not use it without one. At the checkout, you might opt for self-checkout now. I’ve seen that stores in the US distinctly record your face and your hands scanning the products, so in case you try to sneak something, they have clear proof and identification options. That video gets analyzed and stored for a while. 
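The path-and-dwell mapping described above is trivial to compute once a store has timestamped tag reads from fixed readers. A minimal sketch, with entirely made-up reader zones and timestamps:

```python
from collections import defaultdict
from datetime import datetime

# Illustrative tag reads: (timestamp, zone of the fixed RFID reader).
reads = [
    ("2025-11-29 10:00:05", "entrance"),
    ("2025-11-29 10:00:40", "produce"),
    ("2025-11-29 10:04:10", "produce"),
    ("2025-11-29 10:05:00", "checkout"),
]

def dwell_times(reads):
    """Sum the time spent near each reader, from consecutive tag reads."""
    stamped = [(datetime.fromisoformat(ts), zone) for ts, zone in reads]
    dwell = defaultdict(float)
    for (t0, zone), (t1, _) in zip(stamped, stamped[1:]):
        dwell[zone] += (t1 - t0).total_seconds()
    return dict(dwell)

path = [zone for _, zone in reads]
print(path)                 # the shopper's route through the store
print(dwell_times(reads))   # 35 s at the entrance, 260 s in produce
```

With one reader per aisle, a handful of reads per shopper is enough to reconstruct both the route and how long someone lingered at each spot.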
Either way, you might use a loyalty card you signed up for with your real name and address to collect points or get a discount, tracking exactly what you bought, and you’ll likely pay via card. Your bank account has a bit more information about where you shop and when than if you had just withdrawn cash. If you’re like me, you also pay contactless via phone or watch, giving a processor like Google Pay or Apple Pay some info as well. All that for quickly getting something at the grocery store, something that would not have given the companies much meaningful data about you specifically even just 55 years ago. Of course, some of these things are avoidable and no one forces you to use apps, bank cards and loyalty cards, but still. These things are not presented as the data harvesters they are, but as convenience and a way to save money or time, targeting vulnerable groups the most. But why even go to the store? Maybe you live in a country with delivery options like Instacart and the like. One more service related to the groceries you buy: another app, another user account. What if you can’t or don’t wanna cook? Just get a delivery via DoorDash, UberEats, Lieferando or the equivalent in your country. More data about you, and that’s just food. What if you aren’t buying apples at the grocery store, but lamps, frames, or a new bed cover? Nowadays, you’d most likely either have a similar shop experience as in the grocery stores, or you’ll shop online on the company’s website or app, which may or may not also show ads, place tracking cookies, or read other data on your phone. They might get you with a 5% off coupon if you just sign up for their newsletter! So you do. Not many use a throwaway mail address or immediately unsubscribe. Now they have constant access to you and your attention if they want to, not just while you’re at their physical stores.
A marketing email popping up at the right time creates desires and a suggestion to do some online window shopping, again creating data as you use their website or app. And then there are the shipping companies… What about the news? You can still buy magazines and newspapers at the store and the corner shop/kiosk, or maybe those little newspaper vending machines that drop one if you put in a coin. But everything is moving to digital nowadays, saving waste and printing costs, so to read the same newspaper online, you have to either pay with your data or pay more than print used to cost, and even then, still pay with your data. Subscribing to the digital version or unlocking a single article via a one-time payment still tracks you and still shows you ads on many, many news sites. And what do you pay for? If you’re unlucky, it is the same article copy-pasted across 10 different newspapers, or a completely AI-generated article with zero human effort. For comparison: just buying the print at a coin vending machine leaves them completely in the dark about you. That was just normal. I notice this in all kinds of industries and parts of life now - it’s why everything now requires an app and a sign-up. Your local café, your hairdresser, your e-scooter. Hell, I even saw that anti-nail-biting nail polish now comes with an app. New washing machines and refrigerators are reporting back to their companies. Why is every place, every product company now expected to be a data aggregation company as well? Why is my data the cookie jar that companies frequently get their hand stuck in while acting entitled? Hello, I already paid you, why are you not ashamed of your obvious greed? What tires me about all of this is that we are supposed to pretend this is all normal and as if it has always been that way, and pretend that this isn’t just double-dipping. I pay money, and then I also generate money with my data.
In the case of loyalty cards and discounts, you could say that there is a fair trade as the price gets lowered, but this is the minority. The majority of the time, we are tracked and profiled with no advantage for us, no compensation. And even if there is one, and default pricing is higher if you don’t share data, that ends up being financial discrimination and affects your choice significantly. As prices rise everywhere, paying with our data gets us almost no relief and is just an ever-growing additional income stream on the side for these companies. Despite having this pile of digital gold to pad their wallets, they still pretend that they have to raise prices all the time for all kinds of situations, and then never lower them when those situations resolve, as the profit of selling to advertisers and AI companies is concentrated at the top of the chain. Companies used to be fine selling via means that did not track and invade your life this hard; now we’re supposed to pretend these things are essential. Essential for what? More ads? More manipulation? Better sales numbers? More money for the CEO? They are not essential. We could drop three-quarters of these mechanisms with no discernible changes to the user experience or product access. The reality is that literal essentials are gatekept by being subjected to this constant harassment and evaluation. How long until not complying with this surveillance regime downright hurts? When you cannot pay cash, or you cannot get into the store without scanning a QR code via their app for authentication, or pricing is personalized based on the profile they have about you - compiled with not just the store data, but other data they bought from data brokers? Your loyalty status, past purchases, your income information, credit score, propensity-to-pay algorithms, Meta social media info, …? Premium loyalty tiers where you ironically pay for more privacy?
Predictive technology wrongfully classifying you as a high risk for stealing and banning you from the store? I’m tired of every niche jumping on this opportunity to be the next Cambridge Analytica. You are a hardware store, not a data broker company! I keep swatting your hand out of the jar, but you are just back in there every time I look. Reply via email Published 29 Nov, 2025

0 views
iDiallo Yesterday

Demerdez-vous: A response to Enshittification

There is an RSS reader that I often used in the past and have become very reliant on. I would share the name with you, but as they grew more popular, they decided to follow the enshittification route. They've changed their UI, hidden several popular links behind multilayered menus, and revamped their API. Features that I used to rely on have disappeared, and the API is close to useless. My first instinct was to find a new app that would satisfy my needs. But being so familiar with this reader, I decided to test a few things in the API first. Even though their documentation doesn't mention older versions anymore, I discovered that the old API is still active. All I had to do was add a version number to the URL. It's been over 10 years, and that API is still very much active. I'm sorry I won't share it here, but this has served as a lesson for me when it comes to software that becomes worse over time. Don't let them screw you, unscrew yourself! We talk a lot about "enshittification" these days. I've even written about it a couple of times. It's about how platforms start great, get greedy, and slowly turn into user-hostile sludge. But what we rarely talk about is the alternative. What do you do when the product you rely on rots from the inside? The French have a phrase for this: Demerdez-vous. The literal translation is "unshit yourself". What it actually means is to find a way, even if no one is helping you. When a company becomes too big to fail, or simply becomes dominant in its market, drip by drip, it starts to become worse. You don't even notice it at first. It changes in ways that most people tolerate because the cost of switching is high, and the vendor knows it. But before you despair, before you give up, before you let the system drag you into its pit, try to unscrew yourself with the tools available. If the UI changes, try to find the old UI. Patch the inconvenience. Disable the bullshit. Bend the app back into something humane.
It might sound impossible at first, but the tools to accomplish this exist and are widely used. Sometimes the escape hatch is sitting right there, buried under three layers of "Advanced" menus. On the web, I hate auto-playing videos, I don't want to receive twelve notifications a day from an app, and I don't care about personalization. But for the most part, these can be disabled. When I download an app, I actually spend time going through settings. If I care enough to download an app, or if I'm forced to, I'll spend the extra time to ensure that the app works to my advantage, not the other way around. When that RSS reader removed features from the UI, but not from its code, I was still able to continue using them. Another example of this is reddit. Their new UI is riddled with dark patterns, infinite scroll, and popups. But go to the old subdomain, and you are greeted with that old UI that may not look fancy, but was designed with the user in mind, not the company's metrics. Another example: YouTube removed the dislike button. While it might be hurtful to content creators to see the number of dislikes, as a consumer, this piece of data served as a filter for lots of spam content. For that, of course, there is the "Return YouTube Dislike" browser extension. Extensions can often help you regain control when popular websites remove functionality that is useful to users but that the service no longer wants to support. There are several tools that enhance YouTube, fix Twitter, and of course uBlock. It's not always possible to combat enshittification. Sometimes the developer actively enforces their new annoying features and prevents anyone from removing them. In cases like these, there is still something users can do. They can walk away. You don’t have to stay in an abusive relationship. You are allowed to leave. When you do, you'll discover that there was an open-source alternative. Or that a small independent app survived quietly in the corner of the internet.
Or even sometimes, you'll find that you don't need the app at all. You break your addiction. In the end, "Demerdez-vous" is a reminder that we still have agency in a world designed to take it away. Enshittification may be inevitable, but surrender isn’t. There’s always a switch to flip, a setting to tweak, a backdoor to exploit, or a path to walk away entirely. Companies may keep trying to box us in, but as long as we can still think, poke, and tinker, we don’t have to live with the shit they shovel. At the end of the day: "On se demerde."
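The versioned-URL trick from the RSS reader story can be sketched in a few lines. Everything here is hypothetical: the host, the path pattern, and the assumption that older API versions follow a predictable naming scheme.

```python
import urllib.error
import urllib.request

def candidate_urls(base, versions=("v1", "v2", "v3")):
    """Guess where older API versions might still live (path pattern assumed)."""
    return [f"{base}/api/{v}/feeds" for v in versions]

def probe(url, timeout=10):
    """Return the HTTP status code for `url`, or None if unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # e.g. 404: this version is gone (or never existed)
    except urllib.error.URLError:
        return None

# for url in candidate_urls("https://reader.example.com"):
#     print(url, probe(url))  # a 200 on an old version is the escape hatch
```

Deprecated endpoints are often left running because removing them would break internal consumers too, which is exactly why this probe sometimes works a decade later.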

0 views

Self-hosting my photos with Immich

For every cloud service I use, I want to have a local copy of my data for backup purposes and independence. Unfortunately, the tool I used for that stopped working in March 2025 when Google restricted the OAuth scopes, so I needed an alternative for my existing Google Photos setup. In this post, I describe how I have set up Immich, a self-hostable photo manager. Here is the end result: a few (live) photos from NixCon 2025. I am running Immich on my Ryzen 7 Mini PC (ASRock DeskMini X600), which consumes less than 10 W of power in idle and has plenty of resources for VMs (64 GB RAM, 1 TB disk). You can read more about it in my blog post from July 2024: When I saw the first reviews of the ASRock DeskMini X600 barebone, I was immediately interested in building a home-lab hypervisor (VM host) with it. Apparently, the DeskMini X600 uses less than 10 W of power but supports latest-generation AMD CPUs like the Ryzen 7 8700G! Read more → I installed Proxmox, an open source virtualization platform, to divide this mini server into VMs, but you could of course also install Immich directly on any server. I created a VM (named “photos”) with 500 GB of disk space, 4 CPU cores and 4 GB of RAM. For the initial import, you could assign more CPU and RAM, but for normal usage, that’s enough. I (declaratively) installed NixOS on that VM as described in this blog post: For one of my network storage PC builds, I was looking for an alternative to Flatcar Container Linux and tried out NixOS again (after an almost 10-year break). There are many ways to install NixOS, and in this article I will outline how I like to install NixOS on physical hardware or virtual machines: over the network and fully declaratively. Read more → Afterwards, I enabled Immich in my NixOS configuration. At this point, Immich is available locally, but not over the network, because NixOS enables a firewall by default.
I could open up the firewall, but I actually want Immich to only be available via my Tailscale VPN, for which I don’t need to open firewall access — instead, I use Tailscale to forward traffic to Immich. Because I have Tailscale’s MagicDNS and TLS certificate provisioning enabled, that means I can now open https://photos.example.ts.net in my browser on my PC, laptop or phone. At first, I tried importing my photos using the official Immich CLI. Unfortunately, the upload was not running reliably and had to be restarted manually a few times after running into a timeout. Later I realized that this was because the Immich server runs background jobs like thumbnail creation, metadata extraction or face detection, and these background jobs slow down the upload to the extent that the upload can fail with a timeout. The other issue was that even after the upload was done, I realized that Google Takeout archives for Google Photos contain metadata in separate JSON files next to the original image files. Unfortunately, these files are not considered by the official CLI. Luckily, there is a great third-party tool called immich-go, which solves both of these issues! It pauses background tasks before uploading and restarts them afterwards, which works much better, and it does its best to understand Google Takeout archives. Running immich-go worked beautifully. My main source of new photos is my phone, so I installed the Immich app on my iPhone, logged into my Immich server via its Tailscale URL and enabled automatic backup of new photos via the icon at the top right. I am not 100% sure whether these settings are correct, but it seems like camera photos generally go into Live Photos, and Recent should cover other files…?! If anyone knows, please send an explanation (or a link!) and I will update the article. I also strongly recommend disabling notifications for Immich, because otherwise you get notifications whenever it uploads images in the background.
These notifications are not required for background upload to work, as an Immich developer confirmed on Reddit. Open Settings → Apps → Immich → Notifications and un-tick the permission checkbox. Immich’s documentation on backups contains some good recommendations. The Immich developers recommend backing up the entire contents of the Immich data directory. One subdirectory contains SQL dumps, whereas three others contain all user-uploaded data. Hence, I have set up a systemd timer that copies this directory onto my PC, which is enrolled in a 3-2-1 backup scheme. Immich (currently?) does not contain photo editing features, so to rotate or crop an image, I download the image and use GIMP. To share images, I still upload them to Google Photos (depending on who I share them with). The two most promising options in the space of self-hosted image management tools seem to be Immich and Ente. I got the impression that Immich is more popular in my bubble, and Ente gave me the impression that its scope is far larger than what I am looking for: Ente is a service that provides a fully open source, end-to-end encrypted platform for you to store your data in the cloud without needing to trust the service provider. On top of this platform, we have built two apps so far: Ente Photos (an alternative to Apple and Google Photos) and Ente Auth (a 2FA alternative to the deprecated Authy). I don’t need an end-to-end encrypted platform. I already have encryption on the transit layer (Tailscale) and disk layer (LUKS), no need for more complexity. Immich is a delightful app! It’s very fast and generally seems to work well. The initial import is smooth, but only if you use the right tool. Ideally, the official CLI could be improved, or maybe immich-go could be made the official one. I think the auto backup is too hard to configure on an iPhone, so that could also be improved. But aside from these initial stumbling blocks, I have no complaints.
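To illustrate the Google Takeout sidecar issue mentioned above: Takeout places metadata such as the capture time in a JSON file next to each image, roughly like the abbreviated example below. The field names match what my own archives contained, but verify them against yours.

```python
import json
from datetime import datetime, timezone

# Abbreviated, illustrative Takeout sidecar for IMG_1234.jpg.
sidecar = json.loads("""
{
  "title": "IMG_1234.jpg",
  "photoTakenTime": {"timestamp": "1719822896"},
  "geoData": {"latitude": 52.52, "longitude": 13.405}
}
""")

def taken_at(meta):
    """Recover the capture time that the image file itself may lack."""
    ts = int(meta["photoTakenTime"]["timestamp"])
    return datetime.fromtimestamp(ts, tz=timezone.utc)

print(taken_at(sidecar).isoformat())  # -> 2024-07-01T08:34:56+00:00
```

A tool that ignores these sidecars (like the official CLI did for me) imports everything with the file's modification time instead, which is why immich-go's Takeout awareness matters.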

0 views

Coverage

Sometimes, the question arises: which tests trigger this code here? Maybe I've found a block of code that doesn't look like it can be hit, but it's hard to prove. Or I want to answer the age-old question of which subset of quick tests might be useful to run if the full test suite is kinda slow. So, run each test with coverage by itself. Then, instead of merging all the coverage data, find which tests cover the line in question. Oddly enough, though some of the Java tools (e.g., Clover) support per-test coverage, the tools here are in general somewhat lacking. pytest-cov, part of the pytest suite, supports a per-test ("test name") context marker, but only displays the per-test data at a per-file level. This is the kind of thing where, in 2025, you can ask a coding agent to vibe-code or vibe-modify a report generator, and it'll work fine. I have not found the equivalent of Profilerpedia for coverage file formats, but the lowest common denominator seems to be the LCOV tracefile format, which is described at geninfo(1). Most language ecosystems can either produce LCOV output directly or have pre-existing conversion tools.

0 views
xenodium Yesterday

Rinku: CLI link previews

In my last Bending Emacs episode, I talked about overlays and used them to render link previews in an Emacs buffer. While the overlays merely render an image, the actual link preview image is generated by rinku, a tiny command line utility I built recently. rinku leverages macOS APIs to do the actual heavy lifting, rendering/capturing a view off screen and saving it to disk. Similarly, it can fetch preview metadata, also saving the related thumbnail to disk. In both cases, rinku outputs JSON. By default, rinku fetches metadata for you. In this instance, the image looks a little something like this: On the other hand, a flag generates a preview, very much like the ones you see in native macOS and iOS apps. Similarly, the preview renders as follows: While overlays are one way to integrate anywhere in Emacs, I had been meaning to look into what I can do for eshell in particular. Eshell is just another buffer, and while overlays could do the job, I wanted a shell-like experience. After all, I already knew we can echo images into an eshell buffer. Before getting to rinku on eshell, there's a related hack I'd been meaning to get to for some time… While we're all likely familiar with the cat command, I remember being a little surprised to find that eshell offers an alternative elisp implementation. Surprised too? Go check it! Where am I going with this? Well, if eshell's cat command is an elisp implementation, we know its internals are up for grabs, so we can technically extend it to display images too. It is just another function, so we can advise it to add image superpowers. I was pleasantly surprised at how little code was needed. It basically scans for image arguments to handle within the advice and otherwise delegates to the original implementation. And with that, we can see our freshly powered-up cat command in action: By now, you may wonder why the detour when the post was really about rinku? You see, this is Emacs, and everything compounds!
We can now leverage our revamped cat command to give similar superpowers to rinku, by merely adding an eshell function. As we now know, rinku outputs JSON, so we can parse the process output and subsequently feed the image path to cat. rinku can also output link titles, so we can show those too whenever possible. With that, we can see the lot in action: While non-Emacs users are often puzzled by how frequently we bring user flows and integrations over to our beloved editor, once you learn a little elisp, you start realising how relatively easily things integrate with one another and pretty much everything is up for grabs. Reckon rinku and these tips will be useful to you? Enjoying this blog or my projects? I am an 👉 indie dev 👈. Help make it sustainable by ✨ sponsoring ✨. Need a blog? I can help with that. Maybe buy my iOS apps too ;)

0 views
Sean Goedecke Yesterday

How good engineers write bad code at big companies

Every couple of years somebody notices that large tech companies sometimes produce surprisingly sloppy code. If you haven’t worked at a big company, it might be hard to understand how this happens. Big tech companies pay well enough to attract many competent engineers. They move slowly enough that it looks like they’re able to take their time and do solid work. How does bad code happen? I think the main reason is that big companies are full of engineers working outside their area of expertise. The average big tech employee stays for only a year or two. In fact, big tech compensation packages are typically designed to put a four-year cap on engineer tenure: after four years, the initial share grant is fully vested, causing engineers to take what can be a 50% pay cut. Companies do extend temporary yearly refreshes, but the structure obviously incentivizes engineers to go find another job where they don’t have to wonder if they’re going to get the other half of their compensation each year. If you count internal mobility, it’s even worse. The longest I have ever stayed on a single team or codebase was three years, near the start of my career. I expect to be re-orged at least every year, and often much more frequently. However, the average tenure of a codebase in a big tech company is a lot longer than that. Many of the services I work on are a decade old or more, and have had many, many different owners over the years. That means many big tech engineers are constantly “figuring it out”. A pretty high percentage of code changes are made by “beginners”: people who have onboarded to the company, the codebase, or even the programming language in the past six months. To some extent, this problem is mitigated by “old hands”: engineers who happen to have been in the orbit of a particular system for long enough to develop real expertise. These engineers can give deep code reviews and reliably catch obvious problems. But relying on “old hands” has two problems.
First, this process is entirely informal. Big tech companies make surprisingly little effort to develop long-term expertise in individual systems, and once they’ve got it they seem to barely care at all about retaining it. Often the engineers in question are moved to different services, and have to either keep up their “old hand” duties on an effectively volunteer basis, or abandon them and become a relative beginner on a brand new system.

Second, experienced engineers are always overloaded. It is a busy job being one of the few engineers who has deep expertise on a particular service. You don’t have enough time to personally review every software change, or to be actively involved in every decision-making process. Remember that you also have your own work to do: if you spend all your time reviewing changes and being involved in discussions, you’ll likely be punished by the company for not having enough individual output.

Putting all this together, what does the median productive[2] engineer at a big tech company look like? They are usually competent enough to pass the hiring bar and able to do the work, but either working on a codebase or language that is largely new to them, or trying to stay on top of a flood of code changes while also juggling their own work. They are almost certainly working to a deadline, or to a series of overlapping deadlines for different projects. In other words, they are trying to do their best in an environment that is not set up to produce quality code.

That’s how “obviously” bad code happens. For instance, a junior engineer picks up a ticket for an annoying bug in a codebase they’re barely familiar with. They spend a few days figuring it out and come up with a hacky solution. One of the more senior “old hands” (if they’re lucky) glances over it in a spare half-hour, vetoes it, and suggests something slightly better that would at least work. The junior engineer implements that as best they can, tests that it works, it gets briefly reviewed and shipped, and everyone involved immediately moves on to higher-priority work.
Five years later somebody notices this[3] and thinks “wow, that’s hacky - how did such bad code get written at such a big software company?”

I have written a lot about the internal tech company dynamics that contribute to this. Most directly, in Seeing like a software company I argue that big tech companies consistently prioritize internal legibility - the ability to see at a glance who’s working on what and to change it at will - over productivity. Big companies know that treating engineers as fungible and moving them around destroys their ability to develop long-term expertise in a single codebase. That’s a deliberate tradeoff. They’re giving up some amount of expertise and software quality in order to gain the ability to rapidly deploy skilled engineers onto whatever the problem-of-the-month is.

I don’t know if this is a good idea or a bad idea. It certainly seems to be working for the big tech companies, particularly now that “how fast can you pivot to something AI-related” is so important. But if you’re doing this, then of course you’re going to produce some genuinely bad code. That’s what happens when you ask engineers to rush out work on systems they’re unfamiliar with.

Individual engineers are entirely powerless to alter this dynamic. This is particularly true in 2025, when the balance of power has tilted away from engineers and towards tech company leadership. The most you can do as an individual engineer is to try and become an “old hand”: to develop expertise in at least one area, and to use it to block the worst changes and steer people towards at least minimally-sensible technical decisions. But even that is often swimming against the current of the organization, and if inexpertly done can cause you to get PIP-ed or worse.

I think a lot of this comes down to the distinction between pure and impure software engineering.
To pure engineers - engineers working on self-contained technical projects, like a programming language - the only explanation for bad code is incompetence. But impure engineers operate more like plumbers or electricians. They’re working to deadlines on projects that are relatively new to them, and even if their technical fundamentals are impeccable, there’s always something about the particular setup of this situation that’s awkward or surprising. To impure engineers, bad code is inevitable. As long as the overall system works well enough, the project is a success.

At big tech companies, engineers don’t get to decide if they’re working on pure or impure engineering work. It’s not their codebase! If the company wants to move you from working on database infrastructure to building the new payments system, they’re fully entitled to do that. The fact that you might make some mistakes in an unfamiliar system - or that your old colleagues on the database infra team might suffer without your expertise - is a deliberate tradeoff being made by the company, not the engineer.

It’s fine to point out examples of bad code at big companies. If nothing else, it can be an effective way to get those specific examples fixed, since execs usually jump at the chance to turn bad PR into good PR. But I think it’s a mistake[4] to attribute primary responsibility to the engineers at those companies. If you could wave a magic wand and make every engineer twice as strong, you would still have bad code, because almost nobody can come into a brand new codebase and quickly make changes with zero mistakes. The root cause is that most big company engineers are forced to do most of their work in unfamiliar codebases.

[1] I struggled to find a good original source on this. There’s a 2013 PayScale report citing a 1.1-year median turnover at Google, which seems low.

[2] Many engineers at big tech companies are not productive, but that’s a post all to itself. I don’t want to get into it here for two reasons. First, I think competent engineers produce enough bad code that it’s fine to be a bit generous and just scope the discussion to them. Second, even if an incompetent engineer wrote the code, there are almost always competent engineers who could have reviewed it, and the question of why that didn’t happen is still interesting.

[3] The example I’m thinking of here is not the recent GitHub Actions one, which I have no first-hand experience of. I can think of at least ten separate instances of this happening to me.

[4] In my view, mainly a failure of imagination: thinking that your own work environment must be pretty similar to everyone else’s.

1 views
Rik Huijzer Yesterday

Google Shenanigans

For years it has been a common theme among programmers that Google's search results have changed for the worse. It feels like the suggestions are becoming less and less applicable over time. Today, I spotted one of the worst cases that I have seen so far when searching for a documentary called "Flatten the Curve Flat Earth" (2022). This documentary is about flattening the curve of the earth and has nothing to do with medicine. However, Google automatically interprets it as a kind of medical statement: ![google.png](/files/af5746d53a67434a) Notice especially the second search result where "m...

0 views
ava's blog 2 days ago

job market websites suck

Both my wife and I are going through the annoying process of trying to search for new jobs. I’m still at my current one, but I want to see if something better is out there in the region we wanna move to. But it is absolutely nuts. How have these services not gone out of business? My experience, almost entirely the same across many websites like StepStone, Indeed, and more:

- The on-site search is terrible to use; a search engine is easier.
- Most of the results are ads.
- Anything after the first 3 results is completely unrelated to the field you want.
- You aren’t allowed to see the job posting without an account, or only as “Recruiter”.
- The preview is almost completely useless: partially blurred-out information that you are only allowed to see with an account.

And for what? So that you’ll make an account and give up all that data and get harassed via e-mail for information that is freely available on the company’s own career page! What are we doing? This is silly. Companies have already revealed that they hate and don’t consider the “Quick Apply” options these platforms offer, so what gives? And why are companies still using these slop platforms? It can’t be about getting more applicants, because these websites do anything they can to not show you the relevant job postings or information about them. If the search sucks, the preview is half-blurred and I can’t click on it without a pop-up urging me to make an account, you guessed it: I am actively discouraged from applying, prevented even, in many cases. I shouldn’t have to make an account with a third party just to even consider an employer! No, I will not create an account, and I will also not make a LinkedIn or Xing. These services are not helping anyone, they are leeches. They have an interest in keeping you on their site searching jobs for a long time, and that goal is antithetical to connecting you with potential employers quickly. Companies are better off advertising elsewhere and keeping it on their own website so all potential candidates can access it. It’s a bad look seeing you on these enshittified platforms. The way I have been coping with this job search is just using these websites to get a list of companies that have jobs in my niche and then researching them separately, using their own career pages and application portals.
Also, relying on job listing websites run by the government, as these don’t use deceptive tactics to get you to sign up. If you’re lucky, your professional niche has its own job-market websites (see, for example, rustjobs.fyi), or popular magazine publishers that are relevant in your field have a career sub-category on their website. If you have any other tips I missed, let me know (email or your own post is fine!) and I’ll add it.

Published 28 Nov, 2025

0 views
pabloecortez 2 days ago

Black Friday for You and Me

Yesterday it was Thanksgiving and I had the privilege of spending the holiday with my family. We have a tradition of doing a toast going around the table and sharing at least one thing for which we are grateful. I want to share with you a story that started last year, in January of 2024, when a family friend named Germán reached out to me for help with a website for his business. Germán is in his 50s, he went to school for mechanical engineering in Mexico and about twenty years ago he moved to the United States. Today he owns a restaurant in Las Vegas with his wife and also runs a logistics company for distributing produce. We met the last week of January, he told me that he was looking to build a website for his restaurant and eventually build up his infrastructure so most of his business could be automated. His current workflow required his two sons to run the business along with him. They managed everything manually on expensive proprietary software. There were lots of things that could be optimized, so I agreed to jump on board and we have been collaborating ever since. What I assumed would be a developer type of position instead became more of a peer-mentorship relationship. Germán is curious, intelligent, and hard working. It didn't take long for me to notice that he didn't just want to have software or services running "in the background" while he occupied himself with other tasks. He wanted to have a thorough understanding of all the software he adopted. "I want to learn but I simply don't have the patience," he told me during one of our first meetings. At first I admit I thought this was a bit of a red flag (sorry Germán haha) but it all began to make sense when he showed me his books. He had paid thousands of dollars for a Wordpress website that only listed his services and contact information. The company he had hired offered an expensive SEO package for a monthly fee. 
My time in open source and the indieweb had blinded me to how abusive the "web development" industry had become. I'm referring to those local agencies that take advantage of unsuspecting clients and charge them for every little thing. I began making Germán's website and we went back and forth on assets, copy, menus, we began putting together a project and everything went smoothly. He was happy that he got to see how I built things. During this time I would journal through my work on his project and e-mail my notes to him. He loved it. Next came a new proposition. While the static site was nice to have an online presence, what he was after was getting into e-commerce. His wife, Sarah, makes artisanal beauty products and custom clothes. Her friends would message her on Facebook to ask what new stuff she was working on and she would send pictures to them from her phone. She would have benefitted from having a website, but after the bad experience they had had with the agency, they weren't too enthused about the prospect of hiring them for another project. I met with both of them again for this new project and we talked for hours, more like coworkers this time around. We eventually came to the conclusion that it would be more rewarding for them to really learn how to put their own shop together. I acted more as a coach or mentor than a developer. We'd sit together and activate accounts, fill out pages, choose themes. I was providing a safe space for them to be curious about technology, make mistakes, learn from them, and immediately get feedback on technical details so they could stay on a safe path. I'm so grateful for that opportunity afforded to me by Germán and his family. I've thought about how that approach would look if applied to the indieweb. It's always so exciting for me to see what the friends I've made here are working on. 
I know the open web becomes stronger when more independent projects are released, as we have more options to free ourselves from the corporate web that has stifled so much of the creativity and passion that I love and miss from the internet. I want to keep doing this. If you are building something on your own, have been out of the programming world for a while but want to start again, or maybe you are almost done and need a little boost in confidence (or accountability!) to reach the finish line and ship, I'm here to help. Check out my coaching page to find out more. I'm excited about the prospect of a community of builders who care about self-reliance and releasing software that puts people first. Perhaps this Black Friday you could choose to invest in yourself :-)

0 views
Manuel Moreale 2 days ago

On eating shit

You’re sitting at a table. In front of you, a series of plates. They’re full of shit (like some people). Not the same shit, mind you. It’s different types, produced by different animals, in different quantities. The unfortunate reality of the situation is that you have to eat the contents of one of those plates. Yeah, it sucks, I’m sorry. But you just have to. So you understandably start going through the thought process of figuring out which one is the “best” one. You start examining the shape, the texture, the animal that produced it. You start finding reasons to pick one over another. You start rationalising, trying to justify your decision to the other people who, like you, also need to pick which one to eat. It’s a process. A shitty one, I might say. But in going through this ordeal, you start losing track of the only thing that really matters: this situation fucking sucks, and there’s no good answer. The only reasonable thing to do is to pick the plate with the least steamy, smelly, nasty pile of shit and then figure out a way not to find yourself in that situation ever again. Sometimes eating shit is unavoidable. The only thing you can do is make it as painless as possible.

0 views
Stone Tools 2 days ago

Bank Street Writer on the Apple II

Stop me if you’ve heard this one. In 1978, a young man wandered into a Tandy Radio Shack and found himself transfixed by the TRS-80 systems on display. He bought one just to play around with, and it wound up transforming his life from there on. As it went with so many, so too did it go with lawyer Doug Carlston. His brother, Gary, initially unimpressed, warmed up to the machine during a long Maine winter. The two thus smitten mused, “Can we make money off of this?” Together they formed a developer-sales relationship, with Doug developing Galactic Saga and third brother Don developing Tank Command. Gary’s sales acumen brought early success and Broderbund was officially underway.

Meanwhile in New York, Richard Ruopp, president of Bank Street College of Education, a kind of research center for experimental and progressive education, was thinking about how emerging technology fit into the college’s mission. Writing was an important part of their curriculum, but according to Ruopp, “We tested the available word processors and found we couldn’t use any of them.” So, experts from Bank Street College worked closely with consultant Franklin Smith and software development firm Intentional Educations Inc. to build a better word processor for kids. The fruit of that labor, Bank Street Writer, was published by Scholastic exclusively to schools at first, with Broderbund taking up the home distribution market a little later. Bank Street Writer would dominate home software sales charts for years and its name would live on as one of the sacred texts, like Lemonade Stand or The Oregon Trail. Let’s see what lessons there are to learn from it yet.

1916: Founded by Lucy Sprague Mitchell, Wesley Mitchell, and Harriet Johnson as the “Bureau of Educational Experiments” (BEE) with the goal of understanding in what environment children best learn and develop, and to help adults learn to cultivate that environment.
1930: BEE moves to 69 Bank Street. (Will move to 112th Street in 1971, for space reasons.)
1937: The Writer’s Lab, which connects writers and students, is formed.
1950: BEE is renamed to Bank Street College of Education.
1973: Minnesota Educational Computing Consortium (MECC) is founded. This group would later go on to produce The Oregon Trail.
1983: Bank Street Writer, developed by Intentional Educations Inc., published by Broderbund Software, and “thoroughly tested by the academics at Bank Street College of Education.” Price: $70.
1985: Writer is a success! Time to capitalize! Bank Street Speller $50, Bank Street Filer $50, Bank Street Mailer $50, Bank Street Music Writer $50, Bank Street Prewriter (published by Scholastic) $60.
1986: Bank Street Writer Plus $100. Bank Street Writer III (published by Scholastic) $90. It’s basically Plus with classroom-oriented additions, including a 20-column mode and additional teaching aides.
1987: Bank Street Storybook, $40.
1992: Bank Street Writer for the Macintosh (published by Scholastic) $130. Adds limited page layout options, Hypercard-style hypertext, clip art, punctuation checker, image import with text wrap, full color, sound support, “Classroom Publishing” of fliers and pamphlets, and electronic mail.

With word processors, I want to give them a chance to present their best possible experience. I do put a little time into trying the baseline experience many would have had with the software during the height of its popularity. “Does the software still have utility today?” can only be fairly answered by giving the software a fighting chance. To that end, I’ve gifted myself a top-of-the-line (virtual) Apple //e running the last update to Writer, the Plus edition. You probably already know how to use Bank Street Writer Plus. You don’t know you know, but you do know because you have familiarity with GUI menus and basic word processing skills.
All you’re lacking is an understanding of the vagaries of data storage and retrieval as necessitated by the hardware of the time, but once armed with that knowledge you could start using this program without touching the manual again. It really is as easy as the makers claim.

The simplicity is driven by a very subtle, forward-thinking user interface. Of primary interest is the upper prompt area. The top 3 lines of the screen serve as an ever-present, contextual “here’s the situation” helper. What’s going on? What am I looking at? What options are available? How do I navigate this screen? How do I use this tool? Whatever you’re doing, whatever menu option you’ve chosen, the prompt area is already displaying information about which actions are available right now in the current context. As the manual states, “When in doubt, look for instructions in the prompt area.” The manual speaks truth. For some, the constant on-screen prompting could be a touch overbearing, but I personally don’t think it’s so terrible to know that the program is paying attention to my actions and wants me to succeed. The assistance isn’t front-loaded, like so many mobile apps, nor does it interrupt, like Clippy. I simply can’t fault the good intentions, nor can I really think of anything in modern software that takes this approach to user-friendliness.

The remainder of the screen is devoted to your writing and works like any other word processor you’ve used. Just type, move the cursor with the arrow keys, and type some more. I think most writers will find it behaves “as expected.” There are no Electric Pencil-style over-type surprises, nor VisiCalc-style arrow key manipulations. What seems to have happened is that in making a word processor that is easy for children to use, they accidentally made a word processor that is just plain easy. The basic functionality is drop-dead simple to pick up by just poking around, but there’s quite a bit more to learn here.
To do so, we have a few options for getting to know Bank Street Writer in more detail. There are two manuals by virtue of the program's educational roots. Bank Street Writer was published by both Broderbund (for the home market) and Scholastic (for schools). Each tailored their own manual to their respective demographic. Broderbund's manual is cleanly designed, easy to understand, and gets right to the point. It is not as "child focused" as reviews at the time might have you believe. Scholastic's is more of a curriculum to teach word processing, part of the 80s push for "computers in the classroom." It's packed with student activities, pages that can be copied and distributed, and (tellingly) information for the teacher explaining "What is a word processor?" Our other option for learning is on side 2 of the main program disk. Quite apart from the program proper, the disk contains an interactive tutorial. I love this commitment to the user's success, though I breezed through it in just a few minutes, being a cultured word processing pro of the 21st century. I am quite familiar with "menus" thank you very much. As I mentioned at the top, the screen is split into two areas: prompt and writing. The prompt area is fixed, and can neither be hidden nor turned off. This means there's no "full screen" option, for example. The writing area runs in high-res graphics mode so as to bless us with the gift of an 80-character wide display. Being a graphics display also means the developer could have put anything on screen, including a ruler which would have been a nice formatting helper. Alas. Bank Street offers limited preference settings; there's not much we can do to customize the program's display or functionality. The upshot is that as I gain confidence with the program, the program doesn't offer to match my ability. There is one notable trick, which I'll discuss later, but overall there is a missed opportunity here for adapting to a user's increasing skill. 
Kids do grow up, after all. As with Electric Pencil , I'm writing this entirely in Bank Street Writer . Unlike the keyboard/software troubles there, here in 128K Apple //e world I have Markdown luxuries like . The emulator's amber mode is soothing to the eyes and soul. Mouse control is turned on and works perfectly, though it's much easier and faster to navigate by keyboard, as God intended. This is an enjoyable writing experience. Which is not to say the program is without quirks. Perhaps the most unfortunate one is how little writing space 128K RAM buys for a document. At this point in the write-up I'm at about 1,500 words and BSW's memory check function reports I'm already at 40% of capacity. So the largest document one could keep resident in memory at one time would run about 4,000 words max? Put bluntly, that ain't a lot. Splitting documents into multiple files is pretty much forced upon anyone wanting to write anything of length. Given floppy disk fragility, especially with children handling them, perhaps that's not such a bad idea. However, from an editing point of view, it is frustrating to recall which document I need to load to review any given piece of text. Remember also, there's no copy/paste as we understand it today. Moving a block of text between documents is tricky, but possible. BSW can save a selected portion of text to its own file, which can then be "retrieved" (inserted) at the current cursor position in another file. In this way the diskette functions as a memory buffer for cross-document "copy/paste." Hey, at least there is some option available. Flipping through old magazines of the time, it's interesting just how often Bank Street Writer comes up as the comparative reference point for home word processors over the years. If a new program had even the slightest whiff of trying to be "easy to use" it was invariably compared to Bank Street Writer . 
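The memory-capacity estimate quoted a few paragraphs back (about 1,500 words written with BSW reporting 40% of capacity used) can be sanity-checked in a couple of lines. This is my own back-of-envelope sketch; it assumes the memory-check percentage scales roughly linearly with word count, which BSW's manual doesn't promise:

```python
# Back-of-envelope check of the "about 4,000 words max" claim.
# Assumption (mine): memory usage scales linearly with word count.
words_so_far = 1500
fraction_used = 0.40

estimated_max_words = words_so_far / fraction_used
print(round(estimated_max_words))  # → 3750, i.e. roughly 4,000 words max
```

In practice the real ceiling would vary with word length and formatting, but the order of magnitude matches the "that ain't a lot" verdict above.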
Likewise, there were any number of writers and readers of those magazines talking about how they continued to use Bank Street Writer, even though so-called “better” options existed. I don’t want to oversell its adoption by adults, but it most definitely was not a children-only word processor, by any stretch. I think the release of Plus embraced a more mature audience. In schools it reigned supreme for years, including the Scholastic-branded version of Plus called Bank Street Writer III. There were add-on “packs” of teacher materials for use with it. There was also Bank Street Prewriter, a tool for helping to organize themes and thoughts before committing to the act of writing, including an outliner, as popularized by ThinkTank. (always interesting when influences ripple through the industry like this)

Of course, the Scholastic approach was built around the idea of teachers having access to computers in the classroom. And THAT was built on the idea of teachers feeling comfortable enough with computers to seamlessly merge them into a lesson-plan. Sure, the kids needed something simple to learn, but let’s be honest, so did the adults. There was a time when attaching a computer to anything meant a fundamental transformation of that thing was assured and imminent. For example, the “office of the future” (as discussed in the Superbase post) had a counterpart in the “classroom of tomorrow.” In 1983, Popular Computing said, “Schools are in the grip of a computer mania.” Steve Jobs took advantage of this, skating to where the puck would be, by donating Apple 2s to California schools. In October 1983, Creative Computing did a little math on that plan. $20M in retail donations brought $4M in tax credits against $5M in gross donations. Apple could donate a computer to every elementary, middle, and high school in California for an outlay of only $1M.
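Creative Computing's arithmetic above can be replayed as a quick sketch. The variable names are mine, and this assumes the tax credit simply nets against the gross cost of the donated machines:

```python
# Replay of Creative Computing's October 1983 math on Apple's
# California school-donation program ("Kids Can't Wait").
retail_value = 20_000_000  # retail price of the donated computers
gross_cost = 5_000_000     # what the donations actually cost Apple
tax_credit = 4_000_000     # California tax credits earned on the donations

net_outlay = gross_cost - tax_credit
print(net_outlay)  # → 1000000: a computer in every school for ~$1M
```

Which is why the donation program looked less like charity and more like very cheap marketing to the next generation of computer buyers.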
Jobs lobbied Congress hard to pass a national version of the same "Kids Can't Wait" bill, which would have extended federal tax credits for such donations. That never made it to law, for various political reasons. But the California initiative certainly helped position Apple as the go-to system for computers in education. By 1985, Apple would dominate fully half of the education market. That would continue into the Macintosh era, though Apple's dominance diminished slowly as cheaper, "good enough" alternatives entered the market. Today, Apple is #3 in the education market, behind Windows and Chromebooks . It is a fair question to ask, "How useful could a single donated computer be to a school?" Once it's in place, then what? Does it have function? Does anyone have a plan for it? Come to think of it, does anyone on staff even know how to use it? When Apple put a computer into (almost) every school in California, they did require training. Well, let's say lip-service was paid to the idea of the aspiration of training. One teacher from each school had to receive one day's worth of training to attain a certificate which allowed the school to receive the computer. That teacher was then tasked with training their coworkers. Wait, did I say "one day?" Sorry, I meant about one HOUR of training. It's not too hard to see where Larry Cuban was coming from when he published Oversold & Underused: Computers in the Classroom in 2001. Even of schools with more than a single system, he notes, "Why, then, does a school's high access (to computers) yield limited use? Nationally and in our case studies, teachers... mentioned that training in relevant software and applications was seldom offered... (Teachers) felt that the generic training available was often irrelevant to their specific and immediate needs." From my perspective, and I'm no historian, it seems to me there were four ways computers were introduced into the school setting. 
The three most obvious were: I personally attended schools of all three types. What I can say the schools had in common was how little attention, if any, was given to the computer and how little my teachers understood them. An impromptu poll of friends aligned with my own experience. Schools didn’t integrate computers into classwork, except when classwork was explicitly about computers. I sincerely doubt my time playing Trillium’s Shadowkeep during recess was anything close to Apple’s vision of a “classroom of tomorrow.”

The fourth approach to getting computers into the classroom was significantly more ambitious. Apple tried an experiment in which five public school sites were chosen for a long-term research project. In 1986, the sites were given computers for every child in class and at home. They reasoned that for computers to truly make an impact on children, the computer couldn’t just be a fun toy they occasionally interacted with. Rather, it required full integration into their lives. Now, it is darkly funny to me that having achieved this integration today through smartphones, adults work hard to remove computers from school. It is also interesting to me that Apple kind of led the way in making that happen, although in fairness they don’t seem to consider the iPhone to be a computer.

America wasn’t alone in trying to give its children a technological leg up. In England, the BBC spearheaded a major drive to get computers into classrooms via a countrywide computer literacy program. Even in the States, I remember watching episodes of BBC’s The Computer Programme on PBS. Regardless of Apple’s or the BBC’s efforts, the long-term data on the effectiveness of computers in the classroom has been mixed, at best, or even an outright failure.
Apple's own assessment of their "Apple Classrooms of Tomorrow" (ACOT) program after a couple of years concluded, "Results showed that ACOT students maintained their performance levels on standard measures of educational achievement in basic skills, and they sustained positive attitudes as judged by measures addressing the traditional activities of schooling." Which is a "we continue to maintain the dream of selling more computers to schools" way of saying, "Nothing changed." In 2001, the BBC reported, "England's schools are beginning to use computers more in teaching - but teachers are making 'slow progress' in learning about them." Then in 2015 the results were "disappointing": "Even where computers are used in the classroom, their impact on student performance is mixed at best." Informatique pour tous, France 1985: Pedagogy, Industry and Politics by Clémence Cardon-Quint described the French attempt at computers in the classroom as "an operation that can be considered both as a milestone and a failure." Computers in the Classrooms of an Authoritarian Country: The Case of Soviet Latvia (1980s–1991) by Iveta Kestere and Katrina Elizabete Purina-Bieza shows the introduction of computers to have drawn stark power and social divides, while pushing prescribed gender roles of computers being "for boys." Teachers Translating and Circumventing the Computer in Lower and Upper Secondary Swedish Schools in the 1970s and 1980s by Rosalía Guerrero Cantarell noted, "the role of teachers as agents of change was crucial. But teachers also acted as opponents, hindering the diffusion of computer use in schools." Now, I should be clear that things were different in the higher education market, as with PLATO in the universities. But in the primary and secondary markets, Bank Street Writer's primary demographic, nobody really knew what to do with the machines once they had them.
The most straightforwardly damning assessment is from Oversold & Underused, where Cuban says in the chapter "Are Computers in Schools Worth the Investment?", "Although promoters of new technologies often spout the rhetoric of fundamental change, few have pursued deep and comprehensive changes in the existing system of schooling." Throughout the book he notes how most teachers struggle to integrate computers into their lessons and teaching methodologies. The lack of guidance in developing new ways of teaching means computers will continue to be relegated to occasional auxiliary tools trotted out from time to time, not integral to the teaching process. "Should my conclusions and predictions be accurate, both champions and skeptics will be disappointed. They may conclude, as I have, that the investment of billions of dollars over the last decade has yet to produce worthy outcomes," he concludes. Thanks to my sweet four-drive virtual machine, I can summon both the dictionary and thesaurus immediately. Put the cursor at the start of a word and hit the appropriate key to get an instant spot check of spelling or synonyms. Without the reality of actual floppy disk access speed, word searches are fast. A spelling check can be performed on the full document, which does take noticeable time to finish. One thing I really love is how cancelling an action or moving forward to the next step of a process is responsive and immediate. If you're growing bored of an action taking too long, just cancel it with a keystroke; it will stop immediately. The program feels robust and unbreakable in that way. There is a word lookup, which accepts wildcards, for when you kinda-sorta know how to spell a word but need help. Attached to this function is an anagram checker which benefits greatly from a virtual CPU boost. But it can only do its trick on single words, not phrases. Earlier I mentioned how little the program offers a user who has gained confidence and skill.
That's not entirely accurate, thanks to its most surprising superpower: macros. Yes, you read that right. This word processor designed for children includes macros. They are stored at the application level, not the document level, so do keep that in mind. Twenty can be defined, each consisting of up to 32 keystrokes. Running keystrokes in a macro is functionally identical to typing by hand. Because the program can be driven 100% by keyboard alone, macros can trigger menu selections and step through tedious parts of those commands. For example, to save our document periodically we need to do the following every time: open the File menu, select Save File, then step through three default confirmation dialogs. That looks like a job for a macro to me. Defining a macro to save, with overwrite, the current file. After it is defined, I execute it, which happens very quickly in the emulator. Watch carefully. If you can perform an action through a series of discrete keyboard commands, you can make a macro from it. This is freeing, but also works to highlight what you cannot do with the program. For example, there is no concept of an active selection, so a word is the smallest unit you can directly manipulate due to keyboard control limitations. It's not nothin', but it's not quite enough. I started setting up markdown macros, so I could wrap the current word in single or double asterisks for italic and bold. Doing the actions in the writing area and noting the minimal steps necessary to achieve the desired outcome translated into perfect macros. I was even able to make a kind of rudimentary "undo" for when I wrap something in italic but intended to use bold. This reminded me that I haven't touched macro functionality in modern apps since my AppleScript days. Lemme check something real quick. I've popped open LibreOffice and feel immediately put off by its Macros function. It looks super powerful; a full dedicated code editor with watched variables for authoring in its scripting language. Or is it languages? Is it Macros or ScriptForge? What are "Gimmicks?" Just what is going on?
Google Docs is about the same, using Javascript for its "Apps Script" functionality. Here's a Stack Overflow post where someone wants to select text and set it to "blue and bold" with a keystroke and is presented with 32 lines of Javascript. Many programs seem to have taken a "make the simple things difficult, and the hard things possible" approach to macros. Microsoft Word reportedly has a "record" function for creating macros, which will watch what you do and let you play back those actions in sequence (a la Adobe Photoshop's "actions"). This sounds like a nice evolution of the BSW method. I say "reportedly" because it is not available in the online version, and so I couldn't try it for myself without purchasing Microsoft 365. I certainly don't doubt the sky's the limit with these modern macro systems. I'm sure amazing utilities can be created, with custom dialog boxes, internet data retrieval, and more. The flip side is that a lot of power has been stripped from the writer and handed over to the programmer, which I think is unfortunate. Bank Street Writer allows an author to use the same keyboard commands for creating a macro as for writing a document. There is a forgotten lesson in that. Yes, BSW's macros are limited compared to modern tools, but they are immediately accessible and intuitive. They leverage skills the user is already known to possess. The learning curve is a straight, flat line. Like any good word processor, user-definable tab stops are possible. Bringing up the editor for tabs displays a ruler showing tab stops and their type (normal vs. decimal-aligned). Using the same tools as for writing, the ruler is similarly editable. Just type the appropriate tab marker anywhere along the ruler. So, the lack of a ruler I noted at the beginning is now doubly frustrating, because it exists! Perhaps it was determined to be too much visual clutter for younger users?
Again, this is where the Options screen could have allowed advanced users to toggle on features as they grow in comfort and ambition. From what I can tell in the product catalogs, the only major revision after this was for the Macintosh, which added a whole host of publishing features. If I think about my experience with BSW these past two weeks, and think about what my wish list for a hypothetical update might be, "desktop publishing" has never crossed my mind. Having said all of that, I've really enjoyed using it to write this post. It has been solid, snappy, and utterly crash-free. To be completely frank, when I switched over to LibreOffice, a predominantly native app for Windows, it felt laggy and sluggish. Bank Street Writer feels smooth and purpose-built, even in an emulator. Features are discoverable and the UI always makes it clear what action can be taken next. I never feel lost, nor do I worry that an inadvertent action will have unknowable consequences. The impression of it being an assistant to my writing process is strong, probably more so than with many modern word processors. This is cleanly illustrated by the prompt area, which feels like a "good idea we forgot." (I also noted this in my ThinkTank examination.) I cannot lavish such praise upon the original Bank Street Writer, only on this Plus revision. The original is 40-columns only, spell-checking is a completely separate program, there's no thesaurus, no macros, a kind of bizarre modal switch between writing/editing/transfer modes, no arrow key support, and other quirks of its time and target system (the original Apple II). Plus is an incredibly smart update to that original, increasing its utility 10-fold without sacrificing ease of use. In fact, it's actually easier to use, in my opinion, than the original, and comes just shy of being something I could use on a regular basis. Bank Street Writer is very good! But it's not quite great.
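As an aside, the macro model described earlier (numbered slots holding short keystroke sequences, replayed exactly as if typed) is simple enough to sketch. This is my own Python illustration of the concept, not anything from BSW itself, and the key names are invented:

```python
# Bank Street Writer Plus allows twenty macros of up to 32 keystrokes each;
# playing one back feeds keys through the same input path as typing.
MAX_MACROS = 20
MAX_KEYS = 32

class MacroTable:
    def __init__(self):
        self.slots = {}

    def define(self, slot, keystrokes):
        if not 0 <= slot < MAX_MACROS:
            raise ValueError("no such macro slot")
        if len(keystrokes) > MAX_KEYS:
            raise ValueError("macro too long")
        self.slots[slot] = list(keystrokes)

    def play(self, slot, send_key):
        # Replaying is indistinguishable from the user typing the keys.
        for key in self.slots.get(slot, []):
            send_key(key)

# A hypothetical "save with overwrite" macro: the key names are made up,
# but the shape (menu open, command pick, confirmations) matches the text.
table = MacroTable()
table.define(0, ["MENU", "SAVE", "RETURN", "RETURN", "RETURN"])
```

Because playback just re-emits keystrokes, anything reachable from the keyboard is automatable, and nothing else is, which is exactly the limitation the program runs into with selections.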
Ways to improve the experience, notable deficiencies, workarounds, and notes about incorporating the software into modern workflows (if possible).

My emulation setup:

- AppleWin 32-bit 1.31.0.0 on Windows 11
- Emulating an Enhanced Apple //e
- Authentic machine speed (enhanced disk access speed)
- Monochrome (amber) for clean 80-column display
- Disk II controller in slot 5 (enables four floppies, total)
- Mouse interface in slot 4
- Bank Street Writer Plus

The three school settings mentioned earlier:

- At the classroom level there are one or more computers.
- At the school level there is a "computer lab" with one or more systems.
- There were no computers.

The manual save sequence behind my macro:

- Hit the key to open the File menu
- Hit the key to select Save File
- Hit a key three times (stepping through default confirmation dialogs)

I find that running at 300% CPU speed in AppleWin works great. No repeating key issues, and the program is well-behaved. Spell check works quickly enough to not be annoying, and I honestly enjoyed watching it work its way through the document. Sometimes there's something to be said for slowing the computer down to swift human-speed, to form a stronger sense of connection between your own work and the computer's work. I did mention that I used a 4-disk setup, but in truth I never really touched the thesaurus. A 3-disk setup is probably sufficient. The application never crashed; the emulator was rock-solid. CiderPress2 works perfectly for opening the files on an Apple ][ disk image. The files carry an extension that CiderPress2 tries to open as disassembly, not text. Switch "Conversion" to "Plain Text" and you'll be fine. This is a program that would benefit greatly from one more revision. It's very close to being enough for a "minimalist" crowd. There are four key pieces missing for completeness:

- Much longer document handling
- Smarter, expanded dictionary, with definitions
- Customizable UI, display/hide: prompts, ruler, word count, etc.
- Extra formatting options, like line spacing, visual centering, and so on.
For a modern writer using hyperlinks, URLs can trip up the spell-checker quite ferociously. It doesn't understand, nor can it be taught, pattern-matching against URLs so it can skip them.

fLaMEd fury 2 days ago

Contain The Web With Firefox Containers

What's going on, Internet? While tech circles are grumbling about Mozilla stuffing AI features into Firefox that nobody asked for (lol), I figured I'd write about a feature people might actually like if they're not already using it. This is how I'm containing the messy sprawl of the modern web using Firefox Containers. After the ability to run uBlock Origin, containers are easily one of Firefox's best features. I'm happy to share my setup that helps contain the big, bad, evil, and annoying across the web. Not because I visit these sites often or on purpose. I usually avoid them. But for the moments where I click something without paying attention, or I need to open a site just to get a piece of information and fail (lol, login walls), or I end up somewhere I don't want to be. Containers stop that one slip from bleeding into the rest of my tabs. Firefox holds each site in its own space so nothing spills into the rest of my browsing. Here's how I've split things up. Nothing fancy. Just tidy and logical. Nothing here is about avoiding these sites forever. It's about containing them so they can't follow me around. I use two extensions together: MAC handles the visuals. Containerise handles the rules. You can skip MAC and let Containerise auto-create containers, but you lose control over colours and icons, so everything ends up looking the same. I leave MAC's site lists empty so it doesn't clash with Containerise. Containerise becomes the single source of truth. If I need to open something in a specific container, I just right-click and choose Open in Container. Containers don't fix the surveillance web, but they do reduce the blast radius. One random visit to Google, Meta, Reddit or Amazon won't bleed into my other tabs. Cookies stay contained. Identity stays isolated. Tracking systems get far less to work with. Well, that's my understanding of it anyway.
It feels like one of the last features in modern browsers that still puts control back in the user’s hands, without having to give up the open web. Just letting you know that I used ChatGPT (in a container) to help me create the regex here - there was no way I was going to be able to figure that out myself. So while Firefox keeps pandering to the industry with AI features nobody asked for (lol), there’s still a lot to like about the browser. Containers, uBlock Origin, and the general flexibility of Firefox still give you real control over your internet experience. Hey, thanks for reading this post in your feed reader! Want to chat? Reply by email or add me on XMPP , or send a webmention . Check out the posts archive on the website. Firefox Multi Account Containers (MAC) for creating and customising the containers (names, colours, icons). Containerise for all the routing logic using regex rules.
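To give a flavour of the routing idea, here is an illustrative Python sketch of regex-to-container matching. This is not Containerise's actual rule syntax, just the underlying concept of routing a domain and all of its subdomains to one container:

```python
import re

# Hypothetical routing table: container name -> URL pattern.
# Each pattern matches the bare domain plus any subdomain of it.
RULES = {
    "Google": re.compile(r"^https?://([a-z0-9-]+\.)*google\.com(/|$)"),
    "Meta": re.compile(r"^https?://([a-z0-9-]+\.)*(facebook|instagram)\.com(/|$)"),
}

def container_for(url: str) -> str:
    """Return the first container whose pattern matches, else the default."""
    for name, pattern in RULES.items():
        if pattern.search(url):
            return name
    return "Default"

print(container_for("https://mail.google.com/inbox"))  # Google
```

The anchored prefix (^https?://) is what keeps a rule for google.com from accidentally firing on a URL that merely mentions it somewhere in the path.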

Kix Panganiban 2 days ago

Utteranc.es is really neat

It's hard to find privacy-respecting (read: not Disqus) commenting systems out there. A couple of good ones recommended by Bear are Cusdis and Komments -- but I'm not a huge fan of either of them. Then I realized that there's a great alternative that I've used in the past: utteranc.es. Its execution is elegant: you embed a tiny JS file on your blog posts, and it will map every page to GitHub Issues in a GitHub repo. In my case, I created this repo specifically for that purpose. Neat! I'm including utteranc.es in all my blog posts moving forward. You can check out how it looks below: Cusdis styling is very limited. You can only set it to dark or light mode, with no control over the specific HTML elements and styling. It's fine, but I prefer something that looks a little neater. Komments requires manually creating a new page for every new post that you make. The idea is that wherever you want comments, you create a page in Komments and embed that page into your webpage. So you can have 1 Komments page per blog post, or even 1 Komments page for your entire blog.
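For reference, the embed mentioned above is a single script tag; per the utteranc.es setup page it looks like this, where the repo value is a placeholder for your own public repo with the utterances GitHub app installed:

```html
<script src="https://utteranc.es/client.js"
        repo="your-user/your-comments-repo"
        issue-term="pathname"
        theme="github-light"
        crossorigin="anonymous"
        async>
</script>
```

The issue-term attribute controls how pages map to issues; "pathname" means one GitHub issue per URL path.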

Manuel Moreale 2 days ago

Karen

This week on the People and Blogs series we have an interview with Karen, whose blog can be found at chronosaur.us. Tired of RSS? Read this in your browser or sign up for the newsletter. The People and Blogs series is supported by Pete Millspaugh and the other 127 members of my "One a Month" club. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. Hello! My name is Karen. I work in IT support for a large company's legal department, and am currently working on my Bachelor's in Cybersecurity and Information Assurance. I live near New Orleans, Louisiana, with my husband and two dogs - Daisy, a Pembroke Welsh Corgi, and Mia, a Chihuahua. Daisy is The Most Serious Corgi ever (tm), and Mia has the personality of an old lady who chain smokes, plays Bingo every week at the rec center, and still records her soap operas on a VHS daily. My husband is an avid maker (woodworking and 3D printing, mostly), video gamer, and has an extensive collection of board games that takes up the entire back wall of our living room. As for me, outside of work, I'm a huge camera nerd and photographer. I love film photography, and recently learned how to develop my own negatives at home! I also do digital - I will never turn my nose up at one versus the other. I've always been into assorted fandoms, and used to volunteer at local sci-fi/fantasy/comic conventions up until a few years ago. I got into K-Pop back in 2022, and am now an active participant in the local New Orleans fan community, providing Instax photo booth services for events. I've also hosted K-Pop events here in NOLA as well. My ult K-Pop group is ATEEZ, but I'm a proud multi fan and listen to whatever groups or music catch my attention, including Stray Kids, SHINee, and Mamamoo. I also love 80s and 90s alternative, mainly Depeche Mode, Nine Inch Nails, and Garbage. And yes, I may be named Karen but I refuse to BE a "Karen". I don't get upset when people use the term, I find it hilarious.
So I have been blogging off and on since 2001 or so - back when they were still called “weblogs” and “online journals”. Originally, I was using LiveJournal, but even with a paid account, I wanted to learn more customization and make a site that was truly my own. My husband - then boyfriend - had their own server, and gave me some space on it. I started out creating sites in Microsoft Frontpage and Dreamweaver (BEFORE Adobe owned them!), and moved to using Greymatter blog software, which I loved and miss dearly. I moved to Wordpress in - 2004 maybe? - and used that for all my personal sites until 2024. I’d been reading more and more about the Indieweb for a while and found Bear , and I loved the simplicity. I’ve had sites ranging from a basic daily online journal, to a fashion blog, to a food blog, to a K-Pop and fandom-centric blog, to what it is today - my online space for everything and anything I like. I taught myself HTML and CSS in order to customize and create my sites. No classes, no courses, no books, no certifications, just Google and looking at other people’s sites to see what I liked and how they did it. My previous job before this one, I was a web administrator for a local marketing company that built sites using DNN and Wordpress, and I’m proud to say that I got that job and my current one with my self-developed skills and being willing to learn and grow. I would not be where I am today, professionally, if it wasn’t for blogging. I’ll be totally honest - I don’t have a writing process. I get inspiration from random thoughts, seeing things online, wanting to share the day-to-day of my life. I don’t draft or have someone proof read, I just type out what I feel like writing. When I had blogs focusing on specific things - plus size fashion and K-Pop, respectively - I kept a list of topics and ideas to refer back to when I was stuck for ideas. 
That was when I was really focused on playing the SEO and search engine algorithm game, though, where I was trying to stick to the “two-three posts a week” rule in an attempt to boost my search engine results. I don’t do that now. I do still have a list of ideas on my phone, but it’s nothing I am feeling FORCED to stick to. It’s more along the lines of that I had an idea while I was out, and wanted to note it so I don’t forget. Memory is a fickle thing in your late 40s, LOL. My space absolutely influences my mindset for writing. I prefer to write in the early morning, because my brain operates best then. (I know I am an exception to the rule by being an early bird.) I love weekend mornings when I can get up really early and settle into my recliner with my laptop and coffee, and just listen to some lofi music and just feel topics and ideas out. I also made my office/guest bedroom into a cozy little space, with a daybed full of soft blankets and fluffy pillows and cushions, and a lap desk. In all honesty, my preferred location to write is at a coffeeshop first thing in the morning. I love sitting tucked in a booth with a coffee and muffin, headphones on and listening to music, when the sun is just on the cusp of rising and the shop is still a little too chilly. That’s when the creative ideas light up the brightest and the synapses are firing on all cylinders. Currently, my site is hosted on Bear . I used to be a self-hosted Wordpress devotee, but in mid-late 2024, I got really tired of the bloat that the apps had become. In order to use it efficiently for me, I had to install entirely too many plugins to make it “simpler”. (Shout-out to the Indieweb Wordpress team, though - they work so hard on those plugins!) Of course, the more plugins you have, the less secure your site… My domain is registered through Hostinger . To write my posts, I use Bear Markdown Notes. I heard about this program after seeing a few others talking about using it for drafts, notes, etc. 
I honestly don’t think I’d change much! I really love using Bear Blog. It reminds me of the very old school LiveJournal days, or when I used Greymatter. It takes me back to the web being simpler, more straightforward, more fun. I also like Bear’s manifesto , and that he built the service for longevity . I would probably structure my site differently, especially after seeing some personal sites set up with more of a “digital garden” format. I will eventually adjust my site at some point, but for now, I’m fine with it. (That and between school and work, it’s kind of low on the priority list.) I purchased a lifetime subscription to Bear after a week of using it, which ran around $200 - I don’t remember exactly. I knew that I was going to be using the service for a while and thought I should invest in a place that I believed in. My Hostinger domain renewals run around $8.99 annually. My blog is just my personal site - I don’t generate any revenue or monetise in any way. I don’t mind when people monetize their site - it’s their site and they can do what they choose. As long as it’s not invading others’ privacy or harmful, I have absolutely no issue. Make that money however you like. Ooooh I have three really good suggestions for both checking out and interviewing! Binary Digit - B is kind of an influence for me to play with my site again. They have just this super cool and early 2000s vibe and style that I really love. Their site reminds me of me when I first started blogging, when I was learning new things and implementing what I thought was cool on my site, joining fanlistings, making new online friends. Kevin Spencer - I love Kevin’s writing and especially his photography. Not only that, he has fantastic taste in music. I’ve left many a comment on his site about 80s and 90s synthpop and industrial music. A Parenthetical Departure - Sylvia was one of the first sites I started reading when I started looking up info on Bear Blog. 
They are EXTREMELY talented and have an excellent knack for playing with design, and showing others how it works. One of my side projects is Burn Like A Flame, which is my local K-pop and fandom photography site. I actually just started a project there that is more than slightly based on People and Blogs - The Fandom Story Project. I'm interviewing local fans to talk about what they love and what their feelings are on fandom culture now, and I'm accompanying that with a photoshoot with that person. It's a way to introduce people to each other within the community. Two of my favorite YouTube channels that I have recently been watching are focused on fashion discussion and history - Bliss Foster and understitch. If you like learning and listening to information on fashion, I highly recommend these creators. I know a TON of people have now seen K-Pop Demon Hunters (which I love, and the movie has a great message for not only kids, but adults). If you've seen this and are interested in getting into K-Pop, I suggest checking out my favorite group, ATEEZ. If you think that most K-Pop is all chirpy bubbly cutesy songs, let me suggest two by this group that aren't what you'd expect: Guerrilla and Turbulence. I strongly suggest watching without the translations, and then watching again with them. Their lyrics are the thing that really drew me into this group, and had me learning more about the deeper meaning behind a lot of K-Pop songs. And finally, THANK YOU to Manu for People and Blogs! I always find some really great new sites to check out after reading these interviews, and I am truly honored to be asked to join this list of great bloggers. It's inspiring me to work harder on my blog and to post more often. Now that you're done reading the interview, go check the blog and subscribe to the RSS feed. If you're looking for more content, go read one of the previous 117 interviews.
Make sure to also say thank you to Benny and the other 127 supporters for making this series possible.


Imgur Geo-Blocked the UK, So I Geo-Unblocked My Entire Network

Imgur decided to block UK users. Honestly? I don’t really care that much. I haven’t actively browsed the site in years. But it used to be everywhere. Back when Reddit embedded everything on Imgur, maybe fifteen years ago, it was genuinely useful. Then Reddit built their own image hosting, Discord did the same, and Imgur slowly faded into the background. Except it never fully disappeared. And since the block, I keep stumbling across Imgur links that just show “unavailable.” It’s mildly infuriating.

Brain Baking 2 days ago

Using Energy Prediction To Better Plan Cron Jobs

Since the Belgian government mandated the use of digitized smart energy meters, we've been more carefully monitoring our daily energy demand. Before, we'd simply chuck all the dishes in the machine and program it to run at night: no more noise when we're around. But now, consuming energy at night costs us much more. The trick is to take as little as possible from the grid, but also put as little as possible back. In short, consume (or store) energy when our solar panels produce it. That dishwasher will have to run at noon instead. The same principle applies to running demanding software: CPU or GPU-intensive tasks consume an awful amount of energy, so why run them when there's less energy available locally, thus paying more? Traditionally, these kinds of background jobs are always scheduled at night using a simple cron expression like "0 3 * * *" that says "At 03:00 AM, kick things in gear". But we can do better. At 03:00 AM, our solar panels are asleep too. Why not run the job when the sun is shining? Probably because you don't want to interfere with the heavy load your end users put on your software system during the day. It's usually not a good idea to start generating PDF files en masse, clogging up all available threads, severely slowing down the handling of incoming HTTP requests. But there's still a big margin to improve the planning of the job: instead of saying "At 03:00 AM exactly", why can't we say "Between 01:00 AM and 07:00 AM"? That's still before the big HTTP rush, and in the early morning, chances are there's more cheap energy available to you. Cooking up a simple version of this for home use is easy with the help of Home Assistant. The following historical graph shows our typical energy demand during the last week (dreadful Belgian weather included): Home Assistant history of P1 Energy Meter Demand from 24 Nov to 28 Nov. Care to guess what these spikes represent? Evenings.
Turning on the stove, the oven, the lights, the TV obviously creates a big spike in energy consumption, and at the same time, the moon replacing the sun means we're taking from the energy grid instead of giving back. This is the reason the government charges more then: if everybody creates spikes at the same time, there's much more pressure on the general grid. But I can't bake my fries at noon when I'm at work, and we aren't supposed to watch TV when we're working from home… That data is available through the Home Assistant history API. Use an authorization header with a Bearer token created in your Home Assistant profile. If you collect this for a few weeks and average the results, you can make an educated guess when demand will be going up or down. If you want things to get a bit more fancy, you can use the EMHASS Home Assistant plug-in that includes a power production forecast module. This thing uses machine learning and other APIs such as https://solcast.com/ that predict solar power—or weather in general: the better the weather, the more power available to burn through (given you've got solar panels installed). EMHASS also internalizes your power consumption habits. Combined, its prediction model can help to better plan your jobs when energy demand is low and availability is high. You don't need Home Assistant to do this, but the software does help smooth things over with centralized access to data using a streamlined API. Our energy consumption and generation is measured using HomeWizard's P1 Meter that plugs into our provider's digital meter and sends the data over to Home Assistant. That's cool if you are running software in your own basement, but will hardly do on a bigger scale. Instead of monitoring your own energy usage, you can rely on grid data from the providers. In Europe, the European Network of Transmission System Operators for Electricity (ENTSO-E) provides APIs to access power statistics based on your region—including a day-ahead forecast!
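The "collect a few weeks of history, then plan inside a window" idea is only a few lines. Here is a sketch with made-up sample data; in Home Assistant you would fetch these samples from the history API with your Bearer token, and a real planner would prefer a forecast over past averages:

```python
from collections import defaultdict
from datetime import datetime

# Made-up history: (ISO timestamp, net grid power in watts).
# Negative values mean the solar panels are exporting to the grid.
SAMPLES = [
    ("2025-11-24T02:00:00", 400), ("2025-11-24T06:00:00", 850),
    ("2025-11-24T13:00:00", -150), ("2025-11-24T19:00:00", 2300),
    ("2025-11-25T02:00:00", 380), ("2025-11-25T06:00:00", 900),
    ("2025-11-25T13:00:00", -90), ("2025-11-25T19:00:00", 2200),
]

def average_demand_by_hour(samples):
    """Average net demand per hour of day, across all recorded days."""
    buckets = defaultdict(list)
    for ts, watts in samples:
        buckets[datetime.fromisoformat(ts).hour].append(watts)
    return {h: sum(v) / len(v) for h, v in buckets.items()}

def plan_job(samples, earliest, latest):
    """Pick the hour inside [earliest, latest] with the lowest average
    demand: 'between 01:00 and 07:00' instead of 'at 03:00 sharp'."""
    avg = average_demand_by_hour(samples)
    window = {h: d for h, d in avg.items() if earliest <= h <= latest}
    return min(window, key=window.get) if window else earliest
```

With this data, plan_job(SAMPLES, 1, 7) lands on 02:00, the quietest early-morning hour, while a daytime window would pick 13:00, when the panels are exporting.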
In the USA, there's the U.S. Energy Information Administration (EIA) providing the equivalent, also including a forecast, depending on the state. ENTSO-E returns a day-ahead pricing model while EIA returns consumption in megawatt-hours, but both statistics can be used for the same thing: to better plan that cron job. And that's exactly what we at JobRunr managed to do. JobRunr is an open-source Java library for easy asynchronous background job scheduling that I've had the pleasure to work on over the last year. Using JobRunr, planning a job with a cron expression is trivial. But we don't want that thing to trigger at 3 AM, remember? Instead, we want it to trigger within an interval, when the energy prices are at their lowest, meaning when the CPU-intensive job will produce the least amount of CO2. In JobRunr v8, we introduced the concept of Carbon Aware Job Processing that uses the energy predictions of the aforementioned APIs to better plan your cron jobs. The configuration for this is ridiculously easy: (1) tell JobRunr which region you're in, (2) adjust that cron. Done. Instead of a plain daily cron, you use a carbon-aware expression with a margin around it: this means "plan it somewhere between an hour before 3 AM and four hours after 3 AM, when the lowest amount of CO2 will be generated". The resulting string is not a valid cron expression but a custom extension on it we invented to minimize configuration. Behind the scenes, JobRunr will look up the energy forecasts for your region and plan the job according to your specified time range. There are other ways to plan jobs (e.g. fire-and-forget, providing intervals instead of a cron, …), but you get the gist. JobRunr's dashboard can be consulted to inspect when the job is due for processing. Since the scheduled picks can sometimes be confusing—why did it plan this at 6 AM and not at 7?—the dashboard also visualizes the predictions.
In the following screenshot, you can see being planned at 15:00, with an initial interval between 09:39 and 17:39 (GMT+2): The JobRunr dashboard: a pending job, to be processed on Mon Jul 07 2025 at 15:00. There's also a practical guide that helps you get started if you're interested in fooling around with the system.

The idea here is simple: postpone firing up that CPU to the moments with more sunshine, when energy is more readily available and less CO2 will be generated 1 . If you're living in Europe/Belgium, you're probably already trying to optimize the energy consumption in your household in the exact same way because of the digital meters. Why not apply this principle on a grander scale?

Amazon offers EC2 Spot Instances to "optimize compute usage", which is also marketed as more sustainable, but this is not the same thing. Shifting your cloud workload to a Spot Instance uses "spare energy" that was already being generated. JobRunr, and hopefully soon other software that optimizes jobs based on energy availability, plans using marginal changes. In theory, the decision can determine the fuel source, as high spikes force high-emission plants to burn more fuel. In always-on infrastructure, spare compute capacity is sold as the Spot product: there's no marginal change. The environmental impact of planning your job to align with low grid carbon intensity is much higher (in a good way) compared to shifting cloud instance types from on-demand/reserved to Spot. Still, it's better than nothing, I guess.

If the recent outages of these big cloud providers have taught us anything, it's that on-premise self-hosting is not dead yet. If you happen to be rocking Java, give JobRunr a try. And if you're not, we challenge you to implement something similar and make the world a better place! You probably already noticed that in this article I've interchanged carbon intensity with energy availability.
It's a lot more complicated than that, but for the purpose of Carbon Aware Job Processing, we assume a strong relationship between the electricity price and CO2 emissions. ↩︎

Related topics: java

By Wouter Groeneveld on 28 November 2025. Reply via email.
