Latest Posts (20 found)
Jason Scheirer 1 week ago

Learn To Live With The Defaults

Every deviation from the defaults slows you down from getting started and makes it harder to help others. I use about 5 distinct laptops/desktops on an average day, not to mention the VMs within them and other machines I shell into. Having a consistent experience is useful, but equally important is my ability to roll with the punches and get productive on a new computer without much ceremony. One thing I do to cope with this is a dotfiles repo with a dead-simple installation method, but note how conservative it is. No huge vim plugin setup. Very minimal tmux config (which is still bad, and I’ll explain why later). Not a lot going on. Moving from the defaults to a custom setup might make you more effective in the immediate term, but it makes things harder long-term. You have additional complexity in terms of packages installed, keymaps, etc. that you need to reproduce regularly on every system you use. As I complained about in Framework Syndrome, flexible software just moves the problem along; it does not solve the problem. Having a tool that’s flexible enough to get out of the way so that you can solve the problem yourself is double-edged: it does not provide the solution you want, it provides an environment in which to implement your solution. This means that everyone new to the software will not find it as useful as you do, right? To them it’s a blank slate, and is only useful with significant customization. This also affects teachability! With your hyper-customized setup you can’t be as effective a mentor or guide. One thing that makes it harder for me to advocate tmux to new devs is that I use one thing slightly idiosyncratically: coming from the older tool screen, I remap Ctrl-B to Ctrl-A for consistency. This has bitten me many a time! One example: once I had set up a shared VM at work and had long-running tasks in tmux that my teammates could check in on.
The entire setup was stymied by the fact that nobody but me could use tmux due to that one customization I had set up. Learn to lean in and be as functional as possible with the default setup. A kitted-out vim is great, but learn the basics as muscle memory. Prefer tools with good defaults over good-enough tools with the flexibility to make them as good as the ones with good defaults.
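For reference, the screen-style remap in question is only a few lines of tmux configuration. This is the standard recipe for it, not necessarily the exact dotfiles in question:

```tmux
# ~/.tmux.conf — remap the prefix from the default Ctrl-B to screen's Ctrl-A
unbind C-b
set -g prefix C-a
# let a second Ctrl-A pass the keystroke through to the program in the pane
bind C-a send-prefix
```

Anyone else attaching to the same tmux server gets this prefix too, which is exactly how the shared-VM story above goes wrong.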

Jason Scheirer 1 month ago

A Series of Vignettes From My Childhood and Early Career

A short set of anecdotes, apropos of nothing. When I was younger, I really liked programming! I loved the sense of accomplishment, I loved the problem solving, I loved sharing what I made with the people around me to both amuse and assist. One particularly wise adult (somewhere around 1996) took me aside and said, “You know, you’re lucky you enjoy programming, because you won’t be able to make a living on it in the future. Doing it for love over money is a good idea. “Coding is over. With Object Oriented programming, one person who is much smarter than any of us could hope to be will develop the library just once, and we will all use it going forward, forever. Once a problem is solved it never needs solving again. “In 5 years there’s going to be a library of objects, like books on a bookshelf, and every software problem will be solved by business people just snapping the object libraries they need together like LEGOs. They won’t need you at all.” I thought about this advice, and how Software Engineering would be ending by the time I finished school. I realized I had not even thought about my education yet. I was in middle school. Programming was not it, though, I knew that. I’m here nearly 30 years later and software continues to pay my bills, despite everything. Open source exists, and there are libraries I can piece together to solve problems all the time. New problem sets not covered by the garden path come up all the time. Clicking the LEGOs together continues to be a hard task. Every time we fix it at one level of abstraction we operate one level higher and the world keeps turning. Whenever I’m threatened with a good time and someone proclaims “this is it for you,” all that happens is my job becomes more annoying. Haven’t gotten the sweet release of extinction quite yet. Around 1993 or so was the advent of the “Multimedia Age.” Multimedia was the buzzword. Software had to be multimedia ready. Education had to teach children to be ready for the multimedia age.
If your tool, however inappropriate it was, did not have multimedia features, you were going to be left behind. You needed a video guide. You needed to be on CD-ROM. This was just the new normal. “Multimedia” just means “sound and video.” We had a high-concept term for a very direct, low-concept concept. And the multimedia boom fizzled out. It became boring. Nobody is impressed by a video on a website, and nobody thinks less of a website that doesn’t use sound and video if it’s not appropriate. You pop a tag in your HTML and your job is done. The amazing thing became mundane. The dream of “multimedia” became commonplace and everyone just accepted it as normal. I’m not aware of any industries that collapsed dramatically due to multimedia. Nobody really reskilled. Video editing is still a pretty rare skill to find, and we don’t commonly have sound engineers working on the audio UX of software products. In 2000 a coworker took me aside and showed me his brand-new copy of the IntelliJ IDE. “It’s over for us,” he said. “This thing makes it so programmers aren’t strictly necessary; one person can operate this tool and they can lay the rest of us off.” I was pretty awestruck: he got some amazing autocomplete right in the IDE. Without having to have a separate JavaDocs window open to the side, and without having to manually open the page for the class he needed documentation on, it was just there, inline. It gave him feedback before the compile cycle on a bunch of issues that you normally don’t see until build. That was a nice bit of preventative work and seemed to have the potential to keep a developer in flow longer. And then he showed me the killer feature “that’s going to get us all out of a job”: the refactoring tools. He proceeded to show me the tools, easily moving code to new files, renaming classes across the codebase, all kinds of manual things that would have taken a person a few days to do on their own. It was magical.
After some thought I said, “That’s amazing, but does it write new logic too, or does it just move code around?” He didn’t seem fazed by that, and doubled down on the insistence that these powerful tools were our doom. I made a distinction between “useful” code and “filler” code, but apparently what is valued is not the quality and nature of the code but its volume and presence. This tool definitely gave both volume and presence to the tiny human-written nuggets within. At my first job, in high school, I was working in an office in a suburban office park with programmers from many different local agencies. One guy I chatted up was a contractor: these people were highly regarded, somewhat feared specialists. The guy in question was working on a multi-year migration of some county health computer system from MUMPS to a more modern relational system. He showed me the main family of problems he was solving to show off how smart he was for solving them; they were largely rote problems of migrating table schemas and records in a pretty uniform way. But there were a lot of them, and he was working hard to meet his deadline! I thought about it and, seeking his approval and validation, set out to help him. To show what I could do. I wrote a Python script that could solve the 85% case (it was mostly string manipulation) and even put a little TkInter dialog around it so he could select the files he wanted to migrate visually. It ran great, but he looked a little afraid when I demonstrated it to him: “You didn’t show this to anyone else, did you?” “Nope.” “Oh thank God.” I take it he used my tool, because he had a lot more free time to goof off for the remaining six months of his contract. I don’t think he told anyone else what he had either, but I’m guessing he had a lot more MUMPS migration contracts lined up once he could finish them in a matter of days. At the same job, I was paid to maintain a series of government agency web sites.
One of my main tasks was to keep a list of mental health providers up-to-date on an HTML page and upload it to the server. This process was pretty mechanical: take the Excel sheet from my inbox, open it in Excel, copy the Excel table to an HTML table. Within a month I had a fully automated workflow. I lived in fear of being found out, and told no one that the thing I was getting paid to do was no longer being done by me. About 9 months later the department in question hired a full-time web developer for $45k/yr to bring their website in-house. I was costing them about $25/hr, probably skating under $2,000/yr for my outsourced services. This was clearly not about money. And what I feared did not happen. When I no longer had that work to sustain me, my managers just put me on something else. There’s always more work. In my last years of undergraduate education and my first couple of years out of college I worked on projects that did some sort of Natural Language Processing task. For these we required training data, and the more the better. On that, though, we had responsibilities. We had to make sure the data we had also came with some sort of license or implicit permission. You didn’t just steal a pile of PDFs or scoop up a person’s web site and put it in your training set. There were ethical constraints, and legal consequences. You acted above-board when training your AI models. There were times we’d train models on Wikipedia dumps. The results were always comparatively amazing when we trained on good, large data like that. Cogent. Interesting. Even a simple Markov chain on Wikipedia looked smart. When we wrote web crawlers, we wrote them to respect robots.txt. We kept them on local domains. The User-Agent field of our crawlers included our email address, and if an angry webmaster didn’t like the way we were crawling them, we’d fix it. Getting crawled aggressively all at once taxed servers and spammed logs, so we’d space our crawls out over hours or days.
If their robots.txt was missing or malformed and they still didn’t want us there, we’d exclude the site from crawling. We made sure we had explicit permission to collect data for our training corpora. The dot-com boom was a crazy time. The internet had just become mainstream and there was a new gold rush. Money was there just for the taking, so many VC-funded business plans were just “traditional business X, but on the internet!” and the money flowed. How it flowed. Most of these companies, however, didn’t really have a solid business model other than buying some servers and a domain name and “we’ll put this thing on the internet.” Out of this crash came green shoots: Web 2.0, which used the web natively, organically, and gave a good web-native experience. Eventually the dream of the internet, the promise of the hype, was made manifest, after a lot of people learned a lot of really unnecessary, really painful lessons. They spent less and put their things on the internet because they made sense on the internet of the present, not because the internet was the next big thing. The dream of the widespread, ubiquitous internet came true, and there were very few fatalities. Some businesses died, but it was more glacial than volcanic in time scale. When ubiquitous online services became commonplace it just felt mundane. It didn’t feel forced. Just five years later, it was the opposite of the dot-com boom: the internet is here and we’re here to build a solid business within it, in contrast with we should put this solid business on the internet somehow, because it’s coming. This is indeed a set of passive-aggressive jabs at the continuing assault on our senses by the LLM hype lobby.
The fully automated workflow, for the curious:

1. I used Windows Automation to watch my Outlook inbox.
2. When an email came in from the person who sent me the Excels, it would download the attachment.
3. It would open the Excel file in Excel using Windows Automation.
4. It would export it to CSV from Excel (the automation did this; I simply watched a ghost remote-control an Excel window that opened and closed itself).
5. It would run a Python script that injected that CSV data as an HTML table into the file.
6. It would run another Python script that connected to the FTP server and uploaded the file. It would randomly pause and issue typos so it looked like the FTP session was being operated by a human at a keyboard, so nobody suspected my plot.
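The CSV-to-HTML injection step can be sketched in a few lines of Python. This is a reconstruction under assumptions, not the original script; the marker comments, function names, and file layout are hypothetical:

```python
import csv
import html


def csv_to_html_table(csv_path: str) -> str:
    """Render a CSV file as a plain HTML table, escaping cell contents."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.reader(f))
    body = "\n".join(
        "<tr>" + "".join(f"<td>{html.escape(col)}</td>" for col in row) + "</tr>"
        for row in rows
    )
    return f"<table>\n{body}\n</table>"


def inject_table(page: str, table_html: str) -> str:
    """Splice the table between (hypothetical) marker comments in the page."""
    start, end = "<!-- TABLE START -->", "<!-- TABLE END -->"
    head, _, rest = page.partition(start)   # everything before the marker
    _, _, tail = rest.partition(end)        # discard the stale table
    return f"{head}{start}\n{table_html}\n{end}{tail}"
```

The real pipeline would read the existing page from disk, call these two functions, and hand the result to the FTP-upload step.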

Jason Scheirer 2 months ago

The Innioasis Y1 Music Player

I’ve been enjoying standalone MP3 players! The Innioasis Y1 kept coming across my radar, I liked the form factor, and it was $50. What the heck, why not. The community for this thing is insane, just as active as the people doing weird things with my RG35XX. It’s really cool seeing so many people doing neat things with such a simple piece of hardware. And like the RG35XX, part of the value proposition is that this is a cheap piece of commodity hardware that would not have been possible in this way even 5 years ago, but is now inexpensive enough and flexible enough to be an incredible product for the money. I saw you could put a flavor of Rockbox on the thing, so I did that. The UI out of the box isn’t nearly as polished, but there’s a neat community-supported updater that goes so far as to install skins for you. I’m currently using another Adwaita adaptation I found on the y1 subreddit which handles CJK correctly, which turns out to be important to me. Rockbox has the ability to create a play log file, so I can scrobble my commute/work listening again! I use the LastFMLog plugin to manually create a file, then use rb-scrobbler to upload it. It’s a manual process I only do every week or so, but it’s okay. Three surprises. Not quite complaint territory, but worth knowing about:

1. No external storage. My Shanlings had a TF card slot so I could expand and swap the storage easily. This is internal. 128GB, so it won’t hold my whole library, but it holds everything I care about.
2. No touchscreen. Again, coming from Shanling this took a little bit of getting used to. Pure iPod classic ergonomics, buttons only.
3. Build quality is not super solid. The screen is plastic, not glass, and it scratched almost immediately. You can definitely “feel” a center of gravity while the majority of the device feels light. No metal in its construction. It doesn’t feel brittle, but by no means is it a luxury experience.

This thing is a lot of fun to use, though! The novelty will eventually wear off, but it feels good to have something iPod-shaped in my life again.

Jason Scheirer 2 months ago

The Shanling M0s Music Player

My Shanling Q1 died after a couple of years of heavy use. I think it was probably fixable with some soldering, but I don’t have time for that. In a rush, I bought an M0s without reading the page, thinking it was the higher-end M0 Pro, but ultimately this was not a big deal! This still works like the Q1: it has a custom proprietary Shanling OS which works pretty well. It does Bluetooth fine, and it manages music via TF card the same. The thing I did not realize was how much I’d like how small it is: it’s a tiny little 1.5-inch square that’s about half an inch thick and has barely any mass. It just dangles at the end of my headphone cable. It would be very easy to lose if I were more careless. In all, it has served me for a year. No complaints.

Jason Scheirer 2 months ago

What You Do and Who You Are

Similar to The Wrong Conclusion is another cognitive anti-pattern: in the pursuit of identity, we see ourselves as being something, an inherent quality of ourselves, versus doing something, a role, a quality being temporarily practiced. This can lead to bad behavior on our part, and it limits our ability to grow and engage in introspection. I have learned this hard-won lesson over my lifetime: when I stake my identity on being a thing, then when I can no longer do the thing I lose my sense of self and spiral into crisis. I am a good programmer, so I am a thing with a clear identity only when I am programming. If I spend a week in the hospital, as I did a few years ago, I am no longer programming; I am infirm. I have lost my anchor. This is upsetting! I need to find a better I am or accept that programming is an I do. Alternatively, I never really thought of myself as a parent or a future parent. Then I had a kid! Whether I cop to it or not, I am a parent – I do parenting things every day. People who take moral stances often consider themselves to be the good guys, a quality inherent in themselves, versus being people who happen to strive to do good things. When you are tautologically good, this is bad! You can do evil, but by definition you are doing good, because good is what you are. I see it in this quote: “Conservatism consists of exactly one proposition, to wit: There must be in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect.” – Francis M. Wilhoit. This is entirely a “we are” versus “you do” mentality: the law is meant to protect citizens from harm. We are citizens, you are harm. We do bad but do not run afoul of the law; you do bad and have committed a crime. That is: if you cannot see that what you are doing is counter to your identity, you can be convinced that what you are doing is inconsequential to your identity.
Going from an I am to an I do mindset has been a tough lesson for me, but it’s been transformative in my worldview. It is, in many ways, tied to a growth mindset versus a fixed mindset: (I am / I am only) and (I do / I can become) being very closely related. I don’t “consider myself” a Javascript Guy, for instance, but if I make a presentation about some Javascript thing I researched and shared, I have done Javascript Guy things, and I am technically a Javascript Guy. As far as this goes, back to the growth mindset: you are only alive so long as you are active, and there is plenty of life left in you to be active. You are not a (metaphoric) flag planted in the (metaphoric) ground; you have (metaphoric) legs and can (metaphorically) walk to the next (metaphoric) place. You are capable of cruelty and laziness, and you are capable of kindness and industry. You need to be constantly vigilant to make sure you are currently doing good, and do not rest on the laurels of I once did good or I am good, so what I do is, by definition, good. Anyway, these were a few short paragraphs summarizing a spiritual crisis I spent 4 years mulling over. Enjoy! Hope you learn the lesson the easy way (from the mistakes of others, namely your supremely handsome narrator) rather than the hard way (having your own mistakes serve as a warning to others).

Jason Scheirer 2 months ago

I Don't Like that I Like Starship

I have my seed Starship config up as a Gist. Starship is a tool that frustrates me because it seems so bikesheddy and unneeded: a custom prompt manager. We already had shell prompt customization! I blindly install it on new machines for prompt customization! And Starship is written in Rust. People just use Rust to be cute. Then I realize that it’s okay to have nice things. The command line environment from the 90s can change. It doesn’t suck. It’s right up there with Perl in opaque tools other people do interesting things with, and then I steal the interesting things for myself. The TUIs don’t suck. I mock Textual and Charm openly and unrepentantly and still hypocritically use and enjoy the tools built with them. The terminal is changing because the people using it have the agency and hubris to use it differently. I can have a growth mindset and accept that:

1. The preferred command line tool to do a thing can change in my lifetime, and
2. The preferred workflow to accomplish a task can change in the face of new tools, and
3. The people who invented the old tools we’re replacing were not gods; they were just as fallible as us, so
4. Writing new tools that learn hard lessons from decades of using the old ones is fine. It’s not sacrilege.

But its killer features:

- Limited/Constrained/Opinionated: Other prompt customization schemes I’ve used have let you do anything, but you had to know how to do anything. I can’t think of cleverness I want in my prompt, I just want to see which Git branch I’m on. We’re all doing that. We all want that. Starship has a way to do that which isn’t brittle bash I have to maintain myself.
- Multiplatform: I have Windows Bash, Windows PowerShell, Linux Bash/Zsh, and macOS Bash/Zsh all driven by the same seed config. I can have it show the same information everywhere. I can use little emblems to let me know if I’m on my Mac, on a Linux machine, or on Windows, and whether that’s PowerShell or Bash, all right there.
- The Nerd Font Dark Horse: This system takes advantage of the glyphs in Nerd Fonts and normalizes abusing them. This adds the additional “burden” of installing a Nerd-Font-enabled typeface on all my apps with terminal editors, but I’ve already normalized that.

So here it is. All this talk for something that doesn’t look substantively different but compounds into feeling different over the course of the days and weeks I use it.
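As a concrete illustration of that config style, here is a minimal hypothetical sketch. The module names (`git_branch`, `os`, `os.symbols`) are real Starship configuration sections, but the values are illustrative, not the actual seed config from the Gist:

```toml
# starship.toml — a minimal sketch, not the author's actual seed config.
# Goal: show the current git branch, plus an OS emblem per machine.

[git_branch]
format = "on [$symbol$branch]($style) "

[os]
disabled = false   # the os module is off by default

[os.symbols]
Macos = "🍎 "
Linux = "🐧 "
Windows = "🪟 "
```

The same file drives Bash, Zsh, and PowerShell alike, which is the multiplatform point above.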

Jason Scheirer 5 months ago

There and Back Again: My Journey Into (and out of) Tailwind

I’ve been using the Tailwind CSS Framework for about two and a half years (as of July 2025) for my personal projects, and I used it professionally in my time at unstructured as well. In all, I think it’s a good thing. However, I don’t find myself thinking I’d use it for new projects at this point: I believe I have outgrown it. Here is a short story. Tailwind has a set of defaults that encourages a specific styling worldview that gets you a “modern”-looking UI which doesn’t feel offensive versus any other site on the internet. The pre-baked color choices and reasonable framing around layout options like columns give you a framework (ha!) within which to work and make a decent site. It’s less greenfield than a blank page and an open documentation tab. I’ve been doing web development on and off since… well, my teenage days, but I don’t specialize in frontend dev, and every few years when I come back to focus on it, things have changed. I don’t know what’s popular, I don’t know what’s possible, I don’t know what’s popular (hint: if it’s easy to do in CSS, it’s popular). Flexbox in particular is something I had a somewhat superstitious, vague understanding of. I could do little bits and pieces and sort of make it work, but being able to set the properties with just a few attributes made it faster. Getting that code live-reloading in my browser as I made mistakes was the best interactive, REPL-like experience I’ve had for gaining an intuitive understanding of the new CSS layout options. For that, Tailwind was invaluable. It certainly is a lot less work to type. This, I think, along with popular UI tricks like blurred, semitransparent elements being baked in as simple attributes, is the killer feature of Tailwind.
When I wanted a button-like element, I found myself going through a series of stages:

1. Style a single JSX component as-is with individual Tailwind styles, iteratively, until I get to something that I like.
2. Start on a second component that I want to make look similar to it.
3. Copy/paste the style.
4. Eventually I want to do a refresh system-wide, so I find all the overlapping classes, cut them out, and reuse them in backticked strings.

And then custom elements that further refine the style. I think this is where I begin to stray from the way most people use Tailwind: I have a tendency to write lots of very small components, and I am a bikeshedder in my visual styles, so keeping all the elements in sync requires the type of DRY that the class soup on every tag makes difficult. Eventually I find myself wanting a class with all the inline constants pulled out. For the sake of readability and some light DRY, I’d rather my buttons carry one semantic class than a paragraph of utilities; I then find myself reaching for the @apply directive, which gives me Tailwind-style class macros in my CSS. I can now use a more traditional approach to CSS: rather than a blast of appearance-based attributes on an element, I can give it a semantic role which has an associated set of visual properties attributed to it. I also needed this because I wanted to do some things with nested selectors that Tailwind did not entirely make possible (or at least easy). I found in places where it could do selectors, the shortened Tailwind attrs were harder to skim than fully-expressed CSS versions. Anyway, @apply is awesome, and I don’t see its use encouraged much in the literature I’ve seen online. There are probably pedantic reasons for that which are all very reasonable but don’t work for me. This worked for me. And, once I am happy with the style and relatively certain it won’t change much going forward, I transform the core CSS with @apply’d styles to fully expanded CSS and cut out Tailwind as a build-time dependency. I have used Tailwind as a way to bootstrap my design system, and then cut it out of my build when it has overstayed its welcome. So that has been my last two years with Tailwind. I like it; it helped me get back up to speed with modern CSS and get a working visual prototype out the door quickly; I got tired of its class-soup usage pattern; and I found that once I returned to old-fashioned CSS classes I could wean myself off it.
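To make the @apply pattern described above concrete, here is a minimal sketch. The class names and color choices are hypothetical examples, not from the actual codebase in question:

```css
/* A sketch of @apply: pull a repeated run of utilities into one
   semantic class. Tailwind's build step expands these into plain CSS,
   which is also what makes the final "cut Tailwind out" step possible. */
.btn-primary {
  @apply rounded-lg bg-blue-600 px-4 py-2 font-semibold text-white shadow hover:bg-blue-700;
}
.btn-danger {
  @apply rounded-lg bg-red-600 px-4 py-2 font-semibold text-white shadow hover:bg-red-700;
}
```

Markup then reads `<button class="btn-primary">Save</button>` instead of repeating the full utility string on every button.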

Jason Scheirer 5 months ago

Here's The Interesting Part

For the 500 lines of boilerplate, what are the five truly interesting lines that solve the problem? In the course of my problem solving, especially when I am solving a problem I think shouldn’t be hard, I think back to the essential core of the problem. Typically a fix is a couple of API calls or a single clever data structure. As a consequence of this, I typically annotate my Pull Requests with a “this is the interesting part” section. I recommend this approach: you are asking your reviewers for special attention on the original thoughts, and signaling that the other code you had to write is something you do not feel is as important, and are far more open to nitpicking on (versus thoughtful review). Pair this with This Shouldn’t Be Hard: you just need those five lines of beauty, and then you need to do the 500 lines of ceremony to make them real.

Jason Scheirer 5 months ago

This Shouldn't Be Hard

As you work your way to the goal, you should be clear-minded about how you aim to get there, and when you are slowed down for silly reasons, do not accept them as excuses. Say to yourself, “I know what I need to do; it shouldn’t be this hard.” Overall, you should be working toward writing functional code that achieves a goal. Oftentimes we get stuck on the actual specifics of a task: how do we set up the branch, how does the dev environment get up and running, etc. These are obstacles. Keep your eyes on the prize. If you can imagine a problem solved, if you have a clear vision for how to do it, that is what matters. Process and tools are a means to the goal, not necessarily a requisite of getting to it. You must achieve your end goals despite everything in front of you. You have a vision, and there should be as little friction as possible in realizing it. This means minimizing unnecessary steps and busywork. Automate it. Don’t do it if you don’t have to. Always stop and think to yourself: I know how to solve this; it shouldn’t be hard to get that solution out into the world. Treat each hindrance with the utmost contempt. Document and work around pain points, and make sure they do not remain painful for long. Can you make them less painful by changing the broken parts? Can you make them less painful by “sharpening your tools,” i.e. getting better at working with the system via practice? Break the rules if the process gets in your way: you have a goal, and it is not your goal to get stopped by the journey.

Jason Scheirer 7 months ago

Chains: My Attempt at an Itertools for Go

Top Matter: Codeberg for the library, doc for the library. It’s been six months since I’ve done this, but I’m finally writing about it! Go recently added proper iterator support to the language, which is something of an improvement over the prior pattern of spinning up a goroutine and communicating via a channel to get a stream of values. One of the tools in my toolbox that I use in coding interviews and some light data processing work is Python’s itertools. The nice thing about this library is it gives you a good set of conceptual building blocks to use as a frame around a problem, and a fairly clean way to use them. Once you’re familiar with them, you can take a harder problem, decompose it into those recognizable parts, and then have a stdlib function that’s already bug-free and readily available. While I was inspired to author this, I was writing a lot of Ruby and Typescript. Both Ruby and Javascript do processing over lists in a very chainy way; for example, Ruby likes to add lots of compact calls etc. as well to handle bad data. Having this syntactic sugar makes it easier to write complex logic, and it also helps conform the logic to one’s brain. Anyway, Go has iterators now, and I like using iterators. The first thing I wanted was my brain-poisoning syntactic sugar from Ruby/Typescript; how could I go about doing that in Go? Once I had a framework cooking I could start thinking of examples. My test suite doubled as a set of the cases I cared about, in cookbook form. Some things I wanted, in a few categories. The basics first: not much to write home about here; anyone can write these, and I encourage each person to do it themselves using Go iterators. There’s the usual suspects, along with various types of reductions that all boil down to the base case. The cleanups are similar to the above, pretty trivial to write; you can even treat these as specific cases of filtering, and compacting is one such case. I wanted the combinatoric generators too, so those were high on the list.
I found myself writing the code more and more generically as I went along, eventually ending with a generalized mess. Funnily enough, each combinatorial case was some combination of a few axes. Not as high-level as combinatorics, but I also wanted to take a window of N at a time. I like to concatenate iterators sometimes, e.g. to process the results of two tasks in a single queue. This works in Python, and it was a breeze to write here as well. Got the base case down. Chaining works in a pinch for “gluing” together iterators in Python, and that was pretty trivial to write (I called it “Flatten” because the first iteration took an iterator of iterators, thanks to Ruby/Javascript brain, then I made one that took variadic iterator args too). Some common use cases I find myself writing a lot just aren’t in the stdlib in Python. They generally involve taking many iterators and unifying them in ways dependent on the structure of the iterators themselves, either by length or by value. One use case is round-robining a set of iterators until they are all exhausted. We’ve got a set of inputs and want to consume from all of them until we run out. For that, there’s a round-robin, which works exactly as expected. Each iterator can have a variable number of entries, but all entries are considered by index, so we don’t exhaust one before going to the next; we try to consume them all equally. Another use case: I have 3000 CSVs, each with rows in order of date. The time ranges may overlap in some cases. I wanted a unified stream of all the rows in order. For that, I wrote a merge, which is at its core a pull-on-demand heap. Once the smallest value has been pulled off the heap, yield it, then grab the next value from the iterator that provided that value and place it on the heap.
I wanted to be able to chain calls like I can in those other languages, so the first thing I thought to do was design some sort of struct or interface with the various map/filter/etc. methods. Immediately there’s a problem, because interface methods can’t introduce their own type parameters in Go, so we do a struct instead. This sort of works! You can see how we do this in the type. But to handle two types (say we’re mapping from one type to another), we have to have a chainable that carries two, and so a second struct was born. Now how do we get from a one-typed chainable to a two-typed one? I decided on a top-level function that creates a transition type. Now what if we need a third type? This is getting messy 1 . This pattern works for simple cases just fine, but it falls down once we get into the variadic world. Go’s obviously stunted-on-purpose generics are preventing us from doing this syntactic sugar in a clean way, but they are also suggesting a different way to do it. I was in love with my ability to do chained iterators, but they got clunky. Go generics only apply to functions and types; you can’t template an interface’s methods. So while chaining is fun and cute, in Go you’re better off composing plain functions, which, quite frankly, feels a lot more Go-like and less foreign than the cute way we do it in other languages. You can see I gave up on chaining in the above examples and just do individual iterators. And so, instead, I found myself using single iterators via the adapter function and converting back to slices at the end. As I got further into implementing the various functions I wanted, I moved away from the chaining pattern into simple functions. It’s still ugly, because a nested pipeline reads in the opposite order, but doing each step as an assignment keeps the order at the expense of slightly more verbosity. It’s not as aesthetic, but it works. I think the most practical example I can give is the test in the cookbook that generates a sequence of fights in Street Fighter. If you play as a playable character, you can fight all the other playable characters and each of the bosses.
You cannot play as the bosses. As such, we have two separate matchup types, and gluing the two together is pretty clean. Once I started doing things the Go way, it really increased the pace of development as well. Being able to implement each iterator as a simple function meant I could focus on implementation and not boilerplate. Constraining myself to one-valued and two-valued sequences relieved me of the analysis paralysis of variadic iterators: one value, and when it made sense, two. I've made this available as an importable library, but many of these patterns are easier to just copy and paste into your code. They should also be inspiration: this is a fun problem to solve! Solve it yourself!

1. After posting this, I discovered another person had done something similar but used runtime reflection to sacrifice compile-time safety for the syntactic cleanliness I was going after. I respect the approach and I like the depth of knowledge of the language needed to do this. I'm going to say which implementation is better is a matter of knowing the tradeoffs: are you willing to sacrifice compile-time safety for convenience? It's probably an unequivocal yes if you have good test coverage. ↩︎

The cookbook covers map/filter/reduce, cleanups (compacts, nonzeroes, etc.), combinatorics, and higher-level stream processing (various merges). There are predicates for testing whether the entire sequence, or any element of it, satisfies a condition; another function gets just the length. All do what they say; the head/tail accessors are the CAR/CDR you didn't know you needed. You can start or end the sequence at a particular point. You can repeat an item N times, rotate a slice so the first N items are moved to the back, repeat each element N times, or cycle an iterator in infinite repeats. A flattening function takes an iterable of iterables and turns it into a flat iterable, but it doesn't do it to arbitrary levels of nesting like Ruby does. There are rules here, dude. There's a largely useless tap-style function, kind of like a forEach or a visitor that passes the item along. A partitioning function splits an iterator into two based on a partition function, allowing you to e.g.
split good/bad inputs into separate pipelines. A simpler function just returns the first value of each key grouping instead. Similarly, another takes an ordered set of items and "bins" them based on a key function. There's also a way to get a tear-off copy of the iterator.

The design axes I weighed for these functions were length (one, fixed, or variable), ordering (in order of occurrence, or free to vary), and repetition of elements (on or off). The two matchup types in the Street Fighter example are Player v Player, unordered, and Player v Boss, unordered.

Jason Scheirer 8 months ago

Hate What You Know

You should be familiar enough with your tools to be able to list critical things about them, even though they're what you still choose to use every day. The title Hate What You Know is admittedly vague, I know! There are a handful of directions I'll go with it; bear with me. And at least to some extent I don't mean "hate," I just mean "know well." I'm an advocate of being a craftsperson out of anger. Do things because they need doing and dammit, you can't live in a world where this problem isn't solved anymore. I'll make a deliberately outrageous claim: doing something out of need is craft, doing something out of compulsion is art. A craftsman makes a work because they see something missing in the world. An artist makes a work because they see something of a possibility in the world. Professionally, at least, you're hopefully operating in the former camp. In the latter, you're either outrageously lucky doing what you love in a place of psychological safety or secretly pissing everyone off around you by being a perfectionist primadonna. Hate is a strong feeling that can only really be fostered from a sense of strongly personal experience with a topic, person, or a tool. To truly hate a tool, that means you understand it intimately. To understand it intimately means you can use it well. It's easy to dismiss a technology based on a pro/con list or based on some surface feature. "I could never use Python," developers say, "the indentation-based syntax feels too weird." After a month of actually using Python, a surface judgment like that probably goes away, overshadowed by actual deeper issues with the language. There are plenty of worse things down the line once you're familiar with the system. I've seen the same said about Go and its (lack of) solid OOP principles. Once you've used Go and you've gotten things done, there's a whole new world you discover: how to live without that core thing, and brand new things to dislike but tolerate further down.
In this, actually using a tool gives you real things to dislike. An analogy I like to give when discussing this: imagine two people, a person who cut you off in line at the bank and a close relative who happens to be a tremendous fuckup. You're standing in line at a bank, you're 45 minutes in, you're one or two people away from being helped. Someone walks in front of you, saunters right up to the counter, and starts getting helped. You're seething with rage. All you feel for this person is the singular emotion of anger. They could be someone's most important person, it may have been a mistake, it may have been for an excellent reason, but that person cut you off and 100% of your interactions with them are that act of being cut off. All you know about this person is this one moment where they wronged you. Then think about a loved one, a close relative. You grew up with them. You watched the process of them developing as a person alongside you. You love this person. Now, this person may be something of a fuckup: they've been in and out of jail, they do things you know are bad for them, they continue to hurt themselves, with you as collateral damage. Last time they stayed at your place you came home and they'd sold your TV. More often than not, they hurt you. But you remember the good times. You have a history. This is a fully developed human being and a core character in your life. Their rate of wronging you isn't close to 100%, but the number of times they've wronged you dwarfs that stranger at the bank. You should hate this person with the same intensity as that person at the bank, maybe more. But you don't; it's more nuanced because of your shared experience. You should aspire to withhold loud, public judgment of a technology until you can look at it with a "well, sure, but…" and a long wistful sigh. Hate, as opposed to love, comes with a much different place of origin when setting expectations: you presume the worst rather than the best when approaching it.
This sets you up for disillusionment when the thing you can see no wrong in eventually fails you in some way. Never meet your heroes. Heck, never watch a hero on a bad day. You should look for ways to hate your tools. By this, I mean the following: every time you have solved a problem with a tool, try solving it with a different one. The contrast should give you a wider perspective on how your preferred way of getting things done could be better. I've come full circle with CSS frameworks recently: I forced myself to use Tailwind until I liked it, then at work I had to use Sass. Sass sucks in a lot of ways that Tailwind doesn't. The converse is true, too. But knowing how other people solved the same problem, you can recognize the clumsy parts of other approaches and take the good parts of one and try to apply them to the other. And the conclusion to this, of course, was me becoming better at vanilla CSS, because the world should not make sense. Again with the not-quite-hatred talk: I suppose I should write a conclusion. Here it is. No more words, just questions to ask yourself when you try a different tool: In what ways was it easier? In what ways was it harder? Are you growing in your use of the tool? Is hating it making you uncomfortable? Is that because you're in a place where you're being challenged by it and growing? Are you putting in the appropriate effort to give the tool a fair shake? Or is it because you're outgrowing it? Have you grown out of it for absolute reasons (it is not a useful tool) or relative ones (you've mastered it, or you want to grow in a different direction)? After you've outgrown it, can you still advocate it to others who are earlier in their journey?

Jason Scheirer 8 months ago

Kicking the Tires on Harper

I’m trying out the Harper tool in VS Code. I’ve been aware of its existence for about 6 months and didn’t think much of it, just another rehash of existing tools. But I’ve come around to the idea that rewriting old tools isn’t a horrible notion: the people who crafted them were no smarter or dumber than ourselves; they just came before us. Rewriting the same thing is a lot like repaving an old road: it’s core infrastructure, but with new materials maybe we can do it better. If anything, we should share in some of the fun the people who came before us got to have. Anyway, so far so good. It’s a little less annoying than piping my posts through command-line checkers, which is about as sophisticated as I’ve managed to get in this writing workflow. It’s nice. I thought it was too invasive and preachy on first use long ago, but this time around, at least on first shake, it’s just enough for grammar/spell smell testing. It can run ubiquitously as it builds to WASM and an LSP, so I can learn to lean on it in a wide variety of environments.

Jason Scheirer 8 months ago

The Wrong Conclusion

A common pattern I have come to recognize everywhere as an anti-pattern: a person starts a dialogue establishing a problem, upon which we all agree; the person continues down the same line of conversation, outlining a solution; the person then comes to a conclusion which is a call to action. I see this a lot because step 1 seems to short-circuit the thinking behind the rest of the logic. You've got a narrative going that you feel compelled to keep listening to, and you assume it's only going to stay sane. However, just because you have identified a problem does not mean your solution is correct. Stop yourself when a person opens a paragraph and remember that where they are going is not necessarily the only path, or even a sensible one. The example I see most currently is about AI somehow matching human intelligence, usually in the context of LLMs: LLMs exist, and seem to have novel properties that mimic humans producing language; therefore, LLMs are the same as humans producing language; therefore, LLMs have all the other capacities of humans and have been endowed with human nature and will replace us all. I've decorated these talking points with some straw men that are usually left implicit when presented, but you get the idea. Just because you agree with the problem statement does not mean that the conclusion is correct just because it occurs a few sentences later. Here's an interesting blog post elsewhere on narratives and being misleading as well.

Jason Scheirer 10 months ago

Throw your Team a Bone

You like little treats, and so do other people! Sometimes you gotta give your peers something to make them feel good too. Sometimes this is called the hairy arm technique, sometimes it's a manifestation of Cunningham's Law. What you do is this: present something obviously wrong or easily correctable when you make a presentation. This lets the other person feel smart ("you need to fix this broken thing that you made broken") and changes the focus of the discussion: you get the nitpicking out of the way early, so the unneeded but inevitable stage where it happens doesn't derail the conversation about the actual work. Some examples I've used over the years: Name a new source file slightly wrong in the PR. You'll be asked to correct the name, and then you won't have to have a long conversation getting the reviewer up to speed on the logic of the method you spent a week thinking through; they're satisfied with the feedback they've given and do not feel obligated to go further. Or just ship a new UI without being provided a design. You'll get some minor markup corrections, but you don't have to go through a lengthy planning phase. Just a suggestion!

Jason Scheirer 10 months ago

Give Yourself Little Treats Sometimes

Palate cleanser tasks are morale boosts. I’ve found that my productivity varies wildly based on the time of year, the phase of the moon, my mental state, the quality of my morning shower thought session, and how enthusiastic I am about what I’m working on. I like to give myself “little treat” issues occasionally to keep my motivation up: issues I pick up outside of the product backlog that are quick to fix, fun to do, and give me little dopamine hits. These keep my average motivation level up, give me a chance to remember why I’m a software engineer, and don’t really eat into the time I already have budgeted to work on other stuff: I’m highly motivated to get it done, so I do it quickly and enthusiastically and it doesn’t make my slog work any slower to deliver. Give yourself treats. Side projects. Little tweaks. Fun stuff.

Jason Scheirer 10 months ago

A Children's Treasury of Critiques and Concerns about the Current LLM Hype Cycle

We’re a couple of years into the LLM hype cycle and the volume at this point is deafening. I use LLMs, and I find them interesting, and they have played a major part in my professional life for the past two and a half years. That said, I can’t declare them an unqualified success or a miracle cure for anything. I approach them with a combination of skepticism and grief. I understand they are here to stay and I will do my best to use them effectively in the use cases where they are indeed effective. However, I think it is both foolish and irresponsible to myself/my employers/my friends/my peers/society in general to see them solely in a positive light and I think we need to consider critically how we want to approach them. As an undergraduate and about three years into my post-collegiate life, I did research in what could be considered “natural language processing” or “computational linguistics”: I worked on the HAL Model, which was an early high-dimensional model for approximating language understanding, and later a software system employing all kinds of cool cutting-edge language-in-computers techniques (early Support Vector Machine tooling being one I worked on). There was one lesson I learned from both: Better models are good, but better data is better. Far and away, the best way to get better results was not to develop a better model, it was to develop toolchains for collecting more and better data. We got good results even with okay models when we fed them Wikipedia. We got bad results even with cutting edge models if we just fed them a couple of emails. Good data was what made these things effective. That lesson colors my experience of the current success of LLMs: they are interesting technology from a modeling perspective, but the more interesting part is all the data that went into making them. Even “open-source” LLM models do not share their training sets.
We have no idea of the full provenance of these troves of training data driving current models. We know that there are pirated books in Meta’s Llama models for sure. Screenwriters’ Guild-protected screenplays have shown up in other datasets by virtue of the fact that there are pirated copies on the internet, which the training sets then integrate. Autocompletes obviously trained on code with restrictive licenses show up in coding assistants. Nobody’s getting paid money if their work is added to an LLM. The value is contributed by armies of human authors, and the profits, whenever they may come, are collected by the companies using the data. Spotify’s model is genius in that it barely pays its artists. OpenAI’s model is even better: it doesn’t pay its artists at all. It doesn’t even acknowledge that humans made the fuel it needs to go. You can argue that people consented to use of their ‘content,’ which is a sterile way of simplifying any product a human being can express into words, be it art, opinion, fact, conversation, etc., but not everybody consented, there is non-consensually acquired material in them right now, and many people, knowing now what their data is being used for, want ways to possibly opt out of consenting in the future. A Reddit user is screwed, essentially, in that a community they have spent decades of sweat and tears contributing to is now just training data to a robot somewhere. They’re going to have to sever their relationship with their friends and community on Reddit in order to stop feeding a machine they do not like (or are fundamentally, ethically opposed to), which did not exist 7 years ago. All models rely on their training sets, which are fixed at a point in time. To get accurate information beyond the date a model was trained, you need to do one of three things: Even a model that was trained last week will not be able to tell you the current weather.
LLMs are not predictable, and I mean this in two ways: If we can’t depend on an LLM to reliably do the same thing the same way every time, it’s not dependable like a simple machine in a factory. Or a computer running deterministic software. We have added nondeterminism to a set of tools that used to be deterministic. We’ve given up a property of general-purpose computing. Manufactured goods are made within tolerance margins for the dimensions of their parts. Classical computation models are usually accurate, and in cases where they may not be fully accurate (see IEEE 754 floating point numbers) they are still inaccurate in a predictable, regular way. We’ve seen emergent issues with LLMs that give us pause and act as comical, widely-published issues that suggest these things aren’t armed with the unlimited potential they claim to have: In every case, we’re finding limits to the technology that get fixed in the next release, but are something of an indicator that we are no longer in the world of boundless possibilities and uninterruptible optimism for what they are capable of: the problems get fixed in subsequent models with what are essentially “patches” to work around these cases. We aren’t greenfield, discovering staggering capabilities with new LLM models anymore, we are in a stable “bugfix” phase. I’m copying a graph on the technology S-Curve from an old book ( Putt’s Law ) to demonstrate my thoughts here: we’re rounding the success intersection point, everyone is projecting a straight line up, we’re going to slow down to a crawl at some point if we aren’t already. One way that we’re getting around the recency and context problem is with tool-call-enabled models. We now have a higher level construct for bundling tools called the Model Context Protocol. Just as a human being can use a pocket calculator, we can give an LLM access to a calculator tool to do math, which LLMs structurally can’t do like humans can.
Just as a human being can use a web browser to look up weather, we can give an LLM access to a weather API (or, more roundabout, a browser automation that can open a weather web site). Just as a human being can use a search engine to find relevant information, we can give an LLM a tool to search through masses of text and use that to enhance its answers. Unlike (sober) human beings, models can imagine that their pocket calculators have a button labelled “40,” or forget the exact spelling of the files they just listed and not bother double-checking. Very similar to the last point above with the search engine, we can make an LLM “know” things not baked into its model by preëmptively doing search engine queries on its behalf and by injecting snippets into its conversation context it can “know about” in its context to appear to be able to comment on user-relevant data. Some models now have larger context windows with which the RAG approach is ‘obsolete,’ but technically the methodology is the same: inject massive amounts of non-model data into the context to get the LLM to be able to make mention of it. This is where “synthetic data” in training sets can also come in in a big way: injecting thousands of copies of times tables and lookup tables for the values of trig functions so that an innumerate model can emulate numeracy. LLM APIs as a business plan don’t seem very viable long-term unless they engage in some major enshittification. Enjoy ChatGPT and Claude at affordable rates and without annoying limitations now, because not long after you’ve let them become an indispensable part of your workflow they’re going to get real bad, real fast. Using the term “Open Source” to describe openly available models is, frankly, an incorrect term. Not overly generous, not iffy, straight up factually false. A lie.
A freely available model as a digital artifact is not the source code that was used to generate the model and it is not a delivery mechanism for unveiling the data that went into them. Open models are largely produced by large organizations as a strategic hedge against the larger commercial players – IBM, Meta, etc. don’t want to find themselves in a place where they are dependent on LLMs – if they succeed long term – hosted by a semi-hostile third party. When the commercial gravy train ends, the open access model gravy train will end as well. One credit and a voice in the dark in favor of these open models is this: you can control the hardware they run on, you can control your own uptime, you can control the fact that you are still operating on the same underlying model file from day to day. If the commercial players go dark for whatever reason, you can still use these models on your own servers to drive LLM functionality where it makes sense. If LLMs are ever fully forbidden you still have a local copy you can run in secret for research/entertainment purposes. I end the section with this because it’s my biggest concern with open models over cloud hosted: open models still suffer from the same ethical problems of the black box training sets (IBM alleges it’s less guilty, though). If there is an intellectual property reckoning for LLMs and the big players are forced to leave the market and you still have LLMs running, you may be legally obligated to disclose which models you use, similarly to how much commercial software in modern times offers a very large colophon of open source license agreements for all the libraries they consume. It’s very confusing to see something that does not access the internet post the cURL license in its documentation; it will be equally confusing to see things like accounting software disclose they use Deepseek somehow in the future.
I like to tell people that commercial LLMs are in their Uber phase, and I mean this in two ways: 1) they are blatantly illegal and hoping to normalize their bad behavior through mass adoption and 2) priced in a way to hook you, to get you dependent on using it in fiscally unsustainable ways. When Uber and AirBnB were first entering widespread release, there was immediate backlash: these things were illegal. Uber famously paid its drivers’ fines and tickets for operating as drivers, and AirBnB lobbied the hell out of local municipalities. Both also played heavily into FOMO: “San Francisco is doing it and San Francisco is a world class city, don’t you want to be a world class city?” Famous smart-on-paper people pushed real hard on this, even claiming that Uber’s right to break the law was akin to the civil rights struggle – a stultifyingly wrong claim that can be immediately and viscerally rejected by anyone with common sense. We’re all engaging in intellectual property theft right now when we use these models. They’re betting on a “they can’t arrest all of us” approach in which the norms and laws which worked are forcefully changed to further enable this theft of creators’ property. Again: the value of the LLMs is the data, not the model, and human beings have to provide that data. Large LLM companies are only going to be profitable if they are allowed to use data at very small margins, if not zero cost (stealing training data with impunity). I have not heard a single compelling case for the unequivocal legality of what’s going on: every assertion I’ve seen that OpenAI has a ‘right’ to this data is qualified with an apparently or a probably. I remember when every Uber ride was heavily subsidized by its investors. I could Uber Pool to multiple venues across San Francisco for about $10 on an evening out.
Now that Uber has to be profitable, I don’t even consider it except as a strategic last choice, and it’s significantly more expensive now (that same night out now would run me $75). Don’t fool yourselves into thinking that once your default choice is ‘I’ll ask ChatGPT’ that ChatGPT won’t find a way to squeeze for as much as it can because it’s insinuated itself into your lifestyle. LLMs are, in essence, a lossily-compressed, fuzzy searchable copy of a lot of contents found on the internet. Google used to be good. We used to be able to type these types of question into Google and get answers. Using an LLM as a search engine is just an indictment of that slow decay of Google’s core product and the fact that we simply can’t have nice things without other people messing it up – SEO monsters started posting garbage on the internet to game the search engines and ruined it for all of us. Your chat-with-an-AI session is a sad, watered-down echo of a quality user experience we once had a quarter of a century ago with a single text input box. This is something that I see regularly on my radar as a software engineer. I use LLM code assistance, but I do not think it is worth the trouble at least 75% of the time. Languages and frameworks that exist today can be handled no problem, but if we limit ourselves to what models are trained on, we can’t adopt new languages, frameworks, libraries. An LLM-guided software engineering world is a world where software engineering is frozen in time. I’ve already seen open-source projects which provide RAG-like pipelines trained on their documentation since they are not old enough to be embedded in models’ training sets. This seems like a regression. Open source developers will be expected to write and maintain code, then write and maintain documentation, then write and maintain toolchains to enable other developers to supervise coding assistants clumsily attempting to write code. Burnout is high enough for open source projects.
This will, in my estimation, make open source even less attractive of a pastime as the sheer amount of time and effort and toil will bias toward fully commercial solutions. Writing code is usually an exploratory process for me in which I ‘feel out’ the problem space. I very seldom use plain English to describe my problems because English is less precise and less able to express to the computer what I want to do. It’s also harder to write. We use programming languages because they’re good at telling computers what to do. They are formal, tool-manipulable, machine-readable symbols that are as expressive as anything could be for doing computation. Telling a computer to write its own code by asking it in English is just adding a step. Not to mention the fact that reading code is always harder than writing it. We have let the computer do the easy part and asked it to inflict the hard part on us from minute one of the code existing. By letting the tool do the coding we’re adding cognitive load to a process, and that process already worked fine as-is for many of us. We’ve made life harder because it’s cool. LLM-generated code can only be up to about two years old at this point (“this point” being early March 2025): we’re in very early days. In my 2+ decades of software engineering, getting a product out is important, but then there are years, if not decades, of follow-up fixes and expansion on that software. We have no idea how maintainable LLM-assisted codebases will be, and there are already examples of LLM-driven projects growing beyond the scope of e.g. Claude’s ability to understand them and leaving the developers high and dry, unable to understand or extend the existing codebase. I’ve fixed Fortran that was older than myself at one point.
Getting a new React app out with trivial functionality is not the hard part; it’s the toil of maintenance that happens over the course of years that is where most of what I would consider “software engineering” happens, and it does not seem like that has been a focus or a strong point of LLM-driven development thus far. I don’t think an LLM is going to steal my job. Every change in technology that made developers more productive over the last 50 years has only increased the demand for software, and every productivity boost that was supposed to cut the number of people needed to make and run software has only grown the number of people involved. Examples: your startup probably has more Infra/DevOps engineers than a startup of the equivalent size had Sysadmins in the pre-cloud era. CRUD apps becoming easy to make with better tools led to a Cambrian Explosion of people writing CRUD apps. Usually improvements to the craft of software engineering have been via better tools and languages. In the instance of the LLM-assisted coding world, we have given up on the idea of inventing a better programming language and have instead decided we can’t do any better and are using non-deterministic macro expansion/copy-paste agents in the form of LLM-enabled development tools. This would be like C never taking off because someone invented a better assembly macro processor, or Python/Ruby/etc. never taking off because we never trusted the interpreters to get memory management right on our behalf. We have always come to trust the abstractions below us; we didn’t stop where we were because we didn’t think things could be easier. We’re just asking an LLM to write more and more code at the same abstraction level, rather than find an appropriate way of expressing these computing concepts in a better way. C compilers wrote mountains of assembly, but did it deterministically and hid the details from us with an abstraction layer.
Python interpreters run massive amounts of C under the hood (also deterministically), but as a developer you’re shielded from the runtime through abstractions. Decent web frameworks never expose raw TCP sockets to developers. We used to climb the mountain, not make camp halfway up and decide this was high enough. We had “agentic” software before, it was just called “computer programs” or sometimes “automations” or “APIs” or “integrations” or “cronjobs.” The generic term “AI” was hijacked to represent LLMs and only LLMs, and the terms for traditional software were unnecessarily thrown away for a new set of terms for the same thing. There is so much marketing baked into the terminology. We’ve gone from arguing the virtues of the thing to arguing based on pure enthusiasm for the thing. I’ll say this is my least objective or reasonable prong critiquing LLMs as they are currently practiced, but the strongest one I go to when asked what I think about LLMs: cheerleaders are fucking annoying, and FOMO is not a good reason to do something. When I hear ridiculously stupid, not-backed-up-by-evidence claims about what LLMs can and will do I immediately knee-jerk block the person for being so stupid. The people who know the least about what they are talking about are being the loudest. I pose the following questions (the answers to some of these are unequivocally yes, but they are still important to keep in mind): You may come to the conclusion that yes, LLMs are worth it. That’s fine. I just hope you went through a process of thought and introspection and considered the pros and cons of the technology before loudly and enthusiastically adopting them into your life. Here is a similar blog post that shares many of my points. One particularly resonant quote: …I’m more concerned with Dickens-style harms: people losing jobs not because AI can do their work, but because people in charge will think AI can do other people’s work.
Harms due to people misunderstanding what AI does and doesn’t do well and misusing it.

Keeping a model’s knowledge up to date is an unsolved trade-off. Your options:

- Create a new model with the additional up-to-date information (expensive)
- Fine-tune an existing model with additional up-to-date information (expensive, error-prone)
- Inject contextual data into your prompt sessions with things like extended context windows and RAG architectures (these work all right in practical use; I would say this is the least bad trade-off for now)

Given an input, there is no reliable way of getting the same output every time. You can’t even reliably get a sentence back from the training data fully verbatim. Facts can’t happen when a machine is dreaming half-truths based on superpositions of sentences. This is exacerbated by models being a “black box”: OpenAI or Anthropic can make tweaks to the model, putting out a slightly different model under the same product offering, or change operating parameters like temperature; even hosting the same system in a new data center can give you wildly varying, unpredictable results.

- Prompt injection: get an LLM to betray its initial instructions with plain text.
- The Strawberry problem: shows the limitations of an LLM’s ability to reason (none) – if a problem is not solved enough times in the training set, it cannot pattern-match an appropriate solution without explicit intervention at the training stage.
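The Strawberry problem is named for asking a model to count the r’s in “strawberry,” a task that is trivial to do exactly in code. A sketch (function name is my own, purely illustrative) of the deterministic computation that token-based models, which never see individual characters, have famously fumbled:

```python
def count_letter(word: str, letter: str) -> int:
    """Count exact occurrences of a letter in a word.

    Three lines of deterministic code; the same answer on every run,
    which is exactly what an LLM cannot promise you.
    """
    return sum(1 for ch in word if ch == letter)

print(count_letter("strawberry", "r"))  # prints 3, every single run
```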
The economics of LLMs as products are strange:

- They are an incredibly expensive product to make (millions of dollars to train one)
- Sold for well below the price point needed to recoup the cost of their production (OpenAI and Anthropic are running comically massive losses)
- Replaced with newer models at a dizzying pace, with no real way of comparing them apples-to-apples (see the disjoint and inconsistent “benchmarks,” which themselves are gamed by vendors and can’t be trusted)
- While themselves becoming commodity products within days of their release (each model is immediately replaced with a new one)
- All the while, the supply of quality data used to train them has dried up, and newer models are already worse than older ones in fundamental ways

The questions:

- Are LLMs, in general, an improvement on what was there before?
- Are LLMs, in your specific cases, worth the financial cost? Will they be worth it at 2x the cost?
- Are the benefits of LLMs worth the social cost? Are you willing and able to live in a world in which creative work is impacted in a transformatively negative way?
- Are you able to look to your friends and peers and tell them that anything they produce in written form belongs as a contribution to the public good without compensation? Doubly so: that private companies can use their work for a private good without compensation?
- Do LLMs make coding better, or just different? Are you actually more productive using an LLM agent to code, or is it just novel?
- Are you aware of what you are giving up in terms of control and understanding when yielding control to an LLM? Are you willing to give up that understanding and control long-term?
- Are you willing to take personal responsibility for code you did not personally write?
- Are you willing to take the risk that generated code derived from models trained on incompatibly-licensed code may taint your codebase?
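Backing up to the staleness trade-offs listed earlier: context injection (RAG) is the one called the least bad option. A minimal sketch of the idea; every function and name here is my own hypothetical illustration, and real systems use vector embeddings rather than keyword overlap:

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q = set(query.lower().split())
    return sorted(
        documents,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Inject retrieved context into the prompt instead of retraining."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The point of the pattern: fresh facts ride along in the prompt, so the model itself never has to be retrained or fine-tuned.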

Jason Scheirer 10 months ago

You Need to Break the Rules

Over the course of your job, you are going to need to operate outside the range of your defined permissions and responsibilities. You should do this sparingly and secretly. You should be able to operate outside of the system when you need to, and you need to know how to do it in non-obvious ways so your escape hatches don’t get taken away from you. Some benign examples of this:

- Occasionally merging a hotfix PR without code review
- Knowing a backdoor for read-only access to the prod database
- Being buddies with the devops team and being able to spin up new infra outside of roadmaps and plans

Operating 100% within the rules is career death: you will not be able to move the needle if you spend all of your effort coloring within the lines. Rule-following or not, you will be laid off when things get bad. Nobody gets promoted for good behavior; they get promoted for being well-liked. Conversely, judicious rule-breaking, if the results are good, will not cost you your job (unless you are unpopular).

Jason Scheirer 10 months ago

Passively Transparent

Nobody should have to ask what you’re working on. You should leave an obvious, loud trail. Encourage this pattern in others.

One thing that is important to me as an engineer, and that I don’t think I’ve seen put in writing, is something I have distilled into the term passively transparent. It should be obvious to the people in leadership what is motivating you to work and what you are working on (transparency), and it should not require an on-demand effort on your part to communicate this to them (passive access).

You should be transparent because you are setting a baseline of what you expect to share with your manager: if they feel like you aren’t giving good status, it gives them tacit permission to ask too many questions, or questions that are inappropriately deep. Calibrate the level of communication by setting it yourself: keep your management at arm’s length in how they use your time and attention. “I refer you to this query on GitHub, which you may access at any time,” when someone asks about your deliverables, should be a response you aspire to and your leadership accepts.

You should be passively available because you are setting a baseline of when and how you make yourself available to your manager: you, as my lead, should not have to wait for things to escalate to me openly broadcasting a crisis state before reaching out. You should also be able to do a pulse check without requiring my intervention. My time is precious, and I should not be forced to choose which time I give to you via a scheduled 1:1 when I can instead choose to give you the time that matters, when I have it, to share progress.

A lot of the agile software movement and its offshoots attempt to codify and formalize processes of behavior that boil down to expressing these values: if a manager or peer wants to see what you are working on, they shouldn’t have to ask for a status update; they should be able to figure it out on their own, at any time, on demand.
This is something that has always mattered to me, doubly so with the forced async-by-default working world inflicted on us by the pandemic years and working with multi-time-zone teams.

- Open a draft PR the second you begin coding; push to it regularly. Hourly, during a standard working day, if you can.
- Broadcast your current status on your issue-tracking site: it is more work in the short run, but it gives you warm fuzzies (line go down on burndown) and gives you high-value opportunities to course-correct before going too far.
- Keep tickets at a 1-3 working day granularity, and move them along the started -> in progress -> finished pipeline quickly to telegraph to your management that you are working independently, on the right things, at a reasonable pace.
- Set permissions by default to commenter or editor on every document and web site you start.
- Assume that if someone is curious enough to want to know something, they are a stakeholder, whether they like it or not. Let them know. Pull them in.
- Share everything: use public Slack channels. Private channels and DMs are for shit talking and performance reviews, not for day-to-day work.
- Make decisions (choose one end; these are opposite ends of a continuum): either as quickly as possible with minimal consensus but easy to change course on, or as late as possible with as long as reasonable to get input on.

One way to promote this way of working is to practice it yourself: when working in a team, engage with your peers with the assumption that they are working in a passively transparent manner themselves. Quick, no-expectations messages on Slack can do wonders in a non-forceful way by gently framing your working style as conventional:

- “I was reading your draft PR, looks good”
- “Is there a Jira queue I’m not watching where you’re doing daily updates?”

Jason Scheirer 11 months ago

Be An Iceberg

Be overprepared. As the phrase goes, don’t let your mouth write checks your ass can’t cash. Send along curated information that is a good summary of what you are doing, but be able to support it with research, opinion, or buy-in when pushed. Reveal enough to convey that you are a force, but do not reveal everything. Do not operate at 100% capacity at all times; aim to operate at 80% so that when you really need to, you can crank up your productivity in a sustainable way. Always have a strategic reserve of competence. This is not advocacy of actual laziness; it is pace-setting to prevent burnout and preparation in anticipation of future needs.

Jason Scheirer 1 year ago

Tools I Use to Live My Glamorous Life

I have a handful of computers I use regularly!

- I am primarily a Python expert (25 years!)
- I am primarily a FastAPI user, and I am better than average at async programming (which sucks and is bad, but it’s fun to do bad things)
- I primarily choose Postgres as my database
- I do most of my personal dev projects in Go
- Over the last 3-5 years, Typescript has become the dominant language I use at work
- If you ask me to write something in C++ I will say no, but I will write C++ to spite you
- I prefer Linux on the desktop to Mac on the desktop at this point. I do not value my time or sanity. If something is reliable, I get bored and self-destructive, so I need to be in a state of crisis in all aspects of my life, and a Linux desktop fills that need in this space
- I now prefer Fedora and Fedoralikes for desktop Linux
- I usually prefer Debian stable or one of its relatives for servers, and Alpine for container base images
- I use Tailwind at home for CSS, often tableflipping and reverting to vanilla CSS
- I use Solid and plain old JS at home, React at work
- I use Ollama on Windows, Mac and Linux to locally run LLMs
- My muscle memory is Vim, I use it everywhere
- I have hit the point of no return and use Zed more than any other development tool
- I no longer use LLM code assistance in any measurable amount; I find a lot more value using LLMs as code review tools than code generation tools
- Much of my polyglot development is via VS Code
- I like using Ebitengine to make silly 2D games
- I also (rarely) play around with Picotron, Pico-8, TIC-80 and LÖVE for the same
- Here is the bootstrap set of dotfiles I use on new computers
- On Windows, I’m doing the same thing but different with setting up a new machine.
- I use zsh and bash almost equally, though I think I have more zsh machines now
- I usually start out with oh-my-zsh or oh-my-bash on new systems
- I use nvm, rbenv, and pyenv to manage node/ruby/python installs – I can’t get the hang of uv, but that looks like the future at this point for Python
- I use , and in descending order of frequency
- I like git-delta for command line diffing
- I like lazygit for some easy to explain but harder to do than necessary git operations. Sometimes a GUI (or a TUI) is nice! Not everything has to be commands or code!
- lazydocker loads a hell of a lot faster than any other offering for looking at running containers in a reasonably high-level way
- I always add and to my so I can manage my own binaries without superuser perms
- I download VS Code and Go (setting to ) from tarballs and manage them myself, adding and to – that way I don’t need to deal with native packages or elevated install permissions
- Same with Deno
- Currently in : 1 (static binaries acquired from their release pages)
- In the past, I did Terraform
- A couple of jobs ago, I learned a little Pulumi
- At most places of employment, I use AWS
- I had to learn Azure at my last job
- At home, I use GCP for personal projects

The theme for this weblog is a no-longer-recognizable fork of smol, built in Hugo. Hosted on Github Pages through the magic of a CNAME. Fonts: I’m using Forevs, Instrument Sans and 0xProto (hosted here with this page and not on Google Fonts).
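The line above about managing binaries without superuser perms lost its specifics in formatting. The usual shape of this trick, and this is purely my assumption about the lost details rather than the post’s actual dotfiles, is a couple of lines in your shell rc file:

```shell
# Hypothetical reconstruction: keep a user-owned bin directory on PATH
# so binaries can be installed and updated without sudo.
mkdir -p "$HOME/.local/bin"
export PATH="$HOME/.local/bin:$PATH"  # typically appended to ~/.bashrc or ~/.zshrc
```

Anything dropped into that directory (a Go toolchain from a tarball, a static binary from a release page) is then runnable without touching system package managers.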
- Xara Photo and Graphic Designer, as I have muscle memory and it’s fast to make drawings in
- The Gimp for quick raster touchups
- Inkscape to touch up SVGs
- Mermaid for diagrams, extensively, because it’s everywhere – mark a code block as mermaid in markdown and you get the rendering for free in things like Obsidian and on Github
- Monodraw for cool text-mode diagrams
- D2 for diagrams as code – I like it in terms of how clean the language looks, how clean the output looks, and how easy it is to use
- Graphviz comes along for the party, too – it’s old but it gets the job done for a large range of jobs

On Windows:

- I have a script I wrote for work to bootstrap a new Windows machine; it installs and configures most of the tools below
- I run Win11Debloat immediately upon starting on a new Windows desktop machine
- Winget is a command line utility built into the operating system that you are sleeping on
- Windows Terminal for my terminal emulator
- Powertoys for various tweaks, the runner in particular is nice
- Notion Desktop to do work
- Notion Calendar as my calendar
- SyncThing to move files around

On Mac:

- BetterDisplay for better external monitor support
- SyncThing to move files around
- Notion Calendar to see what I’m supposed to be doing
- Notion Desktop to do work
- Rectangle to move windows around
- Enchanted as a standalone frontend to Ollama
- Ghostty as my terminal emulator

On Linux:

- Syncthing to move files around easily
- Rhythmbox to handle my large music library
- Boxes for light virtualization work
- Podman Desktop for container-fu
- virt-manager for my Windows VMs
- Prism Launcher to play Minecraft with my kid
- Alpaca as a standalone frontend to Ollama
- Steam for Steam
- Ghostty as my terminal emulator

Self-hosted:

- Mastodon for one Fediverse server
- GotoSocial for the other
- Miniflux for keeping up on news
- Forgejo for hosting my private repos

Hardware:

- An older Legion desktop running Nobara for gaming and general computing
- Pinebook Pro running Manjaro for remote terminal stuff and as a lightweight Miniflux client
- AOKZOE A1 running ChimeraOS for handheld PC gaming, also using its Gnome desktop as a portable development setup
- Whatever the smallest iPhone is on the market at the time for doomscrolling
- Shanling M0s for listening to music outside of my home office
- RG35XX running MyMinUI for video games during my commute hours
- Flipper that I never break the law with
- The Peak Design Everyday Backpack as a backpack

This is missing in ChimeraOS for some reason and the only part absent from a working podman setup on my handheld  ↩︎
