Latest Posts (20 found)
W. Jason Gilmore 2 weeks ago

10,000 Pushups And Other Silly Exercise Quests That Changed My Life

Headed into 2025 I was fat, out of shape, and lazy. My three young children were running circles around me, and I was increasingly concerned not only about my health in general but about the kind of example I was setting for them. My (very) sedentary job in front of a laptop serving as the CTO of Adalo wasn't helping, nor was the fact that my favorite hobby in the world outside of work is, well, sitting in front of the laptop building SaaS companies like SecurityBot.dev and 6DollarCRM. Adding to the general anxiety was the fact I had spent the last two years watching my parents struggle with devastating health issues. My parents had me in their early 20s, so all said they really aren't that much older than I am. My thoughts regularly turned into worry that I'd eventually wind up with my own serious health problems if I didn't get my act together.
I wanted to do something about it, but what? Past attempts to go to a gym weren't successful, and I really did not want to drive any more than I already do serving alongside my wife as a kid taxi. Also, having made half-hearted attempts in the past to get into shape (Orange Theory, P90X, etc.) and winding up spending less time exercising than researching the minutiae of VO2 max, bicycle construction, and fasting benefits, I knew I had to keep things simple.
While on a post-Christmas family vacation down in Florida I concluded it made sense to set a goal that could help me get into better shape but which also could be completed in small chunks over a long period of time. It was also important that I could do the workout at any point in the day, and even in my office if necessary. And thus began the quest to complete 10,000 pushups in one year. Almost 10 months later, this harebrained goal and the many positive effects that came from it have changed my life in ways I never imagined.
While still in Florida I fired up a Google Sheet and added two columns to it: Date and Pushups. And on January 1, 2025 I dropped down and knocked out 30. Well, not 30 in a row, mind you. I never could have done that on day 1. It was more like 10, 10, 5, 5 or something like that. Then I wrote it down. On January 2 I upped my game a bit, doing 35, and again immediately logged into the sheet and wrote it down. In the days that followed, the reward very much became the opportunity to open that sheet. Can't write the pushup number down if I didn't do the pushups, right? I didn't want to break the chain (although you'll later see I did in fact break the chain plenty of times in the months ahead), and so in the first 31 days I did pushups on 24 of 31 days, logging 1,018 in total and averaging 32.84 per day. I even worked up the motivation to run on a treadmill one day in January, logging 2.17 miles in 30 minutes on January 14, 2025. Other than that run and the pushups, according to my spreadsheet I did no other notable exercise that month.
It was also in January that I stopped eating fast food of any type, and as of the day of this writing I've not reversed course on this decision. Long story short, we were driving back from https://codemash.org and pulled through a McDonald's. I had at that point been eating McDonald's all of my life; nothing over the top, mind you, but probably twice a month at least for as long as I can remember. Anyway, the food that day was rancid. Legitimately nauseating. I have no idea why it was that way, but I was so turned off that right there and then I swore I'd never touch it again.
Coincidentally, over this past weekend I was on a walk reflecting on some of what I'd been writing in this blog post, and my thoughts turned towards diet. When was the last time you heard somebody (including yourself) say they feel better after eating fast food? We all know the answer to this question: never. This stuff is not food and I feel so much better staying away from this poison.
Whether it was due to the winter blues or that shiny New Year's resolution already starting to fade, I only logged 848 pushups on 21 of 28 days in February. But I definitely seemed to be getting stronger, averaging 44.63 pushups on those days, and I managed to log a daily high of 117 pushups on February 9, 2025. By the end of February I had logged 1,876 pushups. According to my spreadsheet I also managed to lift weights on February 1, 4, and 10. I have a pretty basic weight set in the basement and although I can't recall the specifics, I was probably standing around listening to CNBC on my phone most of the time.
I'm not going to sugarcoat it; March was bad, real bad. I only logged 206 pushups on 9 days, averaging 22.9 pushups on those days. It's unclear to me why I'd tailed off so much, other than to imagine old man winter was really starting to weigh on me by that point. Even so, those 206 pushups took me to a total of 2,082 pushups for the year.
In April my pace picked back up along with the improving weather and increasing sunlight. I completed 375 pushups on 13 days, averaging 28.84 pushups on those days. I also managed to lift weights on six days in April, went on a run on April 14, and even gave fasting a go for a 28-hour period between April 2 and April 3 (not sure I'll do that again).
Another lifestyle change unexpectedly happened in April: I basically quit drinking alcohol, wine in particular. This decision was a pretty simple one because as I've gotten older the hangovers have gotten worse, and my sleep quality gets much worse anytime I drink more than 1-2 drinks. As of this writing (September 28, 2025) I've had maybe 2-3 glasses of wine in almost 5 months. My new alcoholic drink of choice when I feel like having something? Miller Lite. It has low calories, low alcohol content, and you can buy a 12 pack for as much as one bottle of wine. Adding 375 pushups to the pile took me to a total of 2,457 pushups for 2025.
Likely due to fear I was going to enter yet another summer rocking the "dad bod", my exercise intensity soared in May. I completed 1,281 pushups over 25 days, averaging 51.24 pushups on those days. On five of those days I completed more than 100 pushups, and on May 18 I completed a YTD single-day high of 150.
I also became mildly obsessed with the idea of doing a split. While browsing Libby as I love to do at night, I found the book Even the Stiffest People Can Do the Splits. The cover showed the author smiling and doing a full split, and I thought well, if Eiko says even stiff people can do it then maybe I can too. Over the course of May I did the splits workout 15 times, and undoubtedly became far more flexible, although I never did quite reach a complete split. This continued into June and early July, however for reasons I'll explain in a moment I stopped doing the regimen out of fear I'd get hurt. That said, to this day I stretch daily, and of all the different exercise routines I've tried this year I think aggressive stretching has perhaps had the highest ROI of them all.
On May 15 I ran a 5K with my daughter (well, she sped ahead of me after mile 1), completing it in 32:50.
Not too bad considering, according to my log, I had run exactly four times in 2025. Headed into June I had completed a grand total of 3,738 pushups.
June is where things really started to get exciting. Every year Xenon Partners runs a friendly intercontinental pushup contest. "Friendly" is a relative term considering I work with numerous combat veterans, retired members of the United States and Australian military services, and a former Mr. Australia contestant. I also spent some time in France with the family, attending the 24 Hours of Le Mans race (amazing btw) and sightseeing around the country, meaning I had to fit pushups in whenever possible, including at Versailles.
In June my output soared to 2,014 pushups, and despite all of the traveling I managed to do pushups on 24 of 30 days, averaging 91.55 pushups per day. I also set multiple PRs in June, doing 205 pushups on June 1, 222 on June 15, and then 300 on June 27. As of June 30 I had completed a total of 5,752 pushups.
Upon returning from Europe I got the bright idea to organize a race called the 5/15/500 Challenge. This involved running 5 miles, biking 15 miles, and then completing 500 body weight exercises. Never mind that I'd run maybe four times in 2025 and hadn't been on my bike once. Many of my neighbors joined the fun, and we even had t-shirts printed for the occasion. Of course, I also created a website. I did this because I figured having an artificially imposed deadline was going to force me to exercise more often. Mission accomplished.
In July I completed 2,002 pushups, ran 48.88 miles, and biked 28.99 miles (this includes the race day numbers). The heat throughout the month was often unbearable, but I pushed through all the same knowing July 26 (race day) was coming up quick. During this period I also really began to dial in my diet, eating little more than fruit, eggs (lots of eggs), chicken, rice, and salad (lots of salad). It was during this period and August that my body began to change. I became noticeably larger and more muscular, and incredibly my abs began to show.
In this photo I'm completing race pushup #500. Don't judge the form, it was almost 90 degrees and the exhaustion was real from having already completed the run and bike segments. That said, if you squint in the right light you can see I actually have muscles due to all the pushups and running! Due to all of the July training and the 5/15/500 Challenge, my YTD pushup output soared to 7,754.
It was around this time that I went down a major rabbit hole regarding microplastics. A successful techie named Nat Friedman funded a study that looked into the prevalence of microplastics in food, vitamins, and other products, and published the results here. I'm not going to call out any products by name here (although I should because they are poisoning us), but take a moment to open this site in a new tab and search for protein for a glimpse into how you are being poisoned every time you take a bite of so-called health food. After spending a few weeks researching this topic I radically changed my diet and eliminated all of this nonsense. If you really want to go down a rabbit hole, look into the relationship between chocolate-infused health products and heavy metals.
In August I did exactly 1,000 pushups, and threw in 190 body weight squats just for fun.
525 of these pushups were completed in a single day (August 16) thanks to my neighbor, friend, and fellow 5/15/500 contestant Charlie having the bright idea that we should knock out what was originally supposed to be 400 pushups during our sons' soccer game. Of course, our competitive spirit got the best of us and I quit at 525 while Charlie pushed on to 600. I'll get him the next time!
The running sessions continued throughout August, with 37.48 miles completed. I started taking running much more seriously at this point because I signed up for the October 19 Columbus 1/2 Marathon. I've run 1/2 marathons before (poorly - my last finish time was 3:05) so I know what I'm getting into here, but this time around I want to actually finish at what I deem to be a respectable time, which is around 2:20 (10:40/mile pace). Of course, in order to train for this I needed to know what pace I'm running in the first place, and so I bought a Garmin Forerunner 55 watch with GPS. As mentioned before, my proclivity for going down research rabbit holes hasn't really helped my previous attempts to get into shape, so I chose this watch because compared to other watches it is relatively spartan in terms of features. Above all else I wanted a watch that can accurately track my running distance, pace, and route, and so far I am so, so happy with this purchase. It is perfect, and the battery life is amazing.
On August 2 I received the watch and later that day took my son and his friend up to a local (Alum Creek) mountain bike park, and while they were riding I decided to run the trails. I wound up running 4.69 miles on very hilly and bumpy trails, and paid for it dearly over the next week due to terrible foot and knee pain.
On August 21 I ran my first training 10K, completing it in 1:12:28. According to my fancy watch I completed the first 5K in 39:11 but then sped up and completed the second 5K in 33:11. On August 25 I repeated the route, this time completing the 10K in 1:05:41. On August 28 I did it a third time, completing it in 1:02:47. Progress!
I brought some help to the August 25 and 28 10K training runs: GU packs. In July I read the book Swim, Bike, Bonk: Confessions of a Reluctant Triathlete, by Will McGough. In this hilarious recounting of training for and competing in an Ironman triathlon, the author mentions using these mysterious "gel" packs, of which the most popular is known as a "GU pack". I subsequently picked up a few at the local Walmart and can confirm they unquestionably gave me a boost on these long runs. Now anytime I plan on running a 10K or longer I put one in my running pouch and open it 5K into the route.
With another 1,000 pushups in the books, my YTD output sat at 8,754 on August 31. Much better endurance aside, the most obvious visible outcome of the last few months is my clothes no longer fit. My polo shirts are so baggy they look like tents, and my t-shirts are too small because I'm so much more... muscular? What in the hell is going on? This seems to be working!
With 8,754 pushups complete, I only had 1,246 to go and concluded I'd meet the milestone in September. With the 1/2 marathon around the corner my running workouts picked up and I set multiple PRs, including a 29:51 5K PR on September 8, followed by another 28:10 5K PR on September 11. On September 17 I got one of the biggest motivational boosts possible.
I was in Chicago for a quarterly meeting, and one of the fellow board members who I've seen in person once every 3 months (but not 3 months ago because we were on the France trip) walked up to me and introduced himself. I stared back at him completely puzzled, and watched him walk away to greet the person next to me. He suddenly wheeled around with a look of shock on his face and said something to the effect of "Holy shit! I didn't even recognize you! You look amazing!"
On September 21 I completed the 10,000th pushup in unceremonious fashion on my living room floor.
On September 24 I gobbled up a GU pack and headed outside feeling like I could tear a phone book in half. My goal was to shatter the previous 28:10 5K record, and I was on track to do exactly that, running the first 2.1 miles in 18 minutes flat. Then out of nowhere I felt this terrible pain in my left calf and came to an immediate stop. It wasn't until September 29 that I could comfortably run again, and even then I only ran 1 mile because I'm terrified of a nagging injury setting me back for the October 19 1/2 marathon. In September I added 1,501 pushups to the pile, bringing the YTD total to 10,245.
Today is October 1, 2025 and the pushups continue. The aforementioned 1/2 marathon is on October 19, and my neighbor Charlie and I have already agreed to walk/run a full marathon (around our neighborhood) on November 29. Although it's almost 80 degrees today, in past years we've seen snow by the end of the month, so I'm thinking about getting one of those fancy stationary bikes or maybe even a treadmill so I can keep this party going over the winter.
In recent months I have started to look so different that friends have asked me for some diet details. As mentioned, I no longer eat fast food, nor do I overconsume alcohol. But I've also almost completely cut out processed foods, eating them only very sparingly. A few months ago I went down the microplastics and heavy metals rabbit hole, and now spend some time researching anything that I plan on eating on a regular basis. Believe me, a lot of the food you think is healthy is pure garbage.
Every morning I eat one of two things: either a gigantic fruit smoothie or four scrambled eggs and a salad. I do not deviate from this, only very occasionally eating some protein-powder pancakes made by my wife. My smoothie consists of milk, greek yogurt, 1.5 scoops of Optimum Nutrition protein powder, a huge scoop (probably two cups) of frozen organic berries, and an entire banana. The scrambled eggs and salad breakfast is exactly what it sounds like.
For lunch I eat some combination of chicken, rice, tuna, and salad. I almost never deviate from this. For dinner I eat whatever my wife decides to make, which is always healthy. Obviously we occasionally go out and I'll eat some garbage like wings or pizza, but this is pretty rare compared to the past. I also take a few vitamins and creatine daily.
Earlier in this post I mentioned researching the prevalence of microplastics, heavy metals, and other poisons in food. Ironically, this is particularly problematic in protein powder, protein bars, protein shakes, and the like. I settled on Optimum Nutrition because it is one of the few powders on the market that has been tested by numerous third parties, including the Clean Label Project. It's pretty expensive compared to other products, but I'm happy to pay in order to avoid ingesting this garbage.
Despite getting myself into incredibly good shape relative to the past, this wasn't really that hard.
On 105 of 274 days (38.3%) I did no pushups at all. On 142 of 274 (51.8%) days I did between 1 and 100 pushups. On just 26 of 274 (9.4%) days did I do more than 100 pushups, and on only 8 of 274 (2.9%) days did I do 200 or greater. Interestingly, although I have no hard data to back this up I feel like my strength soared in the 67 days following the 5/15/500 race (July 26). Following that date I did more than 100 pushups on 11 days (16.4% of the days), and became noticeably more muscular. Here's a chart showing the pushup volume throughout the year: Headed into October, I feel like a million dollars and plan on continuing these off-the-wall exercise quests for the rest of my (hopefully long) life. I obviously have no idea what I'm doing, but am happy to answer any questions and help motivate you to get in the best shape of your life. Send me an email at [email protected] or DM me on Twitter/X at @wjgilmore!

W. Jason Gilmore 4 weeks ago

Minimum Viable Expectations for Developers and AI

We're headed into the tail end of 2025 and I'm seeing a lot less FUD (fear, uncertainty, and doubt) amongst software developers when it comes to AI. As usual when it comes to adopting new software tools, I think a lot of the initial hesitancy had to do with everyone but the earliest adopters falling into three camps: don't, can't, and won't:
- Developers don't understand the advantages for the simple reason they haven't even given the new technology a fair shake.
- Developers can't understand the advantages because they are not experienced enough to grasp the bigger picture when it comes to their role (problem solvers and not typists).
- Developers won't understand the advantages because they refuse to do so on the grounds that new technology threatens their job or is in conflict with their perception that modern tools interfere with their role as a "craftsman" (you should fire these developers).
When it comes to AI adoption, I'm fortunately seeing the numbers falling into these three camps continue to wane. This is good news because it benefits both the companies they work for and the developers themselves. Companies benefit because AI coding tools, when used properly, unquestionably write better code faster for many (but not all) use cases. Developers benefit because they are freed from the drudgery of coding CRUD (create, retrieve, update, delete) interfaces and can instead focus on more interesting tasks.
Because this technology is so new, I'm not yet seeing a lot of guidance regarding setting employee expectations when it comes to AI usage within software teams. Frankly, I'm not even sure that most managers know what to expect. So I thought it might be useful to outline a few thoughts regarding MVEs (minimum viable expectations) when it comes to AI adoption.
Even if your developers refuse to use generative AI tools for large-scale feature implementation, the productivity gains to be had from simply adopting the intelligent code completion features are undeniable. A few seconds here and a few seconds there add up to hours, days, and weeks of time saved otherwise spent repeatedly typing for loops, commonplace code blocks, and the like.
Agentic AIs like GitHub Copilot can be configured to perform automated code reviews on all or specific pull requests. At Adalo we've been using Copilot in this capacity for a few months now, and while it hasn't identified any groundshaking issues, it certainly has helped to improve the code by pointing out subtle edge cases and syntax issues which could ultimately be problematic if left unaddressed.
In December 2024 Anthropic announced a new open standard called Model Context Protocol (MCP), which you can think of as a USB-like interface for AI. This interface gives organizations the ability to plug both internal and third-party systems into AI, supplementing the knowledge already incorporated into the AI model. Since the announcement, MCP adoption has spread like wildfire, with MCP directories like https://mcp.so/ tracking more than 16,000 public MCP servers. Companies like GitHub and Stripe have launched MCP servers which let developers talk to these systems from inside their IDEs. In doing so, developers can for instance create, review, and ask AI to implement tickets without having to leave their IDE. As with the AI-first IDE's intelligent code completion, reducing the number of steps a developer has to take to complete everyday tasks will in the long run result in significant amounts of time saved.
In my experience, test writing has ironically been one of AI's greatest strengths. SaaS products I've built such as https://securitybot.dev/ and https://6dollarcrm.com/ have far, far more test coverage than they would have ever had pre-AI. As of the time of this writing, SecurityBot.dev has more than 1,000 assertions spread across 244 tests, and 6DollarCRM fares even better (although the code base is significantly larger), with 1,149 assertions spread across 346 tests. Models such as Claude 4 Sonnet and Opus 4.1 have been remarkably good test writers, and developers can further reinforce the importance of including tests alongside generated code within specifications.
AI coding tools such as Cursor and Claude Code tend to work much better when the programmer provides additional context to guide the AI. In fact, Anthropic places such emphasis on the importance of doing so that it appears first in this list of best practices. Anything deemed worth communicating to a new developer who has joined your team is worthy of inclusion in this context, including coding styles, useful shell commands, testing instructions, dependency requirements, and so forth (a minimal example appears at the end of this post). You'll also find publicly available coding guidelines for specific technology stacks. For instance, I've been using this set of Laravel coding guidelines for AI with great success.
The sky really is the limit when it comes to incorporating AI tools into developer workflows. Even though we're still in the very earliest stages of this technology's lifecycle, I'm both personally seeing enormous productivity gains in my own projects as well as greatly enjoying seeing the teams I work with come around to their promise. I'd love to learn more about how you and your team are building processes around their usage. E-mail me at [email protected].
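To make the context-file suggestion above concrete, here is a minimal sketch of the sort of file these tools read (Claude Code looks for CLAUDE.md; Cursor uses its own rules files). The stack, commands, and conventions shown are assumptions for a hypothetical Laravel project, not an excerpt from the guidelines linked above, so adapt them to your own team.

    # CLAUDE.md (project context for AI coding tools)
    ## Stack
    - Laravel 11, PHP 8.3, MySQL 8, Pest for tests
    ## Conventions
    - Follow PSR-12; validate input with form requests; no raw SQL in controllers
    ## Useful commands
    - Run the test suite: php artisan test
    - Run a single test file: php artisan test tests/Feature/InvoiceTest.php
    ## Expectations
    - Every new feature ships with feature tests; never commit failing tests

The exact file name and format vary by tool, but the principle is the same: write down whatever you would tell a new hire on their first day.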

W. Jason Gilmore 2 months ago

Installing an Example MySQL Database

Sometimes it is useful to build a demo app using database data that looks realistic but isn't otherwise used for any mission-critical purpose. For this purpose I like to use the "Sakila" database, which you can learn more about here. This database is also known as the "DVD Store" database, because it mimics what might be used to manage a hypothetical DVD store from back in the day. I've personally used this and the MySQL "employee" database for years as part of technical sales demos at both DreamFactory and now, Adalo. These databases are large enough to be realistic yet easily understandable at first glance, making them ideal for sales-related demonstrations.
You can download this database in zip or tar.gz format here. Once downloaded, decompress it and then import the schema (sakila-schema.sql) into your MySQL instance. This will create a new database called sakila. If you want to use a different database name, then open sakila-schema.sql and find the three lines that drop, create, and select the sakila schema, updating each reference to the name you'd like to use instead. Keep in mind that if you don't modify these lines and already have a database named sakila, it will be destroyed before being recreated by the second line! Next, import the data (sakila-data.sql).
Finally, you'll want to create a dedicated user for interacting with the database. You'll create a user who can connect to the MySQL server from anywhere (the latter defined by the % wildcard host). The user cannot however do anything until you grant it permission (known as privileges in the MySQL world): you can give the user full access to the database, read-only access, or access restricted to a specific IP address. After creating the user, log out of the root account and confirm you can log in with the new user. After confirming you can see the database, connect to it and view the tables. A consolidated sketch of these commands appears below.
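Since the original code snippets didn't survive the page conversion, here is a sketch of the commands the steps above refer to. The user name, password, and IP address are placeholders; the file names match the official Sakila download, but double-check them against the archive you actually extracted.

    # Import the schema, then the data (run from the decompressed directory)
    mysql -u root -p < sakila-schema.sql
    mysql -u root -p < sakila-data.sql

    -- Create a dedicated user that can connect from anywhere (the % wildcard)
    CREATE USER 'sakila_user'@'%' IDENTIFIED BY 'secret_password';

    -- Full access to the sakila database...
    GRANT ALL PRIVILEGES ON sakila.* TO 'sakila_user'@'%';

    -- ...or read-only access instead
    GRANT SELECT ON sakila.* TO 'sakila_user'@'%';

    -- ...or an account restricted to a specific IP address
    CREATE USER 'sakila_user'@'203.0.113.10' IDENTIFIED BY 'secret_password';
    GRANT SELECT ON sakila.* TO 'sakila_user'@'203.0.113.10';

    # Confirm the new user can log in and see the tables
    mysql -u sakila_user -p
    mysql> SHOW DATABASES;
    mysql> USE sakila;
    mysql> SHOW TABLES;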

W. Jason Gilmore 2 months ago

Model Context Protocol, Product Demos, and the New App Store

The Model Context Protocol seems to be ushering in an exciting new type of App Store, and while it's all a bit messy right now, companies should be paying close attention to this topic. There is a scene in the 2002 movie Minority Report where Tom Cruise is standing in front of an impossibly cool monitor. He’s using his hands to control the interface, rotating and swiping to display, move, and dismiss screen elements. The movie premise is pretty interesting, but it was the scenes with these fantastical computers that really left me in awe. At the time, REST APIs were still largely an academic musing found in Roy Fielding’s PhD thesis, Apple was in such a state of financial distress that at year’s end the stock closed at $0.22 (that’s cents, not dollars), and Microsoft’s Internet Explorer browser still dominated the web. Needless to say, the real world of computing looked far different than Minority Report’s fantastical interfaces. I think we’re on the verge of moving a lot closer to my idealized computing environment thanks to an emerging technology called Model Context Protocol (MCP). The official website defines MCP like so: MCP is an open protocol that standardizes how applications provide context to large language models (LLMs). This definition is fine, albeit one that largely only makes sense to programming nerds like me. For everyone else, I prefer to define MCP as such: MCP offers a way to expose all or part of a software application to another computing environment in such a way that the end user can create entirely new interfaces that wouldn’t otherwise be possible to create. Let me show you a concrete example in order to explain my thinking. Check out this infographic: Looks like some slide in a boring sales deck, right? Wrong. It is much more exciting than that. This was generated on-the-fly by Claude Desktop in response to this prompt: Use the DreamFactory MCP to show me the weekend Heroku trial numbers and display it in a Canva chart. Even more impressive, the charted data was pulled from Snowflake. So this data was pulled from a Snowflake database by DreamFactory’s MCP server, pumped into Claude Desktop, and the graphic was built on the fly by Canva’s MCP server. Not to be too dramatic here, but this type of integration wasn’t technically possible even three months ago without involving a programmer. Here’s another example: This dashboard was generated from a PostgreSQL database (also coincidentally retrieved by DreamFactory’s MCP server). The prompt was: Create a chart of recent orders In both cases, imagine a user making these sorts of requests, viewing the output, and then moving on, effectively throwing away (or swiping, in Minority Report speak) the interface. My friend and former colleague and now O'Reilly Director of Content Jon Hassell referred to these throwaway interfaces as "disposable user interfaces", or DUIs. The acronym seems a bit questionable but it is nonetheless a very apt term. How about a Minority Report-themed dashboard? Coming right up: This dashboard was generated from a combination of PostgreSQL data, one of my Google calendars, and an array of random quotes. The clock and date located at the top right are dynamically updated. Go to this link and you’ll be able to see the entire interface because I published it as a Claude Artifact. MCP servers are installable by any software which supports them, known as MCP clients. 
Claude Desktop is an example of an MCP client, and as the screenshot below shows, Claude is already making it pretty easy to install other MCP servers. If you click the Add Connectors link, you'll be able to browse several more. This interface is reminiscent of another application that has in recent years been one of the largest revenue generators in the history of the world: the Apple App Store.
You know what else this all reminds me of? Citrix Workspace Microapps. If you've never used Citrix Workspace, it's a way for corporations to manage application availability across a workforce. A microapp can be thought of as the extraction of a specific feature from a larger application, such as expense approval. If you're a manager then it isn't very fun to constantly log in to SAP Concur to approve expenses. The idea behind a microapp is that a mini-app can be created which can, for instance, send push notifications to the manager whenever expenses require approval, or the manager can open a niche user interface which only contains the minimal data required to review and approve expenses. If you're interested in this sort of thing, a few years ago I actually co-authored a short e-book about this topic which you can download for free from here. But I digress.
Anthropic's desired end state seems pretty clear: they want to build the next generation app store, with the primary difference being that this app store is going to make it possible for an entirely new generation of applications to be created by interweaving multiple MCP servers together. And I'm here for it. Furthermore, I think every software company should be actively experimenting with this technology, because it provides an entirely new way to put your product in front of users who might not have otherwise tried it.
I've been talking a lot about Claude in this post, however Claude is only one of several popular applications that support MCP servers. For instance, I regularly use the GitHub and Stripe MCP servers inside the coding IDE Cursor. ChatGPT also supports adding MCP servers, although they've managed to bury the option inside a Connectors submenu found in Settings. Interestingly, if you put on your mining helmet and dig deep, deep into the GitHub Settings interface you'll find that it's possible to even add MCP servers to Copilot. By the way, any software product that can add and use an MCP server is logically referred to as an MCP client. I predict we're just a few years away from all mainstream software doubling as an MCP client, and I don't think it is a stretch at all for operating systems to follow suit.
At the moment, unless your MCP server is one of the anointed few showing up in Claude and ChatGPT's directories, you're going to need another solution that allows users to obtain and install your server. This is at the moment a pretty ugly process, often involving hand-editing a JSON configuration file (an example of what that typically looks like appears at the end of this post). Yuck. However, progress is being made on several fronts. One of the most interesting developments in this area is a new open format called Desktop Extensions, or DXT. You can use the DXT packaging tool to very easily create a clickable MCP server installer. I've already managed to successfully build a DXT-based installer for DreamFactory's MCP server. If you want to try it out you can download it from GitHub.
Cursor has created its own solution for adding MCP servers to its namesake IDE. I don't think it has an official name, and it seems to be generally called the "Add to Cursor" button.
That's right, you just click a button and it will initiate installation of an associated MCP server. You can view a list of available buttons here.
MCP server authentication is also pretty messy. At the moment two options are available: API keys and OAuth. If you install any of the MCP servers found in Claude's connector directory, you'll see the latter in action. That said, I think there are plenty of convenient opportunities to take advantage of API key-based authentication, particularly since if you use the DXT format then the API key will be securely stored in the operating system keychain.
I've spent 20+ years in the tech sector, and because of it I usually find emerging technology to be practically radioactive due to instability. MCP is a rare exception. I think it has the potential to change the way the world uses computers, and because of that I am extraordinarily bullish on it. Whether you're bullish or not, I'd love to hear your thoughts! Email me at [email protected].
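For reference, here is roughly what the manual JSON editing mentioned above looks like for a locally launched server in a client such as Claude Desktop. The server name, package, and environment variable are hypothetical placeholders, and the exact file location and schema vary by client, so treat this as a sketch rather than copy-paste configuration.

    {
      "mcpServers": {
        "example-server": {
          "command": "npx",
          "args": ["-y", "@example/mcp-server"],
          "env": {
            "EXAMPLE_API_KEY": "your-api-key-here"
          }
        }
      }
    }

Compare that experience to a one-click DXT installer or an "Add to Cursor" button and it's easy to see why better packaging matters.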

W. Jason Gilmore 3 months ago

Notes On the Present State of MCP Servers

I've had the opportunity to spend the last several days immersed in researching the Model Context Protocol and the present state of MCP servers. My early conclusion is that this technology is for real and has the potential to entirely change how we use the Internet. That said, like any emerging technology it is most definitely in a state of rapid evolution, and so I've compiled a few points here that may be useful to others exploring this topic.
It is presently a messy and chaotic space, with both server and client implementations unable to keep up with the rapidly evolving spec. A great example of this is Anthropic deprecating and then removing SSE from the transport options (https://modelcontextprotocol.io/specification/2025-06-18/basic/transports) while simultaneously advertising their partner extensions which are SSE-based (https://www.anthropic.com/engineering/desktop-extensions). That said, I don't think anybody cares, including the major tech companies listed in that partner link, whether their extensions are presently SSE- or Streamable HTTP-based. It is just noise in the grand scheme of things; however, SSE will eventually and unquestionably be phased out, and doesn't even show up in the latest spec version.
MCP client support for critical server features remains uneven. What works in VS Code (server Prompts) does not presently work in Cursor. My personal experiments show Prompts to be a fascinating feature which introduces opportunities for user interactivity not otherwise possible using solely Tools.
Not for lack of trying, it remains unclear to me (and apparently almost everybody else, including AWS architects) how OAuth is implemented in MCP servers. Claude Desktop seems to have the best support, as evidenced by the directory they launched a few days ago. Other MCP clients have varying support, and require the use of experimental hacks such as mcp-remote for certain use cases. That said, the exploding mcp-remote weekly download chart is indicative of just how strong the demand presently is for at least experimenting with this new technology. And further, given the obvious advantages OAuth has to offer for enterprises, it will only be a matter of time before OAuth is standard. You can already see Anthropic moving in this direction thanks to their recent publication of documents such as this.
API key-based authentication works very well across popular clients (VS Code, Cursor, Claude Desktop, etc.), and when coupled with a capable authorization solution such as DreamFactory it's already possible to build some really compelling and practical extensions to existing products. To see a concrete example of what I'm talking about, check out this great video by my friend and colleague Terence Bennett. While adding API keys (and MCP servers, for that matter) to most clients presently requires a minimal level of technical expertise (modifying a JSON file; see the sketch at the end of this post), my experiments with Claude Desktop extensions (next point) show installation woes will shortly be a thing of the past.
Anthropic (Claude) is emerging as the clear leader in all things MCP, which is no surprise considering they invented the concept. Among other things, their new Desktop extension spec (https://www.anthropic.com/engineering/desktop-extensions) is very cool and I've already successfully built one. I'd love to see this approach adopted on a wider scale because it dramatically lowers the barrier to entry in terms of installing MCP servers. Somebody has already started an Awesome Claude Desktop Extensions page which is worth a look.
The pace of evolution is such that if you're reading this even a few weeks or months after the publication date, then some or possibly all of what is stated above is outdated. Follow me on Twitter for ongoing updates as I expect to remain immersed in this topic for the foreseeable future.
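As a concrete illustration of the "modifying a JSON file" step and the mcp-remote workaround mentioned above, here is roughly what wiring a remote server into a client configuration looks like today. The server name and URL are placeholders, and the exact file and schema differ between Claude Desktop, Cursor, and VS Code, so treat this as a sketch.

    {
      "mcpServers": {
        "remote-example": {
          "command": "npx",
          "args": ["-y", "mcp-remote", "https://mcp.example.com/sse"]
        }
      }
    }

The mcp-remote package acts as a local stdio bridge to a remote server, which is exactly the kind of glue that Desktop Extensions aim to make unnecessary.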

W. Jason Gilmore 4 months ago

Building Businesses with AI

A few months ago I launched a labor-of-love project called SpiesInDC ( https://spiesindc.com ). It's a subscription-based service which delivers secret packages to your mailbox, yes your real mailbox, containing information about famous events in Cold War history. Also included are stamps, coins, photos, and other memorabilia intended to make for a very fun experience. My friends at the online coding school Treehouse ( https://teamtreehouse.com ) asked if I'd be interested in creating a video explaining how I used a series of artificial intelligence tools to build the marketing site, companion graphics, do market research, and complete other important tasks. The video is now available, and you can watch it here: https://teamtreehouse.com/library/build-a-side-business-with-ai-tools

W. Jason Gilmore 10 months ago

Building Menubar Apps with AI

Some people collect baseball cards, others obsess over video games. I love menubar apps. No clue why; I just really like the convenience they offer and the easy way they let you view and interact with information of all types. I've always wanted to build one, but never wanted to invest the time learning Swift, Objective-C, or ElectronJS. The emergence of AI coding tools, and particularly agents, has completely changed the game in terms of writing software, and so I've lately been wondering how feasible it would be to not only create my first menubar app but actually create some sort of software factory that can churn out dozens if not hundreds of menubar-first applications.
The first app is called TerraTime. It's a menubar app that shows the current time in a variety of timezones. TerraTime was built with Cursor in about 20 minutes. I spent another 75 minutes or so figuring out how to sign and notarize the app according to Apple requirements. The app is currently for sale on Gumroad, and will soon be available on the Mac App Store.
To catalog what I hope will quickly become a collection of useful menubar apps, I've created a new site called Useful Menubar Apps. It was also built with AI, and is hosted on Netlify.

W. Jason Gilmore 1 year ago

Two Day Rome Itinerary

Many years ago I put together this two day itinerary for friends traveling to one of my favorite cities on earth. Today I finally got around to publishing it online.
The following items are arranged in order of proximity to one another. If you start at the Colosseum you can methodically walk to the Fori, then to the Vittorio Emanuele monument, and so on. You'll end at Via dei Condotti where some shopping will probably occur, at which point you can just jump on the subway at the end of the street (to the left of the Spanish Steps) and return to your hotel.
- The Colosseum. Consider taking the tour, it's pretty neat. There is a subway stop named Colosseo which drops you off literally right in front of the Colosseum.
- Fori Imperiali (within it you can also see the prison where the Romans held Peter and Paul - really amazing). Keep your eyes peeled for the stone maps mounted on the wall; they were placed there by Mussolini as a tribute to the Roman Empire.
- Monumento di Vittorio Emanuele. Also known as the "birthday cake", it is derided by Italians as being Rome's ugliest monument. It's worth going to the top as the view is pretty nice. The piazza in front of the monument (Piazza Venezia) is somewhat infamous in recent history, as both Hitler and Mussolini gave speeches on the monument steps facing this piazza.
- Trevi fountain
- The Pantheon. Keep your eyes peeled for Raphael's tomb.
- Piazza Navona. I spent New Year's Eve here once! The Fountain of Four Rivers statue was carved by none other than Bernini himself.
- Chiesa San Luigi dei Francesi (a little-known church near Piazza Navona, it is stupendous and a can't-miss in my opinion)
- Museo Doria Pamphili (amazing art museum with Botticelli sculptures among others)
- Via dei Condotti (probably the most famous shopping street in the world)
- Spanish steps (oddity: nearby you can visit the largest McDonald's in Italy, it seats 1,200). Walk up to the very top of the steps and you'll see the home where John Keats lived.
The Vatican Museum is one of the most extraordinary museums on the planet, and regardless of your religious proclivities is a required stop on the two day tour. The Vatican itself and the museum are situated next to one another, so I recommend visiting the former first and then going to the museum:
- Vatican / St Peter's Square (you will spend around 90 minutes here). Be sure to visit the catacombs under the basilica. You cannot wear short shorts nor expose shoulders/midriff; they will not let you in.
- Vatican Museum (you will spend at least 3 hours in here)
If your feet aren't hurting by day 3, check out the Musei Capitolini and Museo Nazionale Romano, both of which house some pretty amazing Roman artifacts.
Use the subway when traveling any considerable distance across the city (for instance from Termini to the Vatican). The subway can get pretty hot depending on the time of year, but is generally easy to get around unless there is a strike. Italians drive on the same side of the street as Americans, however the similarities pretty much end there. I do not recommend driving unless you have an Indiana Jones-like thirst for adventure. You could take the train to various stops along the water, and although the train isn't without its own problems it is going to be far more relaxing than driving in the south. If you ignore this advice and plan on driving in Naples, addio. English is spoken by most shop / restaurant workers within central Rome; your luck will vary if you venture outside of the city.
English proficiency will drop as you travel further south.

W. Jason Gilmore 1 year ago

Programming Games

These days I'm not particularly into video games other than occasionally getting destroyed by my kids in Fortnite or Gang Beasts. But I do like exploring programming- and technology-related games. If you'd like to explore these sorts of games I've compiled a list below. The formatting is pretty messy but this has been sitting in my drafts folder for a while now and so I figured I'd publish it today and come back to it over time.
In GNU Robots you use the Scheme programming language to control a robot.
- http://web.mit.edu/16.410/www/project_fall04/project3.pdf
- https://mystery.knightlab.com
- https://sqlpd.com
- https://gitlab.com/leifhka/datastar
- https://www.therobinlord.com/projects/slash-escape
- https://untrustedgame.com
- http://thefounder.biz
- https://danielyxie.github.io/bitburner/
- https://screeps.com/a/#!/sim/tutorial/1
- https://microcorruption.com/debugger/Tutorial
- https://play.elevatorsaga.com
- https://swarm-game.github.io
- https://viewsourcecode.org/snaptoken/kilo/
- https://wiki.osdev.org/Introduction
- https://buildyourownlisp.com/

W. Jason Gilmore 1 year ago

Japanese Short Story - Going to the Grocery Store

Miyako: 今日はスーパーに行ったよ。 Kyō wa sūpā ni itta yo. - Today, I went to the supermarket.
Narumi: 何を買ったの? Nani o katta no? - What did you buy?
Miyako: 野菜と果物とにくを買った。 Yasai to kudamono to niku o katta. - I bought vegetables, fruits, and meat.
Narumi: 何種類の野菜を買ったの? Nan shurui no yasai o katta no? - What kinds of vegetables did you buy?
Miyako: キャベツ、にんじん、そしてトマトを買った。 Kyabetsu, ninjin, soshite tomato o katta. - I bought cabbage, carrots, and tomatoes.
Narumi: 果物は何を買ったの? Kudamono wa nani o katta no? - What fruits did you buy?
Miyako: りんごとバナナを買った。 Ringo to banana o katta. - I bought apples and bananas.
Narumi: 何種類の肉を買ったの? Nan shurui no niku o katta no? - What kinds of meat did you buy?
Miyako: 牛肉と鶏肉を買った。 Gyūniku to toriniku o katta. - I bought beef and chicken.
Try making up your own sentences using foods you will commonly find at the grocery store.

W. Jason Gilmore 1 year ago

Technical Due Diligence - Relational Databases

Despite the relative popularity of NoSQL and graph databases, relational databases like MySQL, SQL Server, Oracle, and PostgreSQL continue to be indispensable for storing and managing software application data. Because of this, technical due diligence teams are practically guaranteed to encounter them within almost any project. Novice team members will gravitate towards understanding the schema, which is of course important but only paints a small part of the overall risk picture. A complete research and risk assessment will additionally include information about the following database characteristics:
- Security
- Performance
- Disaster Recovery
I identify these three characteristics because technical due diligence is all about identifying and quantifying risk, and not about nerding out over the merit of past decisions. The importance of quantifying risk is in most cases no greater than when evaluating the software product's data store, for several reasons:
- Poor security practices open up the possibility of application data having already been stolen, or being in danger of imminent theft, placing the buyer in legal danger.
- Poor performance due to inadequate or incorrect indexing, insufficient resourcing, or a combination of the two might result in disgruntled customers who are considering cancelling their subscription. Some of these customers may be major contributors to company revenue, severely damaging the company's outlook should they wind up departing following acquisition.
- A lack of disaster recovery planning puts the buyer at greater short-term risk following acquisition due to an outage which may occur precisely at a time when personnel are not available or are not entirely up to speed.
Be sure to confirm all database licenses are in compliance with the company's use case, and if the database is commercially licensed you'll need to additionally confirm the available features and support contract are in line with expectations. To highlight the importance of this verification work I'll point out a few ways in which expectations might not be met:
- The buyer requires the data to be encrypted at-rest due to regulatory issues, however the product data is in fact not encrypted at-rest due to use of the Heroku Essential Postgres tier, which does not offer this feature. There could possibly be an easy fix here which involves simply upgrading to a tier which does support encryption at-rest, however you should receive vendor confirmation (in writing) that encryption is indeed possible as a result of upgrading, and whether any downtime will be required to achieve this requirement.
- The buyer's downtime expectations are more strict than what is defined by the cloud service provider's SLA.
All mainstream databases (MySQL, Oracle, PostgreSQL, etc.) will have well-defined end-of-life (EOL) dates associated with each release. The EOL date identifies the last date on which that particular version will receive security patches. Therefore it is critical to determine what database versions are running in production in order to determine whether the database has potentially been running in an unpatched state. For instance, MySQL 5.7 has an EOL date of October 25, 2023, and therefore if the seller's product is still running MySQL 5.7 after that date then it is in danger of falling prey to any vulnerabilities identified after that EOL date. Of course, the EOL date isn't the only issue at play here. If the database version hasn't reached its EOL date then you should still determine whether the database has been patched appropriately. For instance, as of the time of this writing MySQL 8.2 was released only 9 months ago (on October 12, 2023) and there are already 11 known vulnerabilities. It's entirely possible that none of these vulnerabilities are exploitable in the context of the seller's product, however it's nonetheless important to catalog these possibilities and supply this information to the buyer. In my experience, where there is smoke there is fire, and unpatched software is often symptomatic of much larger issues associated with technical debt and a lack of developer discipline.
Enterprise web applications will typically run in an N-Tier architecture, meaning the web, data, caching, and job processing components can all be separately managed and scaled. This configuration means each tier will often run on separate servers, and therefore a network connection between the database and web application tiers will need to be configured. Most databases can be configured to allow for connections from anywhere (almost invariably a bad idea), which is precisely what you don't want to see when that database is only intended to be used by the web application, because it means malicious third parties have a shot at successfully logging in should they gain access to or guess the credentials.
Connecting users will be associated with a set of privileges which define what the user can do once connected to the database. It is considered best practice to assign those users the minimum privileges required to carry out their tasks. Therefore a database user which is intended to furnish information to a data visualization dashboard should be configured with read-only privileges, whereas a customer relationship management (CRM) application would require a database user possessing CRUD (create, retrieve, update, delete) database privileges. Therefore when examining database connectivity and privileges you should at a minimum answer the following questions:
- What users are defined and active on the production databases, and from what IP addresses / hostnames are they accessible?
- Is the database server accessible to the wider internet and if so, why?
- What privileges do the defined database users possess, and why?
- To what corporate applications are production databases connected? This includes the customer-facing application, business intelligence software, backup services, and so forth.
- What other non-production databases exist? Where is production data replicated? Are these destinations located within jurisdictions compliant with the laws and SLA under which the buyer's target IP operates?
Satisfying this review requirement is relatively straightforward, and can be completed in two steps (see the sketch at the end of this post).
From a security standpoint, data is often defined as being encrypted at-rest and in-transit, the former referring to its encryption state when residing in the database or on the server, and the latter referring to its encryption state when being transferred from the application to the requesting user or service. You'll want to determine whether these two best practices are implemented. If the data is not encrypted at-rest (which is typical and not necessarily an issue for many use cases), then how is sensitive data like passwords encrypted (or hashed)? You often won't be able to determine this by looking at the database itself; web frameworks will typically dictate the password hashing scheme, such as Laravel's use of the Bcrypt algorithm for this purpose.
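The original post's two steps weren't preserved above, but a reasonable reading is: first enumerate the database accounts and the hosts they may connect from, then inspect each account's privileges. Assuming a MySQL production database, a quick sketch of those checks (plus the version check discussed earlier) might look like the following; adapt the queries for PostgreSQL, SQL Server, or Oracle as needed.

    -- Which version is running, and is it past its EOL or missing patches?
    SELECT VERSION();

    -- Step 1: which accounts exist, and from which hosts can they connect?
    SELECT user, host FROM mysql.user;

    -- Step 2: what can each account actually do?
    SHOW GRANTS FOR 'app_user'@'10.0.0.%';

The account name and host in the SHOW GRANTS statement are placeholders; substitute each user/host pair returned by the first query.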

W. Jason Gilmore 1 year ago

How to See All Of Your JIRA Notifications In One Place

In JIRA it's possible to @mention somebody in a ticket or comment. By default this will trigger an email notification which will soon arrive in the designated person's inbox. Personally, I only check email twice a day, and when doing so am not interested in being distracted by immediately responding to a JIRA comment or question. Instead I review any outstanding JIRA inquiries once each morning, and then again later in the day depending upon the priority.
To easily review all comments/tickets which require my attention, I've long used a custom JQL (JIRA Query Language) filter called "Mentions of me, me, me". The custom JQL is shown in the sketch at the end of this post. To create your own custom filter, follow these steps:
- Login to JIRA, click on the Filters menu item in the navbar, and select "View all issues".
- Click the "New search" menu item located on the left side of the screen.
- Make sure the search mode toggle located on the right side of the screen is set as needed (it is by default), then paste the JQL into the search field located at the top of the screen, replacing the project key with the abbreviation used for your JIRA project name. You can view these abbreviations by clicking on the "Projects" menu dropdown located in the navbar. The abbreviation is found in the parentheses following the project name.
- This is important: after pasting in the JQL, you need to press the Enter/Return key before saving the filter! Chalk this up to the usual JIRA UX insanity.
- Then press the save button and you should see a modal window. Give your filter a name, and leave the other settings intact presuming you'd like the filter to remain private.
After saving, your custom filter will appear under the Filters menu in the navbar! If you'd like to see JIRA notifications for multiple projects, all you need to do is modify the JQL to include a comma-delimited list of project abbreviations.
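The post's exact JQL didn't survive the page conversion, so here is an approximation of a "mentions of me" filter. The PROJ project key is a placeholder, and you can list several comma-delimited keys inside the in clause to cover multiple projects.

    project in (PROJ) AND (comment ~ currentUser() OR description ~ currentUser()) ORDER BY updated DESC

Whether the ~ operator accepts currentUser() can vary by JIRA version; if it doesn't on your instance, substitute your username in quotes. Adding something like AND updated >= -7d also helps keep the result list manageable.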

W. Jason Gilmore 1 year ago

Switching Between English and Hiragana Keyboards on Mac

I maintain a lengthy set of Japanese language learning notes in a Google Doc (130 pages and counting). When taking notes it's often useful to switch between English and Japanese. Fortunately for macOS users this is pretty trivial thanks to a default keyboard setting. To see this setting, open your Keyboard settings window: In the above screenshot I've highlighted the relevant setting. That globe represents the function (or fn) key, and if you look to the bottom left of your keyboard you'll see that key. And if you press that key, you'll be able to switch between your defined keyboard alphabets. However, the user experience associated with pressing this key differs according to the context in which you use it, and if you don't know this it can be quite frustrating, which is why I wanted to write this blog post.

I've struggled to find easily understandable short stories in Japanese, and so am writing a short workbook for the benefit of others. To download a free story in PDF format and be notified when the book is ready, provide your contact information below. I hate spam and so will not ever spam you.

Let's review a few usage examples. To set the stage, as you can see in this screenshot I have three alphabets defined: English, Hiragana, and Katakana: No matter where you are in macOS, if you press this button and then mouse over the keyboard icon located in the menu bar, you will see that your keyboard language is indeed changing! However, this process can be a bit more visual depending on where exactly you are inside the operating system. For instance, if you're editing a document and press the function key, you will see a little window appear below your cursor: Finally, if you're in non-editing mode, and oddly enough outside of some but not all applications, pressing the function key brings up this popup switcher: Hope you found this useful!

W. Jason Gilmore 1 year ago

Minimal SaaS Technical Due Diligence

For more than six years now I've been deeply involved in, and in recent years leading, Xenon Partners' technical due diligence practice. This means that when we issue an LOI (Letter of Intent) to acquire a company, it's my responsibility to dig deep, very deep, into the often arcane technical details associated with the seller's SaaS product. Over this period I've either been materially involved in or led technical due diligence for DreamFactory, Baremetrics, Treehouse, Packagecloud, Appsembler, UXPin, Flightpath Finance, as well as several other companies. While I've perhaps not seen it all, I've seen a lot, and these days whenever SaaS M&A comes up in conversation I tend to assume the thousand-yard stare, because this stuff is hard.

The uninitiated might be under the impression that SaaS technical due diligence involves "understanding the code". In reality, the code review is but one of many activities that must be completed, and in the grand scheme of things I wouldn't even put it in the top three tasks in terms of priority. Further complicating the situation is the fact that sometimes, due to circumstances beyond our control, we need to close a deal under unusually tight deadlines, meaning it is critically important that this process is carried out with extreme efficiency.

Due to the growing prevalence of SaaS acquisition marketplaces like Acquire.com and Microassets, lately I've been wondering what advice I would impart to somebody who wants to acquire a SaaS company yet who possesses relatively little time, resources, and money. What would be the absolute minimum requirements necessary to reduce acquisition risk to an acceptable level? This is a really interesting question, and I suppose I'd focus on the following tasks. Keep in mind this list is specific to the technology side of due diligence; there are also financial, operational, marketing, legal, and HR considerations that need to be addressed during this critical period. I am not a lawyer, nor an accountant, and therefore do not construe anything I say on this blog as sound legal or financial advice. Further, in this post I'm focused on minimal technical due diligence, and largely assuming you're reading this because you're interested in purchasing a micro-SaaS or otherwise one run by an extraordinarily small team. For larger due diligence projects there are plenty of other critical tasks to consider, including technical team interviews. Perhaps I'll touch upon these topics in future posts.

Please note I did not suggest asking for architectural diagrams. Of course you should ask for them, but you should not believe a single thing you see on the off chance they even exist. They'll tell you they do exist, but they likely do not. If they do exist, they are almost certainly outdated or entirely wrong. But I digress. On your very first scheduled technical call, open a diagramming tool like Draw.io and ask the seller's technical representative to please begin describing the product's architecture. If they clam up or are unwilling to do so (it happens), then start drawing what you believe to be true, because when you incorrectly draw or label part of the infrastructure, the technical representative will suddenly become very compelled to speak up and correct you. These diagrams don't have to be particularly organized nor aesthetically pleasing; they just need to graphically convey as much information as possible about the application, infrastructure, third-party services, and anything else of relevance.
Here's an example diagram I created on Draw.io for the purposes of this post: Don't limit yourself to creating a single diagram! I suggest additionally creating diagrams for the following:

- Cloud infrastructure: For instance, if the seller is using AWS then try to identify the compute instance sizes, RDS instances, security groups, monitoring services, etc. The importance of diagramming the cloud infrastructure becomes even more critical if Kubernetes or other containerized workload solutions are implemented, not only due to the additional complexity but also because, frankly, in my experience these sorts of solutions tend to not be implemented particularly well.
- Deployment strategy: If CI/CD is used, what does the deployment process look like? What branch triggers deployments to staging and production? Is a test suite run as part of the deployment process? How is the team notified of successful and failed deployments?

We have very few requirements that, if not met, will wind up in a deal getting delayed or even torpedoed, however one of them is that somebody on our team must successfully build the development environment on their local laptop and subsequently successfully deploy to production. This is so important that we will not acquire the company until these two steps are completed. These steps are critical because in completing them you confirm:

- You're able to successfully clone the (presumably private) repository and configure the local environment.
- You're able to update the code, submit a pull request, and participate in the subsequent review, testing, and deployment process (if one even exists).
- You've gained insights into what technologies, processes, and services are used to manage company credentials, build the development environment, merge code into production, run tests, and trigger deployments.

Keep in mind you don't need to add a new feature or fix a bug in order to complete this task (although either would be a bonus). You could do something as simple as add a comment or fix a typo. At this phase of the acquisition process you should steadfastly remain in "do no harm" mode; you are only trying to confirm your ability to successfully deploy the code, not make radical improvements to it.

This isn't strictly a technical task, but it's so important that I'm going to color outside the lines and at least mention it here. The software product you are considering purchasing is almost unquestionably built atop the very same enormous open source (OSS) ecosystem from which our entire world has benefited. There is nothing at all wrong with this, and in fact I'd be concerned if it wasn't the case, however you need to understand that there are very real legal risks associated with OSS licensing conflicts. As I've already made clear early in this post, I am not a lawyer, so I'm not going to offer any additional thoughts regarding the potential business risks other than to simply bring the possibility to your attention. The software may additionally very well rely upon commercially licensed third-party software, and it is incredibly important that you know whether this is the case. If so, what are the terms of the agreement? Has the licensing fee already been paid in full, or is it due annually? What is the business risk if this licensor suddenly triples fees? There are actually a few great OSS tools that can help with dependency audits. One that I've used in the past is LicenseFinder (a sketch of a typical run appears at the end of this post). That said, for reasons I won't go into here because again, IANAL, it is incumbent upon the seller to disclose licensing issues. The buyer should only be acting as an auditor, and not the investigatory lead with regards to potential intellectual property conflicts. You should always retain legal counsel for these sorts of transactions. Finally, if the software relies on third-party services (e.g., OpenAI APIs) to function (it almost certainly does), many of the same aforementioned questions apply. How critical are these third-party services? At some point down the road could you reasonably swap them out for a better or more cost-effective alternative?

A penetration test (pen test) is an authorized third-party cybersecurity attack on a computer system. In my experience, for SaaS products these pen tests cost anywhere between $5K and $10K and take 1-2 weeks to complete once scheduled. A lengthy report is typically delivered by the pen tester, at which point the company can dispute/clarify the findings or resolve the security issues and arrange for a subsequent test. Also in my experience, if you're interested in purchasing a relatively small SaaS with no employees other than the founder, it's a practical certainty the product has never been pen tested. Further, if the SaaS is web-based and isn't using a web framework such as Ruby on Rails or Laravel, for more reasons than I could possibly enumerate here I'd be willing to bet there are gaping security holes in the product (SQL injection, cross-site scripting, etc.) which may have already been exploited. Therefore you should be sure to ask if a pen test has recently been completed, and if so ask for the report and information about any subsequent security-related resolutions. If one has not been completed, then it is perfectly reasonable to ask (in writing) why this has not been the case, and whether the seller can attest to the fact that the software is not known to have been compromised. If the answers to these questions are not satisfactory, then you might consider asking the seller to complete a pen test, or ask if you can arrange for one on your own dime. If you're sufficiently technical and have a general familiarity with cybersecurity concepts such as the OWASP Top Ten, then you could conceivably lower the costs associated with this task by taking a DIY approach. Here is a great list of bug bounty tools that could be used for penetration test purposes. That said, please understand that you should under no circumstances use these tools to test a potential seller's web application without their written permission!

If you think the SaaS you're considering buying doesn't have any technical debt, then consider the fact that even the largest and most successful software products in the world are filled with it: That said, due to perfectly reasonable decisions made literally years ago, it is entirely possible that this "UI change" isn't fixable in 3 months, let alone 3 days. And there is a chance it can't ever be reasonably fixed, and anybody who has built sufficiently complicated software is well aware as to why. Technical debt is a natural outcome of writing software, and there's nothing necessarily wrong with it provided you acknowledge its existence and associated risks. But there are limits to risk tolerances, and if the target SaaS is running on operating systems, frameworks, and libraries that have long since been deprecated and are no longer able to receive maintenance and security updates, then I think it is important to recognize that you're probably going to be facing some unwelcome challenges in the near term as you update the software and infrastructure instead of focusing on the actual business.

Of everything that comprises technical due diligence there is nothing that makes me break out into a sweat more than this topic. Any SaaS product will rely upon numerous if not dozens of credentials. GSuite, AWS, Jenkins, Atlassian, Trello, Sentry, Forge, Twitter, Slack... the list is endless. Not to mention SSH keys, 2FA settings, bank accounts, references to founder PII such as birthdates, and so forth. In a perfect world all of this information would be tidily managed inside a dedicated password manager, but guess what: it's probably not. I cannot possibly impress upon you in this post how important it is to aggressively ask for, review, and confirm access to everything required to run this business, because once the paperwork is signed and the money transferred, it's possible the seller will be a lot less responsive to your requests. Ensuring access to all credentials is so critical that you might consider structuring the deal so that part of the purchase price is paid at some future point in time (90 days from close, for example) in order to ensure the founder remains in contact with you for a period of time following acquisition. This will give you the opportunity to follow up via email/Zoom and gain access to services and systems that were previously missed.

This blog post barely scratches the surface in terms of what I typically go through during a full-fledged due diligence process, but I wanted to give interested readers a baseline understanding of the minimum requirements necessary to assuage my personal levels of paranoia. If you have any questions about this process, feel free to hit me up at @wjgilmore, message me on LinkedIn, or email me at [email protected].
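As for the dependency audit mentioned above, a LicenseFinder run is quick enough to do during diligence. A rough sketch (LicenseFinder is a Ruby gem; exact subcommands vary between versions, so check its documentation):

    # Install the CLI and scan the target project's dependency manifests
    gem install license_finder
    cd path/to/target-product
    license_finder          # lists dependencies whose licenses have not yet been approved
    license_finder report   # writes a full dependency/license report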

W. Jason Gilmore 1 year ago

Connecting to a Raspberry Pi Using VNC on macOS

I recently mounted an old 60" flatscreen TV in my office and plugged a Raspberry Pi into it, the goal being to display a few rotating dashboards associated with various projects I'm working on. Rather than plug a mouse and keyboard into the Pi, I wanted to instead connect to it over VNC from my Mac and use the mounted TV as a giant monitor. Apparently due to some recent changes to the Raspberry Pi OS, neither the native Mac VNC client nor RealVNC will connect without making additional changes to the Pi configuration. I didn't want to deal with that and so looked for an easier option, documented here. The latest Raspberry Pi OS (released on 2024-03-15) uses a VNC server called WayVNC. However, VNC isn't enabled by default, so after SSHing into your Pi, run this command: Use your arrow keys to navigate down to Interface Options, press return, and you should see the following interface: Arrow down to VNC and press return. You'll then be prompted to enable VNC. Make sure is selected and press return. Exit the configuration wizard. You won't need to restart your Pi. Next, confirm WayVNC is running: Next, open a macOS terminal and install tigervnc-viewer: If you're running Apple Silicon (M1, M2, M3...) you'll need to prefix the command like so: Now open TigerVNC Viewer using Spotlight or however you open apps on your Mac. Enter your Raspberry Pi IP address and press the Connect button: You'll next be prompted to continue connecting despite a server certificate mismatch. I don't really understand why this happens but am not concerned considering I'm connecting to a Pi residing on my local network: Next you'll presumably see a screen asking to make an exception for the aforementioned server certificate mismatch: Finally, you can enter the Pi by authenticating with the username and password you presumably configured when the Pi SD card was flashed, or after you SSH'd into the server to create a dedicated VNC user: After connecting, you should see your Pi desktop! Hope this helps!
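To recap, the handful of commands involved look roughly like this (a sketch; the WayVNC service name and Homebrew package name are what I'd expect on current releases and may differ on your setup):

    # On the Pi: open the configuration tool, then enable VNC under Interface Options
    sudo raspi-config

    # On the Pi: confirm the WayVNC server is running
    systemctl status wayvnc

    # On the Mac: install the TigerVNC viewer via Homebrew
    brew install --cask tigervnc-viewer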

W. Jason Gilmore 1 year ago

A Career Spent Struggling with Web Design and Why the Future Looks Bright

I've been building on the web for more than two decades now, and along the way have built hundreds of applications for clients, for personal use, for my own businesses, and simply for fun. Over the years the client projects have varied widely in purpose and scope, and have crossed sectors and industries as varied as education, agriculture, telecom, architecture and design, and publishing. Interestingly, though, almost without exception they have all shared a common thread: the teams were laden with coding talent, and woefully lacking in design and UX. This experience isn't at all uncommon, and I'm not entirely sure why this is the case other than to observe that we developer types seem to suffer from a perennial delusion that "faking it until we make it" is an acceptable strategy when it comes to designing for the web. Perhaps this was the case throughout the 2000's and the first half of the twenty-tens, however at some point along the way even MVPs started to ship with exceptionally polished interfaces.

But I don't think it's fair to entirely blame developers for their horribly designed applications, because over the years web design tools have always been created for, you guessed it, designers. To the best of my recollection we haven't seen a mainstream IDE that additionally bundled visual web design tools since the days of Macromedia Dreamweaver [1]. Obviously web-oriented IDEs have come a long, long way in the years since, with popular options like PHPStorm and VS Code more than satisfying the non-design side of things, yet somehow the associated design tooling never came along for the ride. To be clear, I'm not necessarily lamenting the lack of WYSIWYG interfaces (although it would be a start); rather, I'm referring to a lack of historical tooling and capabilities associated with all of the other things that developers think about when designing web applications, such as:

- Maintaining consistent styling across multiple sites, such as a SaaS product's marketing website and application.
- Contributing to the creation of a user interface when using an unfamiliar CSS library such as MUI (these days I tend to stick to Tailwind).
- Synchronizing designs with a code repository (and vice versa).
- Riffing on various design ideas until finding an acceptable variation. In fact this strikes me as one of the most interesting (and frustrating) aspects of not being a proficient designer; I can't create something visually pleasing, yet I know pleasing when I see it, and therefore by iterating through a bunch of variations I'm pretty confident I'll eventually land on something I like.

Fortunately, thanks to the rise of generative artificial intelligence, maturing web technologies, and some really smart people working on some really compelling products, we seem to be entering a new era in which developer-only teams are going to be able to build and launch visually appealing products with amazing user experiences faster and with less fuss than ever before.

Given my opening comment regarding most web products being built by developer-centric teams, it seems kind of silly that over the years more attention hasn't been devoted to developer-focused design tooling. Fortunately, we're really starting to see signs of a reversal of this trend. In 2023 Figma launched Dev Mode, and entirely new products like v0.dev and Nick Dobos' amazing Grimoire GPT are focused on putting the code behind designs into the hands of developers as quickly as possible. For instance I can paste a web component screenshot into Grimoire and it will recreate a close facsimile of the HTML and CSS in seconds:

One such tool that I've been spending quite a bit of time using in recent months is UXPin (full disclosure: I've also spent a lot of time with the UXPin team, offering one developer's perspective regarding what a developer-oriented design tool should be). UXPin has actually been around for quite a few years, and it's used by PayPal, T. Rowe Price, HBO, and a bunch of other companies. It's historically been a designer-focused tool similar to Figma, however more recently they've launched UXPin Merge, a visual design tool that exports designs directly to React components.

I've been using Merge for all sorts of UI design projects, including a macOS screenshot app I've been working on (built using ElectronJS and Tailwind). For instance, I actually used Merge to create the screenshot pasted into Grimoire. I needed to create a simple settings form for accepting an API key (when enabled, screenshots are uploaded to a companion web application), managing log settings, and so forth. This is the sort of thing that I might have agonized over for hours in an attempt to get the alignment, padding, etc. just so. Using UXPin I managed to create it in less than five minutes: These sorts of interfaces can easily be created using Tailwind, MUI, ANT, FluentUI, and a bunch of other libraries, and best of all you can just drag and drop the components onto your canvas: I also appreciate how I'm not constrained to using the drag-and-drop interface. For instance I'm free to directly fiddle with the Tailwind CSS styles via a convenient popup window: And of course, I can copy the code into my own project, and even view it in an interactive editor such as StackBlitz:

Given the current and understandable generative AI frenzy, it's probably no surprise that Merge also offers a chat-based interface called the AI component creator. It is this feature that allows me to quickly riff on designs without getting too caught up in design mechanics. As an example, I've lately wanted a custom Tailwind component for displaying terminal-based shell command examples. Here's the prompt I used: "I would like to display snippets of shell commands executed in a terminal. The terminal background should be dark gray, and the terminal prompt should be a white dollar sign. The shell commands should be styled with a bright green font color." I typed this prompt into the UXPin AI component creator: And this is the component CSS it generated: Finally, here is a screenshot of the rendered component: The Merge chat tool only supports Tailwind at this time, but support for other libraries is coming soon.

Although already revolutionary, tools like Grimoire and UXPin really only represent the beginning of what's to come. I'd love to see features such as:

- An A/B testing agent that monitors page interactions and then autonomously creates and deploys sensible experimental variations.
- Automated synchronization between the GUI and my repositories, meaning I can make a design change in one and immediately see the results in the other.
- Ideation on steroids; for instance, I conjure up an initial design and the tool creates 50-100 variations and presents them in gallery or flipbook mode so I can quickly browse and identify candidates.

After 20+ years of struggling with this critical and omnipresent part of web applications, I'm starting to feel like Neo entering the Matrix. These code-oriented tools open up all kinds of possibilities for design-challenged teams. Be sure to check out Grimoire and v0.dev, and if you'd like to start a UXPin trial, head on over to their site and give it a whirl.

[1] I was really surprised to learn Macromedia Dreamweaver (now called Adobe Dreamweaver) is still very much a thing in 2024.
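Incidentally, if you'd rather hand-roll a terminal-snippet component like the one described above, the markup is pleasantly small. Here's a rough approximation in plain Tailwind (my own sketch, not the AI component creator's actual output):

    <!-- Dark gray terminal panel with a white $ prompt and bright green command text -->
    <div class="rounded-lg bg-gray-800 p-4 font-mono text-sm">
      <span class="text-white">$</span>
      <span class="ml-2 text-green-400">git status</span>
    </div>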

W. Jason Gilmore 1 year ago

Disabling Axios SSL Certificate Validation

The NodeJS Axios library will by default verify the supplied server certificate against the associated certificate authority, which will fail if you're using a self-signed certificate. You can disable this behavior by handing Axios an HTTPS agent whose rejectUnauthorized option is set to false, like so:
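A minimal sketch (the URL is just a placeholder for a locally hosted app served over a self-signed certificate; don't ship this to production):

    // Accept a self-signed certificate by disabling TLS verification for this request
    const https = require('https');
    const axios = require('axios');

    const agent = new https.Agent({ rejectUnauthorized: false });

    axios
      .get('https://myapp.test/api/status', { httpsAgent: agent })
      .then((response) => console.log(response.status))
      .catch((error) => console.error(error.message));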

W. Jason Gilmore 1 year ago

Disabling SSL Validation in Bruno

I use Laravel Herd to manage local Laravel development environments. Among many other things, it can generate self-signed SSL certificates. This is very useful; however, modern browsers and other HTTP utilities tend to complain about these certificates. Fortunately it's easy to disable SSL validation by opening Bruno, navigating to the relevant setting under the menu heading, and unchecking it.

W. Jason Gilmore 2 years ago

Blitz Building with AI

In August 2023 I launched two new SaaS products: EmailReputationAPI and BlogIgnite. While neither are exactly moonshots in terms of technical complexity, both solve very real problems that I've personally encountered while working at multiple organizations. EmailReputationAPI scores an email address to determine validity, both in terms of whether it is syntactically valid and deliverable (via MX record existence verification), as well as the likelihood there is a human on the other end (by comparing the domain to a large and growing database of anonymized domains). BlogIgnite serves as a writing prompt assistant, using AI to generate a draft article, as well as associated SEO metadata and related article ideas.

Launching a SaaS isn't by itself a particularly groundbreaking task these days, however building and launching such a product in less than 24 hours might be a somewhat more notable accomplishment. And that's exactly what I did for both products, deploying MVPs approximately 15 hours after writing the first line of code. Both are written using the Laravel framework, a technology I happen to know pretty well. However there is simply no way this self-imposed deadline would have been met without leaning heavily on artificial intelligence. I am convinced AI coding assistants are opening up the possibility of rapidly creating, or blitzbuilding, new software products. The goal of blitzbuilding is not to create a perfect or even a high-quality product! Instead, the goal is to minimize the business risk incurred via a prolonged development cycle by embracing AI to assist with the creation of a marketable product in the shortest amount of time. The term blitzbuilding is a tip of the cap to LinkedIn founder Reid Hoffman's book, "Blitzscaling: The Lightning-Fast Path to Building Massively Valuable Companies", in which he describes techniques for growing a company as rapidly as possible.

The chosen technology stack isn't important by itself, however it is critical that you know it reasonably well, otherwise the AI will give advice and offer code completions that can't easily be confirmed as correct. In my case, EmailReputationAPI and BlogIgnite are built atop the Laravel framework, use the MySQL database, with Redis used for job queuing. They are hosted on DigitalOcean, and deployed with Laravel Forge. Stripe is used for payment processing. The common thread here is I am quite familiar with all of these technologies and platforms. Blitzbuilding is not a time for experimenting with new technologies, because you will only get bogged down in the learning process.

The coding AI is GitHub Copilot with the Chat functionality. At the time of this writing the Chat feature is only available in a limited beta, but it is already extraordinarily capable, to the point that I consider it indispensable. Among many things it can generate tests, offer feedback, and even explain highlighted code. GitHub Copilot Chat runs in a VS Code sidebar tab, like this: Notably missing from these products is JavaScript (to be perfectly accurate, there are minuscule bits of JavaScript found on both sites due to unavoidable responsive layout behavior) and custom CSS. I don't like writing JavaScript, and am terrible at CSS, and so leaned heavily on Tailwind UI for layouts and components.

24 hours will be over before you know it, and so it is critical to clearly define the minimum acceptable set of product requirements. Once defined, cut the list in half, and then cut it again. To the bone.
Believe me, that list should be much smaller than you believe. For EmailReputationAPI, the initial list consisted of the following:

- Marketing website
- Account management (register, login, trial, paid account)
- Crawlers and parsers to generate various datasets (valid TLDs, anonymized domains, etc.)
- Secure API endpoint and documentation
- Stripe integration

There are now plenty of additional EmailReputationAPI features, such as a Laravel package, but most weren't critical to the first release and so were delayed. It is critical to not only understand but be fine with the fact that you will not be happy with the MVP. It won't include all of the features you want, and some of the deployed features may even be broken. It doesn't matter anywhere near as much as you think. What does matter is putting the MVP in front of as many people as possible in order to gather feedback and hopefully customers.

I hate CSS with the heat of a thousand suns, largely because I've never taken the time to learn it well and so find it frustrating. I'm also horrible at design. I'd imagine these traits are shared by many full stack developers, which explains why the Bootstrap CSS library was such a huge hit years ago, and why the Tailwind CSS framework is so popular today. Both help design-challenged developers like myself build acceptable user interfaces. That said, I still don't understand what most of the Tailwind classes even do, but fortunately GitHub Copilot is a great tutor. Take for example the following stylized button: I have no idea what these classes do, but can ask Copilot chat to explain: I also use Copilot chat to offer code suggestions. One common request pertains to component alignment. For instance I wanted to add the BlogIgnite logo to the login and registration forms, however the alignment was off: I know the logo should be aligned with Tailwind's alignment classes, but have no clue what they are, nor do I care. So after adding the logo to the page I asked Copilot how to fix it. It responded with a suggestion (see the sketch at the end of this post), and after updating the image classes the logo was aligned as desired:

Even when using the AI assistant the turnaround time is such that your product will inevitably have a few bugs. Focus on fixing what your early users report, because those issues are probably related to the application surfaces other users will also use. Sometime soon I'd love to experiment with using a local LLM such as Code Llama in conjunction with an error reporting tool to generate patches and issue pull requests. At some point in the near future I could even see this process being entirely automated, with the AI additionally writing companion tests. If those tests pass, the AI will merge the pull request and push to production, with no humans involved!

Have any questions or thoughts about this post? Contact Jason at [email protected].
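For the record, Copilot's exact response isn't reproduced here, but centering an image like that generally comes down to Tailwind's auto-margin utility. A hypothetical version of the logo markup (the path and sizing classes are illustrative):

    <!-- mx-auto centers the block-level image within the form container -->
    <img src="/images/logo.svg" alt="BlogIgnite" class="mx-auto mb-6 h-16 w-auto">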

W. Jason Gilmore 4 years ago

Laravel Jetstream: Changing the Login Redirection URL

By default Laravel Jetstream will redirect newly logged in users to the route. A side project I'm building didn't have a need for a general post-authentication landing page, and so I needed to figure out how to redirect the user elsewhere. Turns out it's pretty easy. Open and locate this line: Change it to: Of course you'll want to swap out with your desired route name. You can retrieve a list of route names for your application by running:
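For reference, in Jetstream applications of that era the post-login destination is typically driven by the HOME constant in app/Providers/RouteServiceProvider.php. The snippet below is a sketch of that common approach rather than a copy of the exact file referenced above, and php artisan route:list will print every registered route along with its name and URI:

    // app/Providers/RouteServiceProvider.php (sketch)
    public const HOME = '/dashboard';   // default post-authentication destination

    // becomes, for example:
    public const HOME = '/projects';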
