Posts in Github (7 found)
W. Jason Gilmore 2 weeks ago

Resolving Dependabot Issues with Claude Code

I created a Claude skill, creatively called dependabot, which you can invoke by name once installed. It uses the GitHub CLI to retrieve open Dependabot alerts and upgrade the relevant dependencies. If you have multiple GitHub accounts logged in via the CLI, it will ask which one it should use when it can't figure that out from how the skill was invoked or from the repository settings. You can find the skill here: https://github.com/wjgilmore/dependabot-skill To install it globally, open a terminal, go to your home directory, change into your Claude skills directory, and clone the repository there. Then restart Claude Code and you should be able to invoke it like any other skill.
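The alert-retrieval step can be sketched with the GitHub CLI. This is a minimal sketch, not the skill's actual implementation: the /repos/{owner}/{repo}/dependabot/alerts endpoint is GitHub's real REST API, while OWNER/REPO and the fallback message are illustrative placeholders.

```shell
# List open Dependabot alerts for a repository via the GitHub CLI.
# OWNER/REPO is a placeholder; the guard lets the script degrade
# gracefully when gh is missing or unauthenticated.
repo="OWNER/REPO"
if command -v gh >/dev/null 2>&1 && gh auth status >/dev/null 2>&1; then
  # Print one vulnerable package name per open alert:
  gh api "/repos/$repo/dependabot/alerts?state=open" \
    --jq '.[].dependency.package.name'
else
  echo "gh CLI not available or not authenticated; skipping"
fi
```

From that list of affected packages, a skill like this can then run the matching upgrade command for the project's package manager (npm, composer, pip, and so on).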

0 views
Lonami 3 weeks ago

Ditching GitHub

AI. AI AI AI. Artificial "Intelligence". Large Language Models. Well, they sure are large, I'll give them that. This isn't quite how I was hoping to write a new blog post after years of not touching the site, but I guess it's what we're going with. To make it very clear: none of the text, code, images or any other output I produce is AI-written or AI-assisted. I also refuse to acknowledge that AI is even a thing by adding a disclaimer to all my posts saying that I do not use it. But this post is titled "Ditching GitHub", so let's address that first. Millions of developers and businesses call GitHub home. And that's probably not a good thing. I myself am guilty of searching "<project> github" in DuckDuckGo many a time when I want to find open-source projects. I'll probably keep doing it, too, because that's what search engines understand. So, GitHub. According to their API, I joined the first day of 2014 after noon (seriously, did I not have anything better to do on New Year's? And how is that over twelve years ago already‽). Back then, I was fairly into C# programming on Windows. It seems I felt fairly comfortable with my code already, and was willing to let other people see and use it. That was after I had been dabbling with Visual Basic scripts, which in turn was after console batch scripting. I also tried Visual Basic before C#, but as a programming noob, with few-to-no programming terms learnt, I found the whole thing quite strange ↪1. Regardless of the language, telling the computer to do things and having it obey you was pretty cool! Even more so if those things had a visual interface. So let's show others what cool things we could pull off! During that same year, I also started using Telegram. Such a refreshing application this used to be. Hey, wouldn't it be cool if you could automate Telegram itself? Let's search to see if other people have made something to use that from C#. Turns out TLSharp did in fact exist!
The repository seems to be archived now, in favor of WTelegramClient . I tried to contribute to it. I remember being excited to have a working code generator that could be used to automatically update the types and functions that the library had to offer, based on the most recent definitions provided by Telegram (at least indirectly, via their own open-source repositories.) Unfortunately, I had some friction with the maintainer back then. Perhaps it was a misunderstanding, or I was too young, naive, or just couldn't get my point across. That didn't discourage me though ↪2 . Instead, I took it upon myself to reimplement the library. Back then, Telegram's lack of documentation on the protocol made it quite the headache (literally, and not just once) to get it working. Despite that, I persevered, and was able to slowly make progress. Fast-forward a bit ↪3 , still young and with plenty of time on my hands, one day I decided I wanted to try this whole Linux thing. But C# felt like it was mostly a Windows thing. Let's see, what other languages are there that are commonplace in Linux… " Python " huh? Looks pretty neat, let's give it a shot! Being the imaginative person I am, I obviously decided to call my new project a mix between Tele gram and Python . Thus, Telethon was born ↪4 . Ah, GitHub stars. Quite the meaningless metric, considering they can be bought, and yet… there's something about them. I can't help myself. I like internet points. They make me feel like there are other people out there who, just like me, have a love for the craft, and share it with this small gesture. I never intended for Telethon to become as popular as it has. I attribute its success to a mix of luck, creating it at the right time, choice of popular programming language, and lack of many other options back then. And of course, the ridiculous amount of time, care and patience I have put (and continue to put) into the project out of my own volition. 
Downloads are not a metric I've cared to look at much. But then came support questions. A steady growth of stars. Bug reports. Feature requests. Pull requests. Small donations! And heart-felt thank-you emails or messages. Each showing that people like it enough to spend their time on it, and some even like it enough that they want to see it become better, or take the time to show their appreciation. This… this feels nice, actually. Sure, it's not perfect. There will always be an idiot who thinks you owe them even more time ↪5 . Because the gift of open-source you've given the world is not enough. But that's okay. I've had a bit of an arc in how I've dealt with issues, from excited, to tired and quite frankly pretty rude at times (sorry! Perhaps it was burn-out?), to now where I try to first and foremost remain polite, even if my responses can feel cold or blunt. There are real human beings behind the screens. Let's not forget that. Telethon is closing-in on twelve thousand stars on GitHub ↪6 . I don't know how many are bots, or how many still use GitHub at all, but that's a really darn impressive number. cpython itself is at seventy-two thousand! We're talking the same order of magnitude here. So I am well aware that such a project makes for quite the impressive portfolio. There's no denying that. We don't have infinite time to carefully audit all dependencies we rely on, as much as we should. So clearly, bigger star number must mean better project, or something like that. To an extent, it does, even if subconsciously. Unfortunately for me, that means I can't quite fully ditch GitHub. Not only would I be contributing to link-rot, but the vast majority of projects are still hosted there. So whether I like it or not, I'm going to have to keep my account if I want to retain my access to help out other projects. And, yes. Losing that amount of stars would suck. But wow has the platform gotten worse. 
Barely a screen into GitHub's landing page while not logged in, there it is. The first mention of AI. Scroll a bit further, and… Your AI partner everywhere. They're not wrong. It is everywhere. AI continues to be shoved so hard in so many places . Every time I'm reading a blog post and there's even the slightest mention of AI, or someone points it out in the comments, my heart sinks a little. "Aw, I was really enjoying reading this. Too bad." ↪7 It doesn't help that I'm quite bad at picking up the tell-tale signs of AI-written text ↪8 . So it hurts even more when I find out. AI used to be a fun topic. Learning how to make self-improving genetic algorithms, or basic neural networks to recognize digits . For pity's sake, even I have written about AI before . I used to be fascinated by @carykh's YouTube videos about their Evolution Simulator . It was so cool ! And now I feel so disgusted by the current situation. Remember when I said I was proud of having a working code generator for TLSharp? Shouldn't I be happy LLMs have commoditized that aspect? No, not at all. Learning is the point . Tearing apart the black boxes that computers seem. This code thing. It's actually within your grasp with some effort. Linux itself, programming languages. They're not magic, despite some programmers being absolute wizards. You can understand it too. Now? Oh, just tell the machine what you want in prose. It will do something. Something . That's terrifying. "But there's this fun trick where you can ask the AI to be a professional engineer with many years of experience and it will produce better code!" I uh… What? Oh, is that how we're supposed to interact with them. Swaying the statistical process in a more favourable direction. Yikes. This does not inspire any confidence at all. Time and time again I see mentions on how AI-written code introduces bugs in very subtle ways. In ways that a human wouldn't, which also makes them harder to catch. 
I don't want to review the ridiculous amount of code that LLMs produce. I want to be the one writing the code. Writing the code is the fun part. Figuring out the solution comes before that, and, along with experimentation, takes the longest. But once the code you've written behaves the way you wanted, that's the payoff. There is no joy in having a machine guess some code that may very well do something completely different the next time you prompt it the same way. As others have put it very eloquently before me, LLM-written text is "a cognitive DoS". It's spam. It destroys trust. I don't want to read an amalgamation of code or answers from the collective internet. I want to know people's thoughts. So please, respect my time, or I'll make that choice myself by disengaging from the content. "Embrace AI or get out" -- GitHub's CEO. Out we go, then. If not GitHub, where to go? GitHub Pages makes it extremely easy to push some static HTML and CSS and make it available everywhere reliably, despite the overall GitHub status dropping below 90% on what feels like every day. I would need to host my website(s) somewhere else. Should I do the same with my code? I still enjoy being part of the open source community. I don't want to just shut it all down, although that's a fate others have gone through. Many projects larger than mine struggle with 'draining and demoralizing' AI slop submissions, and not just of code. I have, thankfully, been able to stay out of that for the most part. Others have not. I thought about it. Unfortunately, another recurring theme is how often AI crawlers beat the shit out of servers, with zero respect for any sensible limits. Frankly, that's not a problem I'm interested in dealing with. I mean, why else would people feel the need to be Goofing on Meta's AI Crawler? Because what else can you do when you get 270,000 URLs being crawled in a day. Enter Codeberg. A registered non-profit association.
Kord Extensions did it , Zig did it , and I'm sure many others have and will continue to do it. I obviously don't want this to end in another monopoly. There are alternatives, such as SourceHut , which I also have huge respect for. But I had to make a choice, and Codeberg was that choice. With the experience from the migration, which was quite straightforward ↪9 , jumping ship again should I need to doesn't seem as daunting anymore. Codeberg's stance on AI and Crawling is something I align with, and they take measures to defend against it. So far, I'm satisfied with my choice, and the interface feels so much snappier than GitHub's current one too! But crawling is far from the only issue I have with AI. They will extract as much value from you as possible, whether you like it or not. They will control every bit that they can from your thoughts. Who they? Well, the watchers: how openai, the US government, and persona built an identity surveillance machine that files reports on you to the feds . Putting aside the wonderful experience that the site's design provides (maybe I should borrow that starry background…), the contents are concerning . So I feel very validated in the fact that I've never made an attempt to use any of the services all these companies are trying to sell me. I don't want to use them even if I got paid . Please stay away, Microslop . But whether I like it or not, we are, unfortunately, very much paying for it. So Hold on to Your Hardware . Allow me to quote a part from the article: Q1 hasn’t even ended and a major hard drive manufacturer has zero remaining capacity for the year So yeah. It's important to own your hardware. And I would suggest you own your code, too. Don't let them take that away from you. Now, I'm not quite at the point where I'm hosting everything I do from my own home, and I really hope it doesn't have to come to that. 
But there is comfort in paying for a service, such as renting a server to host this very site ↪10, knowing that you are not the product (or, at least, whoever is offering the paid service has an incentive not to make you one.) Some people pair the move from GitHub to Codeberg with statichost.eu. But just how bad can hosting something yourself get, anyway? Judging by the amount of people that are Messing with bots, it indeed seems there are plenty of websites that want to keep LLM crawlers at bay, with a multitude of approaches like Blocking LLM crawlers, without JavaScript or the popular Anubis. If I were to self-host my forge, I would probably be Guarding My Git Forge Against AI Scrapers out of need too. Regardless of the choice, let's say we're happy with the measures in place to keep crawlers busy being fed garbage. Are we done? We're protected against slop now, right? No, because they're doing the same. To those that vibecode entire projects and don't disclose that they're done with AI: your project sucks. And it's in your browser too. Even though I think nobody wants AI in Firefox, Mozilla. Because I don't care how well your "AI" works. And no, Cloudflare's Matrix server isn't an earnest project either. If that's how well AIs can do, I remain unimpressed. I haven't even mentioned the impact all these models have on jobs either ↪11! Cozy projects aren't safe either. WigglyPaint also suffers from low-quality slop redistribution. "LLMs enable source code laundering" and frequently make mistakes. I Am An AI Hater. That's why we see forks stripping AI out, with projects like A code editor for humanoid apes and grumpy toads as a fork of Zed. While I am really happy to see that there are more and more projects adopting policies against AI submissions, all other fronts seem to just keep getting worse.
To quote more comments, AI causes environmental harms, reinforces bias, generates racist output, causes cognitive harms, supports suicides, amplifies numerous problems around consent and copyright, enables fraud, disinformation, harassment and surveillance, and exploits and fires workers. Utter disrespect for community-maintained spaces. Source code laundering. Questionable ties to governments. Extreme waste of compute and finite resources. Exacerbating already-existing problems. I'm not alone in thinking this. Are we expected to use AI to keep up? This is A Horrible Conclusion. Yeah. I don't want to have anything to do with it. I hope the post at least made some sense. There are so many citations that it's hard to tie them together neatly. Who knows, maybe one day I'll be forced to work at a local bakery and code only in my free time with how things are going.

1. I get them now. Though I prefer the terseness of no- or . ↩
2. I like to think I'm quite pragmatic, and frankly, I've learnt to brush off a lot of things. Having thick skin has proven to be quite useful on the internet. ↩
3. I kept working on C# GUI programs and toyed around with making more game-y things, with Processing using Java, which also naturally lent itself to making GUI applications for Android. These aren't quite as relevant to the story though (while both Stringlate and Klooni had/have seen some success, it's not nearly as much.) ↩
4. My project-naming skills haven't improved. ↩
5. Those are the good ones. There are worse, and then there is far worse. Stay safe. ↩
6. And for some reason I also have 740 followers? I have no idea what that feature does. ↩
7. Quite ironic… If you're one of those that also closes the tab when they see AI being mentioned, thanks for sticking by. I'm using this post to vent and let it all out. It would be awkward to address the topic otherwise, though I did think about trying to do it that way. ↩
8. As much as I try to avoid engaging with it, I'm afraid I'll eventually be forced to learn those patterns one way or another. ↩
9. I chose not to use the import features to bring over everything from GitHub. I saw this as an opportunity to start clean, and it's also just easier not to have to worry about the ownership of other people's contributions to issues if they remain the sole owner at their original place on GitHub. ↩
10. I have other things I host here, so I find it useful to rent a VPS rather than simply paying for a static file host. Hosting browsable Git repositories seems like an entirely different beast from hosting static sites though, hence the choice of using Codeberg for code. If all commits and all files are reachable, crawlers are going to have fun with that one. ↩
11. Even at my current job the company has enabled automatic Copilot code reviews for every pull request. I can't disable them, and I feel bad opening PRs knowing that I am wasting compute on pointless bot comments. It just feels like an expensive, glorified spell-checker. The company culture is fine if we ignore this detail, but it feels like I'm fighting an uphill battle, and I'm not sure I'd have much luck elsewhere… ↩

0 views
Susam Pal 3 months ago

Minimal GitHub Workflow

This is a note where I capture the various errors we receive when we create GitHub workflows that are smaller than the smallest possible workflow. I do not know why anyone would ever need this information, and I doubt it will serve any purpose for me either, but sometimes you just want to know things, no matter how useless they might be. This is one of the useless things I wanted to know today. For the first experiment, we create a zero-byte workflow file and push it to GitHub. Under the GitHub repo's Actions tab, we find an error. We then grow the workflow one field at a time, through stages the post titles "Empty Workflow", "Runs On Ubuntu Latest", "Empty Steps", and finally "Hello, World", noting the corresponding error at each stage. The experiments are preserved in the commit history of github.com/spxy/minighwf.
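The end state of those experiments, the smallest workflow that actually runs, can be reconstructed as a sketch. The file path and job name below are my assumptions, not copied from the post; only the ubuntu-latest runner and the "Hello, World" step are implied by the stage titles.

```shell
# Create the directory GitHub Actions expects workflows to live in.
mkdir -p .github/workflows

# Experiment 1: a zero-byte workflow file (this is what produces the
# first error under the Actions tab once pushed).
touch .github/workflows/minimal.yml

# Final stage: a minimal workflow that actually runs.
cat > .github/workflows/minimal.yml <<'EOF'
on: push
jobs:
  hello:
    runs-on: ubuntu-latest
    steps:
      - run: echo 'Hello, World'
EOF

# Then: git add .github/workflows/minimal.yml
#       git commit -m 'Add minimal workflow' && git push
```

Each intermediate stage between the two (no `on:`, no `runs-on:`, no `steps:`) yields one of the errors the post catalogues.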

0 views
Grumpy Gamer 3 months ago

Hugo comments

I’ve been cleaning up my comments script for Hugo and am about ready to upload it to Github. I added an option to use flat files or SQLite, and it can notify Discord (and probably other services) when a comment is added. It’s all one PHP file. The reason I’m telling you this is to force myself to actually do it. Otherwise there would be “one more thing” and I’d never do it. I was talking to a game dev today about how to motivate yourself to get things done on your game. We both agreed that making promises publicly is a good way.
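The Discord notification part boils down to a single webhook POST. The script itself is PHP, but in shell terms it looks roughly like this; the webhook URL shape is Discord's real API, while the variable names and message are illustrative, and the actual request is left commented out.

```shell
# Build a Discord webhook payload announcing a new comment.
# WEBHOOK_URL is a placeholder; you create one per channel in
# Server Settings -> Integrations -> Webhooks.
WEBHOOK_URL="https://discord.com/api/webhooks/ID/TOKEN"
post_slug="my-post"
author="alice"
payload=$(printf '{"content":"New comment on %s by %s"}' "$post_slug" "$author")
echo "$payload"
# Sending it is one request:
# curl -sS -H 'Content-Type: application/json' -d "$payload" "$WEBHOOK_URL"
```

Supporting "other services" is then mostly a matter of swapping the URL and the payload shape.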

0 views
Farid Zakaria 8 months ago

GitHub Code Search is the real MVP

There is endless hype about the productivity boon that LLMs will usher in. While I am amazed at the utility offered by these superintelligent LLMs, at the moment (August 2025) I remain bearish on these tools having any meaningful impact on productivity, especially for production-grade codebases where correctness, maintainability, and security are paramount. They are clearly helpful for exploring ideas, or for any goal where the code produced may be discarded at the end. Thinking about how much productivity we might gain from this tool had me reflecting on what other changes in the past 5 years had already benefited me, and a clear winner stands out: GitHub’s code search via cs.github.com. Pre-2020, code search in the open-source domain never really had a good solution, given the diaspora of hosting platforms. If you’ve worked in any large corporate environment (Amazon, Google, Meta, etc.) you might have already had exposure to the powers of an incredible code search. The lack of such a tool for public codebases was a limitation we simply worked around. This is partly why third-party libraries were consolidated into well-known projects like Apache or established companies’ offerings such as Google’s Guava. GitHub capitalized on the consolidation of code on its platform with the release of its revamped code search. Made generally available in May 2023, the new engine added powerful features like symbol search and the ability to follow references. The productivity win is clear to me, even with the introduction of LLMs. I visit cs.github.com daily, more frequently and with more interaction than any of the LLMs available to me. Finding code written by other humans is fun, and for some reason, more joyful to read. There is a certain joy to finding solutions to problems you may be facing that were authored by another human.
This psychological effect may diminish as the code I’m wading through begins to tilt toward AI-generated content. But for now, the majority of the code I’m viewing still looks, subjectively, like it was authored by a human. I also tend to work in niche areas such as NixOS or Bazel that don’t have a large corpus of material online, so the results from the LLM tend to be more disappointing. If given a Sophie’s choice between GitHub code search and LLMs, strictly for the purpose of code authorship, I would pick code search as of today. Humans easily adapt to their environment, a phenomenon known as the hedonic treadmill. As we all get excited about the incoming technology of generative AI, let’s take a moment to reflect on an amazing contribution to engineering that we have already become accustomed to: a wonderful code search.
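The symbol search mentioned above is easy to try. A hedged sketch of some queries: the repo:, language:, and symbol: qualifiers are part of GitHub's code search syntax, while the specific targets are illustrative.

```shell
# Example GitHub code search queries. `symbol:` matches definitions,
# which is what enables "jump to definition"-style navigation.
q1='repo:torvalds/linux symbol:task_struct'
q2='language:nix "buildGoModule"'
echo "$q1"
echo "$q2"
# Paste either query into https://github.com/search?type=code
```

The same engine powers symbol navigation when browsing files on github.com, where clicking an identifier lists its definitions and references.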

0 views
Matthias Endler 5 years ago

Launching a Side Project Backed by Github Sponsors

Yesterday we launched analysis-tools.dev, and boy had I underestimated the response. It’s a side project about comparing static code analysis tools. Static analysis helps improve code quality by detecting bugs in source code without even running it. What’s best about the project is that it’s completely open-source. We wanted to build a product that wouldn’t depend on showing ads or tracking users. Instead, we were asking for sponsors on Github — that’s it. We learned a lot in the process, and if you’d like to do the same, keep reading!

First, Some Stats

Everyone likes business metrics. Here are some of ours:

Github stars over time. That graph screams BUSINESS OPPORTUNITY. Source: star-history.t9t.io

“Why did it take five years to build a website!?”, I hear you ask. Because I thought the idea was so obvious that others must have tried before and failed. I put it off, even though nobody stepped in to fill this niche. I put it off, even though I kept the list up-to-date for five years, just to learn about the tools out there. You get the gist: don’t put things off for too long. When ideas sound obvious, it’s probably because they are.

Revenue Model

It took a while to figure out how to support the project financially. We knew what we didn’t want: an SEO landfill backed by AdWords. Neither did we want to “sell user data” to trackers. We owe it to the contributors on Github to keep all data free for everyone. How could we still build a service around it? Initially, we thought about swallowing the infrastructure costs ourselves, but we’d have no incentive to maintain the site or extend it with new features. Github Sponsors was still quite new at that time. Yet, as soon as we realized that it was an option, it suddenly clicked: companies that are not afraid of a comparison with the competition have an incentive to support an open platform that facilitates that. Furthermore, we could avoid bias and build a product that makes comparisons objective and accessible.
Sponsoring could be the antidote to soulless growth, and instead allow us to build a lean, sustainable side business. We don’t expect analysis-tools.dev ever to be a full-time job. The market might be too small for that — and that’s fine.

Tech

Once we had a revenue model, we could focus on the tech. We’re both engineers, which helps with iterating quickly. Initially, I wanted to build something fancy with Yew. It’s a Rust/Webassembly framework and your boy likes Rust/Webassembly… I’m glad Jakub suggested something else: Gatsby. Now, let me be honest with you: I couldn’t care less about Gatsby. And that’s what I said to Jakub: “I couldn’t care less about Gatsby.” But that’s precisely the point: not being emotionally attached to something makes us focus on the job and not the tool. We get more stuff done! From there on, it was pretty much easy going: we used a starter template, Jakub showed me how the GraphQL integration worked, and we even got to use some Rust! The site runs on Cloudflare as an edge worker built on top of Rust. (Yeah, I cheated a bit.) Count to three, MVP!

Finding Sponsors

So we had our prototype, but zero sponsors so far. What started now was (and still is) by far the hardest part: convincing people to support us. We were smart enough not to send cold e-mails, because most companies ignore them. Instead, we turned to our network and realized that developers had reached out before to add their company’s projects to the old static analysis list on Github. These were the people we contacted first. We tried to keep the messages short and personal. What worked best was a medium-sized e-mail with some context and a reminder that they had contributed to the project before. We included a link to our sponsors page. Businesses want reliable partners and a reasonable value proposal, so the sponsors page has to be meticulously polished.
Our Github Sponsors page

Just like Star Wars Episode IX, we received mixed reviews: many people never replied, others passed the message on to their managers, who in turn never replied, while others again had no interest in sponsoring open-source projects in general. That’s all fair game: people are busy, and sponsorware is quite a new concept. A little rant: I’m of the opinion that tech businesses don’t sponsor nearly enough compared to all the value they get from Open Source. Would your company exist if there hadn’t been a free operating system like Linux or a web server like Nginx or Apache when it was founded? There was, however, a rare breed of respondents who expressed interest but needed some guidance. For many, it is the first step towards sponsoring any developer through Github Sponsors / OpenCollective. It helped that we use OpenCollective as our fiscal host, which handles invoicing and donation transfers. Their docs helped us a lot when getting started. The task of finding sponsors is never done, but it was very reassuring to hear from DeepCode, an AI-based semantic analysis service, that they were willing to take a chance on us. Thanks to them, we could push the product over the finishing line. Because of them, we can keep the site free for everybody. It also means the website is kept free from ads and trackers. In turn, DeepCode gets exposed to many great developers that care about code quality and might become loyal customers. Also, they get recognized as an open-source-friendly tech company, which is more important than ever if you’re trying to sell dev tools. Win-win!

Marketing

Jakub and I had both started businesses before, but this was the first truly open product we would build.

Phase 1: Ship early 🚀

We decided on a soft launch: deploy the site as early as possible and let the crawlers index it. The fact that the page is statically rendered and follows some basic SEO guidelines sure helped with improving our search engine rankings over time.
Phase 2: Ask for feedback from your target audience 💬

After we got some organic traffic and our first votes, we reached out to our developer friends to test the page and vote on tools they know and love. This served as an early validation, and we got some honest feedback, which helped us catch the most blatant flaws.

Phase 3: Prepare announcement post 📝

We wrote a blog post which, even if clickbaity, got the job done: Static Analysis is Broken — Let’s Fix It! It pretty much captures our frustration about the space and why building an open platform is important. We could have done a better job explaining the technical differences between the various analysis tools, but that’s for another day.

Phase 4: Announce on social media 🔥

Shortly before the official announcement, we noticed that the search functionality was broken (of course). Turns out, we hit the free quota limit on Algolia a biiit earlier than expected. 😅 No biggie: a quick exchange with Algolia’s customer support, and they moved us over to the open-source plan (which we didn’t know existed). We were back on track! Side note: Algolia customer support is top-notch. Responsive, tech-savvy, and helpful. Using Algolia turned out to be a great fit for our product. Response times are consistently in the low milliseconds, and the integration with Gatsby was quick and easy. We got quite a bit of buzz from that tweet: 63 retweets, 86 likes and counting. Clearly, everyone knew that we were asking for support here, but we are thankful for every single one who liked and retweeted. It’s one of these situations where having a network of like-minded people can help. As soon as we were confident that the site wasn’t completely broken, we set off to announce it on Lobste.rs (2 downvotes), /r/SideProject (3 upvotes) and Hacker News (173 upvotes, 57 comments). Social media is kind of unpredictable. It helps to tailor the message to each audience and stay humble, though.
The response from all of that marketing effort was nuts:

Traffic on launch day

Perhaps unsurprisingly, the Cloudflare edge workers didn’t break a sweat.

Edge worker CPU time on Cloudflare

My boss Xoan Vilas even did a quick performance analysis, and he approved. (Thanks, boss!) High fives all around!

Now what?

Of course, we’ll add new features; of course, we have more plans for the future, yada yada yada. Instead, let’s reflect on that milestone: a healthy little business with no ads or trackers, solely carried by sponsors. 🎉 Finally, I want you to look deep inside yourself and find your own little product to work on. It’s probably right in front of your nose, and like myself, you’ve been putting it off for too long. Well, not anymore! The next success story is yours. So go out and build things. Oh wait! …before you leave, would you mind checking out analysis-tools.dev and smashing that upvote button for a few tools you like? Hey, and if you feel super generous today (or you have a fabulous employer that cares about open-source), why not check out our sponsorship page?

The project started as an awesome list on Github in December 2015. We’re currently listing 470 static analysis tools. Traffic grew continuously. Counting 7.5k stars and over 190 contributors at the moment. 500-1000 unique users per week. I’d had the idea to build a website for years, but my coworker Jakub joined in May 2020 to finally make it a reality.

1 views