Robert Greiner 2 months ago

Believe the Checkbook

Anthropic’s AI agent was the most prolific code contributor to Bun’s GitHub repository, submitting more merged pull requests than any human developer. Then Anthropic paid millions to acquire the human team anyway. The code was MIT-licensed; they could have forked it for free. Instead, they bought the people.

Everyone’s heard the line: “AI will write all the code; engineering as you know it is finished.” Boards repeat it. CFOs love it. Some CTOs quietly use it to justify hiring freezes and stalled promotion paths.

The Bun acquisition blows a hole in that story. Here’s a team whose project was open source, whose most active contributor was an AI agent, whose code Anthropic legally could have copied overnight. No negotiations. No equity. No retention packages. Anthropic still fought competitors for the right to buy that group.

Publicly, AI companies talk like engineering is being automated away. Privately, they deploy millions of dollars to acquire engineers who already work with AI at full tilt. That contradiction is not a PR mistake. It is a signal.

The key constraint is obvious once you say it out loud. The bottleneck isn’t code production; it is judgment. Anthropic’s own announcement barely talked about Bun’s existing codebase. It praised the team’s ability to rethink the JavaScript toolchain “from first principles.” That’s investor-speak for: we’re paying for how these people think, what they choose not to build, which tradeoffs they make under pressure. They didn’t buy a pile of code. They bought a track record of correct calls in a complex, fast-moving domain.

AI drastically increases the volume of code you can generate. It does almost nothing to increase your supply of people who know which ten lines matter, which pull request should never ship, and which “clever” optimization will explode your latency or your reliability six months from now.

So when Anthropic’s own AI tops the contribution charts and they still decide the scarce asset is the human team, pay attention. That’s revealed preference. Leaders don’t express their true beliefs in blog posts or conference quotes. They express them in hiring plans, acquisition targets, and compensation bands. If you want to understand what AI companies actually believe about engineering, follow the cap table, not the keynote.

So what do you do with this as a technical leader? Stop using AI as an excuse to devalue your best knowledge workers. Use it to give them more leverage.

Treat AI as force multiplication for your highest-judgment people: the ones who can design systems, navigate ambiguity, shape strategy, and smell risk before it hits. They’ll use AI to move faster, explore more options, and harden their decisions with better data.

Double down on developing judgment, not just syntax speed: architecture, performance modeling, incident response, security thinking, operational literacy. These are the skills Anthropic implicitly paid for when it bought a team famous for rethinking the stack, not just writing another bundler.

Be careful about starving your junior pipeline based on “coding is over” narratives. As AI pushes routine work down, the gap between senior and everyone else widens. Companies that maintain a healthy apprenticeship ladder will own the next generation of high-judgment engineers while everyone else hunts the same shrinking senior pool at auction.

Most important: calibrate your strategy to revealed preferences, not marketing copy. When someone’s AI writes more code than their engineers but they still pay millions for the engineers, believe the transaction, not the tweet.

Robert Greiner 2 months ago

The Most Expensive Wall in Software

Palantir didn’t have a working product for the first several years. What they had were brilliant engineers building custom solutions on customer sites. Somehow that “broken” model made them worth nearly $500 billion. The company that couldn’t ship software became one of the most valuable enterprise platforms of the decade by doing the one thing every engineering VP tries to prevent: sending their best people to live with customers instead of letting them write code in peace.

We treat that story like a Palantir quirk, some weird exception from a weird company. It isn’t. It’s a preview. The “Forward Deployed Engineer” sounds like a new job title. It’s not. It marks the moment a company admits that the wall between “building software” and “understanding the problem” has been a very expensive illusion.

For years, we’ve optimized for engineer “focus.” Noise-canceling headphones. Dark mode. A Jira board that shields them from anything messy or human. No customer calls. No sales drama. Just tickets. We assumed fewer distractions meant more productivity. We never questioned whether we were removing the input they actually needed.

An engineer told to “build an executive dashboard” isn’t doing product work. They’re playing telephone. One person heard it from sales. Sales heard it from a VP. The VP heard it from a board slide. By the time the engineer sees the ticket, the real problem is unrecognizable. So they do what they’re paid to do. They make boxes and charts. The execs shrug. Nobody is thrilled. The engineer walks away a little more convinced they should just be left alone to code. We call that a personality type. Usually, it’s an organizational symptom.

Forward Deployed Engineers flip the script. Same brains. Same editor. Different raw material. Instead of sitting behind a backlog, they sit inside the customer’s day. Three or four days a week on-site, watching how analysts fudge CSVs, how operators bypass the tool, where people swear under their breath because the system “just doesn’t get it.” Then they fix it, right there, while the user is still at their keyboard. You don’t need a PRD when you’re watching someone copy-paste the same field into three different systems.

The last mile of business logic and “reasoning tokens” is where the moat lives. Those messy, tacit rules in a claims team or a supply chain desk are precisely what future AI systems will need to learn.

Think about what an FDE actually captures. Not just requirements. Not just bug reports. They’re watching the workarounds. The spreadsheet that Linda maintains because the system doesn’t handle edge cases. The mental model that a senior analyst has built over fifteen years that lets her spot a fraudulent claim in seconds. The tribal knowledge that exists only in the heads of people who’ve been doing the job long enough to know where the bodies are buried. That knowledge has always been valuable. It’s about to become essential.

Large language models are remarkable at general reasoning. They’re terrible at knowing that your company approves claims differently on the last day of the quarter, or that “rush order” means something completely different to the Chicago warehouse than it does to the one in Phoenix. The models don’t know that when a customer says “the usual,” they mean the configuration they’ve been using since 2019 that nobody documented. This is the knowledge gap that will separate AI that works from AI that sort of works. And FDEs are uniquely positioned to close it.

Every time an FDE watches someone work around the system, they’re documenting a gap in the model’s training data. Every time they build a quick fix for a specific customer workflow, they’re encoding business logic that no foundation model will ever learn from public data. They’re not just smoothing sales cycles. They’re harvesting structured insight from chaos.

If you see FDEs as revenue padding, you’ll treat them like overpaid sales engineers. If you see them as a data acquisition engine for your AI future, you’ll treat them like your most strategic asset—and that recognition rewrites the org chart.

Product management starts to hollow out. When engineers have direct customer relationships and live context, you don’t need as many people rewriting customer pain into Jira poetry. Some PMs evolve into true strategists, synthesizing markets, pricing, portfolio bets. Others, whose job was “talk to customers, then make tickets,” find there’s no seat left.

Compensation models buckle. What do you pay the engineer who rewired a deployment on-site and saved a $5 million deal from churning? Base salary plus… a sales commission? A spot bonus? Equity? No spreadsheet handles “closed the deal and wrote the patch.”

Career ladders fork. The old path (IC, senior, staff, principal) assumed “deeper into the code” was the only axis. FDEs create a second track: deep enough technically, but wide in context. They know the customer’s industry, regulatory mess, and how the CFO thinks. Both tracks are valuable. They will quietly compete for your best people.

The “coding is dead” crowd has this backwards. It’s not that engineers disappear, it’s that the walls between roles do. An FDE with AI workflows can do the job of the engineer, the solutions architect, the PM who translates requirements, and half the support team. They’re in the room, they understand the problem, and now they have tools that let them ship the fix before the meeting ends. The specialists who survive aren’t the ones who go deeper into one skill. They’re the ones who go wider across the problem—and use AI to cover the gaps.

You’re already paying for the gap between your engineers and reality. You pay for it in features nobody uses. In quarters-long roadmap resets. In “strategic pivots” that are really just corrections to bad guesses.

Joe Lonsdale said about Palantir’s early days: “We didn’t actually have a product that worked for the first several years. What we had were brilliant engineers who could quickly build solutions for specific customer problems.” That sounded like an admission. It was also their advantage. Every on-site hack became another puzzle piece of the eventual platform.

Not every company can station engineers at every client. But every company can lower the wall. Put engineers on sales calls. Rotate them through support. Let them watch users struggle without a PM running interference. Treat customer exposure as fuel for better code, not a distraction from it.

Palantir’s “broken” model turned out to be the only thing that wasn’t broken. They understood before everyone else that the distance between your engineers and reality is the most expensive line item on your P&L.

Robert Greiner 3 months ago

The Breaker Box Economy

During a summer blackout when I was a kid, a neighbor ran an orange extension cord across the street so our freezer wouldn’t thaw. It looked absurd: this thin line humming with borrowed power, keeping the lasagna alive. But it worked. In a pinch, you build your own grid.

OpenAI is doing the grown-up version. Not with cords, but contracts. They’re stringing a private power grid across rival utilities, locking in long-term “compute offtake” so the lights of their AI never flicker.

Look at the map they’ve drawn. They pried open their exclusivity with Microsoft, won the right to buy from any cloud, and immediately signed a seven-year, $38 billion deal with Amazon. Then came data center projects with Oracle, SoftBank, and sovereign partners in the Gulf—$500 billion through the Stargate Project. In parallel, they locked in chip supply with Nvidia, AMD, and Broadcom so the turbines behind the meter actually spin. None of this reads like a software roadmap. It reads like a utility prospectus.

For a decade, the cloud dictated terms. Everyone else took what they could get. Now the script flips. Hyperscalers become suppliers. The leading AI buyer aggregates their capacity. When OpenAI complained it couldn’t get enough compute from Microsoft alone, it wasn’t a feature request. It was a reliability concern.

The numbers tell you where the leverage moved. Last year, Amazon, Google, Meta, and Microsoft spent over $380 billion on infrastructure—more than the entire GDP of Finland, deployed in a single year by four companies. OpenAI, meanwhile, remains unprofitable. Yet they’re committing $38 billion to Amazon over seven years. That deal alone exceeds Ford’s entire market cap.

The traditional calculus would call this a bubble. The company with no profits dictating terms to the most valuable companies on earth. But that misreads what’s happening. OpenAI isn’t betting they’ll be profitable next quarter. They’re betting that guaranteed access to compute becomes the most valuable asset in technology. They’re securing supply before the shortage arrives.

This is what commodity markets look like when everyone realizes the same thing at once. In 2021, car manufacturers couldn’t build vehicles because they didn’t own chip fabrication. They got outbid by companies that did. Now imagine that dynamic, but with compute instead of semiconductors, and the stakes aren’t empty dealer lots. It’s whether your AI works at all.

The shift isn’t subtle. Strategy used to be about inventing a better model. Now it’s about financing a continent of capacity and keeping it fed. Risk used to be “does it work?” Now it’s “does it arrive on time?” Whoever aggregates demand across utilities starts to look less like a tenant and more like a grid operator.

Consider what that means for the next decade. The breakthrough that matters won’t necessarily be the cleverest algorithm. It will be who locked in supply at 2025 prices before the 2027 shortage. Who secured diversity so a single vendor’s outage doesn’t crater their service. Who convinced a sovereign wealth fund that compute infrastructure is as strategic as oil reserves.

In commodities, advantage compounds quietly. The steel mill that signed iron ore contracts before prices spiked doesn’t celebrate publicly. They just keep running while competitors idle. In AI, we’re approaching the same dynamic. The winners will be the ones who treated compute like the scarce resource it’s becoming, not like the abundant cloud capacity it used to be.

All this infrastructure raises a different question: what happens to the people? Recent analyses from the St. Louis Fed paint a more complex picture than the standard narrative. Occupations with higher AI exposure have experienced larger unemployment rate increases between 2022 and 2025. Computer and mathematical occupations, among the most AI-exposed at around 80%, saw some of the steepest unemployment rises. Meanwhile, blue-collar jobs and personal service roles, which have limited AI applicability, experienced relatively smaller increases.

But the infrastructure being built suggests different stakes than current conditions reveal. Some economists warn that if systems approach human-like general intelligence within years, wages and work could be jolted in ways our social safety nets weren’t designed to handle. The gap between today’s emerging patterns and tomorrow’s possible disruption is the same gap that existed between early subprime mortgage exposure and full-blown crisis. Not everyone sees the bridge until it’s crossed. The path forward depends less on what AI can do than on whether we invest in reskilling at the same rate we pour concrete for data centers.

Either way, the bottleneck won’t be ideas. It will be throughput. The premium shifts from model architecture to infrastructure literacy. For engineers, understanding how to optimize for constrained compute becomes more valuable than squeezing another point of accuracy. For companies, strategic advantage flows to those who secure capacity now, even at uncomfortable cost. Waiting for prices to drop assumes supply will meet demand. History suggests otherwise.

The boring bets may matter most. Not the sexiest model, but the companies with the longest runway of guaranteed compute. Not the flashiest demo, but the partnerships that ensure it keeps running under load. And for nations, compute dependency becomes a geopolitical wedge. Countries that built domestic chip fabs after recent shortages are now asking the same questions about AI infrastructure. The grid matters more than the code running on it.

We spent a decade believing software eats the world because it scaled like thought. Marginal costs near zero. Distribution instant. Barriers low. The next decade looks different. AI scales like energy: constrained by physical infrastructure, governed by supply contracts, and bottlenecked by whoever controls the flow. In that world, brilliance still matters. But the decisive move isn’t elegant. It’s securing the breaker box before the lights go out.

Robert Greiner 3 months ago

The Internet's Forgotten Superpower

When I was eight, my save button was a pencil. Not the controller. A pencil. And a scrap of paper. You’d finish a stage in Mega Man 2 and the game would show you a grid. Five rows of dots, each one either empty or filled. You’d copy it down dot by dot, turn off the NES, and come back days later. Enter that same pattern and your world reappeared. All eight robot masters. Every E-tank. Metal Man’s stage half-cleared. One small grid held your entire state. You expected it to work. You trusted it.

The web has had this same feature since 1991. We just stopped using it.

A colleague sends you a GitHub link. It doesn’t just open the file. It highlights lines 8 through 15, exactly where the bug lives. You land in the right place, conversation ready to start. Figma does the same. Click a teammate’s link and you’re on their canvas, same position, sometimes same object selected. Google Maps puts coordinates right in the URL. A pin isn’t just “coffee shop.” It’s precisely where you were looking. This isn’t innovation. It’s just the web working as designed.

Then React launched in 2013 and single-page applications became the default. The trade seemed worth it: instant updates, no flicker, that native-app feel. But the cost was steeper than anyone admitted. SPAs broke the browser’s most fundamental contract: refresh should restore, not destroy. The back button should remember. A link should mean something.

Instead, we got applications where your filters vanish on reload. Where sharing your screen means sending a link to a useless homepage, then giving verbal directions. Where analytics teams write custom JavaScript to manually fire events every time the URL changes. Except half the time the URL doesn’t change, because updating it is “extra work.” We built save systems that die in RAM, because it was easier than doing anything else.

Redux launched in 2015 and everyone copied the pattern. State lives in memory, managed by reducers. Tutorials taught this approach. Libraries assumed it. The entire ecosystem optimized around it. It worked until you hit refresh. Then tutorials would sheepishly mention you’d need to “rehydrate from the server,” like it was some minor detail. The URL sat there, a solved problem we chose to ignore.

Early React Router didn’t even consider the URL a first-class state container. It was decoration. The routing library itself didn’t believe routes should carry data. And nobody wanted to think about what belongs in a URL. Is it IDs? Filters? View modes? Sort order? The answer is “it depends,” which means you actually have to think about your application. It’s easier to dump everything in Redux and hope for the best.

To be fair, URL state isn’t trivial. Browsers limit URLs to around 2,000 characters. Try serializing a complex filter object and you’ll hit that ceiling fast. Put sensitive data in URLs and it leaks everywhere: server logs, browser history, analytics tools, shoulder surfers. Nested objects don’t serialize cleanly. Arrays of objects with their own nested arrays? Good luck making that readable. And if you naively push every state change to the URL, you pollute browser history until the back button becomes unusable.

These are real problems. But they’re solvable problems. And more importantly, they’re problems worth solving. The character limit matters for complex queries with dozens of filters. Most applications have three to five. IDs are short. Sort orders and view modes take a few characters. You’re not serializing your entire database. Sensitive data never belonged in URLs anyway. Authentication tokens go in cookies or headers. PII stays on the server. This isn’t a URL problem, it’s a security boundary you should already have. Complex objects? Most view state isn’t that complex. When it is, you can use short identifiers that reference server-side state. Stripe does this with their expandable API parameters. Linear does it with saved filters. History pollution? Use replaceState instead of pushState for transient updates. Problem solved in one line.

The complexity exists. But it’s manageable complexity. The kind engineers solve every day. We just decided it wasn’t worth the effort.

Durable, user-chosen facts belong in the URL. If someone set filters, they go in the URL. If they chose a view mode, it goes in the URL. If they navigated to a specific item, its ID goes in the URL. The test is simple: if someone shares this link, should they see the same thing? If yes, it belongs in the URL.

Google ships billions of search results. Every one is a URL with your query in it: google.com/search?q=url+state+management. Figma, Linear, Trello: every design, every issue, every card has an address. These aren’t clever hacks. They’re examples of what happens when you treat the URL as infrastructure instead of decoration.

We chased the native app feel and forgot why the web matters. Native apps can’t share state with a link. Can’t bookmark a screen. Can’t open three views in separate tabs. The web could do all of this by default. We broke it.

Single-page applications have real benefits. Speed. Smooth transitions. Reactive updates. But those benefits don’t require abandoning the URL as a state container. You can have instant updates and meaningful addresses. Smooth transitions and working back buttons. The reactive experience and shareable links. The frameworks that win long-term will be the ones that treat the URL as infrastructure. That make it easy to put state there. That default to meaningful addresses instead of treating them as decoration.

The URL has been waiting thirty years to be your save code. Every application that ignores it is one refresh away from losing your work. One shared link away from confusion. One back button away from frustration. Your eight-year-old self knew better. Drew that grid. Kept that scrap of paper. The web gave you something better. Stop building amnesia into your applications.
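The "durable, user-chosen facts" rule above is easy to mechanize. Here is a minimal sketch in plain JavaScript using the standard URLSearchParams API; the function names and the filter shape (`q`, `sort`, `tags`) are illustrative, not from any particular framework:

```javascript
// Illustrative sketch: keep shareable view state in the query string.
// encodeFilters/decodeFilters are hypothetical names, not a library API.

// Serialize only durable, user-chosen facts: filters, sort, view mode.
function encodeFilters(state) {
  const params = new URLSearchParams();
  if (state.q) params.set("q", state.q);
  if (state.sort) params.set("sort", state.sort);
  if (state.tags && state.tags.length) params.set("tags", state.tags.join(","));
  return params.toString();
}

// Rebuild the same view state from a shared or refreshed URL.
function decodeFilters(search) {
  const params = new URLSearchParams(search);
  return {
    q: params.get("q") || "",
    sort: params.get("sort") || "created",
    tags: params.get("tags") ? params.get("tags").split(",") : [],
  };
}

// In a browser, write transient updates with replaceState so the back
// button isn't polluted; reserve pushState for real navigation:
//   history.replaceState(null, "", "?" + encodeFilters(state));

const qs = encodeFilters({ q: "outage", sort: "updated", tags: ["p1", "infra"] });
console.log(qs); // → "q=outage&sort=updated&tags=p1%2Cinfra"
console.log(decodeFilters(qs));
```

The point of the round trip: anyone who receives `"?" + qs` reconstructs exactly the view the sender was looking at, which is the shareability test the post describes.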

Robert Greiner 4 months ago

The Experience Upload

Remember when Neo downloaded kung fu directly into his brain? “I know kung fu,” he said, eyes snapping open after seconds of upload. That scene from The Matrix doesn’t feel like science fiction anymore. It’s basically Tuesday for any junior analyst with ChatGPT.

Generative AI has become an experience accelerator that compresses decades of pattern recognition into an afternoon. A newcomer can synthesize 2,000 sales pitches, 500 project postmortems, and years of design reviews, then extract the winning templates. The apprenticeship model, where you shadow masters for years to absorb their judgment, is being disrupted by something closer to direct knowledge transfer. Stanford’s AI Index shows AI tools meaningfully narrow skill gaps, with the biggest performance gains flowing to less-experienced workers. Junior employees get the most lift because they’re gaining that “I’ve seen this before” intuition that used to require years in the trenches.

When everyone can download the standard moves, competitive advantage relocates entirely. The edge migrates from having patterns to selecting them. McKinsey’s research shows companies pulling real productivity gains from generative AI, particularly in content-heavy workflows that traditionally rewarded tenure over tempo. Templates are now abundant. Knowing when and how to deploy them remains scarce.

Writing a product spec? The structure is free. Any AI can provide the template. Choosing which trade-offs to make, what features to kill, whose pain to prioritize? That’s where advantage lives. The same dynamic plays everywhere: sales, research, design, strategy.

Pattern abundance breeds pattern addiction. The junior PM who deploys a perfect RICE framework without understanding why their particular product needs different criteria. The analyst who runs flawless regressions without asking if they’re measuring the right thing. Templates that look sophisticated crumble on contact with reality.

Think of expertise like a library card. For most of history, the card itself was precious. You earned it through years in the stacks, slowly building your catalog. Now the entire library rolls to your desk on command. The constraint shifted from access to selection.

AI can upload a hundred negotiation tactics into your working memory. Winning the negotiation still requires reading the room, sensing undercurrents, knowing when to break the pattern because this situation is different. Even in scientific research, models dramatically accelerate discovery, but validation and framing remain essentially human. When moves become free, judgment becomes the only scarcity that compounds. You don’t win by having seen everything. You win by truly seeing what’s in front of you and knowing which of those thousand downloaded patterns actually applies, if any.

Robert Greiner 4 months ago

The Three Infinity Stones That Can Erase Your Company

A reputation can be erased with three permissions most companies don’t control. Think of it like Thanos’s gauntlet, but the stones are Reddit Mod, SERP (Search Engine Result Page), and LLM (like ChatGPT). Slip that glove on and reality bends. Not because the truth changed, but because the places people trust to find the truth did.

Lars Lofgren documented a chilling case study: a coding bootcamp allegedly watching its reputation dissolve in slow motion. The pattern? Concerned posts pinned to the top. Defensive replies deleted as “too aggressive.” Critical threads climbing Google rankings while rebuttals disappeared. The moderator shaping the narrative? Someone who happened to run a competing program.

The beauty was in the restraint. No rants, no smoking guns. Just a steady drip of concerned questions that somehow never got answered. Alumni defending the program? Their comments would disappear. Too aggressive, the mod would explain, if anyone asked. Meanwhile, every “I heard some troubling things” post would stick around, accumulating upvotes and anxiety. Month by month, those threads crept up Google’s rankings until searching the bootcamp’s name meant swimming through doubt. Then the AI models started training on all that helpful Reddit content. Now when someone asks ChatGPT about coding bootcamps, guess whose concerns get recycled as conventional wisdom? The company’s revenue didn’t explode. It deflated, slowly, like air through a pinhole you can’t quite locate.

Whether or not the mod was misbehaving is irrelevant (to you); to survive and thrive, you must understand the physics at play. Here’s how the whole system works, stone by stone.

A rival grabs moderation of a niche subreddit that prospects actually read. The new mod doesn’t need to invent facts. Just nudge. Seed insinuations. Pin the “open questions.” Delete the boring defenses. The thread hardens into a narrative. It’s cheap, it scales, and it never tires. This can happen at the individual level, or through mob groupthink. We learned how much power unpaid moderators really have during the 2023 Reddit blackout, when a mod revolt flipped thousands of subreddits to private. Overnight, a volunteer class reminded a $10 billion platform who actually holds the keys. That same control can shape a market’s first impression of your company for years.

Google has been giving forums like Reddit prominent real estate because people want “real talk” from peers; highlighting Reddit and other forum threads in results is an explicit strategy. That means a single, active thread can sit next to your homepage for your own brand terms. Google didn’t conspire against you; it just turned the mic toward the crowd and turned up the volume on forums by design. Search is not a neutral index. It’s a curation engine with preferences, and right now it prefers the places where the narrative about you is being written by someone else.

Large language models eat the web, and the web now includes a lot of Reddit. Officially so. When OpenAI struck a deal to license Reddit content, it formalized what many already suspected: what rises in those threads will rise again in AI answers. The model doesn’t know your context or your competitor’s incentives. It knows frequency. It knows what looks “human.” And thanks to the illusory truth effect, we’re wired to believe what we hear repeatedly, even when we know better.

Put those three stones together and you get a feedback loop: a forum thread gains traction, search promotes it, models repeat it, and repetition hardens into credibility. If you’ve ever watched a good company’s name become a punchline inside a forum, you know the feeling. Sales calls get weird. Candidates ask sideways questions. Friends send “Saw this, you okay?” texts. You ship the same quality, but the room temperature drops five degrees. The failure mode isn’t a viral crisis. It’s a slow, durable slant that tilts the playing field just enough to make every win feel uphill. It’s cheaper to capture the gate than to storm the castle.

This isn’t a PR story. It’s an architecture story. Your brand now runs on an information supply chain you don’t control. Reality, for your buyers, is compiled. Mods decide what persists. Search decides what’s seen. Models decide what’s said back to them. So you design like it’s adversarial:

Map the surfaces where first impressions form. Which subreddits, forums, and Discords do your buyers actually read? Treat them like production systems, even when you don’t control them.

Watch for asymmetry. Healthy communities show variance: praise, critique, indifference. When every thread tilts one way, document patterns. You’re investigating a leak, not winning a debate.

Build upstream relationships. Invest quietly in connections with platform trust and safety teams. You’re not asking for special treatment. You’re asking for a path when the normal appeal ladder leads nowhere.

Create systems that survive ambient doubt. Build conviction in your team and your market that persists even when the narrative doesn’t. Own more of the conversation through credible third-party reviews, real user communities, and transparent metrics that travel beyond any one forum’s gravity.

In a world where those three stones can be snapped by someone else, the rare advantage is building systems (and teams) that stay accurate even when the narrative doesn’t. You can’t wish this away. You can only design around it, the way good engineers handle single points of failure: assume they exist, monitor them relentlessly, and make sure your fate doesn’t rest in the hands of whoever holds the stones.

Robert Greiner 5 months ago

The Server in the Closet

There's a specific kind of technology leader who has become endangered: the one who builds things. Not the kind who orchestrates vendors. Not the kind who manages integration roadmaps between Salesforce, HubSpot, and whatever AI wrapper launched last week. I'm talking about the CTO who looks at a problem and thinks, "We should own this."

David Heinemeier Hansson (DHH to most) is one of these people. When his company 37signals wanted to build an innovative email product, he didn't start by evaluating Gmail API limits or building a wrapper on top of existing email platforms. He built an email server. From scratch. His reasoning was elegantly simple: "If you want to do interesting things with email, you have to own the email server."

This sounds almost quaint in 2025, doesn't it? Like someone suggesting you raise your own chickens instead of buying eggs. But here's what happened: 37signals pulled their entire infrastructure off AWS. They spent $700,000 on Dell servers (hardware you can actually touch) and saved $2 million in their first year. Over five years, they'll save more than $10 million. Their operations team didn't grow. Their product didn't slow down. They just stopped renting what they could own. The math is almost offensive: a $350 consumer-grade mini PC provides the same computing power as $1,200 per month on Heroku. The cloud markup isn't a service fee. It's a tax on not thinking.

Walk into most tech companies today and you'll find an elaborate performance I call "integration theater." Everyone's running Salesforce for CRM. HubSpot for marketing. AWS for infrastructure. OpenAI's API for their "proprietary AI." Snowflake for analytics. The technology stack looks identical to their competitors' because, well, it is. They bought it from the same catalog. Then everyone sits around conference tables wondering why they have no competitive advantage. The delusion is that excellence comes from picking the right items off the menu. It doesn't. It comes from owning the kitchen.

Netflix's recommendation algorithm drives 80% of viewing time and saves roughly $1 billion annually in reduced churn. Amazon's recommendation system generates 35% of the company's revenue. TikTok's algorithm is valued at over $100 billion (more than most Fortune 500 companies are worth in their entirety). You can't rent that kind of advantage from a SaaS vendor. You have to build it.

AI is accelerating the commoditization crisis, and most companies are sleepwalking into it. Two years ago, having "AI-powered" anything was a differentiator. Today? There are companies whose entire business model is "ChatGPT with a nice interface." Industry analysts are openly asking: "Are you just an LLM wrapper? Because you're replaceable now." The models themselves are becoming commodities. LLaMA is open source. The cost difference between AI providers is basically compute pricing. If your "proprietary AI solution" is just an API call to OpenAI with some prompt engineering, your competitor can replicate your entire value proposition by Tuesday.

But AI is also the great equalizer for building. Five years ago, you needed a team of specialists to build custom infrastructure. Today, a talented engineer with Claude or Cursor can build in a weekend what used to take months. The barrier to creating proprietary technology is collapsing at the exact moment that renting commodity technology is becoming worthless. This is the inflection point. The companies that realize they can build are going to pull away from the companies that keep renting. Fast.

The SaaS vendors see this coming. Why do you think there are 702 CRM solutions on G2? Not because the world needs 702 ways to track customer data. Because CRM is so commoditized that differentiation is nearly impossible. Everyone's selling the same thing with different logos. The vendors are trapped in a feature-parity death spiral. You add a feature. Your competitor copies it in three weeks. Customers start choosing based on price. Margins compress. Everyone loses except the customer, who still doesn't have a competitive advantage because everyone else bought the same stuff.

Meanwhile, companies like 37signals are running on servers they bought six years ago. Still humming. Still paid off. Still creating compounding advantages. The winners in the next decade will be companies that wake up and ask: "What are we paying millions for annually that we could own for a fraction of that?" The answer is usually shocking. The losers will be the SaaS vendors who can't answer why anyone should keep paying them when AI makes building so much easier. Watch what happens when a CFO realizes their $2 million annual Salesforce bill could be a one-time $500K custom build that does exactly what they need (and nothing they don't).

Building sucks for the first year. Maybe two years. It's slower. It's harder. You'll have bugs the SaaS vendor already fixed. But ask a different question: What would you have if you'd spent five years building things only you have? The answer is the only real moat that exists in 2025: proprietary technology so specific to your business that competitors can't buy it, can't rent it, and can't replicate it without years of their own work.

Modern technology leadership has forgotten patient capital. We think in quarters. In sprints. In OKRs that reset annually. Making a decision that pays off in year four feels almost irresponsible. But year four is exactly where competitive advantage lives. When 37signals bought those servers, they bought them. Past tense. Done. The servers keep running. The savings compound. The knowledge compounds. Meanwhile, that AWS bill would arrive every month, forever, growing as usage grew.

Could you explain to your board what technology you own that competitors don't? If the answer involves "our unique Salesforce configuration" or "our sophisticated integration layer," you don't own anything.
You're renting shelf space in someone else's store. Real ownership sounds different: "We built our own recommendation engine because Algolia couldn't do real-time personalization at our scale, and now our conversion rates are 40% higher than category average." Or: "We built our own data pipeline because we needed sub-second latency, and that's why we can offer same-day delivery when competitors take three days." This isn't romanticism about building everything yourself. Buy commodity stuff. Buy your email service and your calendar and your video conferencing. Buy anything where being different doesn't matter. But when something is core to how you deliver value? When it's the reason customers choose you? Own it. Build it. Make it yours. The leaders who get this (who can code, who understand infrastructure, who think in five-year horizons while everyone else thinks in quarters) are going to build companies that are genuinely difficult to compete with. The ones managing vendor relationships are going to wake up one day and realize they're running the same company as everyone else, just with a different logo on top. The question isn't whether you can afford to build. It's whether you can afford to keep renting.
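The buy-versus-rent arithmetic in this piece is worth making explicit. A minimal sketch, assuming a $700K up-front hardware spend and a gross annual saving of about $2.7M back-solved from the article's "$2M in year one" and "$10M+ over five years" figures; these are illustrative numbers, not 37signals' actual books:

```python
def cumulative_savings(hardware_cost: int, gross_annual_saving: int, years: int) -> list[int]:
    """Net savings after each year of owning hardware instead of renting cloud.

    gross_annual_saving = (old annual cloud bill) - (annual cost of running
    the owned hardware); the up-front purchase is recouped out of year one.
    """
    return [year * gross_annual_saving - hardware_cost for year in range(1, years + 1)]

# Back-solved illustrative figures: $700K of servers, ~$2.7M/yr gross saving.
print(cumulative_savings(700_000, 2_700_000, 5))
# [2000000, 4700000, 7400000, 10100000, 12800000]
# -> roughly $2M net in year one, well over $10M by year five.
```

The shape of the curve matters more than the exact figures: capital spend is a one-time dip, and everything after it compounds in your favor.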

Robert Greiner 5 months ago

Tools Create Capacity, Workflows Create Value

A team installs AI coding assistants. Engineers report feeling 40% faster. Ship dates don't move. A lab buys pipetting robots. Throughput jumps 3x per station. Projects still run late. A finance team automates their models. Analysts save hours daily. Deal flow stays flat. The pattern is so consistent it's almost boring: tools create capacity, but capacity without workflows dissipates. The energy has nowhere to go, so it converts to higher standards, deeper analysis, or wider scope - anything except the acceleration we expected.

When factories first installed electric motors in the 1890s, productivity barely budged for 30 years. Factory owners simply swapped steam engines for electric ones, keeping the same line-shaft layouts designed around a central power source. Real gains only came when they redesigned entire floors around distributed power - small motors at each machine, workflows rebuilt from scratch. As Paul David's analysis of the "productivity paradox" shows, electrification's value came from workflow reorganization, not the technology itself.

Toyota understood this. Their production system isn't about robots, it's about standardized work that makes problems visible and response immediate. Andon cords, just-in-time delivery, continuous flow. The same equipment in different plants produces wildly different results because the choreography matters more than the hardware.

Fred Brooks saw it in software decades ago. In The Mythical Man-Month, he explained why adding developers to a late project makes it later: coordination overhead grows faster than individual productivity. AI coding assistants shift this bottleneck rather than eliminating it - from writing code to reviewing, integrating, and deciding what to build. The physics are simple: work flows through systems at the rate of the slowest constraint. Speed up one step without addressing the constraint, and you've just created slack that the system will absorb in unexpected ways.
Value appears when organizations make explicit decisions about how to channel new capacity:

- Speed: Cut scope to maintain quality while shipping faster
- Quality: Keep timelines but raise standards with the extra capacity
- Cost: Maintain output with smaller teams
- Scope: Do more without changing timelines or headcount

Without this explicit choice - encoded in workflows, metrics, and incentives - the system makes its own choice, usually defaulting to quality creep or scope expansion. The senior engineer with AI assistance doesn't ship faster; they refactor more elegantly. The analyst with automated data gathering doesn't close more deals; they build more scenarios with more advanced models. Systems naturally expand to consume available resources unless specifically constrained otherwise.

This creates a paradox: the better the tool, the less visible its impact. A mediocre tool that requires workflow changes often delivers more value than a brilliant tool that slots into existing processes. The disruption forces the reorganization that captures the value. But most organizations resist this disruption. They want the gain without the pain, the acceleration without the reorganization. Three forces ensure they rarely get it:

- Incentive inertia: We measure what we've always measured, which drives behavior toward old patterns even with new tools. A coding team measured on features delivered won't naturally convert AI-generated time savings into faster delivery... they'll add features.
- Hidden coordination costs: Most work involves handoffs, reviews, approvals, and synchronization. These costs often dominate individual task time. Making individuals faster can actually make coordination harder if everyone moves at different speeds.
- Workflow lock-in: Existing workflows encode years of tacit knowledge about what works. Changing tools is easy; changing deeply embedded routines is hard. The quick experiment with a new AI tool succeeds; the systemic transformation required to capture its value takes quarters or years.

Not every tool needs workflow change. Calculators, spell-checkers, and search engines delivered immediate value without reorganization. The difference? They accelerate truly atomic tasks with clear inputs and outputs, no coordination requirements, and immediate feedback loops. But as tools move from accelerating tasks to augmenting decisions - from "check this spelling" to "draft this strategy" - workflow integration becomes essential. The more complex the task, the more it depends on context, coordination, and downstream processes.

As AI tools proliferate, competitive advantage shifts from having the tools to having the workflows that exploit them. The race isn't for the best model; it's for the best integration. Your AI initiative will probably disappoint not because the technology fails, but because workflows don't change. The pilot will amaze, the rollout will underwhelm, and everyone will blame the tool. The fix isn't better tools - it's better workflows. Find your real constraint. Design processes that assume the new capacity. Align metrics with intended outcomes. Make the new way easier than the old way.

Most organizations are sitting on 30-40% latent capacity from tools they've already deployed. They don't need more tools. They need workflows that channel the capacity they've created. The next time someone shows you an amazing demo, ask: "What workflow changes does this assume?" If the answer is "none," you're looking at expensive slack, not transformation. Tools are just potential energy. Workflows are what make it kinetic.
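Fred Brooks's coordination-overhead point is just combinatorics: a team of n people has n(n-1)/2 potential communication channels, so coordination load grows quadratically while raw capacity grows only linearly. A quick sketch:

```python
def pairwise_channels(team_size: int) -> int:
    """Brooks's observation: n people imply n*(n-1)/2 pairwise channels."""
    return team_size * (team_size - 1) // 2

for n in (3, 5, 10, 20):
    print(f"{n:>2} people -> {pairwise_channels(n):>3} channels")
# 3 -> 3, 5 -> 10, 10 -> 45, 20 -> 190: headcount grows ~7x,
# coordination surface grows ~63x. The slack has to go somewhere.
```

This is why speeding up individual contributors, with AI or anything else, tends to shift the bottleneck into the channels rather than remove it.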

Robert Greiner 5 months ago

The Age of Citation

Watch someone use ChatGPT to research. They type, wait, skim, and act. No scrolling endless search results. Just a verdict, a decision, and a click. That behavior is why Answer Engine Optimization (AEO) exists - the craft of showing up in the synthesis, not ranked on a page. Google trained us to climb to number one on a page of blue links. Now the "page" is a paragraph stitched from everywhere. The engine isn't choosing a winner; it's cross-checking a chorus. In that world, the most-mentioned brand beats the top-ranked page. Visibility shifts from a trophy on one site to a probability across many surfaces.

You can see the new values whenever an answer engine shows its cards. Perplexity, by design, cites multiple sources and synthesizes across the web. OpenAI's SearchGPT puts sources inside answers, optimizing for corroboration instead of a lone authority. If you appear in five distinct citations across different websites, videos, and docs, you get pulled into the story.

We miss this because the old game felt clean. One Search Engine Result Page (SERP). One keyword. One winner. But the retrieval stack changed. These models are pattern matchers with trust issues. They don't want a single page screaming authority; they want independent witnesses who agree. "Most-mentioned" isn't about volume for its own sake. It's breadth of corroboration. Mention velocity over rank.

This is why a two-paragraph Reddit comment can move more revenue than a 10,000-word pillar page with great SEO. Not because brevity is magic, but because Reddit is where the conversation is happening - and the engines are wired to listen. Reddit inked data deals to feed real-time content into assistants, including a partnership with OpenAI. The model prioritizes living dialogue. A short, honest answer in a thread about "Which espresso machine under $500 can I use without waking up my family in the morning?" can reverberate across the internet, showing up in AI-powered search windows as a trusted recommendation. That's not supposed to beat domain authority. Yet it does.

Here's the uncomfortable part: this shift collapses the moat incumbents thought they had. A brand-new startup mentioned by actual users in a few credible places can show up in answers next week. The old gate - years of invested link building - lost leverage when the interface began preferring fresh corroboration. I've watched unknown names slip into AI summaries overnight because they were present where models cross-check: a YouTube explainer, a help page that reads like a real fix, a handful of community threads. The bar moved from "accumulate PageRank" to "earn believable mentions." That's a different company muscle.

It also flips where the highest-ROI content lives. Ask an LLM a long, messy question and listen to it breathe: "How do I connect Product X to Workflow Y under constraint Z for a team with policy Q?" That's not a keyword... it's a paragraph. Your help center is a gold mine because it answers the exact multi-clause questions assistants get. People arrive with intent; assistants surface pages that look like fixes, not funnels.

There's a trap here, and teams are already falling into it. If synthesis is the currency, why not flood the web with AI-generated pages and force your way into the chorus? Because the models are learning to ignore their own echoes. Train on synthetic output long enough and you get model collapse: the system drifts toward its own errors and forgets rare, true signals. Platforms don't want that. Retrieval pipelines are getting more sensitive to provenance, originality, and human fingerprints.
The AI mirror maze looks productive until you notice most of what you’re producing never gets cited - and worse, it makes the real you harder to trust. None of this means SEO is dead. It’s upstream of the answer now. Your site feeds the synthesizer, not the other way around. Your best work will be cited, paraphrased, and delivered without a click. That’s scary if your model is “capture the session.” It’s liberating if your model is “win the decision.” If the assistant makes the choice and you’re in the synthesis, you win. The internet spent two decades teaching us to chase rank. The next decade rewards citation share. The playbook is simpler than it sounds: be the most-cited truth about the problem you exist to solve. Earn mentions that look like reality. Put your expertise where the model listens. Avoid the mirror maze. In answer-first interfaces, the spotlight doesn’t land on a single podium. It sweeps the room until the story feels true. Be in that story, or be invisible.
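One way to make "breadth of corroboration" concrete is a toy score - my own illustration, not any engine's actual ranking function - that counts distinct surfaces mentioning a brand instead of raw mention volume:

```python
from collections import defaultdict

def corroboration_score(mentions: list[tuple[str, str]]) -> dict[str, int]:
    """Score each brand by the number of *distinct* surfaces citing it,
    not by how many times it is mentioned in total."""
    surfaces: dict[str, set[str]] = defaultdict(set)
    for brand, surface in mentions:
        surfaces[brand].add(surface)
    return {brand: len(seen) for brand, seen in surfaces.items()}

# Hypothetical data: one brand earns five independent mentions; the other
# publishes fifty pages, all on its own domain.
mentions = [("AcmeEspresso", s) for s in
            ("reddit", "youtube", "help-docs", "forum", "review-blog")]
mentions += [("BigBrand", "bigbrand.com")] * 50
print(corroboration_score(mentions))  # {'AcmeEspresso': 5, 'BigBrand': 1}
```

Under a metric like this, fifty self-hosted pages still count as one witness - which is the essay's point about independent corroboration beating volume.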

Robert Greiner 5 months ago

Win the Default, Win the Decade

The most expensive real estate in the world isn’t oceanfront - it’s the default button. Google reportedly paid Apple $18–$20 billion to be the default search on Safari - not the best search, the one that shows up without a thought. That price isn’t about features. It’s about gravity: the path everything takes when no one is pushing. This matters because most teams still try to win by arguing, branding, or persuading. Meanwhile, the winners are quietly grading the slope so the flow moves their way. Control the slope and you don’t have to shout; you just have to be there when the decision makes itself. We’ve seen this before. A tiny policy tweak that changes nothing but the starting point can change everything about the outcome. Automatic enrollment boosts 401(k) participation by entire workforces - not by inspiring them with retirement sermons, but by making saving the path of least resistance. When countries switch organ donation to opt-out instead of opt-in, consent rates leap to above 90%. When Apple turned off third-party tracking by default, ad platforms didn’t adjust their pitches; they bled. Different domains, same pattern: defaults quietly bend behavior. We keep treating the world like a debate club. It’s closer to a river. Rivers don’t negotiate with rocks; they follow the smallest gradient and start carving. The Grand Canyon didn’t appear because the Colorado River was persuasive. Water followed the easiest route, and the route compounded itself: flow deepened the channel, a deeper channel increased speed, and speed accelerated erosion. The flow creates the canyon that then dictates the flow. Products, policies, and markets work the same way. The riverbed is the default. The flow is human behavior. Every click you remove, every field you pre-fill, every setting you make the starting point is a millimeter off the riverbank - but across years, it’s a canyon. Here’s the important nuance: the riverbed doesn’t need to be perfect; it only needs to be preferred. 
Users are satisficers. They don’t climb hills for tiny gains; they follow the slope that’s already downhill. A merely OK experience on a well-graded slope beats a great experience you have to hike to. This is why the AI wars won’t be won by the smartest model. They’ll be won by whoever becomes the default layer between you and everything else - and can actually deliver. Microsoft Copilot looks like it should have already won. It’s embedded in Office, connected to your SharePoint, reading your emails, summarizing your Teams meetings. It’s the perfect default—pre-installed, pre-integrated, pre-authorized. The riverbed couldn’t be better graded. But defaults only work if the water actually flows. Copilot shows how even unmatched distribution can backfire if the product misses the minimum bar of usefulness. If every summary misses the point, if every SharePoint search returns nonsense, if the AI can’t actually help with the work… people will climb out of the canyon. They’ll copy-paste into ChatGPT. They’ll try Claude. They’ll find their own rivers. That’s the paradox: the default position is priceless, but only if it’s good enough to keep people in the channel. Google Search wasn’t perfect; it was just good enough that climbing out felt pointless. Copilot risks teaching millions of enterprise users the opposite lesson: that the default can be worse than nothing. Microsoft owns the most valuable real estate in enterprise AI: every Office toolbar, every Teams window—but they’re fumbling the handoff. The channel is perfect, but the water won’t flow. Which means the throne is still empty. The next trillion-dollar company won’t just become the AI default - they’ll be the first one good enough to keep it. The riverbed is ready. We’re just waiting for water worth flowing.

Robert Greiner 6 months ago

Mise en Place for AI Teams

In every great kitchen, speed and consistency don't come from more gadgets. They come from mise en place — the small, disciplined set of knives, pans, and staples laid out the same way, every time. I learned this watching a chef glide through a Friday dinner rush with the grace of a violinist. Her secret wasn't molecular gastronomy equipment; it was a ruthless commitment to the few tools she trusted and the rituals that made them predictable. Most AI teams are trying to cook a grilled cheese with an immersion circulator and liquid nitrogen. They wrap their models in agents, their agents in orchestrators, their orchestrators in monitoring, and their monitoring in more dashboards with sprawling rule files. The result isn't better food - it's longer prep times, more points of failure, regression defect tickets, technical debt, and confused line cooks. The complexity tax doesn't disappear; it lands squarely on your people. Your AI kitchen doesn't need more stations. It needs a mise en place. A simplicity-first AI workflow should look like a chef's tool roll: a small set of robust, purpose-built tools with conversational prompting as the default UX, and light guardrails for structure and safety. Pair this with some internal training and practice, and you have a recipe for reduced operational drag and increased developer output. This is not Luddism; it's throughput. The evidence is clear: minimal stacks avoid heavy abstractions, trim maintenance, and let teams iterate faster on real product problems instead of fighting orchestration glue code. You can see this ethos in the rise of terminal-native tools like Aider, Claude Code, and Warp, which make AI pair programming productive without layers of framework ceremony. The best tech stacks privilege simplicity and maintainability over breadth of tools. Light guardrails are your recipe card, not a second chef, or a set of expensive, hard-to-maintain tools. 
Use structured output with JSON Schema or Pydantic models to get reliable shapes from conversational prompting. Keep rules situational. Engineers in the trenches are documenting approaches that lean on small, explicit constraints rather than sprawling instruction documents that models won't consistently honor. Think of it as a plating ring, not a sous-vide rig for toast.

The human cost of over-orchestration is subtle and corrosive. You get more onboarding time, more brittle assumptions, and a constant hum of "how does this thing actually work?" The pathologies are familiar: dashboards nobody trusts, playbooks no one reads, and incident write-ups that end with "framework edge case." Engineers who wanted to build product now spend their mornings deciphering an agent graph. That's culture drift. In other words, agent frameworks solve a class of problems. But adopting them too early is like installing a salamander broiler to toast your bread: impressive, expensive, and unnecessary.

- The tool roll: Pick two or three CLI-first interfaces (e.g., Claude Code or Cursor) and make them the default workflow. Document a 10-minute quickstart. Show how CLI-first, minimal stacks drive speed and reduce overhead.
- The house prompt: Standardize a short, single-page prompting style guide and a few tested templates. No 20-page rule tomes. Think index card, not encyclopedia.
- The guardrail: Enforce structured output, plus a tiny set of situational rules for safety and correctness. Lightweight, structured guardrails deliver reliability without orchestration bloat.
- The escalation: Create one "when to reach for agents/graphs" checklist. Require a written justification tied to product complexity and expected ROI. Frameworks add power and overhead—use them when the problem demands it, not before.

Jiro Dreams of Sushi is not a movie about fish. It's about the compounding returns of mastering a small number of moves. AI development is heading the same way.
Your team doesn't need more stations; it needs a sharper knife and the discipline to put it in the same place, every day. Build your AI mise en place: small, robust CLI tools, conversational prompting, and light guardrails. Stop shifting the complexity tax onto your team. The fastest kitchen in town is the one that knows where everything goes.
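The guardrail idea - enforce structured output so model replies have a reliable shape - can be sketched without any framework at all. The post suggests JSON Schema or Pydantic; this stdlib-only version uses a dataclass, and the Review fields are invented for illustration:

```python
import json
from dataclasses import dataclass, fields

@dataclass
class Review:
    """Agreed shape for a model's code-review reply (fields are illustrative)."""
    verdict: str   # "approve" or "request_changes"
    risk: int      # 1 (low) .. 5 (high)
    summary: str

def parse_review(raw: str) -> Review:
    """Reject any model reply that isn't exactly the agreed JSON shape."""
    data = json.loads(raw)
    expected = {f.name for f in fields(Review)}
    if set(data) != expected:
        raise ValueError(f"wrong keys: got {sorted(data)}, want {sorted(expected)}")
    review = Review(**data)
    if review.verdict not in ("approve", "request_changes"):
        raise ValueError(f"bad verdict: {review.verdict!r}")
    if not isinstance(review.risk, int) or not 1 <= review.risk <= 5:
        raise ValueError(f"bad risk: {review.risk!r}")
    return review

reply = '{"verdict": "request_changes", "risk": 4, "summary": "N+1 query in the loop"}'
print(parse_review(reply).risk)  # 4
```

In practice you would reach for Pydantic, or a provider's native structured-output mode, for the same effect with less ceremony. The point stands either way: the guardrail is a few dozen lines, not an orchestration framework.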

Robert Greiner 6 months ago

AI Belongs in Your Dev Pipeline, Not Your Product

A few months ago, a product lead at a mid-market SaaS company told me about her team's long, expensive slog to launch an "AI-powered" dashboard. They spent months wrangling data, tuning models, and building a feature that would predict churn and surface insights. The result? A widget that looked impressive in demos, but rarely changed what users actually did. Meanwhile, her backlog ballooned. Customers wanted core features faster, bugs fixed sooner, and the UI modernized. But most of her engineers were busy babysitting the AI "add-on."

This pattern repeats across the industry. The last few years have been about embedding AI into existing products, hoping to sprinkle on some magic and capture some hype revenue. But what if that's missing the point entirely? What if the future of software development isn't about making products "smarter," but about using AI to build faster, with less? What if the real unlock is not the features AI adds for users, but the time it gives back to builders?

Everyone wants AI in their app: at least, that's what the headlines and investor decks say. Add an AI button, a chatbot, some "insights," and you're future-proof, right? The reality is more sobering. Most companies struggle to integrate meaningful AI features, and even when they do, the user impact is often marginal. Meanwhile, development teams are drowning in technical debt, slow release cycles, and growing feature requests. In the real world, we're fighting a mountain of legacy technical debt and complexity in applications that are already deployed.

The real story is that AI's biggest impact so far isn't in the product - it's in the process. Recent data shows that AI accelerates software development by up to 50%, with teams reporting 70% better bug detection and resolution. AI-driven automation in CI/CD pipelines enables 2.5 times more frequent deployments, slicing feedback loops and release times from weeks to days. This isn't about smarter apps; it's about faster, better builders.
Tools like GitHub Copilot, Cursor, and Claude Code have become co-developers instead of just autocomplete on steroids. They turn requirements or code stubs into working modules, automate boilerplate, refactor, and catch bugs before code reviews even start. The effect is cumulative: not only are you writing code faster, but you're avoiding entire classes of human error, and spending more time designing features that actually matter. As case studies show, this shift shaves months off traditional timelines and lets smaller teams punch above their weight. Organizations that learn to mitigate the downsides of AI-powered development workflows (like compounding technical debt, polluted codebases, and hallucinated data) are shipping real, unsexy features faster than their competitors.

The old model was "add AI to the product." The new model is "let AI build the product." This is not a subtle shift. In 2024, 75% of companies applied AI directly to their development workflows, not just as user-facing features. Over half cited task automation as the top reason, with code optimization, diagnostics, and testing close behind. But the real inflection point is the emergence of AI-native development platforms and autonomous agents. Microsoft, IBM, and dozens of startups are building environments where AI isn't an accessory but the primary tool. These platforms offer advanced code generation, real-time bug fixing, and multistep workflow automation. The most ambitious teams deploy autonomous agents that monitor live applications, optimize code, and fix bugs without human intervention.

Why does this matter? Because the bottleneck in software is rarely the absence of new ideas. It's the time and cost to ship, adapt, and maintain those ideas. As AI-native platforms automate everything from requirement gathering to documentation, development becomes less about brute force and more about orchestration. The result: companies can build, iterate, and respond to the market with a fraction of the traditional headcount.

If AI is so capable, why not let it run the whole show? This is where the narrative gets more nuanced. AI excels at automating the repeatable, the tedious, the knowable. But it struggles with ambiguity, context, and the kind of judgment that shapes product vision. The most advanced tools still require skilled engineers to architect solutions, integrate data, and make strategic decisions. There are also open questions about where to draw the line between AI autonomy and human oversight. Reliability, security, and ethical use are not solved problems. And as AI becomes more specialized and powerful, the risk of subtle bugs or unintended consequences rises. So yes, AI doubles your speed, but it still needs humans to choose the direction.

Still, this is not a limitation - it's an invitation to focus human talent where it matters. Imagine a world where your smartest engineers spend their time designing architectures, exploring new business models, or engaging customers, while AI sweeps away the friction of routine coding and deployment. There's a counter-argument that says, "Won't every competitor have access to the same AI tools, erasing any advantage?" But this misses the point. The advantage isn't the tool - it's the leverage. The companies that win will be those that treat AI as a multiplier for their unique talent and strategy, not as a checklist item for investors or a shiny add-on for users. AI unlocks faster adaptation to market changes, more frequent releases, and higher customer satisfaction because the teams using it can out-iterate, out-learn, and out-deliver their rivals. In the same way that the assembly line transformed manufacturing, AI is transforming software by making scale and speed the default, not the exception.

Shift budget and talent from AI features to AI-native development platforms.
Invest in tools that automate your build, test, and deploy cycles. If you’re still treating AI like a product differentiator, you’re a step behind. Free your engineers from boilerplate and bug-chasing. Let them focus on system design, product strategy, and customer engagement. Use AI to automate everything else that can be automated. Adopt deployment frequency, lead time, and customer responsiveness as your north stars. If your build times, release velocity, and feedback loops aren’t at least twice as fast as three years ago, you’re leaving leverage on the table. The biggest opportunities in software aren’t about what AI puts in the hands of users. They’re about what AI puts in the hands of builders. Companies that keep treating AI as a feature risk missing the real story: AI is the new factory floor. It’s how you build faster, cheaper, and smarter with the same or fewer people. In the end, AI isn’t the “smart” feature your customers are waiting for. It’s the silent partner that lets you deliver the features they actually want, twice as fast. The future belongs to those who stop asking, “How do I add AI to my product?” and start asking, “How do I let AI build my product for me?” The difference, as always, is speed. And speed, in software, wins.
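The closing advice - adopt deployment frequency and lead time as north stars - doesn't require a vendor dashboard. A minimal sketch, assuming you can export (commit_time, deploy_time) pairs from your pipeline; the timestamps below are made up:

```python
from datetime import datetime, timedelta
from statistics import median

def lead_times(changes: list[tuple[datetime, datetime]]) -> list[timedelta]:
    """Lead time for changes: commit timestamp -> production deploy timestamp."""
    return [deployed - committed for committed, deployed in changes]

def deploys_per_week(changes: list, weeks: float) -> float:
    """Deployment frequency over the observed window."""
    return len(changes) / weeks

# Toy export: four changes over two weeks.
changes = [
    (datetime(2025, 1, 6, 9),   datetime(2025, 1, 6, 17)),   # 8h
    (datetime(2025, 1, 8, 10),  datetime(2025, 1, 9, 12)),   # 26h
    (datetime(2025, 1, 13, 14), datetime(2025, 1, 14, 9)),   # 19h
    (datetime(2025, 1, 15, 8),  datetime(2025, 1, 15, 18)),  # 10h
]
print("median lead time:", median(lead_times(changes)))  # 14:30:00
print("deploys/week:", deploys_per_week(changes, weeks=2))  # 2.0
```

Track the trend, not the snapshot: the essay's bar is that these numbers should be at least twice as good as they were three years ago.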

Robert Greiner 7 months ago

Why Your Enterprise AI Strategy Is Failing

Last week, I spoke with a CTO running technology for a $500M distribution company. He's a sharp executive with two decades of experience, overseeing custom-built systems moving twice as fast as the industry standard. They'd already deployed ChatGPT licenses, engaged an AI automation vendor on a six-figure deal, and their developers were using AI so effectively that they were shipping faster than ever. Then he dropped a statement that stopped me cold. Here was someone doing everything right, yet still failing at what mattered most: adoption.

After 18 months of enterprise AI consulting, I've identified three distinct leadership approaches. First, the Blockers: still debating the merits of allowing AI, falling exponentially behind. Second, the Panic Buyers: executives rushing to mandate AI adoption without a clear strategy. And finally, the Strategic Adopters: thoughtful leaders who meticulously map use cases, choose vendors carefully, and implement effectively - yet still often fail just as spectacularly as the Panic Buyers, just at higher costs.

Our CTO fit neatly into the third category, signing a "bargain" multiple-six-figure contract for an automation project. The plan was straightforward: automate document reconciliation for 10-12 million annual documents - a seemingly perfect AI use case. But there was one glaring issue: user adoption. According to McKinsey, 70% of digital transformations fail due to poor adoption. And even the successful 30% often triumph despite their technology choices, not because of them. Investing heavily in AI without user buy-in is like giving a Formula 1 car to someone whose only racing experience comes from playing Mario Kart on the weekends.

Underlying this technical complexity is an even bigger issue: trust. Employees fear AI-driven efficiency. When management says, "AI makes you productive," employees often hear, "AI makes you redundant." Companies successfully adopting AI have reframed this narrative.
Microsoft's Copilot didn't sell "efficiency" ; it offered to "skip boring tasks." Another client succinctly redefined AI's role: "We're replacing tasks nobody enjoys doing." Beyond job replacement anxieties, there's another emerging concern: creative ownership. The U.S. Copyright Office already restricts registrations for purely AI-generated works, posing a genuine existential threat. Enterprises may inadvertently be creating competitive advantages they don't legally own. Having observed many enterprises navigate these challenges, I've identified patterns of AI adoption that actually deliver: Our CTO realized another uncomfortable truth: Most "AI agents" today are essentially advanced robotic process automation (RPA) marketed under a more glamorous name. Genuine autonomous AI remains years away from widespread deployment, according to Gartner. Often, vendors sell costly solutions to problems traditional tools could solve more effectively. Yet, the hidden value often lies in the forced process documentation these initiatives require, a task beneficial regardless of vendor success. Consider the simple math of real ROI: No elaborate AI agents required - just smarter usage of available tools. Even better, the multiplier effect emerges when teams are freed from mundane tasks and shift to innovation, becoming engines of value creation rather than mere maintenance crews. The essential question driving successful adoption isn't technical—it's psychological: The CTO's cautious six-month AI vendor contract buys crucial time - not to validate the technology but to realize the fundamental problem is adoption, not automation. Competitors prioritizing grassroots AI literacy will be far ahead in just a few months, thriving on small wins and earned trust. Ironically, the enterprises poised to dominate AI aren't tech giants or flashy startups. They're pragmatic, mid-sized firms succeeding by placing trust at the center of their AI approach. 
They're carefully turning skeptics into advocates, one small, meaningful step at a time. Five years from now, these firms will dominate - not through superior technology or larger budgets, but because their people genuinely want to use AI. Successful companies start with personal productivity. Rather than imposing broad automations, they focus initially on helping individuals with specific tasks. Early adopters naturally evangelize their experiences, organically spreading adoption. Companies embrace their "shadow IT." Unauthorized AI users are often innovation drivers, discovering valuable use cases independently. Rather than shutting them down, turning these users into official AI champions significantly boosts adoption. Addressing data chaos is foundational. Layering AI on disorganized data only multiplies confusion and increases costs. Those who first unify their data infrastructure ultimately realize substantially higher ROI from their AI investments. 260 additional staff adopting AI Saving just 3 hours weekly each At $50/hour, this translates into over $2M annually
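That back-of-the-envelope ROI figure is easy to verify. The staff count, weekly hours saved, and hourly rate are the numbers quoted above; the assumption that savings accrue over all 52 weeks of the year is mine.

```python
# Sanity-checking the article's ROI math.
# Assumption (mine): savings accrue over all 52 weeks of the year.
staff = 260          # additional staff adopting AI
hours_per_week = 3   # hours saved per person per week
rate = 50            # fully loaded cost, dollars per hour
weeks = 52

annual_value = staff * hours_per_week * rate * weeks
print(f"${annual_value:,}")  # prints "$2,028,000" - over $2M, as claimed
```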

Robert Greiner 10 months ago

The Human Side of AI: Giving People Back Their Time

In 1926, Henry Ford made a revolutionary decision to transition his workforce to a five-day workweek. The reason wasn't just altruism - he discovered that productivity actually increased when people had more personal time. Nearly a century later, we're facing a similar inflection point with artificial intelligence. The most compelling AI solutions don't replace humans. They free us to be more human.

I've spent time with dozens of organizations implementing AI, and there's a pattern I keep seeing: the most successful deployments don't start with the technology - they start with understanding human workflows, pain points, and aspirations. When implemented thoughtfully, AI doesn't just save time; it transforms how we work at a fundamental level.

The most valuable currency in modern business isn't money… it's time. Imagine giving everyone in your organization an extra hour each day. That's 250+ hours per employee annually. For a company of 30 people, that's 7,500 hours of creative potential unlocked. What could your team accomplish with that gift? This isn't just about efficiency. It's about creating space for the deep thinking, creativity, and relationship-building that machines can't replicate. The irony is striking: we need AI to help us be more distinctly human.

Why this matters: In knowledge-intensive fields, small time savings compound dramatically. A director-level employee earning $150,000 annually costs roughly $75/hour. Saving just one hour daily represents nearly $20,000 in value per person yearly, before accounting for the enhanced quality of work produced during those reclaimed hours.

Most AI implementations fail not because of technology limitations but because of a fundamental misunderstanding of workplace dynamics. The traditional approach is backward: select a tool, then force your workflow to adapt.
The smartest organizations reverse this: they observe how people actually work, identify true pain points, then select or develop tools that enhance existing workflows. This "fly-on-the-wall" strategy reveals:
- Which repetitive tasks drain creative energy
- Where human judgment is truly irreplaceable
- How information bottlenecks slow progress
- What unique value your people provide that no AI can match

Why this matters: A multi-week observation and discovery period might seem excessive, but it prevents spending months implementing a solution people won't use. The best AI deployments feel like they were built specifically for your team, because in a way, they were.

When Polaroid invented instant photography, they simultaneously created new legal questions about image ownership and privacy. AI tools create similar new territories, especially regarding document retention, intellectual property, and liability. Organizations that thrive with AI don't just focus on capabilities; they establish clear protocols for:
- What information should never be processed by AI systems
- How AI-generated content should be reviewed and verified
- When human judgment must supersede algorithmic recommendations
- Where generated content is stored and how long it's retained

For legal, financial, and healthcare organizations, these considerations aren't afterthoughts; they're fundamental requirements.

Why this matters: AI systems create new forms of institutional memory. Unlike casual conversations, AI interactions are typically documented and potentially discoverable in litigation. Without proper governance, the very tools meant to enhance productivity could create significant exposure. The balance isn't about restricting AI use but establishing guardrails that allow confident innovation. As one executive put it: "We don't want to tie people's hands; we just need to protect what matters most."

The most profound insight about organizational AI adoption isn't about technology at all: it's about people. Companies that see AI as merely another productivity tool miss the larger opportunity: reimagining how work happens. When Smartsheet replaced Excel for one real estate development team, the value wasn't just in features, it was in creating a unified framework for collaboration and decision-making.

Transformative AI implementation follows a similar pattern:
1. Start with understanding existing workflows (2-4 weeks)
2. Identify high-impact opportunity areas (not just pain points)
3. Develop clear implementation and training plans
4. Establish governance protocols
5. Measure actual impact against expected outcomes

The most successful organizations approach AI as an organizational change initiative, not a technology deployment.

Why this matters: The ROI equation for AI isn't just about time saved; it's about unlocking human potential. When routine work is automated, people naturally redirect energy toward higher-value activities that machines cannot replicate.

The world doesn't need more AI tools. It needs more thoughtful implementation of the right tools in the right contexts. Your organization's AI strategy should begin not with capabilities but with questions:
- Where do your people spend time that doesn't leverage their unique talents?
- What knowledge work could be enhanced with better pattern recognition?
- How might freeing up an hour per day change your culture?
- What boundaries need protection as you adopt these technologies?

The organizations that thrive won't be those with the most advanced AI, but those who use AI most thoughtfully to amplify what makes their people exceptional. Remember Henry Ford's insight: sometimes the most productive thing you can do is give people back their time. What would your team do with an extra hour every day?
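The time-value arithmetic earlier in the piece (an extra hour a day, a $150,000 director, roughly $75/hour) is easy to sanity-check. The ~250 working days and ~2,000 paid hours per year are my assumptions; the rest are the article's figures.

```python
# Sanity-checking the time-value math from the article.
# Assumptions (mine): ~250 working days and ~2,000 paid hours per year.
salary = 150_000
working_days = 250
paid_hours = 2_000

hourly_cost = salary / paid_hours      # $75/hour, as the article states
hours_reclaimed = working_days * 1     # one hour back per working day
annual_value = hours_reclaimed * hourly_cost
print(f"${hourly_cost:.0f}/hour, ${annual_value:,.0f}/year")
# prints "$75/hour, $18,750/year" - the article's "nearly $20,000"
```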

Robert Greiner 1 year ago

When Products Think For Themselves

Not too long ago, if you wanted to picture a technology that made decisions on its own, you might think of Tony Stark chatting with J.A.R.V.I.S. - that all-knowing AI butler from the Iron Man movies. J.A.R.V.I.S. wasn't just an assistant; it was an active partner, anticipating needs, solving problems, and pushing Tony forward. Once, that kind of autonomy seemed squarely in the realm of Hollywood imagination. Now, it's creeping into our real world.

As another year begins, there's a quiet but profound shift unfolding in how we build products. It's something called "agentic architecture," and while that phrase sounds like it belongs in a sci-fi script, it captures a simple, transformative idea: products are evolving from passive tools you operate into active teammates you rely on. Until recently, products just sat there, waiting for you to poke, prod, and instruct them. Today they're starting to think on their own. Imagine a logistics system that reroutes deliveries mid-journey because it senses a bottleneck ahead. Or a home heating system that adjusts to changing energy prices without you lifting a finger. We're moving from products that quietly wait for orders to products that act on their own judgment. That's the essence of agentic architecture.

At this year's Microsoft Ignite conference, keynote speakers introduced proofs of concept for employee self-service agents that answer common policy questions, meeting facilitators that take notes and nudge participants to stay on track, retail store assistants, warehouse assistant agents, and active translators that help people communicate across languages.

This shift is both exhilarating and unsettling. On the upside, these agentic products can help companies become more adaptable and efficient. They can spot patterns before we do, self-correct when something's off, and free up human time for more creative work.
On the flip side, letting products make decisions raises questions about trust, ethics, and alignment with our broader goals. How do we know these autonomous systems won’t drift off course? Heading into the new year, the first step is accepting that we’re not just tweaking features; we’re redefining relationships. We need cross-functional teams: engineers who can think about ethics, designers who get data, and product managers who see the forest, not just the trees. It’s about building a culture that values curiosity and breadth as much as depth. We also need the right foundation: sturdy data pipelines, flexible infrastructure, and clear processes for continuously learning what works and what doesn’t. Agentic architecture isn’t something you master from day one. It’s an ongoing experiment. The companies that treat it as such—trying, measuring, refining—will find themselves better positioned to adapt as products inch closer to becoming partners. Like Tony Stark learning to trust J.A.R.V.I.S., we’re learning to trust our creations in new ways. The key takeaway this year is to shift from making better products to cultivating better relationships with them. That’s the real heart of this transformation.
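To make "products that act on their own judgment" concrete, the idea boils down to a sense-decide-act loop. This is a toy sketch only: the DeliveryAgent class, the congestion signal, and the 0.8 threshold are all invented for illustration, not drawn from any real logistics system.

```python
# Toy sketch of an "agentic" component: it senses, decides, and acts
# on its own rather than waiting for a user instruction.
# All names and thresholds here are illustrative assumptions.

def congestion(route):
    # Stand-in for a real signal (traffic feed, queue depth, etc.).
    return {"highway": 0.9, "side_roads": 0.3}[route]

class DeliveryAgent:
    def __init__(self, route="highway", threshold=0.8):
        self.route = route
        self.threshold = threshold

    def step(self):
        # Sense -> decide -> act, with no human in the loop.
        if congestion(self.route) > self.threshold:
            self.route = "side_roads"  # reroute mid-journey
        return self.route

agent = DeliveryAgent()
print(agent.step())  # prints "side_roads": the agent rerouted itself
```

The contrast with a "passive tool" is the loop: nobody asked the agent to reroute; it observed a signal and acted against a goal.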

Robert Greiner 1 year ago

Don't Wait for January

We romanticize beginnings. January 1st becomes a sanctuary for our procrastination, a mirage of the perfect time to start. But here's the harsh truth: big projects don't wait on calendar dates. They require momentum, and momentum starts now.

Waiting for January feels safe. It gives us a cushion, a grace period to "prepare." But the cushion quickly turns into quicksand. The first weeks of the new year are a haze. People are returning from holidays, inboxes are overflowing, and meetings are pushed back. By the time everyone's settled, it's mid-January. Then getting started in earnest takes time: assigning tasks, setting goals, aligning teams. Weeks slip by. Suddenly, it's February, meaningful progress hasn't even begun, and the year is already 10% over. Meanwhile, others who took the plunge earlier are already leagues ahead, riding the waves you hesitated to catch.

Five months down the line, you will find yourself no further along than you are today. But it's not just about stagnation; it's about regression. Standing still is moving backward in a world that spins faster every day. While you were waiting for the "right time" to start, your competition seized the moment, pushing forward, leaving you not just months but perhaps a year behind.

Starting now builds momentum. It's the first push that gets the rock rolling. Even if progress is slow during the holiday season, you're setting the stage. You're preparing the soil so that when the new year arrives, you're not planting seeds - you're watching sprouts grow.

Don't let the arbitrary turn of the calendar dictate your actions. The "next big thing" won't move itself, and time won't grant you favors for waiting. Start now. Embrace the discomfort of beginning when others are winding down. When January comes, you'll already be in motion, propelled by the momentum you started building today. Ready. Steady. Go.

Robert Greiner 1 year ago

AI Rule #1 - Customer First

In 1985, Warren Buffett wisely said, "Rule No. 1: Never lose money. Rule No. 2: Never forget Rule No. 1." Similarly, the first rule of AI investment is to focus on the customer first. And the second rule of AI investment is don't forget the first rule. In the four decades since Buffett's quote, investors around the world have shown how hard these words are to live by. We just can't seem to collectively manage our irrational behaviors in the market.

Business leaders are facing a similar "hot stock" siren call in the AI arms race - throwing enough funds into random investments and speculative moonshots that we could have sent another space station into orbit. This manifests in several ways, but a few red flags commonly pop up:
- Investing in a hot new technology to appear cutting-edge (resume-driven)
- Creating features that would be better built programmatically, but using AI for the sake of using AI
- Creating splashy, over-generalized features that don't meet real needs

During my time as a consultant, I've had a front-row seat to the AI frenzy, watching companies pour millions into ambitious projects. They often solve fascinating problems but miss the mark on what their customers truly want or are willing to pay for. The allure of cutting-edge technology and the hype of flaunting "Powered by AI" on their websites lures them into a costly trap. They end up with expensive solutions no one asked for or needs. These companies run costly models on borrowed cloud infrastructure, tying up their brightest minds on use cases lacking business viability and customer appeal. It's like constructing a bridge to nowhere - technically impressive but ultimately pointless.

A few of the more public strikeouts serve as a reminder of the dangers of getting our AI investments wrong:
- IBM's Watson gave unsafe recommendations for treating cancer
- Twitter corrupted Microsoft's Tay, a crash course in the dangers of AI in the real world
- NYC's AI chatbot was caught telling businesses to break the law, and the city isn't taking it down

Buffett emphasizes that the most crucial quality of an investment manager is temperament, not intellect. We've found this also applies to managing AI investments. It's not about mastering the technology first; it's about having the wisdom to prioritize the right use cases from the start. Instead of treating AI as a hammer and viewing every problem as a nail, we need to begin with the voice of the customer: their journey, hopes, desires, and needs.

You already know where to start with this. It's how your company has a moat in the first place. What does your customer value, and what is AI especially suited to solve? Use AI as a secondary tool to enhance a customer-driven use case. Many organizations are finding tremendous success with AI today by leveraging the technology to supercharge existing use cases while adding additional value to their customers, which allows them to charge more:
- Adobe's Firefly generative AI tools are now generally available across Adobe Creative Cloud
- Walmart enhances its inventory and supply chain through AI
- Siemens elevates predictive maintenance with generative AI
- Headstorm unveils AGPILOT, revolutionizing agricultural retail with gen AI

We believe that the success in these examples is rooted in design thinking. Instead of starting from the "left" side of the user journey with AI capabilities as a solution in search of a problem, start from the "right" with the customer's voice and work backward. Seek technology solutions to meet those demands; sometimes, AI use cases will be the perfect fit.

We've seen how building AI products without a rigorous focus on user needs can be a company's bridge to nowhere: an expensive and potentially impressive product that ultimately remains disconnected and unused. A customer-centric approach gives your AI efforts direction and purpose. It ensures that the technology serves customers, not vice versa.

Feeling overwhelmed about where to start? Imagine having a head start in the AI race with a roadmap crafted by experts who've faced the same challenges you are. We've distilled decades of experience into a dynamic 45-minute presentation on Leveraging AI for Your Business. If you'd like to discuss having us come out and deliver the presentation, reach out to learn more.

Robert Greiner 2 years ago

Navigating the Upside Down as a Technology Leader

CIOs have the hardest job in the C-Suite in 2024. The pace of change is eating your lunch, while the constant pressure to innovate within shrinking budgets makes the role particularly challenging. This pain radiates at every level of the organization.

I love Stranger Things - the era, the mystery, and the ability to watch reality break down in real time, right in front of our eyes. One day, you are a regular school kid with normal kid problems, and the next, you save the world from an extinction-level event. It's messy, and the laws of the universe no longer apply. As technology leaders, we are in a similar transition from the ordinary to the extraordinary, pushed into leadership trials we never anticipated.

In our real-life Hawkins, the technology world is a bit Upside Down. Things don't work like they used to, and success requires a band of friends venturing into the unknown. In this adventure, the technology leader's mission is clear: to illuminate a path through uncertainty using all the tools at their disposal: collaboration, technology expertise, leadership, experimentation, and effective strategic bets.

Navigating yet another cybersecurity threat while trying to integrate AI without a clear ROI can feel like confronting the Shadow Monster with nothing but a flashlight and a walkie-talkie. Budgets are not keeping up with revenue growth or inflation. Organizations are playing catch-up with AI, dealing with human friction and sporadic ad-hoc investments that diminish future business cases. Business problems demand increased capability and collaboration across the enterprise, requiring CIOs to relinquish control to marshal the resources and support needed to succeed.

How do we, as leaders, adapt when the playbook no longer applies? If you are in a technology leadership position, here are four ideas that will help you think differently about your organization and give you areas of focus to ratchet up your effectiveness. Over the next few weeks, we will dive deeper into each one.

1. Maximize Existing Investments - You don't need another tool. Before evaluating additional long-term contracts, ensure you get the most out of what you already pay for. Chances are, you are leaving value on the table. This is a good time to evaluate your "rental" agreements and make sure you need all of the compute resources, human capital, and SaaS licensing/features you are paying for.

2. Extend the Olive Branch - You can no longer meet your long-term objectives in the luxurious walled garden of the IT silo. To make progress toward the organization's strategy, you must mobilize resources from across the enterprise. Jim Hopper couldn't achieve his mission without a cross-functional team. Chances are, you can't either. You need your peers' resources, support, and budget to keep your job.

3. Capture Value from AI Investments - Your organization has likely over-corrected on the AI fervor with sporadic investments and suspect business cases. Now is the time to narrow your focus and place strategic bets on AI that will move your business forward. We don't need another low-code document summarization POC. Remember when everyone was creating their own social media platform? Let's not make AI our next Quibi.

4. Align Talent for Results - Organizations have taken their eye off the talent development ball. When the market heats up, capturing growth depends on your organization's capability portfolio. Instead of chasing the hottest trends in skillsets, focus on productivity, throughput, teamwork, and work ethic. Joel Spolsky had it right - screen your talent for "Smart and Gets Things Done." We are already seeing benefits from this shift in the wild.

Just as the kids in Hawkins navigate the unknown with courage and unity, technology leaders can guide their teams through today's challenges with strategic insight, collaboration, and a focus on core strengths. Use the four trends above as your roadmap for change and exploration.

Robert Greiner 2 years ago

Call to Adventure

I remember the first time I read The Lord of the Rings . I understood viscerally why Frodo, Bilbo, and company decided to leave The Shire in search of adventure. They were drawn by a calling and an urge to break free from the everyday hassles of a ho-hum life. One hundred eleven birthday parties are a lot to celebrate in a single place. Once the Fellowship got to Rivendell, though, that's another story. Rivendell is a sanctuary of tranquility. Its gardens and flowing streams provide a level of comfort and stability. Rivendell is a place of refuge, learning, and growth. Within its gates, the world seems at peace; it's hard to imagine ever leaving. After twelve years at the same company, I am departing for a new adventure. We rarely measure jobs in decades, yet here I am, having spent a significant chapter of my life at one of the best companies on the planet to work for. I had a strong reputation, a level of comfort, predictability, familiarity, and certainty that I likely could have ridden to retirement. But today, I'm trading that in for a new journey toward an adventure with different opportunities, challenges, and the thrill of the unknown that you can't get within the walls of the familiar. I'm joining a boutique consulting firm called Headstorm. As soon as I met them, I knew it would be a fit. They remind me of the Fellowship, a small, intrepid group hyper-focused on a North Star. Everyone brings their own experience, skills, and perspectives to forge a formidable force greater than the sum of its parts. The allure of being part of a nimble, high-impact team was too good to pass up. I love the idea of a small, focused, well-functioning team changing the world around them for the better , and I think I have that in Headstorm. I'm also excited about stretching my skills in new directions, particularly around helping clients develop new strategies and implementing them in a human-centric way. 
Over the last twelve years, I've grown in ways I could never have imagined. I retooled my career completely from a technical implementer to a leader of teams. I've built a robust talent stack of skills and experiences alongside dozens of people I can genuinely call my friends. Looking back, my core memories are not of what was accomplished over the years but of the people I worked with and the stories we wrote together.

The thing I'm most grateful for, though, is that Pariveda helped me better understand myself. Before joining, I thought I was introverted and detail-oriented. I figured all software developers were introverted and detail-oriented, so why not me? In my first three months, I took Predictive Index training and realized I am extroverted and not detail-oriented at all (big surprise). The tension I felt in my career in the years leading up to my time at Pariveda is hard to describe. How much longer would I have carried it without being part of an organization that fervently focuses on human development? I'm grateful to Pariveda for giving me the gift of understanding a little bit about how I'm wired, a vocabulary to express it, and years of feedback to help shape it into something more productive and balanced.

As I step into the next phase of my journey, the legacy of Pariveda accompanies me. The lessons learned, relationships forged, and insights gained are not just part of a farewell; they are integral components of my evolving narrative. I leave with phenomenal memories, an enriched perspective, an expanded community of colleagues-turned-friends, and a heart full of gratitude.

Robert Greiner 2 years ago

Artist for a Day

Last week, I got to tap into my creative side. I designed an AI art exhibit for a pop-up event aimed at helping people think differently about how AI impacts the world around them. As someone with two left feet who can barely draw stick figures and can't read music, it was a nice departure from my day-to-day.

The exhibition's centerpiece was a unique blend of history and modernity. We showcased an original Picasso and a Salvador Dali (Thanks, T!) alongside their AI-reimagined counterparts in various styles, from western to cyberpunk. We also provided laptops for attendees to generate AI images, print them on high-quality paper, and sign them to take home and display. It was a hit.

Beyond the technology and the marvel of art created by two of the best in history, the true revelation of the event was the human interaction it fostered. Total strangers collaborated, crafting unique pieces of art. The technology seamlessly blended into the background as conversation, laughter, and human creativity took center stage. We discussed the enduring nature of art through history and how imagery can evoke profound emotions and narrate stories more potently than words. This unique experience not only showcased the power and potential of AI but also set the stage for a deeper discussion of how technology can amplify human creativity rather than overshadow it.

As a culture, we've become enamored with AI and its sweeping impacts. Technology has provided us with near-infinite benefits and opportunities, sometimes at the expense of what makes us human, creating an undercurrent of yearning to reconnect with our human-centric roots. This desire is perfectly encapsulated by The Cultural Tutor, a Twitter/X profile that amassed over 1.5 million followers in a single year by writing about art, history, and literature - right in the middle of one of the hottest technology trends to sweep the planet.
Nassim Nicholas Taleb points out that the quality of AI's output is intrinsically linked to the richness of its input. This is why you can't just ask ChatGPT to "write a book about leadership" and expect meaningful output. It also reinforces the notion that AI's effectiveness is magnified by robust human input. The canvas pop-up event was a testament to this: the best pieces were created through conversation, demonstrating how collaboration and technology, when harmoniously integrated, can lead to enriched creative output.

Imagine a future where AI isn't seen as a rival but as a collaborator in creative endeavors. This isn't about machines claiming the creative throne but about a partnership that could redefine how knowledge work, artistic expression, and all forms of creative output are made. Technology and AI amplify human creativity when not used as a crutch. AI's integration promises to enhance our creative capacities, producing output that is as emotionally resonant as it is technologically advanced - from the nuanced strokes of a painter's brush to the strategic thinking in a modern business plan.

The canvas pop-up event was more than just an exhibit; it was a reminder of the inherent value and power of human collaboration, amplified by technology. By leveraging AI in our work, conversations, writing, and art, we don't just enhance our output; we redefine the creation process itself, making it more dynamic, sophisticated, and profoundly human.
