
Launch Now

Inside us are two wolves. One wolf wants to craft, polish, and refine – make things of exceptional quality. The other wolf wants to move fast and get feedback now. The two wolves don’t always get along.

For years, I’ve balanced this by working toward exceptional products while constantly collecting private feedback along the way. Then, once we’ve built something excellent – something worthy of attention – we launch it to the world with appropriate fanfare. Videos, marketing campaigns, polished onboarding, and so on. “Here’s something worth trying, we think you’ll really like it.”

This totally works. At least, it works as a path to eventually ship high-quality software. Polished, usable, even delightful software. But when it comes to building something people will pay for, it’s neither reliable nor fast.

Our first product at Forestwalk was a developer tool – a platform for building and running evaluations of LLM-powered apps. We learned a ton building it, but after a few months – as we approached our first pilot projects – feedback from demos and potential first customers convinced us that this was the wrong path. It was more likely to lead us into a lifestyle business than something big. So we pivoted. We spent a few weeks building a prototype a week, showing demos, doing customer research, and found a second promising product path.

Our second product was a productivity tool – a work assistant that could capture, organize, and rationalize teams’ tasks. We learned a ton building it, but after a few months – as we approached a public beta – feedback from private testers and our investors convinced us that this was the wrong path. It was more likely to lead us into a lifestyle business than something big. So we pivoted.

The third time purports to be the charm. But at the same time, doing the same thing over and over typically gets the same results. We need to build something profoundly useful, something people really want.
We can’t keep hiding away, sending out private demos and prototypes, not fully shipping anything! So we decided to push harder into the discomfort of showing our work early. Just before Christmas, we committed to something and started working towards getting it shipped.

This third product is codenamed Cedarloop. [1] It’s a realtime meeting agent. Unlike AIs that passively listen in to meetings and just write up notes after the fact, Cedar joins calls and uses “voice in, visuals out” to screen-share useful observations and perform routine tasks live during a Google Meet or Zoom meeting. The vision is to build a kind of agentic PM assistant. It can respond within a second of you talking [2], which – when it works – feels like magic. We’ve been learning a lot building it.

Recently, we started working with an excellent designer here in Vancouver who was keen to get going.

“I’d like to do some user testing. What do people say when you let them try it?”

“Well, obviously it’s so early right now. They won’t like it. The inference and onboarding need more work. But we’ve been doing research about problems, needs, willingness to pay, and things like that.”

“Sure… but we should also let people try it. What if we launched now?”

Well, obviously we can’t launch now. I mean… obviously. Launching now would be embarrassing. It’s not my brand to launch something publicly that’s not ready.

On the other hand… I keep a printed copy of Y Combinator’s list of essential startup advice on my desk. And if you know YC, you’ll know that the first point of advice is “Launch now”. Only last month I was interviewing Brett Huneycutt, Wealthsimple’s co-founder. He had a lot of great stories, but one that sticks out is that even as a $10B company, they prioritize launching “now”, or as close to that as they can get. It’s not just about speed: a rapid feedback loop is a core ingredient in getting to quality.

So we launched now.
As of today, people can check out our research-preview realtime meeting agent at Cedarloop.ai. With luck, they’ll report issues, inform what we should prioritize next, and tell us what problems they’d love to have automated away.

We’re only a few hours in, and yep – people are reporting issues. Linear integration had an OAuth issue. Login didn’t work in social-media webviews. We’ve been so focused on the desktop experience that we’ve let the mobile layout get janky. This is embarrassing!

But also, there’s signal. People are trying the Linear integration. Our desktop-focused app is being discovered on mobile. Folks care enough to click at all. And in a week or so, we’ll have a smoother onboarding flow than we would have gotten to with weeks of private user tests. So it’s worth the pain.

We’re going to take the feedback, follow the signal, learn and re-learn, and do better. We’ll use it to forge the best damn live agent ever – or, if the feedback peters out, we’ll know we’re on the wrong path, and find the right one. In the meantime, there’s a lot to do. [3] Back to work!

[1] This is not a good name yet. For example, sometimes iOS mishears “Hey Cedar” as “Hey Siri”. But part of our move-fast strategy is to worry more about names once we’ve proven something has traction. At that point, we’ll put in the work to give it the right name – and eventually rename the company after it.

[2] It’s fascinating how much you can do to get LLM response times down. Our first prototype often took over 8000ms to respond, which doesn’t feel live at all. Once we got it under ~1200ms, voice-in-vision-out suddenly felt alive – a step change. We have a lot of work planned to get Cedarloop even faster and much more reliable, which I’m keen to write about when I can.

[3] Speaking of having a lot to do: if you’re an experienced product-minded developer in Vancouver who would be excited to iterate and build out realtime agents using LLMs and TypeScript, we’re hiring a Founding Engineer. Just sayin’.
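The latency footnote above suggests a simple way to frame responsiveness: treat “feels live” as a hard latency budget and measure every response against it. Here’s a minimal sketch of that idea – the ~1200ms figure comes from the footnote, but the helper’s name and structure are my own illustration, not Cedarloop’s actual code:

```typescript
// Wrap any async "respond" step and check it against a feels-live budget.
// The ~1200 ms figure comes from the footnote; the helper is illustrative.
const FEELS_LIVE_MS = 1200;

async function timedRespond<T>(
  respond: () => Promise<T>,
): Promise<{ result: T; ms: number; feelsLive: boolean }> {
  const start = performance.now();
  const result = await respond();
  const ms = performance.now() - start;
  return { result, ms, feelsLive: ms <= FEELS_LIVE_MS };
}

// Example: a fast stub response easily fits the budget.
timedRespond(async () => "ok").then(({ feelsLive }) => console.log(feelsLive));
```

Tracking a boolean budget like this, rather than a raw average, matches the “step change” observation: users don’t perceive a gradual improvement from 8000ms to 1200ms, they perceive crossing the threshold.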

Allen Pike, 1 month ago

A Box of Many Inputs

One of the interesting questions when designing AI-enabled software is, “What does search input mean?” This was once a simple question: if a user entered “squish” in a search box, it would of course return things that contained “squish”. Over time though, computers have improved to the point where a kind of universal “omnibox” is now possible. Today software can:

- Return things with your input’s typo fixed
- Return things with a synonym to your input
- Autocomplete recent or popular things
- Directly retrieve an answer to your input
- Jump to an action based on your input
- Generate a conversational response to your input

Ideally, users shouldn’t need to think about which mode they want. They should just be able to type enough characters to make their intent clear, then press Enter. While this is great in theory, it creates a very crowded design space for input boxes.

While most kinds of software will end up with an omnibox-like input – especially with the rise of universal command bars – one of the most fiendish cases is found in the web browser. At birth, browsers’ address bars required precise input. To get to Wikipedia, you would have had to type the address precisely. Now, you can mangle it considerably and still get there, which is impressive in its own right.

However, with modern technology there is even more we can do with the address bar. The rise of “AI browsers” like Dia, Atlas, and Comet all strive to layer more functionality into this input field. For example, if you type a question in Dia’s address bar, it will forego Google and use the in-browser chat to answer it correctly: $610 million. If it sees a question, it sends it to chat. Seems maybe helpful, right?

Except sometimes you’re not asking a question, you’re searching for a question. Consider what happens when you search for the movie “Who Framed Roger Rabbit?” – you get a chat answer. This is basically never what the user wants when they type a film title into the browser omnibox. Now, if you typed “who framed roger rabbit” into ChatGPT, a chat response would not be surprising. It’s a chat app. But people expect browsers to be able to browse to things, even if the thing’s name starts with a question word. Admittedly, Dia’s algorithm for question detection is more complex than prefix-matching.
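To see why prefix-matching alone wouldn’t cut it, here’s a minimal sketch of naive question detection. The function name and word list are mine, and this is deliberately far cruder than Dia’s actual detector – it just makes the film-title failure mode concrete:

```typescript
// Naive "question vibes" check — illustrative only; Dia's real detector
// is a small local ML model, not a prefix match like this.
const QUESTION_STARTERS = new Set([
  "who", "what", "when", "where", "why", "how", "is", "are", "can", "does",
]);

function looksLikeQuestion(query: string): boolean {
  const words = query.trim().toLowerCase().split(/\s+/);
  return QUESTION_STARTERS.has(words[0] ?? "") || query.trim().endsWith("?");
}

// The film title trips the heuristic, even though the user wants a search:
console.log(looksLikeQuestion("who framed roger rabbit")); // true
console.log(looksLikeQuestion("squish"));                  // false
```

Any heuristic keyed on surface form hits the same wall: “Who Framed Roger Rabbit” really does look like a question, because it is one – just not one the user is asking.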
They’ve bundled a little classifier that detects whether a phrase has “question vibes”. By doing question classification locally with a very small model [1], they can identify questions within milliseconds, with no effect on privacy. Smart!

Except. There are a lot of cases where it’s difficult to classify a phrase by whether it should get an answer or a search result. Dia’s question classifier flags plenty of phrases that should plainly do a search. [2] So it’s imperfect at identifying questions. But more critically, even if they hill-climbed to 100% on “is this a question” accuracy, that’s still a dead end. The actual classification we want is “is this a question they are asking, or is it a phrase they want to search for?” Unfortunately, that’s a much harder training problem! Currently, it’s beyond the ken of a fast local classifier.

As I’ve been weighing alternative browsers, I expected other “AI browser” contenders to have the same problem. To my surprise, no. ChatGPT Atlas avoids it entirely by keeping their heuristic very simple: if a query has fewer than 10 words, send it to search – otherwise, chat. [3] If you want to explicitly send a short query to chat, you can press ⌘ + return.

Meanwhile, Perplexity’s Comet and Google’s Chrome send every query to their respective search engines, which use larger server-side models to determine whether a given result should get a more chat-forward response or a more results-centric one. As of today, Google errs on the side of returning web results, and Perplexity errs on the side of giving an AI-generated answer, but both can do either.

In time, these different search and chat engines will converge toward a design that gives users a good experience almost every time, without the need for modes. Already Google is testing out merging their “AI mode” into the AI overviews that appear on results.
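Stepping back to Atlas for a moment: its routing heuristic is simple enough to sketch in a few lines. The under-10-words rule and the ⌘+return escape hatch are the only details taken from the post; the names and structure here are my own illustration:

```typescript
// Sketch of an Atlas-style router. The 10-word threshold and the
// explicit-chat override come from the behavior described above;
// the function itself is illustrative, not Atlas's actual code.
type Route = "search" | "chat";

function routeQuery(query: string, forceChat = false): Route {
  if (forceChat) return "chat"; // e.g. the user pressed ⌘ + return
  const wordCount = query.trim().split(/\s+/).filter(Boolean).length;
  return wordCount < 10 ? "search" : "chat";
}

console.log(routeQuery("who framed roger rabbit")); // "search" — 4 words
console.log(routeQuery("what are the main tradeoffs between native and cross platform desktop apps")); // "chat" — 12 words
```

The appeal of this approach is predictability: a user can learn the rule in one sitting, whereas a classifier’s misfires feel random and therefore untrustworthy.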
And presumably OpenAI will evolve to give more search-like responses to pedestrian search queries, in their ever-growing quest for world domination.

Of the four leading contenders for the “AI browser” crown, only Dia tries to interpret a film title like this as a question. From here, Dia can go one of two ways. They can either go all-in on their own answer engine, competing with Perplexity and Google, so they can send every query there. Or, they can concede on the local classifier and do as ChatGPT Atlas has done – just route all short queries to search.

Earlier this month, Atlassian CEO and Dia acquirer Mike Cannon-Brookes gave an interview where he mentioned the acquisition and the future vision. Reading the tea leaves, it seems Dia could fulfill Atlassian’s vision of building the browser for professional productivity without needing to become a full-on search engine. On the other hand, Cannon-Brookes mentioned the supposedly-retired Arc [4] browser three times and didn’t use the phrase Dia once. So… it’s hard to tell exactly what the path is going to look like. Regardless, with The Browser Company’s talent and Atlassian’s reach, it would be rash to count them out in this one. They’ll fix the Roger Rabbit thing soon enough.

But while browsers encountered it first, time will bring more and more ambitious text boxes. Many of them will initially struggle to deliver on our intent. But, despite some growing pains, the holy grail is within reach. People should be able to type what they want their software to do, and it should do it. And it should do so without any unpleasant surprises. Quickly. It won’t be easy! But it’ll be great.

[1] You can see in the app bundle that it’s a DistilBERT model, with LoRA adapters for routing search queries and for identifying sensitive content, running on Apple’s MLX for speed. It looks like the whole local ML package is about 160MB. While that size class of model can be powerful for certain tasks, it’s hard to pack in enough world knowledge to distinguish film titles and popular quotes from genuine questions.

[2] I love the idea of the model thinking, “Oh, I found a question! It starts with ‘can’! So the user is asking whether or not ‘of beans’ can ‘recipe’.”

[3] Dia also sends long queries to chat – anything over 12 words.

[4] I can’t mention Arc without a shout-out to one of my favourite features of all time: Tab Tidy. If you have a bunch of tabs open in Arc’s vertical tab bar, a little Tidy button will group the tabs by logical topic or task. Then, you can close them group by group. So neat.

Allen Pike 2 months ago

Why is ChatGPT for Mac So… Bad?

Last week I wrote an exploration of Ben Thompson’s recent question, “Why is the ChatGPT Mac app so good?” A lot of people on the internet, it turns out, do not agree with this premise! Many folks have been having problems with ⌘C not copying text. Hacker News sees the app as “not good at all”, to the point that my post about it being better than the alternatives was flagged off the site. X doesn’t like it either.

Beyond the bugs I mentioned in last week’s post, I’ve recently been plagued with a ChatGPT Mac bug of my own, where every time I start a new chat, it will pre-fill the text field with the first input I used the last time I started a new chat on Mac.

All of this led me to an informative post by one of OpenAI’s Mac developers, Stephan Casas:

“nearly everyone who works on the ChatGPT macOS app has been stretched thin, and hard at work building Atlas. i’m thankful that our users appreciate our decision to develop a native app just as much as i’m thankful for the heightened expectations they hold because we did so”

Apparently he merged a fix this week for the copy-paste bug that has been plaguing many folks, which is promising.

Something implied in last week’s article that’s worth saying explicitly: although many good Mac apps are native, being native is neither necessary nor sufficient for being a great app. While OpenAI is investing more in desktop apps than any other model lab, they have much to do before they can transcend “better than the alternatives” and achieve “great.”

Allen Pike 3 months ago

Why is ChatGPT for Mac So Good?

This year, even as Anthropic, Google, and others have challenged OpenAI’s model performance crown, ChatGPT’s lead as an end-user product has only solidified. On the Dithering podcast last week (paywalled), Ben Thompson called out an aspect of why this is:

“I need someone to write the definitive article on why the ChatGPT Mac app is so good, and why everyone else is in dereliction of duty in doing these. Gemini 3 is reportedly coming this week. […] And I’m looking forward to it. I expect it to be good. And it’s just going to have to be so astronomically good for me to not use ChatGPT, precisely because the [Mac] app is so useful.”

A model is only as useful as its applications. As AI becomes multimodal and gets better at using tools, these interfaces are getting even more important – to the point that models’ apps now matter more than benchmarks. And while every major LLM has a mobile app, only three have a Mac app: Copilot, Claude, and ChatGPT. And of those, only one is truly good. Hold on – we’re diving in.

ChatGPT for Mac is a nice app. It’s well-maintained, stable, performant, and pleasant to use. Over the last year and a half, OpenAI has brought most new ChatGPT features to the Mac app on day one, and even launched new capabilities exclusively for Mac, like Work with Apps. The app does a good job of following platform conventions on Mac. That means buttons, text fields, and menus behave as they do in other Mac apps. While ChatGPT is imperfect on both Mac and web, both platforms have the finish you would expect from a daily-use tool.

Meanwhile, the Mac apps for Claude and Microsoft’s “365 Copilot” are simply websites residing in an app’s shell, like a digital hermit crab. 365 Copilot is effectively a build of the Edge browser that only loads m365.cloud.microsoft, while Claude loads their web UI using the ubiquitous Electron framework.
While the Claude web app works pretty well, it only takes a few minutes of clicking around Claude for Mac to find various app-specific UI bugs and bits of missing polish. As just one example: Mac apps can typically be moved by dragging the top corner of the window. Claude supports this too – but not when you have a chat open? Unsurprisingly, the Microsoft 365 Copilot app is even worse, and Gemini doesn’t have a Mac app at all. The desktop has not been a focus for the major AI labs thus far.

The oddball here is the plain “Copilot” app, which is of course unrelated to the “365 Copilot” app other than sharing an icon, corporate parent, and name. Copilot for Mac is, it seems, a pared-down native Mac reproduction of the ChatGPT app with a bit of Microsoft UI flavor. It’s actually weirdly nice, although it’s missing enough features that it feels clearly behind ChatGPT and Claude. Fascinatingly, the Copilot app doesn’t allow you to sign in with a work account. For work – the main purpose of a desktop app – you must use the janky 365 Copilot web app.

While this dichotomy might be confusing, it’s a perfect illustration of the longstanding tension that’s made cross-platform the norm for business apps. Cross-platform apps like Claude’s are, of course, cheaper to develop than native ones like OpenAI’s. But cost isn’t the most important tradeoff when these very well-capitalized companies decide whether to make their apps cross-platform. The biggest tradeoff is between polished UX and coordinated featurefulness. It’s easier to build a polished app with native APIs, but at a certain scale separate native apps make it hard to rapidly iterate a complex enterprise product while keeping it in sync on each platform, while also meeting your service and customer obligations. So for a consumer-facing app like ChatGPT or the no-modifier Copilot, it’s easier to go native. For companies that are, at their core, selling to enterprises, you get Electron apps.
This is not as bad as it sounds, because despite popular sentiment, Electron apps can be good apps. Sure, by default they’re janky web-app shells. But with great care, attention, diligence, and craft, they can be polished almost as well as native apps. While they might not feel native, Electron apps like Superhuman, Figma, Cursor, and Linear are delightful. [1] These apps are tools for work, and their teams invest in fixing rough edges, UI glitches, and squirrelly behaviour that might break users’ flow.

Meanwhile, ChatGPT, despite being built on native tech, has its share of problems. These range from the small (the Personalization settings pane currently has two back-arrows instead of one) to the hilarious.

At the end of the day, the ChatGPT app for Mac is good because they care. They have a product-led growth model that justifies spending the resources, an organizational priority on user experience, and a team that can execute on that mission. Meanwhile, Anthropic’s been going hard on enterprise sales, so it’s not shocking they’ve neglected their desktop experience. It’s unlikely they have a big team of developers on the app who don’t care about these issues – they probably haven’t had many folks working on it at all.

Still, I wouldn’t count out the possibility of a change in course here. While mobile is king, desktop is still where work happens. While OpenAI has acquired Sky to double down on desktop, Google has long been all-in on the browser. That leaves Anthropic as the challenger on desktop, with their latest models begging to be paired with well-crafted apps.

While Anthropic could surprise everybody by dropping a native Mac app, I would bet against that. There’s a lot of headroom available to them just by investing in doing Electron well, mixing in bits of native code where needed, and hill-climbing from “website in shell” to “great app that happens to use web technology”.
Just as ChatGPT’s unexpected success woke OpenAI to the opportunities of being more product-centric, the breakout hit of Claude Code might warm Anthropic to the importance of investing in delightful tools. Last year they brought on Mike Krieger as CPO, who certainly seems like he could rally a team in this direction given the chance. Until then, ChatGPT will reign supreme.

[1] We’ve done some Electron work at Forestwalk, and it was surprising how easy it was to cause classic Electron bugs like the whole app being a white square, the top navigation scrolling out of view, and the like. It was even more surprising how tractable it is to just refuse to tolerate these common issues, and put in the time to fix them one by one. It can be done.

Allen Pike 5 months ago

UX Entropy

In the olden days, video calls were hard. Circa 2012, if your next meeting was online, it was important to start the process 5-10 minutes early. The process, at that time, was some or all of the following incantations and rituals:

- Find the meeting URL
- Find the meeting passcode
- Download a specific videoconferencing app
- Agree to and accept various things
- Dial in separately to get audio
- Troubleshoot your audio or video
- Wait for an update to download
- Wait for the videoconferencing app to restart
- Wait for your whole computer to restart
- Repeat some of the above steps, now that your computer has restarted

With luck, you would eventually be in the meeting. The other participants, often, would not be. Regrettably, each participant also needed to do the incantations, and they might not have started early. They might even be stuck. For example, the person you’re meeting might think they’re waiting for you, so they’ve multi-tasked to another app – but surprise! GoToMeeting or WebEx or whatever actually needed them to click “OK” or “Update” or “Ẓ̴͝a̴̡̕l̷̙̓g̶̫̔ó̸̻” to continue the joining process. After 5-10 minutes you would politely email your colleague, asking if they were still joining. Often enough you’d find yourself attempting to help people troubleshoot the above steps via email, which was… not enjoyable.

This was all obviously bad. Any user could see it was bad, but it seemed – oddly – like the companies supporting these apps were kind of blind to it. Or, at least, their enterprise customers weren’t demanding better.

As the story goes, Eric Yuan, then an executive at WebEx, was aware how clunky these product experiences were, and was ashamed of it. He felt that customers deserved a more user-centric video product, with excellent call quality, that ensured anybody could join a call with one click. In January 2013, his new startup launched Zoom 1.0. They employed some clever tricks to make sure Zoom seamlessly installed and stayed up to date, so anybody could always join a call in one click. They pushed hard to ramp up the video quality. They prioritized UX at all costs.

The formula worked. A few months after launching 1.0, Zoom had 1 million users. In April 2019, they IPOed with $600M of revenue, were profitable, and were doubling yearly.
By then they were well-known as the video app with the best call quality and UX, so when the pandemic happened the following year, Zoom was propelled to household-name status. Today, they have over $1B/yr in profit, and continue to grow. Zoom is one of the great startup success stories.

It’s also slowly falling apart.

Success at scale always causes problems. Enterprise software success, doubly so. The first hurdle for Zoom, shortly after their IPO, was security issues. These ranged from underpowered encryption to leaky analytics to the revelation that their legendary one-click meeting flow was itself a security vulnerability. With market dominance in hand and billions of dollars of enterprise revenue on the line, Zoom started to unwind their approach of usability at all costs. Zoom founder Eric Yuan on this shift:

“One-click is important. However, you need to make sure there’s not any potential issue, any potential violation of the operating system. Sometimes we have to sacrifice usability for privacy or security, that’s exactly what we did. We now think security or privacy [is] even more important than that.”

And objectively, this is good! We want the software everybody uses to communicate to be private and secure. But it’s also a change in mindset from what made the product great in the first place. The defaults get locked down, the settings panels balloon, and Zoom is that much less likely to incubate the next team-communications breakthrough.

Zoom was also one of the companies most thrashed around by the pandemic. While from the outside the surge in usage might have seemed like a blessing, it ultimately caused Zoom more trouble than it was worth. Yuan again:

“I really wish there was no COVID. Zoom would be a much better company today. COVID, I do not think really helped us that much except for the brand recognition. For everything else, I feel like there was a negative impact to our business in terms of culture, and growth, and the internal challenge, or the competitive landscape. Everything else… I feel like it’s not good for us.”

When your company goes from 2000 employees to 6000 in 2 years because of an event outside your control, you’re gonna have a bad time! You’re also going to get even more settings screens. How many settings, you ask? Developing a clear and coherent product is hard. Developing a clear and coherent product with 6000 other people is even harder!

The other day I had to log in to Zoom to change one of these myriad settings. Picture what Zoom looks like today when somebody at a 2-person startup logs in. Now. In your opinion, what is the ideal number of times this screen should try to sell a startup an upgrade to allow over 100 people in a meeting? Maybe… 6 separate upsells? (The sixth is hard to spot, it’s partially hidden by the popover for the 5th upsell.)

Of course nobody at Zoom decided 6 was the right number. While there is probably somebody at Zoom thinking about the 2-person startup UX, there are clearly 20x as many people concerned about increasing the number of customers who sign up for 500-person meetings. This dashboard is a veritable banner that says “Our KPI is selling Large Meeting add-ons.” Which I’m sure is logical! At least in the short term.

At the same time though, this stuff gives users the ick. “Avoid the ick” is not an OKR, and “% of users that hate navigating your settings” does not appear on your KPI dashboard. But it still accumulates. When this kind of rot happens, it’s obviously bad. Any user can see it’s bad. But, importantly, enterprise customers don’t demand fewer settings, nor saner marketing toward startups. So, often, these situations degrade. It’s a tale as old as time.
Occasionally a market leader who’s gotten off track will rally – especially if they’re still founder-led – to save themselves from fossilization and reinvent. In theory, Zoom could lever their position at the center of billions of work meetings into becoming a critical part of future AI-accelerated work.

More often, the gaps grow large enough that they spawn new startups. Blind spots and product debt compound until they recreate the situation that inspired Yuan in 2011: the market leader’s UX will be bad enough, and the potential for what could be will be compelling enough, that a worthy successor will launch. People will love it, and it will grow like wild.

Either way, we’ll look back on today as the bad old days, and appreciate how much better software has gotten. Customers will continue to demand better, and eventually someone will provide. It’s the circle of life.

Allen Pike 6 months ago

Building Something Big

When I talk about building Forestwalk, people who’ve long known me are sometimes surprised that I’ve been using terms like “runway”, “venture-scale”, and other jargon more associated with the VC world than with indie or lifestyle businesses. And indeed, I do have a secret to come clean about.

You see, for most founders, most of the time, it’s logical to build a “lifestyle business” rather than a venture-track one. The good lifestyle is right in the name. Unluckily for me, working for a lifestyle was never that motivating. I love building software and teams and companies – if I earned enough to retire, I would just keep doing that. So instead of centring my first business around my lifestyle, it was focused on building great products and being a great place to work. Still, our ambitions were generally sized to ensure we didn’t need to make tradeoffs like working late nights, bringing on investors, or taking big risks.

This mostly achieved my goals. For a while. Yet a standard human foible is that, as we achieve our dreams, we generate larger ones. A decade in, I didn’t just want to build great apps with a small team of good people. I wanted to build great products that had a positive impact on a lot of people, and I wanted to do that with a highly ambitious team.

Over the years I’ve had the chance to work with some really incredible folks – driven, passionate, smart, and ambitious. People who are unhappy with the status quo, and who rally their peers to do better work and set their sights higher. As I was working last year towards founding Forestwalk, I realized that a core motivator for us was building with these kinds of people. But how the heck could we afford to do that?

Alex MacCaw highlighted this dynamic in his generally excellent Lifestyle business FAQ.

Pros of lifestyle businesses:
- Fairly straightforward way to get rich
- Earn while you sleep; escape the 9 to 5 rat race
- Focus on other pursuits, like writing, traveling, family, etc

Cons of lifestyle businesses:
- Unreliable source of income (at least initially)
- Does not force one’s self-growth (unlike venture-backed companies)
- Most likely you won’t work closely with incredible people (can get boring/lonely)

There it is.
If you want to constantly be learning, and attract and retain a team full of world-class people who are driven to push you to do so – the sort of people you dream of working with – the best way to do that is to build a venture-scale business. So if you’re a weirdo who cares more about that than you do about your own stress levels, you should swing big.

So that’s what we’ve been doing. That’s why, earlier this year, when we concluded the LLM evals product we’d been working on could make a meaningful business but not a venture-scale one, we pivoted to something new (using what we’d learned as kindling). And why we’ll keep adjusting our plan until something clicks that we could plausibly build into something big. Not because building a huge company is inherently good, but because building toward something big is the best way to attract incredible people.

Of course, it might not work. Things are still very early. But I thought it was worth being straight: that’s the goal. We’re going to build something big, or die tryin’. Wish us luck.

Allen Pike 7 months ago

Getting Tied Up

I never was a Boy Scout. As a kid, I leaned heavily toward papers, screens, and other indoor pursuits. Despite this, I was always drawn to camping. Setting up in the forests of British Columbia for a few days, surrounded by trees and fresh air, always felt good. Worthwhile. Right.

While camping was always joyful, there is one aspect I long struggled with: I was bad at knots. Okay, that is too charitable. I was incompetent at knots. All I could really do was tie the basic learn-it-when-you’re-five knot, repeated twice for good measure. Knot connoisseurs call this a “granny knot,” and it is an objectively bad knot .

These bad knots got me through most of life – they tie a garbage bag until it’s out of sight and out of mind – but when it comes to camping, they are not very helpful. They don’t stay tight, but they’re also hard to untie. They’re not adjustable for tarp lines, and they’re not useful when you only have one end of a rope to work with. They’re just generally bad, and they should feel bad.

I kind of knew this. I had camped every year for decades, and my knots were always a source of frustration. But I was never a Boy Scout. I missed the knot-tying part of life! And my dad moved out when I was a kid. And… I dunno. I’m a computer guy, don’t make me learn knots.

I mean, obviously I could learn knots. I learned long ago that we can learn anything at any age! Being bad at something is just the first step to getting pretty good at it. But if you try to get started with knots, it’s… a lot. The Ashley Book of Knots documents 3857 of them. I downloaded the Knots 3D app, hoping it would give me some guidance. It explains 201 knots, but specifically calls out the “essential” knots: the mere 18 knots one must learn how to execute in order to survive.

You see, there are knots for binding an object down, hitching a rope to an object, adding a loop to a rope, joining two ropes together, stopping a rope from going through a hole, and making an adjustable tie. The ideal knot can vary depending on the direction of tension, the kind of rope, and the relative size of the ropes you’re using. Plus, many knots can easily be done incorrectly, resulting in a problematic bad version – like our cursed double-tied shoelaces.

But… I just wanna quickly tie tarps. And do basic camping stuff. There are a lot of things I’d rather spend my time mastering than knots! So I went back to ignoring them.

A couple years ago, after one particularly frustrating battle with a large tarp in the rain, I finally realized I’d played myself. By avoiding knot practice for so long, I’d let it become a gremlin in my mind. A thing I was bad at, not as a transitional phase towards being good, or even because I was happy to be bad at it, but because I’d let being bad at it become part of my character.

So, when I got home, I sat myself down and learned one single knot. Something that would help with tarping. I spent a couple hours and learned the adjustable Tarbuck Knot . The Tarbuck Knot isn’t an ideal knot in any sense. But it’s adjustable, it’s reasonable, and I like it. And going from knowing nothing – other than “I am bad at this” – to knowing literally anything has levelled up my vacation every year. I now have nice little adjustable tarp lines everywhere. Sure, I sometimes have things tied together with adjustable knots that don’t strictly need to be adjustable. But it’s quick and useful.

I guess the thing I learned – other than how to tie a knot – is that there is nothing so outside your wheelhouse that you can’t go 0 to 1 with it. It’s too easy to dismiss a topic or discipline as not your domain and let your ignorance slowly hinder you. One of the miracles of being human is that we can learn a little bit about everything.

I suppose there’s one other thing I learned. When it comes to the plain knot – the “I’m gonna tie my shoelaces” right over left knot – you should never double-tie it. Instead, tie the second one in reverse, left over right.
That upgrades the bad knot into a Square Knot : stronger and easier to untie. Little things can make big differences.
