Latest Posts (20 found)
Gabriel Weinberg 1 week ago

As AI displaces jobs, the US government should create new jobs building affordable housing

We have a housing shortage in the U.S., and it is arguably a major cause of long-term unrest about the economy. Putting aside whether AI will eliminate jobs on net, it will certainly displace a lot of them. And the displaced people are unlikely to be the same people who will secure the higher-tech jobs that get created. For example, are most displaced truck drivers going to get jobs in new industries that require a lot of education?

Put these two problems together and maybe there is a solution hiding in plain sight: create millions of new jobs in housing. Someone has to build all the affordable homes we need, so why not subsidize jobs and training for those displaced by AI? These jobs will arguably offer an easier onramp and are sorely needed now (and likely for the next couple of decades as we chip away at this housing shortage). Granted, labor may not be the primary bottleneck in the housing shortage, but it is certainly a factor and one that is seemingly being overlooked. There are many bills in Congress aimed at increasing housing supply through new financing and relaxed regulatory frameworks. A program like this would help complete the package.

None of this has been happening via market forces alone, so the government would need to create a new program at a large scale, like the Works Progress Administration (WPA) during the Great Depression, but this time squarely focused on affordable housing (and otherwise narrowly tailored to avoid inefficiencies). There are a lot of ways such a program could work (or not work), including ways to maximize the long-term public benefit (and minimize its long-term public cost), but this post is just about floating the high-level idea. So there you have it. I'll leave you though with a few more specific thought starters:

- Every state could benefit since every state has affordable housing issues. Programs become more politically viable when more states benefit from them.
- Such a program could be narrowly tailored: squarely focused on affordable housing (as mentioned above), keeping the jobs time-limited (the whole program could be time-limited and tied to overall housing stock), and keeping the wages slightly below local market rates (to complement rather than compete with private construction).
- It could also be limited to those directly affected by AI, but that doesn't seem like the right approach to me. The AI job market impact timeline is unclear, but we can nevertheless start an affordable-housing jobs program now that we need today, which can also serve as a partial backstop for AI-job fallout tomorrow. It seems fine to me if some workers who join aren't directly displaced by AI, since the program still creates net new jobs we will need anyway, and to some extent jobs within an education band are fungible. We will surely need other programs as well to help displaced workers specifically (for example, increased unemployment benefits).

Gabriel Weinberg 1 month ago

Some surprising things about DuckDuckGo you probably don't know

We have hundreds of easter-egg logos (featuring our friendly mascot Dax Brown) that surface when you make certain queries on our search engine. Our subreddit is trying to catch 'em all. They've certainly caught a lot (currently 504), but we keep adding more, so it's a moving target; the total as of this post is 594. I'm the one personally adding them in my spare time just for fun, and I recently did a Duck Tales episode (our new podcast) with more details on the process. This incarnation of specialty logos is relatively new, so if you are a long-term user and haven't noticed them, that's probably why (aside, of course, from the fact that you'd have to search one of these queries and notice the subtle change in logo). And, no promises, but I am taking requests.

There is a rumor continuously circulating that we're owned by Google, which of course couldn't be farther from the truth. I was actually a witness in the U.S. v. Google trial for the DOJ. I think this rumor started because Google used to own the domain duck.com and was pointing it at Google search for several years. After my public and private complaining for those same years, in 2018 we finally convinced Google to give us the duck.com domain, which we now use for our email protection service, but the rumor still persists.

We've been blocked in China since 2014, and are on-and-off blocked in several other countries too, like Indonesia and India, because we don't censor search results.

We've been an independent company since our founding in 2008 and have been working on our own search indexes for just as long. For over fifteen years now (that whole time) we've been doing our own knowledge graph index (like answers from Wikipedia), over ten years for local and other instant-answer indexes (like businesses), and in the past few years we've been ramping up our wider web index to support our Search Assist and Duck.ai features. DuckDuckGo began with me crawling the web in my basement, and in the early days, the FBI actually showed up at my front door since I had crawled one of their honeypots.

The plurality of our search traffic now comes from our own browsers. Yes, we have our own browsers with our search engine built in, along with a ton of other protections. How do they compare to other popular browsers and extensions, you ask? We made a comparison page so you can see the differences. Our mobile browsers on iOS and Android launched back in 2018 (wow, that's seven years ago), and our desktop browsers on Mac and Windows in 2022/23. Our iOS browser market share continues to climb and we're now #3 in the U.S. (behind Safari and Chrome) and #4 on Android (behind Chrome, Samsung, and Firefox). People appreciate all the protections and the front-and-center (now customizable) fire button that quickly clears tabs and data in an (also customizable) animation of fire.

About 13% of U.S. adults self-report as a "current user" of DuckDuckGo. That's way more than most people think. Our search market share is lower since all of those users don't use us on all of their devices, especially on Android, where Google makes it especially hard. Once you realize that, it is less surprising that we have the highest search market share on Mac at about 4% in the U.S., followed by iOS at about 3%. I'm talking about the U.S. here since about 44% of our searches are from the U.S., and no other country is in the double digits, but rounding out the top ten countries are Germany, the United Kingdom, France, Canada, India, the Netherlands, Indonesia, Australia, and Japan.
Our approach to AI differs from most other companies trying to shove it down your throat in that we are dedicated to making all AI features private, useful, and optional. If you like AI, we offer private AI search answers at duckduckgo.com and private chat at duck.ai, which are built into our browsers. If you don't like or don't want AI, that's cool with us too. You can easily turn all of these features off. In fact, we made a noai.duckduckgo.com search domain that automatically sets those settings for you, including a recent setting we added that allows you to hide many AI-generated images within image search. Another related thing you might find surprising is that search traffic has continued to grow steadily even since the rise of ChatGPT (with Duck.ai traffic growing even faster).

If you didn't know we have a browser, you probably also don't know we have a DuckDuckGo Subscription (launched last year) that includes our VPN, more advanced AI models in Duck.ai, and, in the U.S., Personal Information Removal and Identity Theft Restoration. It's now available in 30 countries with a similar VPN footprint, and our VPN is run by us (see latest security audit and free trials).

Speaking of lots of countries, our team has been completely distributed from the beginning, now at over 300 people across about 30 countries, with less than half in the U.S. And we're still hiring. We have a unique work culture that, among other things, avoids standing meetings on Wednesdays and Thursdays. We get the whole company together for a week once a year.

We played a critical role in the Global Privacy Control standard and the creation of search preference menus. I have a graduate degree in Technology and Public Policy, and so we've done more of this kind of thing than one might expect, even going so far as to draft our own Do Not Track legislation before we got GPC going. We also donate yearly to like-minded organizations (here's our 2025 announcement), with our cumulative donations now at over $8 million. Check our donations page for details going back to 2011. We can do this since we've been profitable for about that long, and more recently we have even started investing in related startups as well.

If this hodge-podge of stuff makes you think of anything, please let me know. I'm not only taking requests for easter-egg logo ideas, but also for stuff to write about.

Gabriel Weinberg 1 month ago

What GLP-1 drug price is cost neutral to Medicare?

As GLP-1s are studied more, their benefit profile is expanding rapidly. Acknowledging that many questions remain, a recent journal article titled The expanding benefits of GLP-1 medicines puts it like this: GLP-1 medicines, initially developed for blood glucose and weight control, improve outcomes in people with cardiovascular, kidney, liver, arthritis, and sleep apnea disorders, actions mediated in part through anti-inflammatory and metabolic pathways, with some benefits partly independent of the degree of weight loss achieved.

Many millions of Americans would benefit from taking these drugs, but limited insurance coverage and high out-of-pocket costs limit their use. However, if the price were low enough to match their cost savings, then wider coverage could be justified. What price would that need to be? If a drug reduces future care expenditures by more than it costs, then it pays for itself (is cost neutral). Modeling this out can get complicated, especially for drugs whose benefits accrue over many years. That's because you need to at least consider how those cost savings unfold as well as how many people stop taking the drug (the adherence rate).

The Congressional Budget Office (CBO) looked into this question in detail in 2024, using these approximate assumptions:

- 9-year time horizon (2026-2034)
- 35% adherence (continuation) in the first year, ramping up to 50% by year 9
- 80% yearly continuation rate after the first year of continuous use
- Available to Medicare patients who are classified as obese or overweight with at least one weight-related comorbidity
- $5,600/year cost (implying a drug price of about $625/month if you assume a 75% reimbursement rate)
- Savings from reduced care of $50/year in 2026, reaching $650/year in 2034

CBO concludes in their report that these assumptions lead to expanding GLP-1 coverage being very costly to the Federal government.

Doesn't Medicare already cover these drugs for many people?

Yes, but not for obesity writ large, which about doubles the qualified population. From the CBO report: In 2026, in CBO's estimation, 29 million beneficiaries would qualify for coverage under the illustrative policy. About half of that group, or 16 million people, would have access to those medications under current law for indications such as diabetes, cardiovascular coverage, and other indications approved by the FDA in the interim.

Still, CBO only expects a small percentage of eligible patients to use the drugs, due to activation and adherence. In the final year of their model (2034) they predict "about 1.6 million (or 14 percent) of the newly eligible beneficiaries would use an AOM [anti-obesity medication]."

CBO doesn't calculate a break-even price. They just say they expect $50 in average savings in year 1, rising to $650 in year 9, implying a 9% offset rate overall. If we assume a progression of increasing yearly savings to match these assumptions, you get cumulative savings of about $4,000, or about $445 per year. If you assume on average the government picks up 75% of the bill, that implies a break-even drug price of about $50/month.

That said, I would adjust two of the CBO assumptions:

Time Horizon. The CBO time horizon of 9 years is too low. They acknowledge that "from 2035 to 2044…the savings from improved health would be larger than they would be from 2026 to 2034". So, let's add 10 years (for a total of 19), and stipulate that the last ten years average $800 in savings, rising from the year 9 savings of $650. That implies an increase in average savings per year of about 1.4x.

Emerging Benefits. The CBO only accounted for weight-loss benefits, using comparisons to bariatric surgery and other weight-loss evidence, noting that "CBO is not aware of any direct evidence showing that treatment of obesity with GLP-1-based products reduces spending on other medical services." However, the other emerging benefits reduce conditions that are very costly to Medicare, like kidney, heart, and sleep apnea complications (e.g., dialysis, heart surgery, CPAP, etc.). I think we can speculatively call this a 2x multiplier.

Putting it together: $50/month (derived from the CBO estimates) x 1.4 (for the increased time horizon) x 2 (for the increased benefits) =~ $140/month (a small sketch of this arithmetic appears at the end of this post). That is, at $140/month, we would expect the Medicare costs to roughly equal the cost savings and net out to zero (be cost neutral). That's still well below the recently negotiated prices starting in 2027 (for example, Ozempic at $274).

Why are you thinking about this again?

I'm seeing the expanding benefit profile and thinking we have to find a way to get these benefits to more people, as a way to generally increase our average standard of living (in this case by greatly increasing health-span and quality of life). The best way I can see to get the benefits to the most people is if they were government subsidized/provided. But obviously health care costs are a major barrier to that method, and so framing expanded coverage as cost neutral seems most politically viable.

At $100/month, it would be a no-brainer (assuming the above math is correct) to make these drugs available to qualified Medicare patients (say, using at least the CBO obesity criteria), since it would then be clearly making the government money. Additionally, at that price, I think you could start expanding availability well beyond Medicare in waves, monitoring outcomes and cost savings. For example, you could start with programs where the government similarly bears both the costs and the benefits like Medicare, such as for the military and other federal workers. Then you could expand to Medicaid / disability (with cross-state subsidies). Ultimately there could be justification to subsidize a subset of the public at large, for example people aged 55+ who will be on Medicare within the next ten years, such that the savings will be realized by the federal government and the whole program could still be cost neutral.

This may be a half-baked idea, but one approach is to offer up to the market a yearly contract for expanded Medicare coverage, and whoever shows up first gets it (to be renegotiated yearly). I don't think this is that crazy because the manufacturing cost is estimated to be a small fraction of the list price, and the UK previously negotiated pricing in this ballpark. The volumes would be huge, and as more companies enter the market, I imagine eventually one of them would take the offer.
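As a rough check, here is a minimal sketch of the break-even arithmetic above. It only restates the post's numbers (the ~$445/year average savings, 75% government share, and the two multipliers); it is not CBO's model, and the savings progression is simplified to those yearly averages.

```python
# Rough restatement of the break-even arithmetic above (post's numbers, not CBO's model).

GOV_SHARE = 0.75            # assumed share of the drug bill the government picks up
AVG_SAVINGS_YR1_9 = 445     # ~$4,000 cumulative savings over 9 years, per year
AVG_SAVINGS_YR10_19 = 800   # stipulated average yearly savings for years 10-19

# CBO-style 9-year horizon: monthly drug price at which savings cover the government's share.
base_break_even = AVG_SAVINGS_YR1_9 / 12 / GOV_SHARE
print(f"9-year break-even price: ~${base_break_even:.0f}/month")  # ~$49, i.e. roughly $50/month

# Extending the horizon to 19 years raises the average yearly savings by roughly 1.4x.
horizon_multiplier = (AVG_SAVINGS_YR1_9 * 9 + AVG_SAVINGS_YR10_19 * 10) / 19 / AVG_SAVINGS_YR1_9

# Speculative 2x multiplier for the emerging non-weight-loss benefits.
BENEFITS_MULTIPLIER = 2.0

adjusted = base_break_even * horizon_multiplier * BENEFITS_MULTIPLIER
print(f"Adjusted break-even price: ~${adjusted:.0f}/month")        # ~$140/month
```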

Gabriel Weinberg 1 month ago

One approach to a heavily curated information diet

Disclaimer: This approach works for me. It may not work for you, but maybe it also gives you some ideas.

I find the signal-to-noise ratio on social media and news sites/apps too low for me to have a consistently good experience on them. So, I developed an alternative, heavily curated approach to an information diet that I'm laying out here in the hope that people will give me suggestions over time to improve it. It involves four main inputs:

RSS, skewed towards "most upvoted" feeds

I use Reeder because it has a polished user interface, formats the feed in an aggregated, chronological timeline across devices, and has native reddit and filtering support, but there are many other RSS readers too. I subscribe to around 25 feeds and 25 subreddits through Reeder. To increase the signal to noise, I try to find "most upvoted" feeds where possible. For example, for subreddits, I usually use the top posts for the week, which you can get for any subreddit like this: https://www.reddit.com/r/economics/top.rss?t=week (just replace 'economics' with the subreddit of your choice; see the small sketch below). Doing so will get you on the order of five top posts per day, but you can also change 'week' to 'day' to increase that number to about twenty, or to 'month' to decrease it to about one, which I do for some feeds.

To find generally interesting subreddits I looked through the top few hundred subreddits, and then I also added some niche subreddits for specific interests I have. Below is part of my reddit list (alphabetical). You can see I have some really large subreddits (technology, science, todayilearned) mixed in with more niche ones (singularity, truereddit), as well as communities (rootsofprogress, slatestarcodex) and hobbies (phillyunion, usmnt). Getting about twenty-five across a range of your interests makes a good base feed.

Many publications still have RSS feeds if you search for publication name + RSS. If they don't, it's likely RSS.app or Feedspot has made one you can use instead. There is usually support through one of these methods for sub-section publication feeds, for example the tech section. Here are some other examples of non-reddit "most upvoted" feeds that might be more widely appealing:

- Hacker News RSS - for example, I added the 300 and 600 points feeds, meaning you get notified when a story hits 300 or 600 points (you can pick any number).
- NYT RSS - they have most emailed/shared/viewed
- Techmeme RSS - curated by the Techmeme team
- LessWrong RSS - they have a curated feed

Then I also just consume the main RSS feeds of some really high-signal publications like Ars Technica (full articles come through for subscribers), The Information, Quanta, etc. Even with all this curation, the signal to noise for me isn't that great. I skim through the timeline mostly, but I do end up getting a bunch of interesting articles this way every day. I do use the filtering feature of Reeder to drop out some really low-hit keywords.

Podcasts

I subscribe to about 20 podcasts via Overcast. I like the Overcast "Voice Boost (clear, consistent volume)" and "Smart Speed (shorter silences)" features as well as the ability to set a custom playback speed for each podcast. The signal-to-noise ratio is better here than the RSS feeds, but I still don't listen to every episode, and for the ones I do, I often skip around. I like having a queue to listen to in the car and at the gym. I find new podcast discovery pretty hard. I've looked through the Overcast top podcasts lists in all the different categories, and tried lots of them, but not many stick for me.
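Here is a minimal sketch of the Reddit "top posts" feed pattern described in the RSS section above. The subreddit list is just an example (not my full list), and top_posts_feed is a hypothetical helper, not part of Reeder or any Reddit API.

```python
# Build "top posts" RSS URLs for a list of subreddits, following the pattern above.
# t=week gives roughly five posts a day; use "day" for more or "month" for fewer.

def top_posts_feed(subreddit: str, window: str = "week") -> str:
    """Return the Reddit 'top posts' RSS URL for a subreddit."""
    assert window in {"day", "week", "month"}
    return f"https://www.reddit.com/r/{subreddit}/top.rss?t={window}"

subreddits = ["economics", "technology", "science"]  # example list only
for name in subreddits:
    print(top_posts_feed(name))
```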
Email newsletters

I subscribe to about the same number (20-25) of email newsletters, some daily but most weekly or less. Signal/noise is less than podcasts, but greater than the RSS feeds. I'd guess my hit rate is about 20% in terms of reading them through, vs. maybe 50% for listening through podcasts and 5% for reading through the full RSS amalgamation. About half of the email newsletters I subscribe to are through Substack and half are direct from websites/organizations.

People sending me links

I really appreciate when people send me curated links, which happens less than I'd like, but I can't complain because the signal to noise here is the highest, with a hit ratio of maybe 80%. I try to encourage it by saying thank you and responding when I have thoughts.

With those four inputs, I feel decently covered, but sometimes I do wonder what I'm missing out on and occasionally relapse back to going directly to a news or social media app and skimming the front page. This method of course depends on having a good list of feeds, podcasts, and newsletters. But in general, I'm personally happier with this approach, though of course your mileage may vary. If you're doing something similar and have any ideas on process tweaks or specific recommendations for feeds, podcasts, or newsletters, I'd love to hear them.

Gabriel Weinberg 2 months ago

China has a major working-age population advantage through at least 2075

In response to "A U.S.-China tech tie is a big win for China because of its population advantage," I received feedback along the lines of: shouldn't we be looking at China's working-age population and not their overall population? I was trying to keep it simple in that post, but yes, we should, and when we do, we find, unfortunately, that China's population advantage still persists. Here's the data (source: Our World in Data):

According to Our World in Data, China's working-age population is 983 million to the U.S.'s 223 million, or 4.4x. The projections put China's 2050 working-age population at 745 million to the U.S.'s 232 million, or 3.2x, and China's 2075 working-age population at 468 million to the U.S.'s 235 million, or 2.0x.

Noah Smith recently delved into this rather deeply in his post "China's demographics will be fine through mid-century," noting: China's economic might is not going to go "poof" and disappear from population aging; in fact, as I'll explain, it probably won't suffer significant problems from aging until the second half of this century.

And even in the second half, you can't count on their demographic decline then either, both because even by 2075 their working-age population is still projected to be double the U.S.'s under current conditions, and because those conditions are unlikely to hold. As Noah also notes: Meanwhile, there's an even greater danger that China's leaders will panic over the country's demographics and do something very rash…All in all, the narrative that demographics will tip the balance of economic and geopolitical power away from China in the next few decades seems overblown and unrealistic.

Check out my earlier article for details, but here's a summary. [A] U.S.-China tech tie is a big win for China because of its population advantage. China doesn't need to surpass us technologically; it just needs to implement what already exists across its massive workforce. Matching us is enough for its economy to dwarf ours. If per person output were equal today, China's economy would be over 4× America's because China's population is over 4× the U.S.'s. That exact 4× outcome is unlikely given China's declining population and the time it takes to diffuse technology, but 2 to 3× is not out of the question. China doesn't even need to match our per-person output: their population will be over 3× ours for decades, so reaching ⅔ would still give them an economy twice our size, since 3 × ⅔ = 2. …With an economy a multiple of the U.S.'s, it's much easier to outspend us on defense and R&D, since budgets are typically set as a share of GDP. …What if China then starts vastly outspending us on science and technology and becomes many years ahead of us in future critical technologies, such as artificial superintelligence, energy, quantum computing, humanoid robots, and space technology? That's what the U.S. was to China just a few decades ago, and China runs five-year plans that prioritize science and technology. …Our current per person output advantage is not sustainable unless we regain technological dominance. …[W]e should materially increase effective research funding and focus on our own technology diffusion plans to upgrade our jobs and raise our living standards.
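For reference, here is a tiny restatement of the working-age ratios above, using the approximate Our World in Data figures cited in the post:

```python
# Working-age population (millions, approximate) from the Our World in Data figures above.
china = {"today": 983, "2050": 745, "2075": 468}
us    = {"today": 223, "2050": 232, "2075": 235}

for period in china:
    print(f"{period}: {china[period] / us[period]:.1f}x")  # ~4.4x, ~3.2x, ~2.0x
```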

Gabriel Weinberg 2 months ago

Introducing Duck Tales

Quick FYI post. DuckDuckGo has a new substack you might want to check out: I'll still be primarily writing here on my personal newsletter, but I'm also one of the hosts for the video/audio series on the new substack we're calling Duck Tales, which takes you inside DuckDuckGo by casually interviewing team members about new features and other things we're doing and thinking about. It's six episodes in and, in addition to Substack, you can consume it on YouTube, Apple Podcasts, or other players using the RSS feed. Episodes are 10-20 min, and the ones I've done so far are on filtering AI images, tuning AI personality, using AI to make easter-egg logos, excluding domains from search results, and our browser's scam blocker, with more recorded and scheduled to come out soon.

Gabriel Weinberg 2 months ago

Total Factor Productivity needs a rebrand (and if you don't know what that is, you probably should).

If you don't know about Total Factor Productivity (TFP), you probably should. It's an economic concept that is arguably the most important driver of long-term economic prosperity. An International Monetary Fund (IMF) primer on TFP explains it like this (emphasis added): It's a measure of an economy's ability to generate income from inputs—to do more with less…If an economy increases its total income without using more inputs…it is said to enjoy higher TFP [Total Factor Productivity]. TFP is an important macroeconomic statistic [because] improvements in living standards must come from growth in TFP over the long run. This is because living standards are measured as income per person—so an economy cannot raise them simply by adding more and more people to its workforce. Meanwhile, economists have amassed lots of evidence that investments in capital have diminishing returns. This leaves TFP advancement as the only possible source of sustained growth in income per person, as Robert Solow, the late Nobel laureate, first showed in a 1957 paper.

So, it's important. Critically important to long-term progress. To learn more about TFP, check out the full IMF primer referenced above and then this post I wrote about TFP titled "The key to increasing standard of living is increasing labor productivity," which also has more links embedded in it. It explains how the only sustainable way to increase TFP is "to invent new technology that enables workers to do more per hour." And this is why I'm always going on and on about increasing research funding.

Let's assume for a second that most people want more prosperity and that long-term prosperity does indeed primarily flow through Total Factor Productivity. Then why aren't we talking about TFP a lot more? Why isn't Total Factor Productivity front and center in our political agendas? I think there are a host of reasons for that, including those I outlined in the paradox of progress. But another, even simpler reason has to be that Total Factor Productivity is a terrible, inscrutable name, at least from the perspective of selling the concept to the mainstream public.

None of the three words is great. It starts with "total," which isn't as off-putting as the other words, but doesn't add much, especially as the first word, let alone the fact that economists quibble that it isn't an actual total. "Factor" seems like a math word and doesn't add much either. And then you have "productivity," which is confusing to most people because it has an unrelated colloquial meaning, and from a political perspective it also codes as job-cutting, which is inherently unappealing.

Now, lots of economics jargon has similar problems, case in point "Gross Domestic Product" (GDP). Given GDP hasn't been rebranded, I doubt TFP will be either. That said, I think anyone trying to communicate this concept to the public shouldn't take the TFP name or acronym as a given, but should try to use something more appealing and inherently understandable. I'm looking to switch to something else but am not sure exactly what. My thinking so far has led me to work in the words "prosperity" or "innovation" directly, like:

- Prosperity Driver
- Prosperity Component
- Innovation Multiplier

Do you have any other suggestions?
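As an aside, for readers who want the math behind the name, here is a small sketch of the standard growth-accounting (Solow residual) calculation. The 0.3 capital share and the growth rates are illustrative assumptions, not figures from the IMF primer or Solow's paper.

```python
# Standard growth-accounting sketch: TFP growth is the part of output growth
# not explained by growth in capital and labor inputs.

ALPHA = 0.3  # assumed capital share of income (a common illustrative value)

def tfp_growth(output_growth: float, capital_growth: float, labor_growth: float,
               alpha: float = ALPHA) -> float:
    return output_growth - alpha * capital_growth - (1 - alpha) * labor_growth

# Illustrative numbers: 3% output growth, 4% capital growth, 1% labor growth.
print(f"TFP growth: {tfp_growth(0.03, 0.04, 0.01):.1%}")  # ~1.1%
```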

Gabriel Weinberg 2 months ago

Is consumer AI heading for a duopoly?

Fifteen years ago Google started using their search monopoly to create a browser monopoly by pushing people to use Chrome through in-product promotions in Google search. It worked. Now they're repeating that same playbook for consumer AI with Gemini, and it's working again. In the last 30 days, Gemini has been downloaded about the same number of times as ChatGPT, and nothing else is even close.

(Data from Appfigures Top Apps)

While ChatGPT had a massive head start, Google is rapidly turning consumer AI into a duopoly. Despite endless headlines mentioning Anthropic, Perplexity, and others, none of the alternatives seem to be meaningfully gaining market share right now relative to ChatGPT, except Gemini. The reason is simple: the others don't have the distribution channels to match Google's.

The next phase of consumer AI competition will favor Google even more. As I recently noted, consumer internet workflows increasingly span search, browsing, and AI. Who has the most entrenched position in search and browsing to complement consumer AI? Google. For example, their monopoly browser (Chrome) can get AI features to most consumers the fastest. Google's ability to leverage its market position to distribute its own AI products continues unabated, and U.S. v. Google made clear that distribution powers a scale advantage. That is, Google's search assets are not easily replicable because of the vast user engagement data Google alone possesses. And an increasing number of sites don't even allow web crawlers or access to their content except for Google.

We shouldn't settle for a shift from Google's search monopoly to an AI duopoly. Thus far, regulators have only addressed Google's advantages at the margins. There remains time to address these dynamics and unlock innovation.

One possible (non-regulatory) response is deeper partnerships and consolidation between the other AI companies, search engines, and browsers in an effort to compete with more scale in this new market. This has already started around the edges; for example, we (at DuckDuckGo) have partnered with You.com to develop a better news search index and are looking to partner with others to advance the web index we've been working on, as well as to enhance our browser and AI features. But the market is ripe for larger deals.

To see where those might come from, here are the top mobile search engines in the U.S. (consumer AI is used primarily on mobile) according to Cloudflare, who sees the most traffic. Google is #1, followed by us (DuckDuckGo) at #2, then Yahoo and Bing. Everyone else is sub-1%.

(Data from Cloudflare Radar Search Engine Referral Report for 2025 Q2)

Similarly, here are the top mobile browsers in the U.S. Safari and Chrome dominate, followed by Samsung Internet, DuckDuckGo, and Firefox above 1%.

(Data from Cloudflare Radar Browser Market Share Report for 2025 Q2)

And finally, here are the top 20 consumer web destinations in the U.S., according to SEMRush. The top ten include Google, Reddit, Facebook, Amazon, Yahoo, Wikipedia, Instagram, DuckDuckGo, and ChatGPT.

(Data from SEMRush)

Partnerships and consolidation between these companies could produce some more effective competition. So far, consumer AI has actually driven more traditional search and browser usage, not less. We see that in our numbers, and SparkToro reports similar findings for others. AI is driving people to do more information seeking in general, and as mentioned, those workflows increasingly span search, browsing, and AI.
The best experiences seamlessly blend all three in the browser, and so it is natural that companies with assets in some of the three areas would want to partner with companies with non-overlapping assets. Additionally, a company with a large consumer user base could help directly drive distribution of consumer AI, browsers, and search engines, especially if that company has unique content assets.

A duopoly in consumer AI will not just be bad for innovation, but will further erode privacy. That's why I believe DuckDuckGo will remain an important alternative regardless of what happens, but I'm still a little hopeful that innovative partnerships and consolidation could challenge the rapidly emerging consumer AI duopoly.

Gabriel Weinberg 3 months ago

The paradox of progress

Progress doesn't have a single agreed-upon definition, but for the sake of anchoring, let's say progress is rising living standards. While this definition seems unambiguously good, deserving of top billing on policy agendas for both major parties, the paradox is that long-term progress agendas rarely get top billing. Why? Here are three underlying reasons I've noticed in thinking about advocating for a lot more basic research funding, which I think needs to be the cornerstone of any credible progress agenda:

Timescale mismatch. People want benefits now, not decades from now. U.S. politics runs on two-year cycles, while progress policies need decades to compound into large increases in living standards. For example, 2% vs. 3% growth, which would be a great outcome for a progress agenda, seems like a rounding error to most people even though it is meaningful when compounded (see the quick check at the end of this post). And the politicians championing these policies won't be around to claim credit when they pay off decades later. Resolving this part of the paradox would involve articulating short-term benefits in some manner, for example that research funding is a jobs engine in the short term. It could also involve bundling longer-term investments like research funding in a particular field with shorter-term concrete results like rollout projects in that same field, which people can start seeing the physical results from within a couple of years.

Change aversion. Advocating for far-future progress is selling a sci-fi world, which a lot of people take (and creative media often depict) as dystopian, not utopian. True progress means society changes for the better, delivering better-paid jobs using more advanced technology, and products that bring new conveniences and experiences. But change also means at least some disruption of current ways of life and thinking, and that creates winners and losers in the short term, which in turn creates reasonable anxiety. Resolving this part of the paradox would involve painting a clearer picture of what exactly will change in the short term, paired with explicit transition support for people most directly affected. It could also involve less focus on the far future altogether, focusing instead on shorter timeframes that could be more easily contextualized.

Lacking urgency. Not only is there not a clear picture, but progress agenda framing lacks urgency, emphasizing future opportunities rather than short-term crisis. Crisis framing comes with inherent urgency that opportunity framing lacks. Resolving this part of the paradox would involve reframing progress agendas as a response to crisis, such as the risk of China leapfrogging us in critical technology and the military and economic consequences that brings.

All these resolutions share a common thread: making distant abstractions concrete. I'm increasingly convinced that advocating for progress, whether it be basic research funding or otherwise, requires bundling long-term promises with near-term demonstrations, including explicit workforce transition plans, and framing progress as helping to address competitive threats we're already facing.
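The quick compounding check referenced above, with an illustrative 30-year horizon:

```python
# 2% vs. 3% annual growth compounded over 30 years.
low, high, years = 1.02, 1.03, 30
print(f"2% growth: {low**years:.2f}x")   # ~1.81x
print(f"3% growth: {high**years:.2f}x")  # ~2.43x
print(f"Difference after 30 years: {high**years / low**years - 1:.0%}")  # incomes ~34% higher
```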

Gabriel Weinberg 3 months ago

The overlooked front in the browser wars

The browser wars are back. Agents are one front; the overlooked front is reducing switching costs in workflows that bridge across search, browse, and AI.

Traditional web search, browsing, and AI chatting aren't going away, even as many people will eventually start interacting more with agents that automate tasks. And all three of these core Internet activities increasingly overlap into complex workflows. Many queries you could start either with a web search or an AI chat, and many of those end with wanting to browse to a website. Sometimes you're on a website and want to ask a question about it. Sometimes you are in a chat and want to run a related web search, and vice versa. The best product lets you complete whole tasks with the least friction. Agents are one approach. The other approach, which keeps users in the driver's seat, is to seamlessly integrate search, browse, and AI into one interface. Both are important fronts in the new browser wars.

We've been working on AI at DuckDuckGo for several years now, with an overall approach of providing private, useful, and optional AI features, including chat and search instant answers, to people who want the productivity benefits of AI without the privacy risks. Our chat service at duck.ai, which allows you to chat privately with popular chatbots and get real-time answers from the web, has the highest satisfaction ratings we've ever seen in a new service, and Search Assist, our take on Google's AI Overviews, is currently our highest-rated search feature.

While we're the second largest search engine on mobile in the U.S., we're also the 4th largest mobile browser (and #3 on iOS specifically). In fact, we think of ourselves as a browser company at this point. I believe the best search, AI chat, and web user experience is where they are all integrated deeply in the browser (vs. in separate apps or services), supporting workflows that allow you to seamlessly move between modes as needed. Independent, network-level data from Cloudflare Radar, covering traffic for 20%+ of the Internet, places our browser #4 on U.S. mobile (and #3 on iOS specifically).

Seamless workflows: what we're creating at DuckDuckGo

I believe our browser is so popular because we focus not just on protection but also on continually refining the user experience, aiming for both dependability and delight. That's why we have been actively working on creating such seamless workflows across search, browse, and AI for some time now. Here are a few examples.

We have a sidebar in our desktop browser that is easily accessible from any webpage to ask an AI chat question, optionally using page context. From there, you can ask follow-ups, pop it out into a new tab if you need more space, or hide it again to give more room back to the website without losing your place.

Toggle between Duck.ai and search via address bar / homepage

We now have an optional address bar mode on mobile (coming to desktop next) that allows you to easily toggle between private web search and private AI chat when starting queries. Ideally this makes it just as easy to start a web search or an AI chat as in a standalone search or chat app, and further allows you to switch modes mid-query without re-typing. The toggle is also available on (or coming soon to) our homepage and new tab page.

Handoff between search results and chat

If you start a web search, we've built in ways to more easily jump into chat mode if desired.
You can click Duck.ai to go into AI mode, or click the chat icon from within Search Assist to ask a specific follow-up question related to the answer, which will carry over the answer and sources as context.

Integrating traditional search into Duck.ai

From within Duck.ai chat conversations, we automatically search the web for you and provide links to related searches or websites when appropriate, allowing you to jump back into search or browse mode if desired.

We're working on refining these features based on user feedback as well as designing more. The best user experience will improve the workflows of the many people who are bouncing between these modes dozens of times a day. Your thoughts are welcome!

As mentioned, our overall approach to AI features is to keep them useful, private, and optional. The above illustrates usefulness, so now a few closing words on private and optional. Similar to our search engine, our AI chats are anonymized. In addition, chats are not used to train AI models. There's a lot more about private AI chats on our help pages. We also make sure everything we do with AI is optional, since we know not all of our users want to use AI for a variety of reasons, and that is fine with us. All AI features can be turned off, in both our browser and our search engine.

Gabriel Weinberg 3 months ago

What banning AI surveillance should look like, at a minimum

(Image: Minority Report, 2002)

I previously called on Congress to ban AI surveillance because of its heightened potential to easily manipulate people, both for commercial and ideological ends. Essentially, we need an AI privacy law. Yet Congress has stalled on general privacy legislation for decades, even in moments of broad public privacy focus, like after the Snowden revelations and the Cambridge Analytica scandal. So, instead of calling for another general privacy bill that would encompass AI, I believe we should focus on an AI-specific privacy bill.

Many of the privacy frameworks floated over the years for general privacy regulation could essentially be repurposed to apply more narrowly to AI. For example, one approach is to enumerate broad consumer AI rights, such as rights of access, correction, deletion, portability, notice, transparency, opt-out, human review, etc., with clear processes to exercise those rights. Another approach is to create legally binding duties of care and/or loyalty on organizations that hold AI data, requiring them to protect consumers' interests regarding this data, such as to minimize it, avoid foreseeable harm, prohibit secondary use absent consent or necessity, etc. There are more approaches out there and they are not mutually exclusive. While I have personal thoughts on some of them, my overriding goal is to get something, anything useful passed, and so I remain framework-agnostic. However, I believe within whatever framework Congress adopts, certain fundamentals are non-negotiable:

Ban a set of clearly harmful practices. Start with what (I hope are) universal agreement items, like identity theft, deceptive impersonation, unauthorized deepfakes, etc. The key is explicitly defining this as a category so that we can debate politically harder cases like personalized pricing and predictive policing (both of which I think should also be banned).

Practices near the ban threshold should face higher scrutiny. For example, if we can't manage to outright ban using AI to assist in law enforcement decisions, at the very least this type of use should always be subject to human review, reasonable auditing procedures, etc. Using AI for consequential decisions, like loan approvals, or for processing sensitive data, like health information, should at least be in this category. And many practices within this category, especially with regard to consumer AI, should be explicitly opt-in.

Make everything else transparent and optional. Outside the bright-line bans and practices subject to higher scrutiny, any other AI profiling must be transparent and at least come with the ability to opt out, with only highly limited exceptions where opt-outs would defeat the purpose, like for legal compliance. Consumers also need meaningful transparency, including prominent disclosures that indicate clearly when you are interacting with an AI system. That means not just generic data collection notices or folding into existing privacy policies, but plain-language explanations shown (or spoken) prominently at the time of processing, which detail what AI systems are inferring and deciding.

States must maintain authority to strengthen, not undermine, federal minimums. I wrote a whole post about why, with the gist being that AI is changing rapidly, the federal government doesn't react to these changes quickly enough, and states have shown they will act, both in AI and privacy.

Finally, these protections won't stifle progress.
Some oppose any AI regulation because they believe it will hinder AI adoption or innovation. In terms of innovation, privacy makes a good analogy: Despite fears that a “patchwork” of state privacy laws would wreak havoc on innovation by going too far, they haven’t. Innovation hasn’t stalled, and neither have Big Tech privacy violations. In terms of adoption, the backlash against AI is real and rising, and smart regulation can help build the trust necessary for sustained AI adoption, not hinder it. We can get the productivity benefits of AI without the privacy harms. Thanks for reading! Subscribe for free to receive new posts or get the audio version .

Gabriel Weinberg 3 months ago

On reddit, roughly 500 views = 1 click

A couple weeks ago I wrote a post titled AI surveillance should be banned while there is still time . Someone submitted it to Hacker News where it got over 600 upvotes , so I decided to submit it myself to reddit (on /r/technology) where it got over 1,100 upvotes . Because I submitted it, I was able to get “Post Insights” (pictured above, left) that indicated the post got 175,000 views. Similarly, Substack reports “Traffic sources” (pictured above, right) and shows 310 views came from reddit. This roughly 1:500 ratio is consistent with others I’ve gathered across several different posts and subreddits, so I don’t think it is particularly anomalous. Reddit views count impressions (when posts appear in feeds), making this ratio also comparable to other platforms. The bottom line is that lots of views on social don’t equate to lots of clicks, and certainly not lots of email subscribers, which involve roughly another 1:100 drop-off from clicks to subscribers. My takeaways: Social ≠ list growth. Social posts don't build email lists: the ratio of social post views to new email subscribers is likely less than 1 in 50,000 (500 × 100). Optimize the headline. If you do chase social views, nail the headline since that's where 99% of the value lives given almost nobody clicks through. For example, you could expose your brand name or logo, or just raise awareness for a crisp point or concept you can fit in a headline. A 0.2% click-through rate (1 in 500) is common for ads; I expected higher for a top organic post on a popular subreddit, but this data suggests otherwise. Of course, your mileage may vary, but I thought it would nevertheless be helpful to put out a real data point I found interesting. Thanks for reading! Subscribe for free to receive new posts or get the audio version .
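To make the funnel math concrete, here is a minimal sketch plugging in the rough ratios above; the figures are the post's approximations, not precise measurements.

```python
# Rough social-to-subscriber funnel using the approximate ratios above.
VIEWS_PER_CLICK = 500        # ~1 click per 500 social impressions
CLICKS_PER_SUBSCRIBER = 100  # ~1 new email subscriber per 100 clicks

def expected_subscribers(impressions: int) -> float:
    """Estimate new email subscribers from a given number of social impressions."""
    clicks = impressions / VIEWS_PER_CLICK
    return clicks / CLICKS_PER_SUBSCRIBER

print(expected_subscribers(175_000))  # ~3.5 subscribers from 175,000 reddit views
```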

Gabriel Weinberg 4 months ago

A U.S.-China tech tie is a big win for China because of its population advantage

China’s population is declining, but UN projections show it will remain at least twice the size of the U.S. for decades. UN WPP 2024 medium scenario ; China remains ≥2× the U.S. population for decades. So what? Population size matters because economy size (GDP) matters and GDP = population × output per person. For example: 100 million people × $50,000 per year = a $5 trillion economy 1 billion people × $50,000 per year = a $50 trillion economy It’s because China’s output per person is still much lower than America’s. China has about 4× the people but about ¼ the output per person. 4 × ¼ = 1. This means that by any standard GDP measure, such as market exchange rates or Purchasing Power Parity , the two economies are in the same ballpark today. Source: The Great Race ; Population via Our World in Data ; $ per person via World Bank (figures shown in PPP; the conclusion is the same under market exchange rates). OK, but why is China’s economic output per person so much lower than America’s? A primary reason is that large swaths of its workforce aren’t yet at the technological frontier. About 23% of Chinese workers are in agriculture vs. about 1½% in the U.S. However, if China continues to educate its population, mechanize its workforce, and diffuse technology across it, that gap will continue to narrow and per-worker output will continue to climb. Only a decade ago, over 30% of China’s workforce was in agriculture , and per-person output has grown much faster than in the U.S. for decades. Technology is the driving force enabling China to catch up with the U.S. in economic output per person. As long as China diffuses increasingly sophisticated technology through its workforce significantly faster than in the U.S., then it will keep raising output per person relative to the U.S., growing its economy faster. Diffusion is not automatic; it depends on continued private-sector dynamism and sound policy. It isn’t guaranteed, but it is certainly plausible, if not likely. Put another way, a U.S.-China tech tie is a big win for China because of its population advantage . China doesn't need to surpass us technologically; it just needs to implement what already exists across its massive workforce. Matching us is enough for its economy to dwarf ours. If per person output were equal today, China’s economy would be over 4× America’s because China’s population is over 4× the U.S. That exact 4× outcome is unlikely given China’s declining population and the time it takes to diffuse technology, but 2 to 3× is not out of the question. China doesn't even need to match our per-person output: their population will be over 3× ours for decades, so reaching ⅔ would still give them an economy twice our size since 3 × ⅔ = 2. Some may recall similar predictions about Japan in the 1980s that never materialized. But China is fundamentally different: Japan's population peaked at less than ½ the U.S., while China's is over 4× ours. Japan’s workforce had already reached the technological frontier when it stalled out, while China is still far behind with massive room to catch up. China wins a much bigger economy. With an economy a multiple of the U.S., it’s much easier to outspend us on defense and R&D, since budgets are typically set as a share of GDP. Once China’s economy is double or triple ours, trying to keep up would strain our economy and risk the classic guns-over-butter trap . (This is the same trap that contributed to the Soviet Union’s collapse: too much of its economy steered toward military ends.) 
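As a quick illustration of GDP = population × output per person, here is a small sketch using the stylized round numbers from the post (not official statistics):

```python
# GDP = population x output per person, using the post's stylized round numbers.
def gdp_trillions(population_millions: float, output_per_person: float) -> float:
    """GDP in trillions of dollars."""
    return population_millions * 1e6 * output_per_person / 1e12

print(gdp_trillions(100, 50_000))    # 100 million people x $50,000 -> $5 trillion
print(gdp_trillions(1_000, 50_000))  # 1 billion people   x $50,000 -> $50 trillion

# Rough parity today: ~4x the people at ~1/4 the output per person nets out to ~1x.
print(4 * 1 / 4)  # 1.0 -> economies in the same ballpark

# The catch-up scenario: ~3x the people at 2/3 the per-person output -> ~2x the economy.
print(3 * 2 / 3)  # 2.0
```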
Alliances could help offset raw population scale, but only if we coordinate science, supply chains, and procurement, which we have not achieved at the needed scale. What if China then starts vastly outspending us on science and technology and becomes many years ahead of us in future critical technologies, such as artificial superintelligence, energy, quantum computing, humanoid robots, and space technology? That’s what the U.S. was to China just a few decades ago, and China runs five-year plans that prioritize science and technology. Our current per person output advantage is not sustainable unless we regain technological dominance. By dominance, I don’t mean a few months ahead like today’s AI cycles. I mean many years ahead in developing, diffusing, and commercializing frontier science and technology. My takeaway: we need to recognize how quickly we are losing our privileged position to China. If its economy doubles or triples ours, it can outspend us to lock in technological and military dominance. That may not happen, but we shouldn’t bet on it. Instead, we should materially increase effective research funding and focus on our own technology diffusion plans to upgrade our jobs and raise our living standards . The net job effect of AI automation is hotly debated, but any outcome doesn’t change this calculus. If employment levels remain about the same then the status quo population advantage remains. If net jobs drop dramatically due to an AI-dominated economy, staying ahead in AI systems becomes even more important. So, either way, doing more effective research and development is critical. This should be the most important and bipartisan political issue. Research and technology diffusion isn’t everything, but it is the cornerstone of future prosperity. If we don’t get it right, we definitely lose, and we’re currently not getting it right. Thanks for reading. Subscribe for free to receive new posts or get the audio feed .

Gabriel Weinberg 4 months ago

AI surveillance should be banned while there is still time.

Original cartoon by Dominique Lizaambard (left), updated for AI, by AI (right). All the same privacy harms with online tracking are also present with AI, but worse. While chatbot conversations resemble longer search queries, chatbot privacy harms have the potential to be significantly worse because the inference potential is dramatically greater. Longer input invites more personal information to be provided, and people are starting to bare their souls to chatbots. The conversational format can make it feel like you’re talking to a friend, a professional, or even a therapist. While search queries reveal interests and personal problems, AI conversations take their specificity to another level and, in addition, reveal thought processes and communication styles, creating a much more comprehensive profile of your personality. This richer personal information can be more thoroughly exploited for manipulation, both commercially and ideologically, for example, through behavioral chatbot advertising and models designed (or themselves manipulated through SEO or hidden system prompts) to nudge you towards a political position or product. Chatbots have already been found to be more persuasive than humans and have caused people to go into delusional spirals as a result. I suspect we’re just scratching the surface, since they can become significantly more attuned to your particular persuasive triggers through chatbot memory features , where they train and fine-tune based on your past conversations, making the influence much more subtle. Instead of an annoying and obvious ad following you around everywhere, you can have a seemingly convincing argument, tailored to your personal style, with an improperly sourced “fact” that you’re unlikely to fact-check or a subtle product recommendation you’re likely to heed. That is, all the privacy debates surrounding Google search results from the past two decades apply one-for-one to AI chats, but to an even greater degree. That’s why we (at DuckDuckGo) started offering Duck.ai for protected chatbot conversations and optional, anonymous AI-assisted answers in our private search engine. In doing so, we’re demonstrating that privacy-respecting AI services are feasible. But unfortunately, such protected chats are not yet standard practice, and privacy mishaps are mounting quickly. Grok leaked hundreds of thousands of chatbot conversations that users thought were private. Perplexity’s AI agent was shown to be vulnerable to hackers who could slurp up your personal information. Open AI is openly talking about their vision for a “super assistant” that tracks everything you do and say (including offline). And Anthropic is going to start training on your chatbot conversations by default (previously the default was off). I collected these from just the past few weeks! It would therefore be ideal if Congress could act quickly to ensure that protected chats become the rule rather than the exception. And yet, I’m not holding my breath because it’s 2025 and the U.S. still doesn’t have a general online privacy law, let alone privacy enshrined in the Constitution as a fundamental right, as it should be . However, there does appear to be an opening right now for AI-specific federal legislation, despite the misguided attempts to ban state AI legislation . Time is running out because every day that passes further entrenches bad privacy practices. Congress must move before history completely repeats itself and everything that happened with online tracking happens again with AI tracking. 
AI surveillance should be banned while there is still time. No matter what happens, though, we will still be here, offering protected services, including optional AI services, to consumers who want to reap the productivity benefits of online tools without the privacy harms. Thanks for reading! Subscribe for free to get new posts or get the podcast.

Gabriel Weinberg 4 months ago

Progress isn't automatic

Everyone living today has lived in a world where science and technology, globally, have progressed at a relatively high rate compared to earlier times in human history . For most of human history, a random individual could expect to use roughly the same technology in one decade that they did the previous decade. That’s obviously not the case today. In fact, most of us alive today have little to no personal experience with such a degree of technological stagnation. That’s a good thing because long-term technological stagnation puts an upper bound on possible increases in our collective standard of living. From an earlier post : [W]ithout new technology, our economic prosperity is fundamentally limited. To see that, suppose no breakthroughs occur from this moment onward; we get no new technology based on no new science. Once we max out the hours we can work, the education people will seek, and the efficiency with existing technology, then what? We’d be literally stuck. Fundamentally, if you don’t have new tools, new technology, new scientific breakthroughs, you stagnate. That is, standard of living is fundamentally a function of labor productivity . To improve your standard of living, you need to make more money so you can buy better things, like housing, healthcare, leisure, etc. Once you get the best education you can, and maximize your hours, you are then limited in how much you can make based on how much you can produce, your output. How do you increase your output? Through better technology. At an economy-wide level, therefore, if we’re not introducing new technology, we will eventually hit a maximum output level we cannot push beyond. This is a counterintuitive and profound conclusion that I think gets overlooked because we take technological progression for granted. Science and technology don’t just progress on their own. There were many periods in history where they essentially completely stagnated in parts of the world. That’s because it takes considerable effort, education, organization, and money to advance science and technology. Without enough of any one of those ingredients, it doesn’t happen. And, if technological progression can go slower, perhaps it could also go faster, by better attuning the level of effort, education, organization, and money. For example, I’ve been arguing in this blog that the political debate now around science funding has an incredible amount of status quo bias embedded in it. I believe reducing funding will certainly slow us down, but I also believe science funding was already way too low, perhaps 3X below optimal levels . Put another way, I think a primary goal of government and society should be to increase our collective standard of living. You simply can’t do that long-term without technological progression. A couple of quick critiques I may tackle more in-depth in the future. Some people are worried that we’re just producing more stuff for the sake of producing more stuff, and that’s not really increasing the standard of living. First, with technological progression, the stuff gets both better and cheaper, and that is meaningful, for example, take medicines. Better medicines mean better health spans, and cheaper medicines mean more access to medicine. Second, people buy things, for the most part, on their own free will, and maybe people do want more stuff, and that’s not a bad thing in and of itself, as long as we can control for the negative externalities. 
Third, controlling for those negative externalities, like combating climate change effectively, actually requires new science and technology. Another common critique is that technology causes problems, for example, privacy problems. As someone who started a privacy-focused company, I’ve been holding this position for decades and continue to do so. But we shouldn’t throw the baby out with the bathwater. We need to do a more effective job regulating technology without slowing down its progression. Thanks for reading! Subscribe for free to receive new posts or get the podcast .

Gabriel Weinberg 4 months ago

Musings on evergreen content

FYI: This post is a bit meta—about writing/blogging itself—so it may not be your cup of tea. I’ve been having some cognitive dissonance about blogging. On the one hand, I don’t believe in doing things primarily for legacy purposes, since in the long arc of history, hardly anything is likely to be remembered or matter that much, and I won’t be here regardless. On the other hand, I also don’t like spending a lot of my writing time on crafting for the ephemeral—like a social media post—because it seems that same writing time could be spent on developing something more evergreen. I stopped blogging a decade ago following that same logic, focusing my writing time on books instead, which are arguably more evergreen than blogging. But, I’m obviously back here blogging again, and with that context, here are some dissonant thoughts I’m struggling with: I think not because long-term legacy is about after you’re dead, and I’m not looking for something that will last that long. I’m more looking to avoid the fate of most content that has a half-life of one day, such that my writing can have more of an impact in my lifetime. That is, it’s more about maximizing the amount of impact per unit time of writing than any long-term remembrance. I’m coming to believe yes, which is why I’ve started blogging again, despite most blog posts still having that short half-life I’m trying to avoid. Specifically, I think there can be cumulative value in more ephemeral content when it: Builds to something like a movement of people behind a thematic idea that can spring into action collectively at some point in the future, which is also why I started this up again on an email-first (push) platform. Helps craft a more persuasive or resonant argument, given feedback from smaller posts, such as how comedians build up their comedy specials through lots of trial and error. This last piece reminds me of Groundhog Day (the movie) where he keeps revising his day to achieve the perfect day, much like you can try to keep refining your argument until it perfectly resonates. In any case, it’s hard to achieve occasional evergreen content if you don’t have an audience to seed it with and if you don’t have a fantastic set of editors to help craft it (which hardly anyone does except in a professional context). That is, putting out more ephemeral content can be seen as part of the process of putting out quality evergreen content, both in terms of increasing its quality (from continuous feedback) and in terms of increasing its reach (from continuous audience building). Probably not, given that it is very rare for one of these posts to go viral / become evergreen. The problem is, I like editing. However, trying to stick roughly to a posting frequency and using formats like this one (Q/A headings) really helps me avoid my over-editing tendencies. There’s no doubt that some blog posts are evergreen in that people refer back to them years after they were written (assuming they are still accessible). Does the probability of becoming evergreen have any relationship to the frequency of posting? You can make compelling arguments for both sides: If you post more, you have more chances to go viral, and most people in a viral situation don’t know your other posts anyway, so the frequency isn’t seemingly inhibiting any particular post from going viral.
If you post less, you will likely spend more time crafting each post, increasing each post’s quality, and thus increasing its chances of virality, which I think (though I am not sure) is a necessary condition of evergreenness. My current sense is that if you post daily, then you are unlikely to be creating evergreen content in those posts. Still, you can nevertheless have a significant impact (and perhaps more) by being top of mind in a faster-growing audience and influencing the collective conversation through that larger audience more frequently. That’s because there does seem to be a direct relationship between posting frequency and audience growth. However, posting daily is a full-time job in and of itself, and one I can’t personally do (since I already have a full-time job) and one I don’t want to do (since I don’t like being held to a schedule and also like editing/crafting sentences too much). So, yes, I do think there is a relationship between frequency and evergreenness, and there is probably some sweet spot in the middle between weekly and monthly that maximizes your evergreen chances. You need to be top of mind enough to retain and build an audience (including through recommendations), you need enough posting to get thorough feedback to improve quality, and you need enough time with each post to get it to a decent quality in the first place. The full-timers also have other options, like daily musings paired with more edited weekly or monthly posts. Yes, I think there is. If you want to maximize audience size, the optimal post frequency is at least daily, vastly increasing the surface area with which your audience can grow, relative to a weekly posting schedule (or even less). But, that frequency, as previously stated, is not the optimal frequency for optimizing the probability of producing evergreen content. So, you have a tradeoff—audience size vs. evergreen probability. And it is a deeper tradeoff than just frequency, since I also think the kind of content that best grows audience size is much shallower than the kind of content that is more likely to go evergreen. As noted, you can relax this tradeoff with more time input, which I don’t have. So, for right now, acknowledging this tradeoff, I think I’m going to stick to a few, deeper posts a month, maybe edited a bit less though. I’d rather build a tighter audience that wants to engage more deeply in ideas that can last than a larger audience that wants to consume shallower content that is more ephemeral. I hope you agree and I could also use more feedback! Thanks for reading. Subscribe for free to receive new posts or get the podcast .

Gabriel Weinberg 5 months ago

Rule #3: Every decision involves trade-offs

One of the universal rules of decision-making is that every decision involves trade-offs. By definition, when making a decision, you are trading off one option for another. But other, less obvious trade-offs are also lurking beneath the surface. For example, whatever you choose now will send you down a path that will shape your future decisions, a mental model known as path dependence . That is, your future path is both influenced and limited by your past decisions. If you enroll your child in a language immersion school, you significantly increase the chances that they will move to an area of the world where that language is spoken, decades from now. Maybe you're OK with that possible path, but you should at least be aware of it. In a real sense, you aren’t just trading the immediate outcomes of one option for another, but one future path for another. From that perspective, you can consider path-related dimensions, such as how the various options vary in terms of future optionality or reversibility. It’s not that more optionality or reversibility is always better, as often they come at a cost, but just that these less-obvious trade-offs should be considered. A related, less-obvious trade-off involves opportunity cost. If you pick from the options in front of you, what other opportunities are you forgoing? By explicitly asking this question, more options might reveal themselves. And, you should also always ask specifically what opportunities you are forgoing by deliberating on the decision. Sometimes waiting too long means you miss out on the best option; other times, putting off the decision means more options will emerge. Again, one side isn’t always better, since every situation is different. But the opportunity costs, including from waiting, should be explored. Another common but less-obvious trade-off concerns the different types of errors you can make in a decision. From our book Super Thinking: [C]onsider a mammogram, a medical test used in the diagnosis of breast cancer. You might think a test like this has two possible results: positive or negative. But really a mammogram has four possible outcomes…the two possible outcomes you immediately think of are when the test is right, the true positive and the true negative; the other two outcomes occur when the test is wrong, the false positive and the false negative. These error models occur well beyond statistics, in any system where judgments are made. Your email spam filter is a good example. Recently our spam filters flagged an email with photos of our new niece as spam (false positive). And actual spam messages still occasionally make it through our spam filters (false negatives). Because making each type of error has consequences, systems need to be designed with these consequences in mind. That is, you have to make decisions on the trade-off between the different types of error, recognizing that some errors are inevitable. For instance, the U.S. legal system is supposed to require proof beyond a reasonable doubt for criminal convictions. This is a conscious trade-off favoring false negatives (letting criminals go free) over false positives (wrongly convicting people of crimes). To uncover other less obvious trade-offs, you can brainstorm and explicitly list out (for example, in a spreadsheet) the more subtle dimensions on which options differ. Think of a comparison shopping page that compares numerous features and benefits.
The obvious ones — such as cost — may immediately come to mind, but others may take time to surface, like how choosing one option might impact your personal quality of life in the future. The point is not to overthink decisions, but to be conscious about inherent trade-offs, especially the less obvious yet consequential ones. Just as I think you should take the time to write out assumptions explicitly , I also believe you should do the same for trade-offs. See other Rules . The Transporter (2002) Thanks for reading! Subscribe for free to receive new posts or get the podcast .
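A minimal sketch of the four possible outcomes described above, using the spam filter example (the labels follow standard usage; the code is purely illustrative):

```python
# The four possible outcomes of any yes/no judgment, illustrated with a spam filter.
def outcome(predicted_spam: bool, actually_spam: bool) -> str:
    if predicted_spam and actually_spam:
        return "true positive"    # spam correctly caught
    if predicted_spam and not actually_spam:
        return "false positive"   # e.g., photos of the new niece flagged as spam
    if not predicted_spam and actually_spam:
        return "false negative"   # spam that slips through to the inbox
    return "true negative"        # normal mail correctly delivered

print(outcome(predicted_spam=True, actually_spam=False))  # false positive
```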

Gabriel Weinberg 5 months ago

9 ways DuckDuckGo's Search Assist differs from Google’s AI Overviews

At DuckDuckGo, our approach to AI is to only make AI features that are useful, private, and optional. If you don’t want AI, that’s cool with us. We have settings to turn off all of our AI features, and even a new setting to help you avoid AI-generated images in our search results. At the same time, we know a lot of people do want to use AI if it is actually useful and private (myself included). Our private chat service at duck.ai has the highest satisfaction ratings we’ve seen in a new feature, and Search Assist, our equivalent of Google’s AI Overviews, is currently our highest-rated search feature. Our goal with Search Assist is to improve search results, not to push AI. We’ve been continually evolving it in response to feedback, seeking better UX, and here’s how we’re thinking about that UX right now, relative to Google’s AI Overviews: You can turn Search Assist off or turn it up — your preference . When it does show, Search Assist keeps vertical space to a minimum so you can still easily get to other search results. The initial Search Assist summary is intentionally short, usually two brief sentences. This brevity keeps hallucinations to a minimum since less text means less surface area to make things up. You also get the complete thought without having to click anything. However, you can still click for a fuller explanation. This is a subtle but important distinction: clicking more on Google is getting more of the same, longer summary; clicking more on DuckDuckGo is getting a new, completely independent generation. You can use the Assist button to either generate an answer on demand if one isn’t showing automatically, or collapse an answer that is showing to zero vertical space. When we don’t think a Search Assist answer is better than the other results, we don’t show it on top. Instead, we’ll show it in the middle, on the bottom, or not at all. This flexibility enables a more fine-tuned UX. All source links are always visible, not hidden behind any clicks or separated from the answer. We’ve also been keeping sources to a minimum (usually two) to both increase answer quality (since LLMs can get confused with a lot of overlapping information) and increase source engagement. Our thumbs up/down is also visible by default, not hidden behind a click. This anonymous feedback is extremely valuable to us as a primary signal to help us find ways to improve. To generate these answers, we have a separate search crawling bot for Search Assist answers called DuckAssistBot that respects robots.txt headers. By separating DuckAssistBot from our normal DuckDuckBot, and unlike Google, we allow publishers to opt-out of just our Search Assist feature. Like all of our search results, Search Assist is anonymous. We crawl sites and generate answers on your behalf, not exposing your personal information in the process or storing it ourselves. I’m sure our Search Assist UX will evolve further from here as we’re actively working on it every day. For example, we’re working now on making it easier to enter a follow-up question in-line, which allows you to more easily stay in context when entering your question. That is to say, the above is not set in stone and the answers for these queries will surely change over time, but I hope this post helps illustrate how we’re approaching Search Assist to be consistent with our overall approach to AI to be useful, private, and optional. Feedback is welcomed! Thanks for reading! Subscribe for free to receive new posts or get the podcast . 
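For illustration, here is what a publisher’s robots.txt might look like to opt out of Search Assist crawling while still allowing regular search indexing. This is a hypothetical sketch using the standard robots.txt mechanism, not an official DuckDuckGo example.

```
# Hypothetical publisher robots.txt: block the Search Assist crawler, allow the regular search crawler.
User-agent: DuckAssistBot
Disallow: /

User-agent: DuckDuckBot
Allow: /
```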

Gabriel Weinberg 5 months ago

The key to increasing standard of living is increasing labor productivity

Standard of living doesn’t have a strictly agreed-upon definition, but for the sake of anchoring on something, let’s use “the level of income, comforts, and services available to an individual, community, or society” ( Wikipedia ). Gross Domestic Product (GDP) per capita, that is, the average economic output per person in a country, is often used as a proxy metric to compare the standard of living across countries. Of course, this proxy metric, being solely about money, doesn’t directly capture non-monetary aspects of standard of living associated with quality of life or well-being. However, most of these non-monetary aspects are tightly correlated with GDP per capita, rendering it a reasonable proxy. Our World in Data features numerous plots of such measures against GDP per capita . Here are a few of the ones people tend to care about most: Life expectancy vs. GDP per capita, 2023. Our World in Data . Child mortality rate vs. GDP per capita, 2022. Our World in Data . Self-reported life satisfaction vs. GDP per capita, 2024. Our World in Data . National poverty line vs. GDP per capita, 2017. Our World in Data . These measures are clearly tightly correlated to GDP per capita, as are common aggregate measures such as the UN’s Human Development Index that combines lifespan, education levels, and GDP per capita. Human Development Index vs. GDP per capita, 2023. Our World in Data . These tight correlations are somewhat intuitive because GDP per capita by definition means more money to buy things, and that includes buying more healthcare, education, leisure time, and luxuries, which one would expect to be correlated to healthspan, life satisfaction, and other measures of quality of life and well-being. Nevertheless, at some level of GDP per capita, you reach diminishing returns for a given measure, and we would then expect the correlation to cease for that measure. For example, here is access to clean (“improved”) water sources, which maxes out at medium incomes after you reach 100% since you can’t go higher than 100% on this measure. Improved water sources vs. GDP per capita, 2022. Our World in Data . However, we haven’t seen that yet for the most important measures like life expectancy, the poverty line, and self-reported life satisfaction. All of those can go higher still, and are expected to do so with further increases to GDP per capita, certainly for lower GDP-per-capita countries (climbing up the existing curve) but also for the U.S. (at or near the frontier). In other words, with enough broad-based increases in income, many are lifted out of poverty, the middle class is more able to afford much of the current luxury and leisure of the rich, and the rich get access to whatever emerges from new cutting-edge (and expensive) science and technology. We should continue to watch and ensure these correlations remain tight. But as they remain tight, I think it is safe to say right now that we would expect increases in standard of living to be tightly correlated with increasing GDP per capita. While there are other necessary conditions like maintaining rule of law, broadly giving people more money to buy better healthcare, education, and upgraded leisure time should increase standard of living. That part is pretty intuitive. What’s not intuitive is how to do so. You can’t just print money, because that results in inflation. It has to be increases in real income, that is, after inflation. So, how do you do that?
If you’re a country where a large % of the working-able population doesn’t currently have a job, the easiest way is to find those people jobs. Unfortunately, that won’t work for the U.S. anymore since most everyone who wants a job has a job. It worked for a while through the 1960s, 70s, and 80s as ever greater %s of women entered the workforce, but then plateaued in the 1990s. U.S. Employment-Population Ratio - 25-54 Yrs. Federal Reserve Bank of St. Louis . You could try to get people with jobs to work more hours (and therefore make more money per person), but that also doesn’t work for the U.S. since we already work a lot relative to other frontier countries, and as people get more money they seem to want to work less, not more. For example, in the U.S. we’re working a lot less hours per worker than we did in 1950, let alone 1870. This makes intuitive sense since quality of life and well-being can’t get to the highest levels if you’re working all of the time. Annual working hours per worker in the U.S., 1870 to 2017. Our World in Data . That leaves upgrading the jobs people already have in the form of higher income for the same amount of hours worked. And this means, by definition, increasing labor productivity, which is the amount of goods and services produced per hour of labor. To pay people more without triggering inflation, they also have to produce more output. That’s the counterintuitive piece and also it is our biggest opportunity for higher GDP per capita, and therefore higher standard of living. OK, but how do you increase labor productivity? I’m glad you asked. There are three primary ways, but only one has unbounded upside. Can you guess what it is? First, you can educate your workforce more, providing them with, on average, better skills to produce higher quality output per hour worked, a.k.a. investment in human capital. The U.S. is currently going in the wrong direction on this front when you look at the % of recent high-school graduates enrolled in “tertiary” education (which includes vocational programs). U.S. Gross enrolment ratio in tertiary education, 1971 to 2022. Our World in Data . If we had continued to make steady progress through the 2010s and 2020s, we would be headed towards diminishing returns on this front. While it will surely be good to increase this further to get those gains—and there is more you can do than just tertiary education such as on-the-job training—like we saw earlier with access to clean water, there is effectively a max out point for education in terms of its effect on GDP per capita. Think of a point in the future where everyone who is willing and able has a college degree, or even a graduate degree. Second, you can buy your workforce more tools, equipment, and facilities to do their job more efficiently, a.k.a. investment in physical capital. This isn’t inventing new technology, just spending more money to get workers access to the best existing technology. Again, you clearly reach diminishing returns here too, that is, another max out point, as you buy everyone the best tech. Think of the point where everyone has a MacBook Pro with dual Studio Displays—or whatever the equivalent is in their job—to maximize their productivity. Third, and the only way that doesn’t have a max out point, is to invent new technology that enables workers to do more per hour. These are better tools than the existing tools on the market. Think of upgrading to the latest software version with updated features that make you a bit more productive. 
Or, more broad-based: Think of how worker productivity increased in construction with the introduction of power tools and heavy machinery or in offices with the introduction of computers and the Internet. We need more of these, many times over: true leaps forward in technology applications that will dramatically increase our worker productivity. ( The Great Race ) AI is likely one of these leaps, but by investing much more in basic research we can make higher labor productivity growth more continuous instead of the bumpy road it has recently been on . These leaps don’t come out of nowhere. They require decades of investment in research, and that investment requires a decent level of government investment at the earliest stages. This was the case for AI , as it was for the Internet , and as it is for life-saving drugs . This is actually good news, since it means we have a lever to pull to increase labor productivity that we’re not currently fully pulling: increase federal investment in basic research. The level we’ve ended at today is somewhat arbitrary, an output of a political process that wasn’t focused on increasing standard of living. In any case, I estimate at the bottom of this post that we’re off by about 3X. If you want another view on this topic, here is a good post from the International Monetary Fund (IMF): [I]mprovements in living standards must come from growth in TFP [Total Factor Productivity] over the long run. This is because living standards are measured as income per person —so an economy cannot raise them simply by adding more and more people to its workforce. Meanwhile, economists have amassed lots of evidence that investments in capital have diminishing returns. This leaves TFP advancement as the only possible source of sustained growth in income per person, as Robert Solow, the late Nobel laureate, first showed in a 1957 paper . TFP growth is also the answer to those who say that continued economic growth will one day exhaust our planet’s finite resources. When TFP improves, it allows us to maintain or increase living standards while conserving resources, including natural resources such as the climate and our biosphere. Or, as Paul Krugman put it even more succinctly in his 1990 book The Age of Diminished Expectations : Productivity isn’t everything, but, in the long run, it is almost everything. A country’s ability to improve its standard of living over time depends almost entirely on its ability to raise its output per worker. —Paul Krugman Thanks for reading! Subscribe for free to receive new posts or get the podcast .
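To make the three levers concrete, here is a rough decomposition of GDP per capita into employment, hours, and output per hour; the numbers are placeholders for illustration only, not real statistics.

```python
# GDP per capita = (share of population employed) x (hours per worker per year) x (output per hour of labor).
def gdp_per_capita(employment_share: float, hours_per_worker: float, output_per_hour: float) -> float:
    return employment_share * hours_per_worker * output_per_hour

baseline = gdp_per_capita(0.50, 1_800, 75.0)  # placeholder values -> $67,500 per person

# Employment and hours are largely maxed out (or trending down), so the unbounded lever is output per hour:
with_productivity_growth = gdp_per_capita(0.50, 1_800, 75.0 * 1.02)  # 2% higher output per hour
print(with_productivity_growth / baseline - 1)  # ~0.02 -> ~2% higher GDP per capita
```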

Gabriel Weinberg 5 months ago

Most chatbot users miss this key setting, so we moved it up front

The magic of chatbots is that they make it seem like you’re chatting with a real person. But the default personality of this “person” isn’t one I particularly enjoy talking to, and in many cases I find downright annoying. Based on feedback from duck.ai users—who rely on our service for private access to popular chatbots—I know I’m not alone. What people want in a chatbot’s personality varies widely: I cringe at extra exclamation points and emojis , while others love them. I also find the default output too verbose, whereas some appreciate the added exposition. Of course, I could tell the chatbot every time to keep its replies short and emoji-free, but pasting that constantly is enough friction that I rarely bother. OpenAI and Anthropic do offer customization options in their settings, yet those options are buried and feature intimidating blank text boxes, such that I highly suspect most people never touch them. Recently, we’ve been considering this issue in the context of duck.ai. I’m sure what we’ll do here will continue to evolve as we get feedback, but to get started we’ve just introduced a much easier-to-find customization dialog. Not only does it make the responses feel better, it can make the actual content significantly better as well. As you can see in the video, it provides customization guidance through drop-downs and fields, including options to customize: The tone of responses The length of responses Whether the chatbot should ask clarifying questions The role of the chatbot (for example, teacher) Your role (for example, student) The nickname of the chatbot Your nickname All fields are optional, and you can also add additional info if desired, as well as inspect what the instructions will look like in aggregate. If you select role(s), then there are detailed instructions that get created specifically for those. Here’s an example using the ‘Tech support specialist’ role, which asks you clarifying questions to drill down faster to a solution vs. the more generic (and lengthier) default response. Customized response: Generic response: All of this works through the “system prompt.” In an excellent post titled AI Horseless Carriages , Pete Koomen explains system prompts: LLM providers like OpenAI and Anthropic have adopted a convention to help make prompt writing easier: they split the prompt into two components: a System Prompt and a User Prompt , so named because in many API applications the app developers write the System Prompt and the user writes the User Prompt. The System Prompt explains to the model how to accomplish a particular set of tasks, and is re-used over and over again. The User Prompt describes a specific task to be done. When you set the duck.ai customization options, the instructions that are created are appended to the default system prompt, which is repeated (in the background) when you start a new conversation. That is, the instructions will apply to the current conversation as well as subsequent ones, until you change them again. Like everything we do at DuckDuckGo, these system prompt tweaks are also private. They are stored locally on your device only, along with your recent chats (if you choose to save them). When we ultimately add an optional ability to sync settings and chats across devices, it will be part of our end-to-end encrypted sync service , which DuckDuckGo cannot decrypt. And Duck.ai itself anonymizes chats to all model providers, doesn’t store chats itself, and ensures your chats aren’t used for AI training. 
More at the Duck.ai Privacy Policy . Our approach to AI is to make features that are useful, private, and optional. We believe these new duck.ai customization options tick all three boxes, but please try them out and let us know what you think. As always, please feel free to leave comments here. However, the best method for sharing feedback about duck.ai is to do so directly through the product, as it will then be shared with the entire team automatically. Thanks for reading. Subscribe for free to receive new posts or get the podcast .
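To illustrate the system/user prompt convention described above, here is a minimal sketch of how customization instructions might be appended to a system prompt and sent along with the user's message. The prompt text and field values are hypothetical; this is not duck.ai's actual implementation.

```python
# Illustrative sketch of the system prompt / user prompt convention (hypothetical values, not duck.ai's code).
default_system_prompt = "You are a helpful assistant."

# Instructions generated from customization options like those described above (hypothetical).
customization = (
    "Keep responses short. Avoid emojis. "
    "Act as a tech support specialist and ask clarifying questions before answering."
)

messages = [
    # The system prompt is reused (in the background) at the start of every new conversation.
    {"role": "system", "content": f"{default_system_prompt} {customization}"},
    # The user prompt describes the specific task at hand.
    {"role": "user", "content": "My laptop won't connect to Wi-Fi."},
]
# `messages` is the structure a chat model provider would receive for this conversation.
```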
