Posts in History (8 found)
Stratechery 2 weeks ago

Apple’s 50 Years of Integration

There is a weird phenomenon as a sports fan where the athletes on the field or court are older than you…and then they’re your age…and then they’re all younger than you; for me the last athlete I could look up to, at least in terms of age, was Tom Brady. Tech companies are similar, in a way. I like to write about tech history, and the importance of origin stories for understanding company cultures, and I’m fortunate enough to have witnessed most of those origins. However, there are still some companies that pre-date me — the Tom Bradys of the industry, if you will — and one of those is Apple, which turns 50 tomorrow.

My first computer was a hand-me-down IBM-compatible 286 — I don’t even remember the brand — but I mostly cut my teeth building my own computers with overclocked Celeron chips in college, using parts procured by leveraging unsustainable dot-com era customer acquisition strategies (a unique email address meant a PayPal account with a free $25 and a single-use credit card with another free $25, used for a Value America account with a $50-off coupon). Needless to say I not only witnessed many of these companies’ births, but also their deaths! There were Apple IIs at my elementary school, where I would type out programs in BASIC, but my first serious interaction with the company’s products was at the college newspaper doing layout in QuarkXPress; after I graduated I was smitten by the iMac G4 and its adjustable arm, and the GarageBand addition to the iLife suite; I ended up buying an iBook, and here I am, a quarter of a century later, typing this Article on a MacBook Pro.

In my history is much of Apple’s history. I missed the very early years, when the Apple I was a mere circuit board created by Steve Wozniak; Steve Jobs bought the parts for the initial batch on net-30 terms and paid them off by collecting cash-on-delivery from a computer shop in Mountain View; it was the Apple II, released in 1977, that made the company, and that was my first encounter with Apple. The Mac came out in 1984, and found its niche in desktop publishing; that’s how I came back to Apple in college. Apple, however, was struggling in the face of more capable modular Windows PCs, which I was happily building in the meantime. It was OS X that changed Apple’s fortunes with nerds, and Jony Ive’s stunning designs that changed the value proposition for everyone else; iLife, meanwhile, made the Mac useful from day one. It was the combination of all three that made me a customer, and as the Internet destroyed lock-in, it was the fit and finish of the operating system and Apple’s independent developer ecosystem that made my two years at Microsoft with Windows a drag; then, in 2020, Apple’s differentiation came full circle: Macs were the fastest personal computers — particularly laptops — in the world.

There were, of course, other parts of the Apple story, including the iPod and, most importantly, the iPhone. Those were the products that made Apple the most valuable company in the world for years (today Apple is surpassed only by Nvidia). These products, though in a form that addressed a far larger market, were still very much Apple: a company that, all these years later, faces no competition when it comes to integrating hardware and software. What do I mean by “no competition”?
Well, consider Apple’s nominal competitors through the years:

IBM: The Apple I launched in a world where computing was primarily for the enterprise, and primarily happened on IBM’s mainframes. Increased accessibility of processors and memory, however, made hobbyist computers possible, which is exactly what the Apple I was. It was the Apple II, however, that made IBM pay attention; I explained in 2013’s The Truth About Windows Versus the Mac:

In the late 1970s and very early 1980s, a new breed of personal computers were appearing on the scene, including the Commodore, MITS Altair, Apple II, and more. Some employees were bringing them into the workplace, which major corporations found unacceptable, so IT departments asked IBM for something similar. After all, “No one ever got fired…” IBM spun up a separate team in Florida to put together something they could sell IT departments. Pressed for time, the Florida team put together a minicomputer using mostly off-the-shelf components; IBM’s RISC processors and the OS they had under development were technically superior, but Intel had a CISC processor for sale immediately, and a new company called Microsoft said their OS — DOS — could be ready in six months. For the sake of expediency, IBM decided to go with Intel and Microsoft.

IBM was, in the end, just a hardware maker; they couldn’t be bothered to make the software.

Microsoft: Software fell to Microsoft. Continuing from that 2013 Article:

The rest, as they say, is history. The demand from corporations for IBM PCs was overwhelming, and DOS — and applications written for it — became entrenched. By the time the Mac appeared in 1984, the die had long since been cast. Ultimately, it would take Microsoft a decade to approach the Mac’s ease-of-use, but Windows’ DOS underpinnings and associated application library meant the Microsoft position was secure regardless.

For decades after the fact, conventional wisdom was that Microsoft’s modular approach — the one that let me build my own computers — was unquestionably superior to Apple’s integration of hardware and software. In fact, it was Apple’s integration that kept the company afloat: all of those Macs used for desktop publishing were expensive, and gave Apple enough revenue to (barely) stay in business; the company’s brief foray into licensing Macintosh OS was a major contributor to the company nearly going bankrupt. Or, to put it another way, Apple only briefly competed with Microsoft, and it nearly killed them.

Consumer Electronics Companies: It’s difficult to choose a company to represent the iPod era, because Apple didn’t really face any meaningful competition. There was Sony and the Discman, and Diamond and Creative with some of the first MP3 players, but the reality is that no one had the combination of hardware and software that made the iPod special; in this case, the software was iTunes, and putting iTunes on Windows is what propelled Apple far beyond the Macintosh, and laid the groundwork for what came next.

RIM, Palm, and Nokia: It was the early smartphone makers who were, in the framing I am taking in this Article, the only true competition Apple has ever had. All three of these companies integrated hardware and software, which makes sense given that the smartphone category was so nascent — that’s when integration is particularly important.
The iPhone, however, was different in one important regard: RIM, Palm (which also sold phones with Microsoft’s Windows Mobile), and Nokia first and foremost made phones; the iPhone was a full-blown computer, built on a foundation of OS X. That, combined with the iPhone’s innovative multi-touch input method, resulted in a vastly more capable and compelling device that wiped out all three companies.

Android: Android is, in many respects, the Windows to Apple’s iOS — which was why many commentators predicted that Apple was doomed. One critical difference, however, is in the Article I excerpted above: whereas DOS came before the Mac, the iPhone came before Android. That meant that Apple had a critical mass of users and developers first, in contrast to the 1980s. Another difference is that the iPhone sold to end users, not IT departments; end users actually cared about the look and feel of the device they were spending their money on. A third difference is that Apple had (and continues to have) the performance advantage, thanks to its investment in its own silicon, a stark difference from the dead end the company found itself in with the Mac.

Android is, of course, a big success, with more unit market share worldwide (although the iPhone has majority share in the U.S.). There is a place for modularity, and companies like Samsung have done well to build high-end Android-powered devices, with a host of Chinese companies in particular filling in the lower end. And, it should be noted, Google makes its own Pixel phones as well; that is true competition, albeit one that barely registers given Google’s commitment to the entire Android ecosystem (so there are few, if any, Pixel-exclusive features, at least not for long), and Apple’s grip on the high end of the market.

Perhaps Apple’s most interesting new product is one that takes the company full circle. The MacBook Neo is the cheapest Mac laptop ever, and has the company poised for major gains in the low end of the market. Notably, in defiance of the assumption that modular offerings take share by being cheaper and “good enough”, Apple, by making everything from the operating system to the device to the chip, is selling a computer that is both higher quality and higher performance, with lower component costs, than the alternatives in its class; and, now that there is no more software lock-in — the Neo runs a browser and an AI chat client just like Windows machines do — Apple is poised to make major gains in its oldest market.

More generally, Apple’s market share in all of its markets, including the phone, continues to increase over time, not decrease. This is happening despite the fact that Apple is not investing at a meaningful level — at least compared to its Big Tech peers — in AI server capacity, and has yet to ship the new AI-empowered Siri it promised nearly two years ago. The reason it doesn’t matter is that no matter how powerful AI becomes, you still need to access it with a device, and Apple, thanks to its integration of hardware and software, makes the best devices.

Now, according to Bloomberg, Apple is planning to leverage its position with end users to give access to multiple AI providers:

Apple Inc. plans to open Siri to outside artificial intelligence assistants, a major move aimed at bolstering the iPhone as an AI platform. The company is preparing to make the change as part of a Siri overhaul in its upcoming iOS 27 operating system update, according to people with knowledge of the matter.
The assistant can already tap into ChatGPT through a partnership with OpenAI, but Apple will now allow competing services to do the same… The company is developing new tools to allow AI chatbot apps installed via the App Store to integrate with the Siri assistant, said the people, who asked not to be identified because the plans haven’t been announced. The chatbots will also work with an upcoming Siri app and other features in the Apple Intelligence platform. That means, for instance, if users have Alphabet Inc.’s Google Gemini or Anthropic PBC’s Claude installed, they’d be able to send queries to those services from within the Siri voice assistant, just like they have been able to with ChatGPT since Apple Intelligence launched in 2024. The approach also should allow Apple to generate more money from third-party AI subscriptions through the App Store.

This isn’t quite Safari search, wherein Apple earns a revenue share from Google for searches made through the iPhone’s built-in browser, but given that AI assistants are largely monetized through subscriptions, it’s not far off: Apple will happily sell subscriptions through the App Store and take 30% of the price for the first year, and 15% after that. Owning the device means Apple gets to aggregate AI (and the company is already making $1 billion a year from chatbot subscriptions). This is exactly what I expected after Apple announced that initial partnership with OpenAI; from a 2024 Update:

Apple, probably more than any other company, deeply understands its position in the value chains in which it operates, and brings that position to bear to get other companies to serve its interests on its terms; we see it with developers, we see it with carriers, we see it with music labels, and now I think we see it with AI. Apple — assuming it delivers on what it showed with Apple Intelligence — is promising to deliver features only it can deliver, and in the process lock in its ability to compel partners to invest heavily in features it has no interest in developing but wants to make available to Apple’s users on Apple’s terms.

The company that owns the point of integration in the value chain never wants to have an exclusive supplier; it wants to commoditize its complements, which means creating a modular interface for multiple companies to compete on the integrator’s terms, which is exactly what these AI extensions for App Store apps sound like.

Of course there is still the matter of getting Apple Intelligence to work; this upcoming feature is separate from Apple’s deal with Gemini for foundation models for Siri. I explained the distinction in this Update, and concluded:

The big problem with this vision is that it assumed that Apple Intelligence would be competent, and it simply wasn’t; just as the iPhone search deal wouldn’t be worth much if the iPhone sucked, Siri chatbot integration isn’t worth much if Siri sucks. Now, however, Google is selling the underlying model to make Siri good, and their biggest hope is that they can pay Apple all of their money back — and more! — to have a money-making Gemini sit on top. Apple will let the users decide who is on top; I’m sure the company would also be amenable to being paid to be the default!

Many people are taking a victory lap about Apple’s decision not to compete in AI models, claiming that the company is winning by not trying.
I previously linked to Horace Dediu’s “The most brilliant move in corporate history?”, but it’s a good articulation of the argument:

The hyperscalers are now spending 94% of their operating cash flows on AI infrastructure. Amazon is projected to go negative free cash flow this year, with as much as $28 billion in the red. Alphabet’s free cash flow is expected to collapse 90%, from $73 billion to $8 billion. These companies used to be the greatest cash machines ever built. Now they’re borrowing money to keep the data center lights on… And what are they getting for that $650 billion? AI services generate roughly $35 billion in total revenue, or 5% of what’s being spent on infrastructure. There are dreams of more, of course, but the business models of AI have yet to resonate, especially for consumers… Apple didn’t miss the AI revolution. It just bet that the winners won’t be the ones who build the infrastructure. They’ll be the ones who own the customer, and no one else on Earth owns the best customers.

Apple owns the best customers because it makes the best devices, thanks to its integration of hardware and software. And, as I recounted above, it is somehow, fifty years on, the only company of its kind. There is, however, an emerging threat that Apple is seeking to head off. Again from Bloomberg:

Apple Inc. awarded rare bonuses to iPhone hardware designers this week, aiming to stem a wave of departures to AI startups like OpenAI that are building their own devices. The company granted out-of-cycle bonuses worth several hundred thousand dollars to many members of its iPhone Product Design team, according to people with knowledge of the matter. Apple’s leadership has grown increasingly concerned about the number of engineers being poached by potential rivals. OpenAI, which has tapped former Apple design chief Jony Ive to help design a new generation of AI-centric products, has emerged as a particular threat…OpenAI’s hardware division is run in part by Apple veteran Tang Tan. He used to oversee the iPhone product design team that’s receiving the bonuses. Tan’s group at OpenAI has hired several dozen Apple engineers, and not just ones who worked on the iPhone. The startup has lured employees who helped develop the iPad, Apple Watch and Vision Pro.

OpenAI isn’t just hiring designers; the company is also building out operations capabilities to be able to actually make the upcoming Ive-designed device at scale (presumably in China). Still, many are wondering about the status of OpenAI’s hardware device given the news about Sora; from the Wall Street Journal:

OpenAI is planning to pull the plug on its Sora video platform, a product it released to great fanfare last year that has since fallen from public view. The move is one of a number of steps OpenAI is taking to refocus on business and coding functions ahead of a potential initial public offering as soon as the fourth quarter of this year. CEO Sam Altman announced the changes to staff on Tuesday, writing that the company would wind down products that use its video models. In addition to the consumer app, OpenAI is also discontinuing a version of Sora for developers and won’t support video functionality inside ChatGPT, either. OpenAI is in the middle of a strategy shift to redirect the company’s computing resources and top talent toward so-called productivity tools that can be used by both enterprises and individual users.
Last week, OpenAI announced that it was combining its ChatGPT desktop app, coding tool Codex and browser into one “superapp.” The company expects the consolidated product to align its employees around a single vision.

In fact, cutting Sora but keeping the hardware initiative fits this strategy shift: Sora, along with the also indefinitely delayed adult mode, was a product that drives more attention, which lends itself to the more traditional consumer business model of advertising. Productivity, on the other hand, is a much better fit for the enterprise, where Anthropic is making major gains. The problem, however, is that most consumers aren’t willing to pay for software; what they are willing to pay for are devices. This was the secret of the iPhone; from 2016’s Everything as a Service:

Apple has arguably perfected the manufacturing model: most of the company’s corporate employees are employed in California in the design and marketing of iconic devices that are created in Chinese factories built and run to Apple’s exacting standards (including a substantial number of employees on site), and then transported all over the world to consumers eager for best-in-class smartphones, tablets, computers, and smartwatches. What makes this model so effective — and so profitable — is that Apple has differentiated its otherwise commoditizable hardware with software. Software is a completely new type of good in that it is both infinitely differentiable yet infinitely copyable; this means that any piece of software is both completely unique yet has unlimited supply, leading to a theoretical price of $0. However, by combining the differentiable qualities of software with hardware that requires real assets and commodities to manufacture, Apple is able to charge an incredible premium for its products.

OpenAI is approaching this space from the opposite direction: it has a massive consumer user base for ChatGPT, and an impressively large number of subscribers; it is also adding advertising. However, to truly monetize consumers, the most attractive business model is the Apple model: integrated hardware and software.

The truth is that Apple’s lack of investment in AI was always going to be a short-to-medium-term win: the company doesn’t have to spend on infrastructure, and everyone still needs a device. The real threat is in the long term: what happens if AI becomes so good that it obviates traditional user interfaces? Or, to put it another way, what if the point of integration that is most compelling is not a traditional operating system and hardware device, but rather AI and a dedicated device?

If this threat materializes, it won’t be with OpenAI’s initial offering; the smartphone is the ultimate form factor, and does so many jobs that depend on its flexibility and capability and 3rd-party ecosystem that no new entrant could hope to compete (indeed, Google and Android are arguably a bigger threat for this reason). However, just how capable might AI be not just next year, but in five years, or ten years? If ever a better interaction paradigm were to succeed the smartphone, surely it will be rooted in AI — and Apple, by giving up now, won’t be in the game. This absolutely is not a prediction.
Indeed, if I had to bet, I would bet on Apple keeping its place. First, there is the likelihood that the smartphone, thanks to its screen, connectivity, and battery life, is in fact the best device for AI, and that, furthermore, AI will be just one capability alongside everything a smartphone already does. Second, to the extent that AI inference moves to the edge, Apple has a big advantage thanks to its industry-leading chips. Third, Apple always has the option of opening up its devices to allow for much deeper integration with 3rd-party AI providers other than OpenAI, in order to effectively fight off a potential threat.

It’s also worth noting that OpenAI has, in its relatively short life, managed to frame itself as a competitor to basically everyone in tech, from Google to Meta to Microsoft, only to find itself forced to pivot in the face of Anthropic and its focused approach on coding and productivity in the enterprise. The audacity of taking on everyone is impressive; the effectiveness of fighting everyone for everything may be less so.

Still, there is an angle here for OpenAI, and a point of vulnerability for Apple. The company made it fifty years with no one truly competing with its integrated business model; the fate of its next fifty years may rest on the question of just how compelling AI ends up being — and whether OpenAI can out-Apple the original.

Brain Baking 1 month ago

25 Years Of ADSL Speed

Twenty-five years ago, I captured a screenshot of my FTP client showcasing the download of a SuSE Linux gcc compilation package at a then-dazzling rate: downloading the gcc cross-compiler for s390x through the ftp.belnet.be mirror, in the then very new Windows XP Olive theme. For some reason, that screenshot must have been relevant, as I found it uploaded as part of my UnionVault.NET museum from 2002. Nowadays, such a download speed can officially be scoffed at as being slower than a snarky snail. Yet in 2000–2002, that was lightning-fast. Perspectives change.

In Belgium, telecom company Belgacom introduced ADSL in 1999, significantly boosting our digital lives. No longer did I have to hang up the ISDN line when chatting over ICQ when mom wanted to make a quick phone call to grandma to ask about next week’s party. No longer did we have to listen to squeaky sounds and wait and wait and wait… for an image or file to appear. The future was here!

For our family, the future was here a smidge earlier than for the average Flemish family, as my dad worked very close to the source. He was one of the Belgacom employees responsible for testing out various early ADSL modems at home, so our dialup method changed frequently. I do remember that we too were blessed with “The Frog”: the first iteration of the Alcatel ‘Stingray’ ADSL SpeedTouch USB Modem, which looked like a frog or a ray, depending on who you’d ask. That lovely shape was capable of handling more downstream bandwidth than our cables/ISP were ready for just yet. In September 2002, Belgacom announced they would further increase the ADSL bandwidth:

Speed increase: all Belgacom ADSL subscriptions. Since launch, the maximum downstream speed has been 750 Kbit/s (ADSL GO) and 1 Mbit/s (ADSL Plus-Pro-Office-Premium). Thanks to Belgacom’s additional investments and network adjustments, the majority of customers will be able to reach peak speeds of up to 3 Mbit/s. This work is expected to be completed in the first quarter of 2003.

Three whoppin’ megabits (not bytes) per second! Can you imagine that? I guess you can, given the current average download speeds of… wait, let me check speedtest.net… or, in other words, 93 times faster than the bleeding-edge 2003 speeds¹. Try streaming your favourite YouTube video with a few megabits per second. YouTube didn’t exist until two years later (2005). Perspectives change.

In that statement they mention they have 400k customers. Given the widespread adoption of the internet in Belgium, that number can safely be multiplied by ten nowadays. The Skynet ISP that was bought up by Belgacom and hosted our very first personal home pages came with a monthly download limit; according to Belgacom in that same announcement, only a tiny portion of their users effectively hit that limit. Nowadays, everyone is accustomed to “stream whatever, whenever! YOLO!”. Back then, speeds were “high”, but we still had to be mindful of the stuff we downloaded each month, especially when wading through newsgroups looking for shady new releases. Perspectives change.

I wonder if my dad kept a list of the routing hardware we burned through in those late-nineties/early-noughties years. All I can recall is that it was a lot. Since he was employed by the national telecom company that only really was (and still is) rivalled by a single other company—Telenet—we never tried the alternative. Nowadays, multiple “shadow” ISPs exist, like Orange, Mobile Vikings, and Scarlet, that hire the Proximus cable network.
Proximus is the rebranding and full privatisation of Belgacom, which itself was the rebranding of the state institute RTT (Regie voor Telegraaf en Telefoon — or, as my dad would call it, Rap Terug Thuis, “Quickly Back Home”). Unfortunately, the Web Archive never crawled all home pages, and I neglected to back up whatever my dad uploaded there, so our stuff is forever gone. I regret taking only a single screenshot of my download speed, so I cannot repeat this enough: archive your stuff! That’s also the oldest screenshot of my machine/OS I have; the other desktop screenshots are from 2004+. This blog post is just an excuse to get that image under the moniker.

¹ According to meter.net historical speed test results, only five years ago the Belgian average was much lower. Does this mean that in five years it’ll be correspondingly higher on average? That’s more than a CD-ROM in less than a second. Perspectives change. In twenty more years, nobody will remember what a CD-ROM even is. ↩︎

Related topics: adsl / screenshots. By Wouter Groeneveld on 11 March 2026. Reply via email.

Rik Huijzer 2 months ago

The 95 Theses of the Reformationsfest (1917)

This is an English translation of the 95 Leitsätze zum Reformationsfest 1917. These theses were meant to bring the church up to date with the new ideas of national socialism. Like all man-made doctrines, they are probably best ignored, but they may be useful for research purposes. The translation below was created by feeding the images to Google Gemini. This means the translation is probably far from perfect, but it should still be sufficient to get a rough idea of the text’s main messages. The 95 theses are as follows: I. 1. The necessity to make clear to the German people what Christiani...

Rik Huijzer 6 months ago

Timeless quote from Martin Luther on the Basilica of St. Peter

> Christians are to be taught that the pope would and should wish to give of his own money, even though he had to sell the basilica of St. Peter… From his 95 theses (1517).

Rik Huijzer 7 months ago

Welsh revival 1904

An interesting event in 1904-1905. From Wikipedia: > The 1904–1905 Welsh revival was the largest Christian revival in Wales during the 20th century. It was one of the most dramatic in terms of its effect on the population, and triggered revivals in several other countries. The movement kept the churches of Wales filled for many years to come, seats being placed in the aisles in Mount Pleasant Baptist Church in Swansea for twenty years or so, for example. Meanwhile, the Awakening swept the rest of Britain, Scandinavia, parts of Europe, North America, the mission fields of India and the Orien...

Evan Hahn 8 months ago

Notes from "Where Wizards Stay Up Late: The Origins of the Internet"

Last month, I read Empire of AI, a scathing tale of the invention of ChatGPT. This month, I read Where Wizards Stay Up Late: The Origins of the Internet, a much rosier story of the invention of a more important technology: the internet. Authors Katie Hafner and Matthew Lyon cover the history starting in the 1960s all the way up to 1994, just two years before the book was published.¹ Here are my notes.

This book argues that the space race was a precursor to the invention of the Internet, because it led to the creation of ARPA. This early sentence introduced one of the book’s main themes: “The relationship between the military and computer establishments began with the modern computer industry itself.” This tech-and-military romance has not gone away in 2025.

This is still a problem today, only partly solved by containers: “In [the 1960s], software programs were one-of-a-kind, like original works of art, and not easily transferred from one machine to another.”

Packet switching (in contrast to store-and-forward) was apparently a simultaneous invention. I love simultaneous invention.

Cold War nuclear tensions motivated Paul Baran to design a more resilient communications system. To him, “it was a necessary condition that the communications systems for strategic weapons be able to survive an attack”.

Etymology of the word “packet”: “Before settling on the word, [Donald Davies] asked two linguists from a research team in his lab to confirm that there were cognates in other languages.”

Every single person in this book is a man until page 74, where a woman is named, but only to introduce two men. The book acknowledges the lack of gender diversity at times, but doesn’t go into it. It also omits any mention of other kinds of diversity. I suppose one could argue that this book is supposed to be an easygoing historical account with minimal editorializing, but I wish it were more critical.

MIT’s first computer programming course was offered in 1951.

This problem affects me in my modern software career: “Eight months weren’t enough for anyone to build the perfect network. Everyone knew it. But BBN’s job was more limited than that; it was to demonstrate that the network concept could work. Heart was seasoned enough to know that compromises were necessary to get anything this ambitious done on time. Still, the tension between Heart’s perfectionism and his drive to meet deadlines was always with him, and sometimes was apparent to others as an open, unresolved contradiction.”

This was the first explicit mention of the internet inventors’ homogeneity: “In keeping with the norms of the time, with the exception of Heart’s secretary, the people who designed and built the ARPA network were all men. Few women held positions in computer science. [Frank] Heart’s wife, Jane, had quit her programming job at Lincoln to raise their three children.”

They mentioned building something “to perform as well and as unobtrusively as a household socket or switch”. I liked the way this sentence was written.

Reminds me of how people exalt the creator of Roller Coaster Tycoon for doing everything in assembly: “To program in assembly language was to dwell maniacally on the mechanism.”

This book has numerous anecdotes of brilliant idiosyncratic weirdos. Is that better than the homogeneous tech bro of today?

A little anecdote about the tensions between scientists and the war machine: [Severo] Ornstein was an outspoken opponent of the Vietnam War.
By 1969 a lot of people who had never questioned their own involvement in Pentagon-sponsored research projects began having second thoughts. Ornstein had taken to wearing a lapel pin that said RESIST. The pin also bore the Ω sign, for electrical resistance, a popular antiwar symbol for electrical engineers. One day, before a Pentagon briefing, Ornstein conceived a new use for his pin. In meetings at the Pentagon, it wasn’t unusual for the men around the table to remove their jackets and roll up their shirt sleeves. Ornstein told Heart that he was going to pin his RESIST button onto a general’s jacket when no one was looking. “I think Frank actually worried that I would,” said Ornstein. (Ornstein didn’t, but he did wear his pin to the meeting.)

Story of the first network test (there’s a quick check of the octal readout after these notes): “The quality of the connection was not very good, and both men were sitting in noisy computer rooms, which didn’t help. So Kline fairly yelled into the mouthpiece: ‘I’m going to type an L!’ Kline typed an L. ‘Did you get the L?’ he asked. ‘I got one-one-four,’ the SRI researcher replied; he was reading off the encoded information in octal, a code using numbers expressed in base 8. When Kline did the conversion, he saw it was indeed an L that had been transmitted. He typed an O. ‘Did you get the O?’ he asked. ‘I got one-one-seven,’ came the reply. It was an O. Kline typed a G. ‘The computer just crashed,’ said the person at SRI.”

“No one had come up with a useful demonstration of resource-sharing […] The ARPA network was a growing web of links and nodes, and that was it—like a highway system without cars.” …so they did a big demo!

There was a company people wanted to criticize, but the ARPANET was U.S. government property. Was it appropriate to criticize this company using ARPANET technology? Debates raged.

Reminds me of how Douglas Crockford claims to have discovered, not invented, JSON: “Standards should be discovered, not decreed,” said one computer scientist in the TCP/IP faction.

ARPANET was dismantled by the end of 1989.

“How about women?” asked the reporter, perhaps to break the silence. “Are there any female pioneers?” More silence.

I wish Where Wizards Stay Up Late had been more critical. Not because I want people to poo-poo the internet or its inventors, but because I think some history was lost. The book mentions tensions between the engineers and the military, but I would have loved to learn more. The authors acknowledge that the inventors were all men, but what were the consequences of that? There’s plenty of texture on the good and neutral sides of this story, but that’s only part of the saga.

I’m no historian but I suspect this book will serve as a reference for future readers. It’s also fun to read a book written before the internet became such a dominant force; I’m sure a modern version would prioritize different details. If you know another book I might like, contact me!

¹ For the eagle-eyed among you, I think I read an updated 1998 edition. ↩︎
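As promised above, a quick check of that octal readout: 114 and 117 in base 8 are the ASCII codes for L and O, so the SRI researcher really was reading Kline’s letters back to him. A minimal sketch in Python (the snippet is mine, not from the book):

```python
# Octal readouts from the first ARPANET test, as reported by the SRI researcher.
readouts = ["114", "117"]  # base-8 strings for the letters Kline typed

for octal in readouts:
    code = int(octal, 8)           # 0o114 == 76, 0o117 == 79
    print(octal, "->", chr(code))  # 114 -> L, 117 -> O
```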

Tara's Website 11 months ago

Liberation day, 80th anniversary

Liberation day, 80th anniversary: Bella Ciao Today, April 25th, we celebrate Liberation Day in Italy. Today marks the 80th anniversary of the victory of the partisans over the nazi-fascists and the end of the fascist regime. Last year, on this very same day, I marched for the first time and wrote that some fundamental rights are slowly being removed from us and that our rights are at risk. Today, more than ever, we need to remember what our grandfathers and people from around the world fought for.


We Live In a Golden Age of Interoperability

Yesterday I was reading Exploring the Internet, an oral history of the early Internet. The first part of the book describes the author’s efforts to publish the ITU’s Blue Book: 19 kilopages of standards documents for telephony and networks. What struck me was the description of the ITU’s documentation stack:

A week spent trolling the halls of the ITU had produced documentation on about half of the proprietary, in-house text formatting system they had developed many years ago on a Siemens mainframe. The computer division had given me nine magnetic tapes, containing the Blue Book in all three languages. […] We had two types of files, one of which was known to be totally useless. The useless batch was several hundred megabytes of AUTOCAD drawings, furnished by the draftsmen who did the CCITT illustrations. Diagrams for the Blue Book were done in AUTOCAD, then manually assembled into the output from the proprietary text formatting system. […] Turned out that AUTOCAD was indeed used for the diagrams, with the exception of any text in the illustrations. The textless diagrams were sent over to the typing pool, where people typed on little pieces of paper ribbon and pasted the itsy-bitsy fragments onto the illustrations. Come publication time, the whole process would be repeated, substituting typeset ribbons for typed ribbons. A nice production technique, but the AUTOCAD files were useless.

The rationale for this bizarre document production technique was that each diagram needed text in each of the three official languages that the ITU published. While AUTOCAD (and typing) was still being used, the ITU was slowly moving over to another tool, Micrografx Designer. There, using the magical concept of layers, they were proudly doing “integrated text and graphics.” The second batch of DOS files looked more promising:

Modern documents, such as the new X.800 recommendations, were being produced in Microsoft Word for Windows. My second batch of tapes had all the files that were available in the Word for Windows format, the new ITU publishing standard.

Proprietary tape drives with proprietary file systems. AutoCAD for vector graphics. Text documents in the proprietary, binary Word format. Note that the diagrams were being assembled physically, by pasting pieces of paper together. And then they were photographed. That’s why it’s called a “camera ready” copy. And this is 1991, so it’s not a digital camera: it’s film, silver halide crystals in gelatin. It’s astounding to think that this medieval process was happening as recently as the 90s. Compare this to today: you drag some images into Adobe FrameMaker and press print.

The ITU had documented the format we could expect the tapes to be in. Each file had a header written in the EBCDIC character set. The file itself used a character set seemingly invented by the ITU, known by the bizarre name of Zentec. The only problem was that the header format wasn’t EBCDIC, and the structure the ITU had told us would be on the tape wasn’t present. Proprietary character sets! Next, we had to tackle TPS:

This text formatting language was as complicated as anyone could imagine. Developed without the desire for clarity and simplicity I had come to expect from the UNIX operating system and its tools, I was lost with the Byzantine, undocumented TPS. The solution was to take several physical volumes of the Blue Book and compare the text to hexadecimal dumps of the files.
I then went to the Trident Cafe and spent a week drinking coffee trying to make sense of the data I had, flipping between the four files that might be used on any given page of text, trying to map events in the one-dimensional HexWorld to two-dimensional events in the paper output. Finally, after pages and pages of PERL code, we had the beginnings of a conversion program. We had tried to use the software developed at the ITU to convert from TPS into RTF, but the code had been worse than useless.

A proprietary, in-house, (ironically) undocumented document-preparation system! Today this would be a Git repo with Markdown files and TikZ/Asymptote source files for the diagrams, and a Makefile to tie it all together with Pandoc. Maybe a few custom scripts for the things Markdown can’t represent, like complex tables or asides. Maybe DITA if you really like XML.

This reminded me of a similar side quest I attempted many years ago: I tried to build a modern version of the Common Lisp HyperSpec from the source text of the ANSI Common Lisp draft (the draft being in the public domain, unlike the officially blessed version). The sources are in TeX: not “modern” LaTeX, but 90s TeX. Parsing TeX is hard enough; the language is almost-but-not-quite context-free, and it really is meant to be executed as it is parsed, rather than parsed, represented, and transformed. But even if you managed to parse the TeX sources using a very flexible and permissive TeX parser, you would have to apply a huge long tail of corrections just to fix bad parses and obscure TeX constructs. In the end I gave up.

We live in much better times. For every medium, we have widely-used and widely-implemented open formats: Unicode and Markdown for text, JSON and XML for data exchange, JPEG/PNG/SVG for images, Opus for audio, WebM for videos.

Unicode is so ubiquitous it’s easy to forget what an achievement it is. Essentially all text today is UTF-8, except the Windows APIs that were designed in the 90s for “wide characters”, i.e. UTF-16. I remember when people used to link to the UTF-8 Everywhere manifesto. There was a time, not long ago, when “use UTF-8” was something that had to be said.

Rich text is often just Markdown. Some applications have more complex constructs that can’t be represented in Markdown; in those cases you can usually get the document AST as JSON. The “worst” format most people ever have to deal with is XML, which is really not that bad.

Data exchange happens through JSON, CSV, or Parquet. Every web API uses JSON as the transport layer, so instead of a thousand ad-hoc binary formats, we have one plain-text, human-readable format that can be readily mapped into domain objects (a tiny example follows below). Nobody would think to share vector graphics in DWG format because we have SVG, an open standard. TeX is probably the most antediluvian text “format” in widespread use, and maybe Typst will replace it. Math is one area where we’re stuck with embedding TeX (through KaTeX or equivalent), since MathML hasn’t taken off (understandably, since nobody wants to write XML by hand).
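As a tiny illustration of that mapping (the names here are mine, not from the post): parsing a JSON payload into a typed domain object takes a few lines in any modern language, with no hex dumps or reverse engineering required.

```python
import json
from dataclasses import dataclass

@dataclass
class User:
    name: str
    email: str

# A payload as it might arrive from any JSON-speaking web API.
payload = '{"name": "Ada", "email": "ada@example.com"}'

user = User(**json.loads(payload))  # plain text -> domain object
print(user.name)                    # Ada
```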
Filesystems are usually proprietary, but every operating system can read/write a FAT32/NTFS flash drive. In any case, networking has made filesystems less important: if you have network access, you have Google Drive or S3. And filesystems are a lot less diverse nowadays: except for extended attributes, any file tree can be mapped losslessly across ext4, NTFS, and APFS. This was not true in the past! It took decades to converge on the definition of a filesystem as “a tree of directories with byte arrays at the leaf nodes”; e.g., HFS had resource forks, and the VMS file system had versioning built in. File paths were wildly different.

Open standards are now the default. If someone proposes a new data exchange format, a new programming language, or things of that nature, the expectation is that the spec will be readable online, at the click of a button, either as HTML or a PDF document. If implementing JSON required paying 300 CHF for a 900-page standards document, JSON would not have taken off.

Our data is more portable than ever, not just across space (e.g. if you use a Mac and a Linux machine) but across time. In the mid-80s the BBC wanted to make a latter-day Domesday Book. It was like a time capsule: statistical surveys, photographs, newsreels, people’s accounts of their daily life. The data was stored on LaserDisc, but the formats were entirely sui generis, and could only be read by the client software, which was deeply integrated with a specific hardware configuration. And within a few years the data was essentially inaccessible, needing a team of programmer-archeologists to reverse engineer the software and data formats.

If the BBC Domesday Book were made nowadays it would last forever: the text would be UTF-8, the images JPEGs, the videos WebM, the database records would be CSVs or JSON files, all packaged in one big ZIP container. All widely-implemented open standards; a minimal sketch of such a package is below. A century from now we will still have UTF-8 decoders and JSON parsers and JPEG viewers, if only to preserve the vast trove of the present; or we will have ported all the archives forward to newer formats.

All this is to say: we live in a golden age of interoperability and digital preservation.
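To make the packaging point concrete, here is a minimal sketch of such an archive using only Python’s standard library; the file names and records are hypothetical, not from the BBC project.

```python
import json
import zipfile

# Hypothetical records standing in for the Domesday database entries.
records = [
    {"place": "Example-on-Sea", "year": 1986, "account": "A day in the life..."},
]

with zipfile.ZipFile("domesday.zip", "w", zipfile.ZIP_DEFLATED) as archive:
    # Prose as plain UTF-8 text.
    archive.writestr("accounts/daily-life.txt", "Plain UTF-8 text survives.")
    # Database records as JSON.
    archive.writestr("records/places.json", json.dumps(records, indent=2))
    # Photographs and newsreels would be added the same way, e.g.:
    # archive.write("photos/village.jpg"); archive.write("films/newsreel.webm")
```

Every format in the container (ZIP, UTF-8, JSON, and the image/video formats it would hold) has multiple independent implementations, which is exactly what makes such an archive durable.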
