Posts in History (5 found)
Rik Huijzer 3 months ago

Timeless quote from Martin Luther on the Basilica of St. Peter

> Christians are to be taught that the pope would and should wish to give of his own money, even though he had to sell the basilica of St. Peter.

From his 95 Theses (1517).

Rik Huijzer 4 months ago

Welsh revival 1904

An interesting event in 1904-1905. From Wikipedia: > The 1904–1905 Welsh revival was the largest Christian revival in Wales during the 20th century. It was one of the most dramatic in terms of its effect on the population, and triggered revivals in several other countries. The movement kept the churches of Wales filled for many years to come, seats being placed in the aisles in Mount Pleasant Baptist Church in Swansea for twenty years or so, for example. Meanwhile, the Awakening swept the rest of Britain, Scandinavia, parts of Europe, North America, the mission fields of India and the Orien...

Evan Hahn 5 months ago

Notes from "Where Wizards Stay Up Late: The Origins of the Internet"

Last month, I read Empire of AI, a scathing tale of the invention of ChatGPT. This month, I read Where Wizards Stay Up Late: The Origins of the Internet, a much rosier story of the invention of a more important technology: the internet. Authors Katie Hafner and Matthew Lyon cover the history starting in the 1960s all the way up to 1994, just two years before the book was published.[1] Here are my notes.

This book argues that the space race was a precursor to the invention of the Internet, because it led to the creation of ARPA. This early sentence introduced one of the book’s main themes:

> The relationship between the military and computer establishments began with the modern computer industry itself.

This tech-and-military romance has not gone away in 2025.

This is still a problem today, only partly solved by containers:

> In [the 1960s], software programs were one-of-a-kind, like original works of art, and not easily transferred from one machine to another.

Packet switching (in contrast to store-and-forward) was apparently a simultaneous invention. I love simultaneous invention.

Cold War nuclear tensions motivated Paul Baran to design a more resilient communications system. To him, “it was a necessary condition that the communications systems for strategic weapons be able to survive an attack”.

Etymology of the word “packet”:

> Before settling on the word, [Donald Davies] asked two linguists from a research team in his lab to confirm that there were cognates in other languages.

Every single person in this book is a man until page 74, where a woman is named, but only to introduce two men. The book acknowledges the lack of gender diversity at times, but doesn’t go into it. It also omits any mention of other kinds of diversity. I suppose one could argue that this book is supposed to be an easygoing historical account with minimal editorializing, but I wish it were more critical.

MIT’s first computer programming course was offered in 1951.
This problem affects me in my modern software career:

> Eight months weren’t enough for anyone to build the perfect network. Everyone knew it. But BBN’s job was more limited than that; it was to demonstrate that the network concept could work. Heart was seasoned enough to know that compromises were necessary to get anything this ambitious done on time. Still, the tension between Heart’s perfectionism and his drive to meet deadlines was always with him, and sometimes was apparent to others as an open, unresolved contradiction.

This was the first explicit mention of the internet inventors’ homogeneity:

> In keeping with the norms of the time, with the exception of Heart’s secretary, the people who designed and built the ARPA network were all men. Few women held positions in computer science. [Frank] Heart’s wife, Jane, had quit her programming job at Lincoln to raise their three children.

They mentioned building something “to perform as well and as unobtrusively as a household socket or switch”. I liked the way this sentence was written.

Reminds me of how people exalt the creator of RollerCoaster Tycoon for doing everything in assembly:

> To program in assembly language was to dwell maniacally on the mechanism.

This book has numerous anecdotes of brilliant, idiosyncratic weirdos. Is that better than the homogeneous tech bro of today?

A little anecdote about the tensions between scientists and the war machine:

> [Severo] Ornstein was an outspoken opponent of the Vietnam War. By 1969 a lot of people who had never questioned their own involvement in Pentagon-sponsored research projects began having second thoughts. Ornstein had taken to wearing a lapel pin that said RESIST. The pin also bore the Ω sign, for electrical resistance, a popular antiwar symbol for electrical engineers. One day, before a Pentagon briefing, Ornstein conceived a new use for his pin.
> In meetings at the Pentagon, it wasn’t unusual for the men around the table to remove their jackets and roll up their shirt sleeves. Ornstein told Heart that he was going to pin his RESIST button onto a general’s jacket when no one was looking. “I think Frank actually worried that I would,” said Ornstein. (Ornstein didn’t, but he did wear his pin to the meeting.)

Story of the first network test:

> The quality of the connection was not very good, and both men were sitting in noisy computer rooms, which didn’t help. So Kline fairly yelled into the mouthpiece: “I’m going to type an L!” Kline typed an L. “Did you get the L?” he asked. “I got one-one-four,” the SRI researcher replied; he was reading off the encoded information in octal, a code using numbers expressed in base 8. When Kline did the conversion, he saw it was indeed an L that had been transmitted. He typed an O. “Did you get the O?” he asked. “I got one-one-seven,” came the reply. It was an O. Kline typed a G. “The computer just crashed,” said the person at SRI.

> No one had come up with a useful demonstration of resource-sharing […] The ARPA network was a growing web of links and nodes, and that was it—like a highway system without cars.

…so they did a big demo!

There was a company people wanted to criticize, but the ARPANET was U.S. government property. Was it appropriate to criticize this company using ARPANET technology? Debates raged.

Reminds me of how Douglas Crockford claims to have discovered, not invented, JSON:

> “Standards should be discovered, not decreed,” said one computer scientist in the TCP/IP faction.

ARPANET was dismantled by the end of 1989.

> “How about women?” asked the reporter, perhaps to break the silence. “Are there any female pioneers?” More silence.

I wish Where Wizards Stay Up Late had been more critical. Not because I want people to poo-poo the internet or its inventors, but because I think some history was lost.
The book mentions tensions between the engineers and the military, but I would have loved to learn more. The authors acknowledge that the inventors were all men, but what were the consequences of that? There’s plenty of texture on the good and neutral sides of this story, but that’s only part of the saga.

I’m no historian, but I suspect this book will serve as a reference for future readers. It’s also fun to read a book written before the internet became such a dominant force; I’m sure a modern version would prioritize different details.

If you know another book I might like, contact me!

[1] For the eagle-eyed among you, I think I read an updated 1998 edition.

Tara's Website 8 months ago

Liberation Day, 80th anniversary

Liberation Day, 80th anniversary: Bella Ciao. Today, April 25th, we celebrate Liberation Day in Italy. Today marks the 80th anniversary of the victory of the partisans over the Nazi-fascists and the end of the fascist regime. Last year, on this very same day, I marched for the first time and wrote that some fundamental rights are slowly being taken from us and are at risk. Today, more than ever, we need to remember what our grandfathers and people from around the world fought for.

Fernando Borretti 9 months ago

We Live In a Golden Age of Interoperability

Yesterday I was reading Exploring the Internet, an oral history of the early Internet. The first part of the book describes the author’s efforts to publish the ITU’s Blue Book: 19 kilopages of standards documents for telephony and networks. What struck me was the description of the ITU’s documentation stack:

> A week spent trolling the halls of the ITU had produced documentation on about half of the proprietary, in-house text formatting system they had developed many years ago on a Siemens mainframe. The computer division had given me nine magnetic tapes, containing the Blue Book in all three languages. […] We had two types of files, one of which was known to be totally useless. The useless batch was several hundred megabytes of AUTOCAD drawings, furnished by the draftsmen who did the CCITT illustrations. Diagrams for the Blue Book were done in AUTOCAD, then manually assembled into the output from the proprietary text formatting system. […] Turned out that AUTOCAD was indeed used for the diagrams, with the exception of any text in the illustrations. The textless diagrams were sent over to the typing pool, where people typed on little pieces of paper ribbon and pasted the itsy-bitsy fragments onto the illustrations. Come publication time, the whole process would be repeated, substituting typeset ribbons for typed ribbons. A nice production technique, but the AUTOCAD files were useless.

> The rationale for this bizarre document production technique was that each diagram needed text in each of the three official languages that the ITU published. While AUTOCAD (and typing) was still being used, the ITU was slowly moving over to another tool, MicroGrafix Designer. There, using the magical concept of layers, they were proudly doing “integrated text and graphics.” The second batch of DOS files looked more promising. Modern documents, such as the new X.800 recommendations, were being produced in Microsoft Word for Windows.
> My second batch of tapes had all the files that were available in the Word for Windows format, the new ITU publishing standard.

Proprietary tape drives with proprietary file systems. AutoCAD for vector graphics. Text documents in the proprietary, binary Word format. Note that the diagrams were being assembled physically, by pasting pieces of paper together. And then they were photographed. That’s why it’s called a “camera ready” copy. And this is 1991, so it’s not a digital camera: it’s film, silver halide crystals suspended in gelatin. It’s astounding to think that this medieval process was happening as recently as the 90s. Compare this to today: you drag some images into Adobe FrameMaker and press print.

> The ITU had documented the format we could expect the tapes to be in. Each file had a header written in the EBCDIC character set. The file itself used a character set seemingly invented by the ITU, known by the bizarre name of Zentec. The only problem was that the header format wasn’t EBCDIC and the structure the ITU had told us would be on the tape wasn’t present.

Proprietary character sets!

> Next, we had to tackle TPS. This text formatting language was as complicated as any one could imagine. Developed without the desire for clarity and simplicity I had come to expect from the UNIX operating system and its tools, I was lost with the Byzantine, undocumented TPS. The solution was to take several physical volumes of the Blue Book and compare the text to hexadecimal dumps of the files. I then went to the Trident Cafe and spent a week drinking coffee trying to make sense of the data I had, flipping between the four files that might be used on any given page of text trying to map events in the one-dimensional HexWorld to two-dimensional events in the paper output. Finally, after pages and pages of PERL code, we had the beginnings of a conversion program.
> We had tried to use the software developed at the ITU to convert from TPS into RTF, but the code had been worse than useless.

A proprietary, in-house, (ironically) undocumented document-preparation system! Today this would be a Git repo with Markdown files and TikZ/Asymptote source files for the diagrams, and a Makefile to tie it all together with Pandoc. Maybe a few custom scripts for the things Markdown can’t represent, like complex tables or asides. Maybe DITA if you really like XML.

This reminded me of a similar side quest I attempted many years ago: I tried to build a modern version of the Common Lisp HyperSpec from the source text of the ANSI Common Lisp draft (the draft being in the public domain, unlike the officially blessed version). The sources are in TeX, not “modern” LaTeX but 90’s TeX. Parsing TeX is hard enough: the language is almost-but-not-quite context-free, and it really is meant to be executed as it is parsed, rather than parsed, represented, and transformed. But even if you manage to parse the TeX sources using a very flexible and permissive TeX parser, you have to apply a huge long tail of corrections just to fix bad parses and obscure TeX constructs. In the end I gave up.

We live in much better times. For every medium, we have widely-used and widely-implemented open formats: Unicode and Markdown for text, JSON and XML for data exchange, JPEG/PNG/SVG for images, Opus for audio, WebM for videos.

Unicode is so ubiquitous it’s easy to forget what an achievement it is. Essentially all text today is UTF-8, except in the Windows APIs that were designed in the 90s around “wide characters”, i.e. UTF-16. I remember when people used to link to the UTF-8 Everywhere manifesto. There was a time, not long ago, when “use UTF-8” was something that had to be said.

Rich text is often just Markdown. Some applications have more complex constructs that can’t be represented in Markdown; in those cases you can usually get the document AST as JSON.
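The claim that essentially all text is UTF-8, and the contrast with the 90s-era “wide character” APIs, can be made concrete in a few lines of Python (standard library only; the sample strings are arbitrary):

```python
# UTF-8 is a strict superset of ASCII: pure-ASCII text is byte-for-byte identical.
assert "hello".encode("utf-8") == b"hello"

# UTF-16 (the "wide character" encoding) is not ASCII-compatible:
# every code unit is at least two bytes, so ASCII-era tools choke on it.
assert "hello".encode("utf-16-le") == b"h\x00e\x00l\x00l\x00o\x00"

# Non-ASCII characters take multiple bytes in UTF-8, but round-trip losslessly.
s = "naïve café"
assert s.encode("utf-8").decode("utf-8") == s
```

The ASCII compatibility in the first assertion is a big part of why UTF-8 could spread through existing text pipelines without breaking them.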
The “worst” format most people ever have to deal with is XML, which is really not that bad.

Data exchange happens through JSON, CSV, or Parquet. Every web API uses JSON as the transport layer, so instead of a thousand ad-hoc binary formats, we have one plain-text, human-readable format that can be readily mapped into domain objects. Nobody would think to share vector graphics in DWG format because we have SVG, an open standard.

TeX is probably the most antediluvian text “format” in widespread use, and maybe Typst will replace it. Math is one area where we’re stuck with embedding TeX (through KaTeX or equivalent), since MathML hasn’t taken off (understandably, since nobody wants to write XML by hand).

Filesystems are usually proprietary, but every operating system can read and write a FAT32 or NTFS flash drive. In any case, networking has made filesystems less important: if you have network access, you have Google Drive or S3. And filesystems are a lot less diverse nowadays: except for extended attributes, any file tree can be mapped losslessly across ext4, NTFS, and APFS. This was not true in the past! It took decades to converge on the definition of a filesystem as “a tree of directories with byte arrays at the leaf nodes”; HFS had resource forks, for example, and the VMS file system had versioning built in. File paths were wildly different.

Open standards are now the default. If someone proposes a new data exchange format, a new programming language, or things of that nature, the expectation is that the spec will be readable online, at the click of a button, either as HTML or a PDF document. If implementing JSON required paying 300 CHF for a 900-page standards document, JSON would not have taken off.

Our data is more portable than ever, not just across space (e.g. if you use a Mac and a Linux machine) but across time. In the mid-80s the BBC wanted to make a latter-day Domesday Book.
It was like a time capsule: statistical surveys, photographs, newsreels, people’s accounts of their daily life. The data was stored on LaserDisc, but the formats were entirely sui generis, and could only be read by the client software, which was deeply integrated with a specific hardware configuration. Within a few years the data was essentially inaccessible, needing a team of programmer-archeologists to reverse engineer the software and data formats.

If the BBC Domesday Book were made nowadays, it would last forever: the text would be UTF-8, the images JPEGs, the videos WebM, the database records CSVs or JSON files, all packaged in one big ZIP container. All widely-implemented open standards. A century from now we will still have UTF-8 decoders and JSON parsers and JPEG viewers, if only to preserve the vast trove of the present; or we will have ported all the archives forward to newer formats.

All this is to say: we live in a golden age of interoperability and digital preservation.
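The “UTF-8 text plus JSON records in one big ZIP” packaging described above can be sketched in a few lines of Python; the file names and records here are hypothetical, but every format involved is an open, universally implemented standard:

```python
import io
import json
import zipfile

# Build the archive in memory; on disk this would be a single .zip file.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    # Plain UTF-8 text (writestr encodes str payloads as UTF-8).
    zf.writestr("accounts/daily_life.txt", "Notes on daily life in 1986.")
    # Structured records as JSON.
    zf.writestr("records/survey.json", json.dumps({"year": 1986, "entries": 2}))

# Decades later, any ZIP implementation can read it back.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    text = zf.read("accounts/daily_life.txt").decode("utf-8")
    record = json.loads(zf.read("records/survey.json"))

assert text == "Notes on daily life in 1986."
assert record["year"] == 1986
```

Nothing in this sketch depends on a particular vendor, operating system, or decade, which is exactly the property the original Domesday discs lacked.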
