Usenet, Authentication, and Engineering (or: Early Design Decisions for Usenet)

A Twitter thread on trolls brought up mention of trolls on Usenet . The reason they were so hard to deal with, even then, has some lessons for today; besides, the history is interesting. (Aside: this is, I think, the first longish thing I’ve ever written about any of the early design decisions for Usenet. I should note that this is entirely my writing, and memory can play many tricks across nearly 40 years.

)
A complete tutorial on Usenet would take far too long; let it suffice for now to say that in the beginning, it was a peer-to-peer network of multiuser time-sharing systems, primarily interconnected by dial-up 300 bps and 1200 bps modems. (Yes, I really meant THREE HUNDRED BITS PER SECOND. And someday, I’ll have the energy to describe our home-built autodialers — I think that the statute of limitations has expired…) Messages were distributed via a flooding algorithm. Because these time-sharing systems were relatively big and expensive, and because there were essentially no consumer-oriented dial-up services then (even modems and dumb terminals were very expensive), if you were on Usenet it was via your school or employer. If there was abuse, pressure could be applied that way — but it wasn’t always easy to tell where a message had originated — and that’s where this blog post really begins: why didn’t Usenet authenticate requests?
We did understand the need for authentication. Without it, there was no way to perform control functions, such as deleting articles. We needed site authentication; as will be seen later, we needed user authentication as well. But how could this be done?
The obvious solution was something involving public key cryptography, which we (the original developers of the protocol: Tom Truscott, the late Jim Ellis, and myself) knew about: all good geeks at the time had seen Martin Gardner’s “Mathematical Games” column in the August 1977 issue of Scientific American (paywall), which explained both the concept of public key cryptography and the RSA algorithm. For that matter, Rivest, Shamir, and Adleman’s technical paper had already appeared; we’d seen that, too. In fact, we had code available: the xsend command for public key encryption and decryption, which we could have built upon, was part of 7th Edition Unix, and that’s what Usenet ran on.

What we did not know was how to authenticate a site’s public key. Today, we’d use a certificate issued by a certificate authority. Certificates had been invented by then, but we didn’t know about them, and of course, there were no search engines to come to our aid.

(Manual finding aids? Sure — but apart from the question of whether or not anything accessible to us would have indexed bachelor’s theses, we’d have had to know enough to even look. The RSA paper gave us no hints; it simply spoke of a “public file” or something like a phone book. It did speak of signed messages from a “computer network” — scare quotes in the original! — but we didn’t have one of those except for Usenet itself. And a signed message is not a certificate.) Even if we had known about certificates, there were no certificate authorities, and we certainly couldn’t create one along with creating Usenet.
Going beyond that, we did not know the correct parameters: how long a key to use (the estimates in the early papers were too low), what was secure (the xsend command used an algorithm that was broken a few years later), etc. Maybe some people could have made good guesses. We did not know, and knew that we did not know.
The next thing we considered was neighbor authentication: each site could, at least in principle, know and authenticate its neighbors, due to the way the flooding algorithm worked. That idea didn’t work, either. For one thing, it was trivial to impersonate a site that appeared to be further away. Every Usenet message contains a Path: line; someone trying to spoof a message would simply have to claim to be a few hops away. (This is how the famous kremvax prank worked.)
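To make that concrete: the headers below are invented (this is not the actual kremvax message), but they show the shape of the trick. A site receiving this from its direct neighbor sees:

Path: neighbor!faraway1!faraway2!kremvax!chairman
From: chairman@kremvax.UUCP

The receiving site can vouch only for the leftmost hop, the neighbor it actually talked to; faraway1, faraway2, and kremvax are unverified strings that the real injector simply wrote into the header.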
But there’s a more subtle issue.

Usenet messages were transmitted via a generic remote execution facility. The Usenet program on a given computer executed the Unix command

uux neighborsite!rnews

where neighborsite is the name of the next-hop computer on which the rnews command would be executed. (Before you ask: yes, the list of allowable remotely requested commands was very small; no, the security was not perfect. But that’s not the issue I’m discussing here.)
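In actual use the invocation looked roughly like the following sketch (the site name is invented, and details varied across versions). The lone - flag tells uux to pass its own standard input along as the standard input of the remote rnews command:

uux - nextdoor!rnews < article

Whatever showed up on rnews’s standard input at the receiving site was treated as incoming news.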

The trouble is that any knowledgeable user on a site could issue the uux command; it wasn’t, and couldn’t easily be, restricted to authorized users. Anyone could have generated their own fake control messages, without regard to any authentication and sanity checks built into the Usenet software. (Could uux have been secured? This is itself a complex question that I don’t want to go into now; please take it on faith and don’t try to argue about setgid(), wrapper programs, and the like. It was our judgment then — and my judgment now — that such solutions would not be adopted. The minor configuration change needed to make rnews an acceptable command for remote execution was a sufficiently high hurdle that we provided alternate mechanisms for sites that wouldn’t do it.)
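To see how low the bar was, consider this hypothetical shell session. The headers are simplified (roughly in the style of the later B-news format) and every name in it is invented, but any user with access to uux could do the equivalent:

cat > forged <<'EOF'
Path: trusted!admin
From: admin@trusted.UUCP
Newsgroups: net.general
Subject: an official-looking announcement

This message did not really come from trusted.
EOF
uux - neighbor!rnews < forged

The receiving rnews had no way to tell this apart from a message that the news software at trusted had legitimately generated.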
That left us with no good choices. The infrastructure for a cryptographic solution was lacking. The uux command rendered illusory any attempts at security via the Usenet programs themselves. We chose to do nothing. That is, we did not implement fake security that would give people the illusion of protection but not the reality.

This was the right choice.
But the story is more complex than that. It was the right choice in 1979 but not necessarily right later, for several reasons. The most important is that the online world in 1979 was very different from the way it is now. For one thing, since only a very few people had access to Usenet (mostly CS students and tech-literate employees of large, sophisticated companies), the norms were to some extent self-enforcing: if someone went too far astray, their school or employer could come down on them. For another, our projections of participation and volume were very low.

In my most famous error, I projected that Usenet would grow to 50-100 sites and 1-2 articles a day, ever. The latest figures, per Wikipedia, put traffic at about 74 million posts per day, totaling more than 37 terabytes. (I suppose it’s an honor to be off by seven orders of magnitude — not many people help create a system that’s successful enough to have a chance at such a lack of foresight!) On the one hand, a large network has much more need for management, including ways to deal with people and traffic that violate the norms. On the other, simply as a matter of statistics, a large network will have at least proportionately more malefactors. Furthermore, the increasing democratization of access meant that there were people who were not susceptible to school or employer pressure.
Traffic volume was the immediate driver for change. B-news came along in 1981, only a year or so after the original A-news software was released. B-news did have control messages. They were necessary, useful — and abused. Spam messages were often countered by cancelbots, but of course cancelbots were not available only to the righteous. And online norms are not always what everyone wants them to be. The community was willing to act technically against the first large-scale spam outbreak, but other issues (a genuine neo-Nazi, posts to the misc.kids newsgroup by a member of NAMBLA, trolls on the soc.motss newsgroup, and more) were dealt with by social pressure.
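For concreteness, a cancel control message looked roughly like this (headers abridged; the message-ID and addresses are invented). Nothing tied the sender to the author of the article being cancelled, so anyone who could inject this could delete anyone else’s article:

From: anybody@anywhere.UUCP
Newsgroups: net.general
Subject: cmsg cancel <1234@spamsite.UUCP>
Control: cancel <1234@spamsite.UUCP>

Cancelbots, righteous and otherwise, simply generated such messages automatically.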
There are several lessons here. One, of course, is that technical honesty is important. A second, though, is that the balance between security and functionality is not fixed — environments and hence needs change over time. B-news was around for a long time before cancel messages were used or abused on a large scale, and this good mass behavior was not because the insecurity wasn’t recognized: when I had a job interview at Bell Labs in 1982, the first thing Dennis Ritchie said to me was “[B-news] is a tool of the devil!” A third lesson is that norms can matter, but that the community as a whole has to decide how to enforce them.

There’s an amusing postscript to the public key cryptography issue. In 1979-1981, when the Usenet software was being written, there were no patents on public key cryptography, nor had anyone heard about export licenses for cryptographic technology. If we’d been a bit more knowledgeable or a bit smarter, we’d have shipped software with such functionality. The code would have been very widespread before any patents were issued, making enforcement very difficult. On the other hand, Tom, Jim, Steve Daniel (who wrote the first released version of the software; my own code, originally a Bourne shell script that I later rewrote in C, was never distributed beyond UNC and Duke) and I might have had some very unpleasant conversations with the FBI. But the world of online cryptography would almost certainly have been very different. It’s interesting to speculate on how things would have transpired if cryptography had been widely used in the early 1980s.

By Steven Bellovin, Professor of Computer Science at Columbia University