What's in a name? The History and Future of the Domain Name System [Upcoming.org], an Oxford Internet Institute event organised in collaboration with and sponsored by Afilias and .ASIA, took place at The Royal Society in London on 28 January 2008.
The event examined the history and future of the Internet Domain Name System (DNS), looking back at 25 years of the DNS and at the ten years of its management under ICANN, and considering possible future developments. My reflections follow a report on the introductions and the Q&A. The event was recorded and a webcast is available.
The keynote was delivered by Paul Mockapetris, inventor of the Domain Name System and Board Chair of Nominum; panelists were Lynn St Amour, CEO and President of ISOC; Mike Roberts, first ICANN CEO and Managing Director of The Darwin Group Inc.; Jonathan Zittrain, Professor of Internet Governance and Regulation, OII; Edmon Chung, CEO of The DotAsia Organisation (.ASIA); Dennis Jennings, ICANN Board; and Markus Kummer, Head of the UN Internet Governance Forum (IGF) Secretariat (chair).
Introductions
Key points from the speakers follow; the points noted pick up on current challenges, features leading to innovation, and unusual perspectives. These notes are not to be cited as direct quotes:
Paul Mockapetris: in 1983 [when the network moved from Network Control Protocol to use TCP/IP protocols], everything was up for grabs 'in the Internet stack'. 'There were many many things to re-think', such as whether email should be part of FTP. We know DNS is not a directory... but now we have Google anyway... The easiest way to compromise was to ignore everyone's proposals. When questioned I said: 'It was big of you to admit I didn't use your work'. This worked every time... The first new application on DNS was MX [mail routing], designed by Craig Partridge. Version 3 was the first to be successful. Developments since then include more complex pages, such as MySpace, which require more lookups... What seems important now? Problems that make DNS binding less trustworthy... What seems important to the author? In 1983 we said: let everyone/thing own a name, publish their data, let anyone retrieve it. In 2008: take names away from undesirables, guaranteeing integrity between source and destination... ICANN is just politics... Don't worry about overloading DNS. The real world pushes back; excesses provoke reform.
Lynn St Amour: [discusses the problems of not delivering end-to-end connectivity, national walled gardens, etc]. We shouldn't consider policy goals and technical issues in isolation.
Mike Roberts: the Internet we have is not the Internet we started with – we now have 20 billion name resolutions per day. We can't assume scaling will continue successfully. "We want [expect?] to end up being as fossilised as the telco engineers we have kind of done out of a job"... If you ask them what they want, and to the extent they can answer, users want an efficient, apolitical system... Some DNS issues: the weakness of political institutions means that ICANN itself is weak. What kind of political institutions might we create to do something about this?... Positive steps forward: independence for ICANN; separate its economic and other functions; successful distribution of Internet infrastructure, such as root servers.
Edmon Chung: [discusses new gTLDs and IDNs. Discusses the impact of multiple TLDs pointing to the same site. Discusses complex issues around who controls nomenclature in new non-Latin-script domain names; a short sketch of how such names are encoded follows these speaker notes.] We still need to explain to people that a domain is not a Web site.
Comments from Mockapetris: [story of assigning top-level country domain codes] "We don't want to get into the business of deciding who a country is" [Postel?]. On the introduction of the dotcom suffix, we said "if it doesn't catch on we can always delete it!".
Dennis Jennings: We should have as many gTLDs as we can. Issues for TLDs: how to resolve conflicts? How to deal with offensive names? Whether to allow corporate strings, such as Coca-Cola, or single-character gTLDs? Issues for IDNs: browsers breaking. Does the idea of country codes in, for instance, Cyrillic make sense? Why should France not control .france in Chinese? Are ccTLDs properties of the sovereign territory or of ICANN? What influences would other languages have on, for instance, Cyrillic TLDs? We need to deal with multiple languages and scripts, but you can't do both at once – so which first?
Jonathan Zittrain: The Internet works in practice but not in theory! If we were to design the Internet for a global audience today it would look nothing like it does – for instance, the idea of best-effort routing. The philosophy of the pioneers was rough consensus and running code. Votes were based on a hum! The assumption was that people are reasonable and nice, leading to no requirement for a login to the Internet. The concept of Requests for Comments (RFCs) that end up becoming standards yet are no longer subject to comment. Someone [Jon Postel] who wears sandals and doesn't want to make money running a system [is hard for people to countenance]. [Discusses Postel's authority to change top-level country suffixes, and tells the story of Postel 'hijacking' the root server.] ICANN was meant to be the answer to catastrophic success. Other possibilities: see the ITU's Focus Group on Next Generation Networks, which is not much of a break, seeking end-to-end quality of service and compliance with all regulatory requirements (such as for emergency services). With the end of end-to-end, we stand to lose Skype, etc.
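Both Chung and Jennings touched on internationalised domain names (IDNs). As a minimal sketch of the machinery involved, and not anything shown at the event: non-Latin labels are converted to an ASCII 'Punycode' form before ordinary DNS resolution. The example below uses Python's built-in idna codec (which implements the older IDNA 2003 rules) with a sample name of my own choosing.

```python
# Sketch only: how an internationalised label travels through the DNS.
# Python's built-in "idna" codec (IDNA 2003) converts it to the ASCII
# "xn--" Punycode form that resolvers actually look up.
label = "bücher.example"                 # illustrative name, not a real IDN TLD
ascii_form = label.encode("idna")        # what gets sent to the DNS
print(ascii_form)                        # b'xn--bcher-kva.example'
print(ascii_form.decode("idna"))         # round-trips back to 'bücher.example'
```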
Questions and Answers
Questions addressed the impact of spam; lousy management of DNS; IPv6; the myth of the openness of the processes described; the impact of governments allowing only ccTLDs; and the possibility of registrar failure. And Bill Manning recounted the story of the Internet explosion, and concluded that we can't ignore constituencies, such as government, just because they don't play by 'our' rules.
Mockapetris noted that you don't know if something is spam until you read it, but that we could authenticate and prioritise mail. DNS is ready for IPv6, he said, but it [IPv4?] is a victim of its own success. He encouraged people to demand openness from ICANN. St Amour argued that you can't change too fast, as the intelligence is at the edge of the network and you can't mandate change, and that we should aim for solutions with 'least surprise'. Jennings argued that ICANN is the most open organisation you will find. He asserted that there will be failures in the system and that we have yet to test some failure scenarios, such as the reclamation of a TLD. Roberts said that progress on the Internet will be a product of political consensus, and noted that the old [more informal] model is just 'memory lane'.
In conclusion St Amour noted that the Internet wouldn't have developed if scrutiny had been as wide and public as it is today and asked the audience to "Remain open to the absurdity and possible lunacy of the Internet". A number of panelists endorsed her sentiments.
Reflections
The proliferation of TLDs seems to be both problematic and academic. For instance, the .mobi TLD is a workaround for a failure of Web technology: the failure to code and design pages that at least 'fail gracefully' on mobile browsers and, at best, identify the browser agent and deliver code optimised for that device. (The proliferation of 'm.' hosts, such as m.twitter.com, is another symptom of the same problem.) But perhaps I am just stuck in an out-of-date 'perfect' model of Web code serving.
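For what that 'perfect' model might look like, here is a minimal sketch, assuming a plain Python WSGI server and some made-up User-Agent markers; it illustrates agent-based delivery in general, not anything .mobi or any particular site actually does.

```python
# Sketch: serve mobile-optimised or desktop markup from the same address by
# inspecting the User-Agent header, instead of maintaining a separate TLD or
# "m." host. The substring checks below are illustrative, not robust detection.
from wsgiref.simple_server import make_server

MOBILE_HINTS = ("Mobile", "iPhone", "Opera Mini", "Symbian")  # assumed markers

def app(environ, start_response):
    user_agent = environ.get("HTTP_USER_AGENT", "")
    is_mobile = any(hint in user_agent for hint in MOBILE_HINTS)
    body = (b"<p>Lightweight mobile page</p>" if is_mobile
            else b"<p>Full desktop page</p>")
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8"),
                              ("Vary", "User-Agent")])  # same URL, varied output
    return [body]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```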
I was also struck by Jennings's comment that we should have as many gTLDs as we can, and Chung's about the impact of multiple TLDs pointing to the same site. As Mockapetris pointed out, we now have Google as our [ad hoc] Web database. Google has delivered what companies such as RealNames tried and failed to deliver: natural-language shortcuts to Web sites. The Inquisitor plug-in for Safari (the default browser on MacOS) gets even closer to this with its eerie real-time prediction of the site for which one is searching.
But the reality is that the more TLDs one has pointing to the same site, the worse Google PageRank is likely to work (unless Google has developed some amazing TLD aggregation algorithm), and the more difficult it will be to find sites using this new navigation model. In addition, more and more material is encountered not on the owner's site but in other spaces to which it is syndicated, such as Facebook, making TLDs less important. We are also circumventing TLDs in other ways, for instance by using barcodes read by mobile phones to get to online information. (See Google's Newspaper Ads: Big Hopes For Small Barcodes, Silicon Alley Insider, January 29, 2008.)
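To make the consolidation point concrete, a small sketch with placeholder domain names of my own: if several TLD variants serve the same content, redirecting them all to one canonical host leaves search engines with a single site to rank rather than several duplicates.

```python
# Sketch: collapse multiple TLD variants of one site onto a single canonical
# URL, as a permanent (301) redirect would. All domain names are placeholders.
CANONICAL_HOST = "www.example.com"
TLD_VARIANTS = {"example.com", "example.net", "example.org", "example.asia"}

def canonical_url(host: str, path: str = "/") -> str:
    """Return the one URL to which any TLD variant should redirect."""
    bare_host = host.lower().removeprefix("www.")
    if bare_host in TLD_VARIANTS:
        return f"http://{CANONICAL_HOST}{path}"
    return f"http://{host}{path}"  # unknown hosts pass through unchanged

print(canonical_url("example.asia", "/about"))  # -> http://www.example.com/about
```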
Facebook also figures in the discussion of junk mail, having provided a way of ensuring one only receives communications from trusted parties. (As an aside, I agreed with Mockapetris's comment that "you don't know if something is spam until you read it".) While the current Facebook model has its limitations, some kind of social network authentication is likely to be a more important tool in dealing with junk mail than playing around at the DNS level.
Overall, this event was very valuable: it was informative, and it stimulated me to think in new ways about something we take for granted, the Domain Name System.