X-Originating-IP: [188.8.131.52]
From: "Ethan Zuckerman" email@example.com
To: firstname.lastname@example.org
Bcc:
Subject: 70 hops
Date: Fri, 14 Mar 2003 13:13:31 -0500
X-OriginalArrivalTime: 14 Mar 2003 11:13:31.0535 (UTC) FILETIME=[471B39F0:01C2D6B0]
X-Loop-Detect: 1

Hey Andrew - Checking Hotmail from my office in Accra - just got your email from Mongolia. Glad you're enjoying Ulaanbaatar. If I'm counting correctly, receiving and reading your email involved a minimum of 70 computers in 5 nations - makes you realize just how cool the net really is!

Take care, -E
Reply-To: email@example.com
From: "Andrew McLaughlin" firstname.lastname@example.org
To: "Ethan Zuckerman" email@example.com
Subject: 70 hops
Date: Fri, 14 Mar 2003 08:59:23 -0500
X-Mailer: Microsoft Outlook IMO, Build 9.0.2416 (9.0.2911.0)
Importance: Normal
X-Loop-Detect: 1

Ethan - 70 computers - really! If my count is correct, that means that there are at least x organizations or entities involved with our email exchange. No wonder Internet law is so complicated! -Andrew
Fifty years ago, communication between Ghana and Mongolia would have taken months and traveled by surface mail. Ten years ago, it would have involved international phone calls costing several US dollars a minute and required the intervention of international operators to make the call possible. Now Ethan and Andrew are able to communicate over immense distances, across dozens of national borders, with near-zero cost, no human assistance and mere seconds of lag time between message transmission and reception. What happened? And how is this possible?
The Internet, and its attendant communication miracles, is based on a key principle of network engineering: Keep It Simple, Stupid (KISS). Every computer connected to the Internet is capable of doing a few very simple tasks very quickly. By linking millions of comparatively simple systems together, complex functionality is achieved.
At the heart of any Internet transmission - sending an email, requesting a web page, downloading an audio or video file - is the Internet Protocol (IP). Invented in 1974 by Vint Cerf and Robert Kahn, IP is an addressing scheme that gives every computer connected to the Internet a unique address, plus a description of the "packets" of data that can be delivered to these addresses. The protocol - explained in excruciating detail at http://www.faqs.org/rfcs/rfc791.html - boils down to two simple rules: every machine on the network gets a unique address, and all data travels in packets stamped with a source and a destination address.
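A rough sense of what those rules look like on the wire can be had in a few lines of code. This is an illustrative sketch, not production networking code: it builds and parses a minimal 20-byte IPv4 header along the lines RFC 791 describes, with the checksum left unset and the addresses and TTL invented for the example.

```python
import struct
import socket

def build_ipv4_header(src: str, dst: str, payload_len: int) -> bytes:
    """Build a minimal 20-byte IPv4 header (no options), per RFC 791."""
    version_ihl = (4 << 4) | 5        # version 4, header length 5 * 32-bit words
    total_length = 20 + payload_len
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl,
        0,                            # type of service
        total_length,
        0,                            # identification
        0,                            # flags + fragment offset
        64,                           # TTL: decremented by each router
        6,                            # protocol 6 = TCP
        0,                            # checksum (left zero in this sketch)
        socket.inet_aton(src),        # 32-bit source address
        socket.inet_aton(dst),        # 32-bit destination address
    )

def parse_ipv4_header(header: bytes) -> dict:
    """Pull the interesting fields back out of a raw 20-byte header."""
    fields = struct.unpack("!BBHHHBBH4s4s", header[:20])
    return {
        "version": fields[0] >> 4,
        "ttl": fields[5],
        "src": socket.inet_ntoa(fields[8]),
        "dst": socket.inet_ntoa(fields[9]),
    }

hdr = build_ipv4_header("10.0.0.1", "10.0.0.2", payload_len=100)
print(parse_ipv4_header(hdr))
```

Every router between Mongolia and Ghana makes its forwarding decision by reading exactly these fields - nothing more.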
Because IP is so simple, there are lots of useful features not included in the protocol. One of these key features is "guaranteed delivery". Using "pure" IP, a message sent from one computer to another would be broken up into a number of packets and the first computer would attempt to deliver these packets to the second machine. IP wouldn't guarantee that these packets arrived in the correct order, or that they arrived at all. That's the job of another protocol, TCP (Transmission Control Protocol). TCP sits "on top" of IP and ensures that all the packets sent from one machine to another are received and assembled in the correct order. Should any of the packets get "dropped" during transmission, TCP asks the sending machine to resend the appropriate packets, acknowledging each one as it arrives.
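TCP's core trick - resend until acknowledged, then reassemble by sequence number - can be shown with a toy simulation. To be clear, this models only that one idea; real TCP adds windowing, timers and congestion control on top:

```python
import random

def send_with_retransmission(packets, loss_rate=0.3, seed=42):
    """Toy model of TCP's reliability layer: each numbered packet is
    resent until the (simulated) receiver acknowledges it, then the
    message is reassembled in sequence order."""
    rng = random.Random(seed)             # deterministic "network" for the demo
    received = {}
    for seq, data in enumerate(packets):
        while seq not in received:        # retransmit until this packet is ACKed
            if rng.random() > loss_rate:  # packet survived the lossy network
                received[seq] = data      # receiver ACKs this sequence number
    # reassemble in order, no matter how chaotic delivery was
    return [received[seq] for seq in sorted(received)]

message = ["Hi ", "Ethan, ", "I'm ", "in ", "Ulaanbaatar"]
print(send_with_retransmission(message) == message)   # True, even with 30% loss
```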
Why not just build delivery guarantees into IP, combining TCP and IP? Oddly enough, there are applications where it's less important that you receive all the data than it is that you receive the data as quickly as possible. If you're receiving streamed audio or video, you'd prefer to have a decrease in the quality of your signal than have the stream stop altogether while dropped packets get resent. Early net architects were bright enough to anticipate this sort of situation and created a TCP alternative called UDP (User Datagram Protocol). While orders of magnitude less common than TCP, it's an important part of core Internet protocols.
A ludicrously informative tutorial on IP, TCP, UDP and the basics of IP routing is available in RFC 1180 - while written in the "pre-web" Internet (1991), IP has not changed substantially since its introduction, so the document remains a terrific introduction. (I plan to add a sidenote on RFCs and a pointer to http://www.rfc-editor.org/overview.html at some point...) Okay, so that's TCP/IP. So what? Why has the protocol gained such widespread acceptance? And how does it help us get an email from Mongolia to Ghana?
Three reasons why IP is incredibly cool: efficiency, medium independence, and application support.
When we think of communications, we tend to think of the telephone. In telephony, we open a "circuit" between two people. This circuit allows communication in both directions - i.e., I can speak as well as hear you speak. With certain exceptions, it's private, and assuming nothing fails, it's got guaranteed availability for an unlimited period of time. All of these things are desirable, especially when you're calling your significant other halfway across the country.
These desirable features are a big part of the reason circuit-based communications are, from a networking standpoint, incredibly inefficient. In a telephone call, you've commandeered a piece of wire (or, more likely, a piece of a fiberoptic cable) connecting you and the other party. No one else gets to use those wires for the time you're tying them up. Even worse, you're not transmitting data the whole time! When you're listening to the other person talk, you're not taking advantage of the circuit's capability to carry data bidirectionally. And during pauses between sentences, words or phonemes, you're not transmitting data at all. How selfish of you!
In comparison to telephony, IP is an extremely efficient protocol. On the same underutilized piece of copper carrying a phone call, hundreds of email exchanges can occur in the same period of time. Because Internet traffic has been packetized, there's no need to occupy a circuit for the full duration of an exchange. Instead, you can use the circuit just for the milliseconds needed to transmit the packet. And because each packet has a unique source and destination address embedded in the header, simultaneous conversations can be interleaved on the same circuit without interfering with one another. One way to understand just how efficient packetizing data can be is to consider Voice over IP. By packetizing and compressing voice traffic, VoIP is able to provide up to six voice circuits in the same bandwidth as a traditional telephone line (56kbps). (Check out this VoIP bandwidth calculator for a clearer sense of the parameters involved with compressing voice traffic.)
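To make the arithmetic concrete, here is a rough sketch of the kind of calculation those VoIP bandwidth calculators perform. The numbers are illustrative assumptions, not measurements: a hypothetical 8 kbps voice codec, a packet every 20 milliseconds, and 40 bytes of IP/UDP/RTP headers per packet (or around 4 bytes when RTP header compression is in use):

```python
def voip_bandwidth_bps(codec_bps=8000, packet_interval_s=0.02, header_bytes=40):
    """Bandwidth for one voice call: codec payload per packet, plus
    per-packet IP/UDP/RTP header overhead, times packets per second."""
    payload_bits = codec_bps * packet_interval_s      # voice bits in each packet
    packet_bits = payload_bits + header_bytes * 8     # add the header overhead
    packets_per_second = 1 / packet_interval_s
    return packet_bits * packets_per_second

line_bps = 56_000
plain = voip_bandwidth_bps()                     # full 40-byte headers
compressed = voip_bandwidth_bps(header_bytes=4)  # with RTP header compression
print(int(line_bps // plain), "calls without compression")   # 2
print(int(line_bps // compressed), "calls with compression") # 5
```

Note how the headers, not the voice itself, dominate the cost of an uncompressed call - which is why header compression is what gets you from two calls toward the five or six the text mentions.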
We've been talking about using Internet Protocol over phone lines. And, indeed, most Internet traffic is carried over copper or fiberoptic phone lines. But IP is completely medium independent. The Internet Protocol can be implemented "on top" of any form of communication. Internet links via radio and microwave are becoming increasingly common - much of the developing world receives Internet connectivity via satellite links, and WiFi links have become standard equipment at many US universities and businesses. Less common, but fascinating, is the practice of transmitting data via lasers and "open air optics" - i.e., through the air, rather than through glass fiber. Lawrence Livermore laboratories recently announced a system capable of transmitting 2.5 Gbps (the equivalent of 40,000 simultaneous phone calls) over a single laser beam spanning 28 kilometers...
For proof of the fact that IP can run on absolutely ANY communications infrastructure, it's useful to consider RFC 1149, titled "A Standard for the Transmission of IP Datagrams on Avian Carriers" - in other words, instructions for running an Internet using carrier pigeons. A successful implementation of the standard suggested in RFC 1149, CPIP (Carrier Pigeon Internet Protocol) was recently carried out by network administrators in Bergen, Norway. While no one is suggesting that CPIP is likely to be a major factor in the growth of the global Internet, it's helpful in demonstrating that any new technological development is likely to be interoperable with the existing network.
The fact that IP is efficient and medium independent wouldn't matter to us if there weren't so many useful applications built on top of it. Every application we think of as an Internet service is built on top of IP: email, FTP, web browsing, peer-to-peer file sharing. By building new applications that rely on IP, developers are able to greatly hasten the development process. If Shawn Fanning had needed to design the networking protocols that made Napster possible, it's unlikely the application would ever have been created. And, without hundreds of millions of potential users already connected to the Internet, it's unlikely that a network-based application like Napster would ever have reached critical mass. The importance of the ease of creating applications that rely on IP and the ability to leverage an existing userbase cannot be overstated.
Armed with our new understanding of TCP/IP, we return to our story of globetrotting technologists:
(Feel free to consult the handy network diagram that depicts the transactions documented below.)
Andrew is in Ulaanbaatar, relaxing with a cup of airag (fermented mare's milk) and checking email at the Chinngis Khan cybercafe. He's carrying his laptop, which he's attached to the Ethernet hub in the cafe. His machine automatically requests an IP address, using a protocol called DHCP, from the Windows NT "gateway" server at the cafe. The gateway machine is connected to a local ISP, Magicnet, via a 36 kbit-per-second dial-up modem. The gateway machine has a unique IP assigned by Magicnet's DHCP server, and it's using Network Address Translation to assign IPs to Andrew's machine. Andrew's machine thinks it's got the IP address 192.168.0.5; the rest of the world thinks that Andrew's machine has the IP of the gateway machine, 184.108.40.206. The gateway machine receives all the Internet traffic for the cybercafe and distributes the appropriate packets to the machines that request them.
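The gateway's Network Address Translation trick boils down to a lookup table. This toy model is an illustration only - the addresses are examples (203.0.113.x is a documentation-only range), and a real NAT also tracks protocols and connection timeouts:

```python
class ToyNAT:
    """Toy NAT gateway: rewrites private source addresses to the gateway's
    public IP and remembers the mapping so replies find their way back in."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000
        self.table = {}       # (private_ip, private_port) -> public port
        self.reverse = {}     # public port -> (private_ip, private_port)

    def outbound(self, src_ip, src_port):
        """Rewrite an outgoing packet's source; return what the world sees."""
        key = (src_ip, src_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.reverse[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.table[key])

    def inbound(self, public_port):
        """Map a reply arriving at the public address back to the laptop."""
        return self.reverse[public_port]

nat = ToyNAT("203.0.113.7")
print(nat.outbound("192.168.0.5", 1025))   # what geekcorps.org sees
print(nat.inbound(40000))                  # where the reply gets delivered
```

This is why Andrew's laptop can believe it is 192.168.0.5 while the rest of the Internet only ever sees the gateway's address.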
Andrew is running Microsoft Outlook, a mail program that supports three protocols: SMTP, POP3 and IMAP. SMTP - Simple Mail Transfer Protocol - is a protocol used for sending mail. When Andrew types a message to Ethan, he's giving Outlook the parameters it needs to send a series of SMTP commands to his mailserver, a machine at Harvard named cyber-mail.law.harvard.edu. When he hits send, his laptop starts attempting to send a series of packets to cyber-mail with instructions on where they should ultimately be sent, and the contents of the message itself. While Outlook is smart enough to format this message into valid SMTP commands, it leans on part of the Windows operating system - the TCP/IP stack - to translate the SMTP messages into valid IP packets.
Andrew's packets go to the gateway machine through the Ethernet, to the Magicnet server via a modem, and then through a gateway machine at Magicnet. In the next seven tenths of a second, they take an epic journey through 23 machines in Mongolia, China, Hong Kong, San Jose, New York, Washington DC and Boston. Here's the itinerary:
1  cobalt03.mn (Datacomm Mongolia) (220.127.116.11)
2  China Satnet (18.104.22.168)
3  China Satnet (22.214.171.124)
4  SATNETEX - China Digital satNet Ltd. (126.96.36.199)
5  DigitalNetworkAlliance.GW.opentransit.net (188.8.131.52)
6  P2-1-0.HKGAR1.Hong-kong.opentransit.net (184.108.40.206)
7  P2-3.HKGBB2.Hong-kong.opentransit.net (220.127.116.11)
8  P13-0.SJOCR2.San-jose.opentransit.net (18.104.22.168)
9  P4-0.SJOCR1.San-jose.opentransit.net (22.214.171.124)
10 P5-0.NYKCR2.New-york.opentransit.net (126.96.36.199)
11 P4-0.NYKCR3.New-york.opentransit.net (188.8.131.52)
12 So2-0-0.ASHBB1.Ashburn.opentransit.net (184.108.40.206)
13 dcp-brdr-01.inet.qwest.net (220.127.116.11)
14 dca-core-01.inet.qwest.net (18.104.22.168)
15 dca-core-03.inet.qwest.net (22.214.171.124)
16 jfk-core-03.inet.qwest.net (126.96.36.199)
17 jfk-core-01.inet.qwest.net (188.8.131.52)
18 bos-core-02.inet.qwest.net (184.108.40.206)
19 bos-edge-02.inet.qwest.net (220.127.116.11)
20 Harvard router (18.104.22.168)
21 border-gw-ge-wan3-1.fas.harvard.edu (22.214.171.124)
22 core-1-gw-vl415.fas.harvard.edu (126.96.36.199)
23 core-nw-gw-vl216.fas.harvard.edu (188.8.131.52)
(How the heck do you get this data? Learn more about "traceroute")
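Traceroute gets this data by exploiting the TTL (time-to-live) field in the IP header: it sends probes with TTL 1, then 2, then 3, and each router that decrements the TTL to zero reveals itself by sending back an ICMP "time exceeded" message. A toy simulation of that logic (hop names are invented; a real traceroute sends actual probe packets):

```python
def traceroute_sim(path, destination):
    """Simulate traceroute: probes with TTL 1, 2, 3, ... Each hop that
    decrements the TTL to zero reveals itself with a 'time exceeded' reply."""
    discovered = []
    for ttl in range(1, len(path) + 1):
        remaining = ttl
        for hop in path:
            remaining -= 1                 # every router decrements the TTL
            if remaining == 0:
                discovered.append(hop)     # this hop sends ICMP Time Exceeded
                break
        if discovered[-1] == destination:  # reached the end; stop probing
            break
    return discovered

# invented hop names standing in for the itinerary above
path = ["cobalt03.mn", "satnet-beijing", "opentransit-hk", "qwest-bos", "harvard-gw"]
print(traceroute_sim(path, "harvard-gw"))
```

Each probe travels the whole path up to its TTL, which is why a 23-hop trace actually costs dozens of round trips.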
These twenty-three computers are routers. Their job is extremely simple - they're supposed to move packets to a neighboring machine as quickly as possible. Because all they do is move packets, they're able to process millions of packets a minute. Each router has a "routing table", a set of rules that determine which machine to forward packets to based on the final destination of a packet. The actual construction of routing tables is a fascinating subject, far beyond the scope of this discussion - an excellent introduction from networking engineers at Agilent is available here.
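The rule most routing tables apply when they consult those entries is "longest-prefix match": when several routes cover a destination, the most specific one wins. A minimal sketch, with hypothetical prefixes and next-hop names:

```python
import ipaddress

def next_hop(routing_table, destination):
    """Pick the route whose prefix matches the destination most
    specifically (longest-prefix match); 0.0.0.0/0 is the default route."""
    dest = ipaddress.ip_address(destination)
    candidates = [
        (net.prefixlen, hop)
        for net, hop in routing_table
        if dest in net
    ]
    return max(candidates)[1]   # longest matching prefix wins

# hypothetical table for an edge router; names and prefixes are examples
table = [
    (ipaddress.ip_network("0.0.0.0/0"), "upstream-provider"),
    (ipaddress.ip_network("128.103.0.0/16"), "harvard-border"),
    (ipaddress.ip_network("128.103.64.0/24"), "law-school-lan"),
]
print(next_hop(table, "128.103.64.10"))   # law-school-lan
print(next_hop(table, "8.8.8.8"))         # upstream-provider
```

A core router does essentially this lookup, millions of times a minute, for every packet that passes through.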
Let's unpack this itinerary. Computer 1, cobalt03.mn, is one of the computers Magicnet uses to route traffic out of Mongolia. cobalt03 is attached to a high-capacity phone line that connects Ulaanbaatar and China Satnet's NOC (Network Operations Center) in Beijing. China Satnet is a Network Service Provider, a company that sells internet capacity to Internet Service Providers, like Magicnet. They, in turn, buy connectivity from backbone providers, companies that operate the huge fiberoptic cables that link together continents. Satnet routes packets from computer #2, which handles traffic to and from Mongolia, to computer #5, which routes traffic from Satnet to and from Opentransit, the backbone arm of France Telecom. Opentransit sees that the packets need to get to the US, specifically to a network served by Qwest, and plans a route for the packets. They head through Hong Kong (#6, #7), across the Pacific to San Jose (#8, #9), across the continent to New York (#10, #11) and then to computer #12 in Ashburn, Virginia.
Computers #12 and #13 are worth special note. They live in a building owned by Equinix, a company that specializes in internet peering. Network service and backbone providers need to transfer data from one network to another. Historically, this happened at places called Metropolitan Area Exchanges (MAEs), where dozens of networks terminated their lines and traded packets. As the net grew, the MAEs grew unwieldy - the amounts of data that needed to be exchanged overwhelmed the capacity of switches and led to very slow data transfer. More importantly, large network providers quickly learned that MAEs put them at an economic disadvantage. Imagine that a tiny internet provider - Joe's Internet Backbone (JIB) - wants to connect to MCI/WorldCom at a MAE. There are a few hundred computers attached to Joe's backbone; there are several million attached to the MCI backbone. It's significantly more likely that a user of Joe's network will want to reach a site on the MCI network than vice versa. As a result, if MCI connects to JIB, it will end up carrying most of Joe's traffic, and absorbing the costs associated with that carriage.
To avoid the congestion at the MAEs and to escape the MCI/JIB situation, network providers started moving to a model called "private peering". In private peering, two networks agree to interconnect. They agree on a place to put their routers, they each buy machines and connect them via fiber or gigabit ethernet. And they usually strike a financial deal that compensates the larger network for its increased costs of carriage. If networks have a similar number of users, they might decide to interconnect without exchanging money; if one network is substantially smaller, it may pay multiple millions of dollars for the privilege of interconnecting. Network providers work extremely hard to keep the fiscal details of their peering arrangements secret, so it is very difficult to know who's paying whom and how much.
As of machine #13, we're now on Qwest's network. The packets fly through a set of machines in the Washington DC area, near JFK airport in New York City and then Boston. These machines have the word "core" in their names, implying they are core nodes in Qwest's network - tremendously fast computers attached to enormous communications lines. Machine #19 is an "edge" machine - our packets have now gotten off the Qwest backbone and are to be routed to a machine in the Boston area. Machine #20 is owned by Harvard University. It interconnects the Harvard and Qwest networks. In a very real sense, Harvard and Qwest are interconnected at this point much the way Qwest and Opentransit were connected in Virginia. However, Harvard isn't a peer to Qwest - it doesn't run its own backbone extending anywhere beyond Cambridge - and hence Harvard absorbs all the costs associated with the connectivity and with the "peering" point. Harvard does have an awfully big network, though, and distinguishes between edge machines (#21) and core machines (#22, #23).
Just a quick reminder - those last four paragraphs? Seven tenths of a second. The duration of a single human heartbeat.
Machine #24 in this chain is also an edge machine, cyber-mail.law.harvard.edu. Unlike the core routers on the network, this machine has numerous jobs beyond the forwarding of packets. One is to run a mailserver, a piece of software that distributes incoming email to users and routes outgoing email to other mailservers. When Andrew's packets are received by the mailserver, it notes that the email to Ethan needs to be sent to geekcorps.org. It starts sending IP packets to geekcorps.org, striking up a conversation:
geekcorps.org: 220 SMTP Service Ready
cyber-mail: HELO cyber-mail.law.harvard.edu
geekcorps.org: 250 OK
cyber-mail: MAIL FROM:<firstname.lastname@example.org>
geekcorps.org: 250 OK
cyber-mail: RCPT TO:<email@example.com>
geekcorps.org: 250 OK
cyber-mail: DATA
geekcorps.org: 354 Start mail input; end with <CRLF>.<CRLF>
cyber-mail: Hi Ethan, I'm here in Ulaanbaatar... etc.
cyber-mail: .
geekcorps.org: 250 OK
cyber-mail: QUIT
geekcorps.org: 221 Service closing transmission channel

Downright mannerly, isn't it? Keep in mind that each of those messages is contained within an IP packet. And each IP packet has to wend its complex way from cyber-mail to geekcorps.org. This connection spans 18 computers, three networks and takes 12 hundredths of a second.
1  core-nw-gw-vl216.fas.harvard.edu (184.108.40.206)
2  core-1-gw-vl415.fas.harvard.edu (220.127.116.11)
3  border-gw-ge-wan3-1.fas.harvard.edu (18.104.22.168)
4  22.214.171.124 (126.96.36.199)
5  bos-edge-02.inet.qwest.net (188.8.131.52)
6  bos-core-02.inet.qwest.net (184.108.40.206)
7  jfk-core-01.inet.qwest.net (220.127.116.11)
8  jfk-core-02.inet.qwest.net (18.104.22.168)
9  ewr-core-01.inet.qwest.net (22.214.171.124)
10 ewr-core-03.inet.qwest.net (126.96.36.199)
11 ewr-brdr-01.inet.qwest.net (188.8.131.52)
12 p4-1-0-0.r01.nwrknj01.us.bb.verio.net (184.108.40.206)
13 p16-1-1-1.r21.nycmny01.us.bb.verio.net (220.127.116.11)
14 p16-1-0-1.r21.asbnva01.us.bb.verio.net (18.104.22.168)
15 p64-0-0-0.r20.asbnva01.us.bb.verio.net (22.214.171.124)
16 p16-3-0-0.r00.stngva01.us.bb.verio.net (126.96.36.199)
17 ge-1-1.r0709.stngva01.us.wh.verio.net (188.8.131.52)
18 * * *
19 www.geekcorps.org (184.108.40.206)

The three asterisks signify a machine that failed to identify itself to traceroute - in this case, a Verio router in Sterling, Virginia, where the Verio data center is located. The Geekcorps machine is actually a small part of a large server owned by Verio; that single machine provides web, ftp and mail services for several dozen separate domain names. The mailserver on the Verio machine receives the email from Andrew and appends it to a "spool file", a text file containing all the uncollected email for a particular user. The mailserver now considers its job done - it couldn't care less whether Ethan retrieves the mail, so long as it's been correctly spooled.
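The SMTP exchange above can be mimicked with a toy server function. This is a hedged sketch covering only the handful of replies in this particular conversation - not a real mailserver, and the addresses are placeholders:

```python
def toy_smtp_server(command):
    """Reply to a minimal subset of SMTP commands, echoing the codes
    seen in the cyber-mail / geekcorps.org dialogue above."""
    verb = command.split(":")[0].split()[0].upper()
    replies = {
        "HELO": "250 OK",
        "MAIL": "250 OK",
        "RCPT": "250 OK",
        "DATA": "354 Start mail input; end with <CRLF>.<CRLF>",
        "QUIT": "221 Service closing transmission channel",
    }
    # message body lines get no reply until the final "."; 250 stands in here
    return replies.get(verb, "250 OK")

# placeholder addresses, for illustration only
session = [
    "HELO cyber-mail.law.harvard.edu",
    "MAIL FROM:<andrew@example.org>",
    "RCPT TO:<ethan@example.com>",
    "DATA",
    "QUIT",
]
for cmd in session:
    print(cmd, "->", toy_smtp_server(cmd))
```

The numeric codes are the machine-readable part of the dialogue; the polite English after each code is purely for humans reading the transcript.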
Ethan checks his email from Ghana in a somewhat convoluted fashion. Instead of pointing a mail client like Outlook at his mail account on Geekcorps, he points a web browser at Hotmail. His web browser speaks a protocol called HTTP, hypertext transfer protocol, and when he attempts to load www.hotmail.com, he's actually sending a message that looks like this to the Hotmail server:
GET /index.html
From: 220.127.116.11
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows 2000)
The server responds with something like this:
HTTP/1.1 200 OK
Date: Fri, 14 Mar 2003 3:35:28 PDT
Content-Type: text/html
Content-Length: 1354

<html> <title>Welcome to Hotmail! etc...
The webserver's response is a header, followed by some HTML - Hypertext Markup Language. All webpages are written in this language. Ethan's web browser knows how to translate HTML into page-layout instructions, and it takes this raw text and turns it into a webpage. Within the HTML are references to additional files, generally images. Ethan's browser composes GET requests for each of these images and places them in the appropriate place on the screen when they arrive.
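Splitting such a response into its status line, headers and body is simple enough to sketch directly (the response text here is invented for illustration):

```python
def parse_http_response(raw: str):
    """Split a raw HTTP response into status code, headers, and body.
    Headers end at the first blank line (CRLF CRLF); the body follows."""
    head, _, body = raw.partition("\r\n\r\n")
    lines = head.split("\r\n")
    status_code = int(lines[0].split()[1])   # "HTTP/1.1 200 OK" -> 200
    headers = dict(line.split(": ", 1) for line in lines[1:])
    return status_code, headers, body

# a made-up response in the same shape as the Hotmail reply above
raw = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "Content-Length: 42\r\n"
    "\r\n"
    "<html><title>Welcome to Hotmail!</title></html>"
)
status, headers, body = parse_http_response(raw)
print(status, headers["Content-Type"])   # 200 text/html
```

A browser does this parse for the page itself and then again for every image the HTML references.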
Once again, these polite little exchanges are taking place through IP packets routed around the world. Best guess for the routing of these packets is approximately 18 hops in eight tenths of a second. (Note - I have the capability of tracing routes from Harvard's machines to machines across the net, so those routes are quite close to being accurate. For other routes, I'm using a combination of techniques to guess at the actual routing of packets. There may be egregious errors in my routing logic that could lead to these paths being inaccurate representations of the path packets actually take.) Because a webpage is built of several files - the HTML file and the associated image files - and because many of these files are too big to fit in just one packet, there are dozens of transactions involved with assembling a webpage and it can take several seconds to load:
1.  vlan701.law13-msfc-b.us.msn.net (18.104.22.168)
2.  pos10-0.law7-gsr-a.us.msn.net (22.214.171.124)
3.  126.96.36.199 (Level 3 - probably private peering router with MSN)
4.  so-7-0-0.gar1.SanJose1.Level3.net (188.8.131.52)
5.  so-0-0-0.mp1.SanJose1.level3.net (184.108.40.206)
6.  ae0-54.mp2.NewYork1.Level3.net (220.127.116.11)
7.  ae0-52.mp2.NewYork1.Level3.net (18.104.22.168)
8.  ae0-56.mp2.NewYork1.Level3.net (22.214.171.124)
9.  gige9-1-52.hsipaccess1.NewYork1.Level3.net (126.96.36.199)
10. gige9-0-53.hsipaccess1.NewYork1.Level3.net (188.8.131.52)
11. gige9-1-54.hsipaccess1.NewYork1.Level3.net (184.108.40.206)
12. unknown.Level3.net (220.127.116.11)
13. host-66-133-0-22.verestar.net (18.104.22.168)
14. unknown.Level3.net (22.214.171.124)
15. unknown.Level3.net (126.96.36.199)
16. ch-leuk-in4.interpacket.net (188.8.131.52)
17. host-64-110-84-218.interpacket.net (184.108.40.206)
18. www.idngh.com (220.127.116.11)
Andrew's packets took a fairly conventional route from Mongolia to Cambridge, through fiber and copper phone lines. Ethan's packets are conveyed by radio waves as well. His laptop is connected to a gateway machine at his office in Accra. That machine communicates with a server at his ISP, IDN, via WiFi - Ghana's phone infrastructure is so poor that WiFi is an excellent alternative for bridging distances of under 10 kilometers. From IDN, the packets hitch a ride on a satellite owned by Verestar, a subsidiary of Interpacket, the NSP that provides service to IDN. Once the packets land in the US, they're routed through conventional circuits from Interpacket to Level 3 to MSN, Microsoft's ISP.
Ethan navigates through Hotmail and sends a request (using a new command: POST) to the Hotmail server to check his email at Geekcorps. A small program running on the Hotmail server reads the parameters that Ethan's set earlier (using other HTTP POST commands) and composes a message in the POP3 protocol to the mailserver running on geekcorps.org. They, too, have a polite little exchange:
Geekcorps: +OK POP3 server ready <geekcorps.org>
Hotmail: APOP ethan c4c9334bac560ecc979e58001b3e22fb
Geekcorps: +OK ethan's maildrop has 2 messages (320 octets)
Hotmail: STAT
Geekcorps: +OK 2 320
Hotmail: LIST
Geekcorps: +OK 2 messages (320 octets)
Geekcorps: 1 120
Geekcorps: 2 200
Geekcorps: .
Hotmail: RETR 1
Geekcorps: +OK 120 octets
Geekcorps: (the first message)
Geekcorps: .
Hotmail: DELE 1
Geekcorps: +OK message 1 deleted
Hotmail: RETR 2
Geekcorps: +OK 200 octets
Geekcorps: (the second message)
Geekcorps: .
Hotmail: DELE 2
Geekcorps: +OK message 2 deleted
Hotmail: QUIT
Geekcorps: +OK geekcorps POP3 server signing off (maildrop empty)

In other words, the mailserver at Geekcorps opens the spool file where Ethan's email lives. It tells Hotmail that there are two messages in the spool, 320 octets in total, then lists the size of each message. Responding to commands from the Hotmail server, it transfers each message and then deletes it from the spool. That ugly string of text after the "APOP ethan" command is a cryptographic hash of Ethan's password. It's a one-way hash, so anyone who intercepts this packet won't learn Ethan's password, but the Geekcorps server can validate it by running its own stored copy of the password through the same hash and comparing the results. And yes, again, these polite conversations are carried out through IP packets transmitted through thirteen machines in under a hundredth of a second.
from geekcorps.org to hotmail.com (a complete guess...)

1.  www.geekcorps.org
2.  unknown - boca raton, probably
3.  ge-1-1.r0709.stngva01.us.wh.verio.net (18.104.22.168) Verio Webhosting
4.  p16-3-0-0.r00.stngva01.us.bb.verio.net (22.214.171.124) Verio Backbone, Sterling, VA
5.  blah.blah.dnvrco01.us.bb.verio.net (126.96.36.199) Verio backbone, Denver, CO (guess)
6.  p4-1-2-0.r00.snjsca04.us.bb.verio.net (188.8.131.52)
7.  p16-0-1-0.r21.snjsca04.us.bb.verio.net (184.108.40.206)
8.  p16-1-1-2.r21.plalca01.us.bb.verio.net (220.127.116.11)
9.  p16-1-0-0.r00.plalca01.us.bb.verio.net (18.104.22.168)
10. 198.32.176.152 (22.214.171.124) Pacific Bell network exchange point in Marina Del Ray, CA
11. pos0-0.core1.pao1.us.msn.net (126.96.36.199)
12. pos6-1.paix-osr-a.us.msn.net (188.8.131.52)
13. pos12-0.law2-gsr-a.us.msn.net (184.108.40.206)
14. gig6-0-0.law5-rsp-b.us.msn.net (220.127.116.11)
15. hotmail.com
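A footnote on that ugly string after "APOP ethan": per RFC 1939, it's the MD5 hash of the server's greeting banner concatenated with the password - and the digest shown in the dialogue above happens to be the worked example from RFC 1939 itself (banner <1896.697170952@dbc.mtview.ca.us>, shared secret "tanstaaf"). A sketch of the computation:

```python
import hashlib

def apop_digest(server_banner: str, password: str) -> str:
    """APOP digest per RFC 1939: MD5 of the server's greeting banner
    concatenated with the shared secret. The password itself never
    crosses the wire; the server recomputes the same hash from its
    stored copy and compares."""
    return hashlib.md5((server_banner + password).encode()).hexdigest()

# the example values from RFC 1939
banner = "<1896.697170952@dbc.mtview.ca.us>"
digest = apop_digest(banner, "tanstaaf")
print("APOP ethan", digest)   # c4c9334bac560ecc979e58001b3e22fb
```

Because the banner changes with every connection, an eavesdropper who captures one digest can't replay it later - a neat trick for 1996-era security, though MD5 would not be chosen for this today.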
Another program at Hotmail takes the emails retrieved via POP3, formats them into HTML and delivers them to Ethan. And Andrew's message has reached Ethan through 70 or so intermediate computers in Mongolia, China, Hong Kong and the US. We're done, right?
(Andrew, you've forgotten more about DNS than I will ever know. Feel free to throw out this entire section, rewrite it, call me an idiot, etc...)
Not by a longshot. We've abstracted a complicated part of the process for the sake of simplicity - the conversion of hostnames to IP addresses. When Andrew sends an email to firstname.lastname@example.org, his computer doesn't know that geekcorps.org is actually located at 18.104.22.168. It needs to ask the Domain Name System what IP address is currently associated with geekcorps.org. When Andrew obtained an IP address via DHCP, he was assigned a pair of DNS servers to query. His laptop sends a DNS lookup request to one of these servers (there are two for redundancy, in case one server is unavailable). If the DNS server had previously looked up the IP address for geekcorps.org, it will be "cached", stored in a local table for quick lookup.
If the address isn't cached locally, Andrew's DNS server queries a rootserver to find out what DNS server has authority for geekcorps.org. Rootservers A-M are located around the world - two in Europe, one in Japan and the remainder in the US. They're operated by universities, consortia and huge networking corporations on a voluntary basis... and the Internet would come to a screeching halt without them. Rootserver M in Tokyo reports that authority over geekcorps.org is controlled by ns11a.verio-web.com located at 22.214.171.124.
Andrew's DNS server now queries the Verio DNS server, which reports that geekcorps.org - at the moment - is associated with 126.96.36.199. Andrew's DNS server caches the IP associated with geekcorps.org so that it doesn't have to perform another series of lookups immediately afterwards. The cache expires fairly quickly, though, to make it possible for the owner of the geekcorps.org domain to change what IP address it points to.
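This lookup-then-cache behavior can be sketched as a toy resolver. The sketch collapses the real multi-step hierarchy (root, registry, authoritative server) into a single table, and the address and TTL are made up for illustration:

```python
class ToyResolver:
    """Toy recursive DNS resolver: ask the root who has authority for a
    domain, ask that authoritative server for the address, then cache
    the answer with a time-to-live (TTL)."""

    def __init__(self, root_hints, ttl_seconds=300):
        self.root_hints = root_hints   # domain -> authoritative server's records
        self.cache = {}                # domain -> (address, expiry time)
        self.ttl = ttl_seconds
        self.full_lookups = 0

    def resolve(self, domain, now):
        cached = self.cache.get(domain)
        if cached and cached[1] > now:
            return cached[0]                   # answered from the local cache
        records = self.root_hints[domain]      # root: "who has authority?"
        address = records[domain]              # ask the authoritative server
        self.full_lookups += 1
        self.cache[domain] = (address, now + self.ttl)
        return address

# hypothetical records; 192.0.2.x is a documentation-only address range
resolver = ToyResolver({"geekcorps.org": {"geekcorps.org": "192.0.2.10"}})
print(resolver.resolve("geekcorps.org", now=0))    # full lookup
print(resolver.resolve("geekcorps.org", now=100))  # cache hit, no lookup
print(resolver.resolve("geekcorps.org", now=400))  # cache expired, fresh lookup
print(resolver.full_lookups)                       # 2
```

The TTL is exactly the tradeoff described above: a long TTL saves lookups, a short one lets the domain's owner repoint the name quickly.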
And yes, as you've guessed, all these DNS queries are polite, well-mannered exchanges carried out through IP packets, all of which need to be routed across the Internet. Count all the machines involved with these DNS lookups and Ethan and Andrew's exchange of email may well involve over 100 computers.
Here's where I'm hoping you'll start to take things over...
Imagine, for a moment, that Ethan and Andrew's exchange was not as innocent as it seems - perhaps, encoded within their messages, are key communications in their ongoing plan to sell crucial Berkman Center secrets to the North Korean government! When we prosecute Ethan and Andrew for espionage, how many entities are possibly involved as accessories to this crime, enabling this communication to take place?
The simple answer? Lots. At least ten backbone providers, NSPs and ISPs have routed packets related to the transaction. In each nation, those providers are licensed by one or more government entities. The spectrum allocated to the Verestar satellite is assigned by... etc, etc, etc. - you get to deal with institutions - you're better at it than I am. I think it's worth briefly describing ownership of key net standards, and the domain name registration system.