
Google’s next step in hardwiring the Internet: More location-sensitive DNS

By Scott M. Fulton, III, Betanews

If today’s Internet worked the way it was originally designed, where content took whatever route seemed most convenient at the time to reach its destination, there’s a good chance we’d already be in a state of gridlock today. As it turns out, global-scale load balancing has already been well under way for several years, with companies like Akamai providing edge caching services that move copies of frequently accessed content from a high-volume server geographically closer to the clients that access it.

Now, Google wants the entire Internet to be capable of implementing a concept that could make services similar to Akamai’s feasible at smaller scales everywhere. Earlier this week, the company announced it had submitted a formal proposal to the Internet Engineering Task Force that would enable authoritative nameservers — the principal directories in the Domain Name System — to ascertain more clearly where a request for a resolved address is coming from. That way, Google says, the DNS server can craft a response that can be routed more directly and expressly to its recipient. As it stands now, recursive resolvers — the DNS entities that more often face the public directly — tend to forward requests to nameservers using their own geography, rather than that of the original client.

This is not at all the same thing as caching content, but it could have a very similar impact: The information in a geographically centered DNS resolution could theoretically be used to help load-balance the Internet, building a more direct route to the content being requested from the servers whose names have been resolved.
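To make that concrete, here is a minimal Python sketch of how an authoritative nameserver could hand back a different answer depending on the client’s network. The addresses and the network-to-edge mapping are invented purely for illustration:

```python
import ipaddress

# Hypothetical mapping of client networks to nearby content servers
EDGE_SERVERS = {
    ipaddress.ip_network("203.0.113.0/24"): "198.51.100.10",  # e.g., a European edge
    ipaddress.ip_network("192.0.2.0/24"):   "198.51.100.20",  # e.g., a U.S. edge
}
DEFAULT_SERVER = "198.51.100.30"  # fallback when no regional match exists

def resolve_for_client(name, client_subnet):
    """Return the address an authoritative server might answer with,
    chosen by the client's (masked) network rather than the resolver's."""
    client = ipaddress.ip_network(client_subnet)
    for network, edge in EDGE_SERVERS.items():
        if client.subnet_of(network):
            return edge
    return DEFAULT_SERVER
```

A resolver forwarding its own address instead of the client’s would fall through to the lookup for the resolver’s region — which is exactly the sub-optimal behavior the proposal aims to fix.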

“To find the best reply for a given query, most nameservers use the IP address of the incoming query to attempt to establish the location of the end user,” begins Google’s proposal to the IETF. “Most users today, however, do not query the Authoritative Nameserver directly. Instead, queries are relayed by Recursive Resolvers operated by their ISP or third parties. When the Recursive Resolver does not use an IP address that appears to be topologically close to the end user, the results returned by those Authoritative Nameservers will be at best sub-optimal. This draft proposes a DNS protocol extension to enable Authoritative Nameservers to return answers based on the network address of the actual client, by allowing Recursive Resolvers to include it in queries.”
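The extension the draft describes boils down to a small option attached to the DNS query: an address family, a source prefix length, and only as many bits of the client’s address as that prefix allows. The Python sketch below encodes such an option by hand; the field layout follows the family/prefix/address scheme the draft describes, but the option code is illustrative (it corresponds to the IANA assignment eventually made for client-subnet, not necessarily the 2010 draft):

```python
import ipaddress
import struct

def build_client_subnet_option(client_ip, source_prefix):
    """Encode a client-subnet EDNS option carrying only the first
    `source_prefix` bits of the client's address.
    Layout: family (2 bytes), source prefix length (1), scope prefix
    length (1), then the truncated address."""
    # Zero out the host bits so only the network portion leaves the resolver
    network = ipaddress.ip_network(f"{client_ip}/{source_prefix}", strict=False)
    addr = network.network_address.packed[: (source_prefix + 7) // 8]
    family = 1  # 1 = IPv4
    payload = struct.pack("!HBB", family, source_prefix, 0) + addr
    OPTION_CODE = 8  # illustrative; the code IANA later assigned for client-subnet
    return struct.pack("!HH", OPTION_CODE, len(payload)) + payload
```

For a client at 203.0.113.57 with a /24 source prefix, only the three bytes 203.0.113 are transmitted — the final octet never leaves the resolver.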

It’s an appealing proposition, and the way Google has phrased it to the IETF, it would not be a privacy violation for the client: Requests passed on to nameservers would be required to mask the more granular portions of their sources’ IP addresses, so that the response centers on the client’s network neighborhood rather than the client’s location on a map.
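Under that scheme, every client in the same network block looks identical to the nameserver. A quick Python sketch of the masking — the /24 prefix length here is chosen for illustration, not mandated by the proposal:

```python
import ipaddress

def mask_for_dns(client_ip, prefix=24):
    """Drop the host bits so the nameserver sees a network, not a person."""
    return str(ipaddress.ip_network(f"{client_ip}/{prefix}", strict=False))

# Two different users on the same network become indistinguishable:
mask_for_dns("203.0.113.57")    # "203.0.113.0/24"
mask_for_dns("203.0.113.200")   # "203.0.113.0/24"
```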

But it could also end up being very lucrative for Google, whose own mapping services — especially as we now see them tested in Google’s first branded phone, the Nexus One — definitely want to know where you are. Just last month, the company launched its free public DNS service, with the altruistically stated aim of resolving the world’s DNS addresses faster. If the IETF adopts Google’s proposal, Google would operate some of the authoritative nameservers capable of gleaning DNS requesters’ general locations, even as their requests are passed up the chain by recursive resolvers.

Google states it will not leverage such a service to serve clients advertising, and there’s good reason to believe that if the IETF implemented it as Google proposes, it couldn’t do so if it tried. However, with Google also a prominent content server, it would have a much better idea of the geographic locations from which high-volume requests originate. That’s very valuable information for a worldwide entity that could create services that compete against Akamai — or, more accurately, that seek to render obsolete the services Akamai and its competitors perform.

What’s more, Google has grown very interested of late in the notion of providing more end-to-end Internet service — a notion that improved service, improved routing, and an improved client Web browser could collectively produce a visibly superior Web experience for users. Last November, the team behind the Chromium platform (which supports Google’s Chrome browser) announced its involvement in an experimental protocol called SPDY (“speedy”), designed as a replacement for HTTP. Boiled down, SPDY requires fewer TCP connections to achieve the same amount of transport.
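The core of that claim can be sketched in a few lines of Python: rather than opening one TCP connection per request, tag each chunk of each response with a stream ID and interleave everything over a single connection. The framing below is invented for illustration — it is not SPDY’s actual wire format:

```python
from collections import defaultdict

def multiplex(responses, chunk_size=4):
    """Interleave several responses as (stream_id, chunk) frames,
    as if they all shared one TCP connection."""
    queues = {sid: [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
              for sid, data in responses.items()}
    frames = []
    while any(queues.values()):
        for sid in sorted(queues):          # round-robin across streams
            if queues[sid]:
                frames.append((sid, queues[sid].pop(0)))
    return frames

def demultiplex(frames):
    """Reassemble each stream on the receiving side."""
    out = defaultdict(bytes)
    for sid, chunk in frames:
        out[sid] += chunk
    return dict(out)
```

One connection carries every stream, and no response has to wait for another to finish before its first bytes arrive — the essence of needing fewer TCP connections for the same transport.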

SPDY is one of the protocols that could conceivably benefit from improved packet forwarding — a potential side effect of Google’s own servers making use of the information gleaned from the new DNS approach it proposed to the IETF this week. A very quiet amendment to Google’s SPDY page includes a presentation to Google’s researchers by Chromium contributor Mike Belshe (PDF available here). Although much of the presentation file’s content is categorical rather than descriptive, it’s clear that Belshe argued in favor of joining SPDY research with packet forwarding, directly citing a passage from the IETF’s RFC 2914, “Congestion Control Principles,” which spoke of the congestion problem the early Net faced during the pre-Akamai era:

The Internet protocol architecture is based on a connectionless end-to-end packet service using the IP protocol. The advantages of its connectionless design, flexibility and robustness, have been amply demonstrated. However, these advantages are not without cost: Careful design is required to provide good service under heavy load. In fact, lack of attention to the dynamics of packet forwarding can result in severe service degradation or ‘Internet meltdown.’ This phenomenon was first observed during the early growth phase of the Internet of the mid 1980s [RFC896], and is technically called ‘congestion collapse.’

It’s fair to presume that an organization of Google’s immense scale, were it able to pull this off, would create a kind of de facto “net-non-neutral” scenario, where all the tools involved in the scheme would be open for anyone to use, but only Google would be able to connect all the dots to make them work.

Copyright Betanews, Inc. 2010


