Google's datacentres grow too fast for normal networks, so it builds its own

Google has been developing its own datacentre networks for 10 years and is now sharing some of the details

Google has been building its own software-defined datacentre networks for 10 years because traditional gear can't handle the scale of what are essentially warehouse-sized computers.

The company hasn't said much before about that homegrown infrastructure, but one of its networking chiefs provided some details on Wednesday at the Open Networking Summit and in a blog post.

The current network design, which powers all of Google's datacentres, has a maximum capacity of 1.13 petabits per second, more than 100 times that of the first datacentre network Google built 10 years ago. It is a hierarchical design with three tiers of switches, all built from the same commodity chips, and it is controlled not by standard protocols but by software that treats the whole collection of switches as a single fabric.
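
Those figures are easier to picture with some rough arithmetic. The Python sketch below models the spine layer of a fully provisioned three-tier Clos-style fabric; the 40-Gigabit link speed matches the Ethernet generation mentioned later in this article, but the switch and port counts are hypothetical, chosen only to show how commodity links add up to petabit scale, not Google's actual parameters.

```python
# Toy estimate of bisection bandwidth in a three-tier Clos-style fabric.
# Switch and port counts below are hypothetical, not Google's real numbers.

LINK_GBPS = 40             # 40-Gigabit Ethernet links
SPINE_SWITCHES = 220       # hypothetical spine-layer switch count
DOWNLINKS_PER_SPINE = 128  # hypothetical downlinks per spine switch

# In a fully provisioned Clos fabric, cross-fabric traffic is bounded by
# the aggregate capacity of the spine layer's downlinks.
bisection_gbps = SPINE_SWITCHES * DOWNLINKS_PER_SPINE * LINK_GBPS
print(f"{bisection_gbps / 1e6:.2f} Pb/s")  # -> 1.13 Pb/s at these counts
```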

Networking is critical in Google's datacentres, where tasks are distributed across pools of computing and storage, said Amin Vahdat, Google Fellow and networking technical lead. The network is what lets Google make the best use of all those components. But demand for network capacity in the company's datacentres has grown so fast that conventional routers and switches can't keep up.

"The amount of bandwidth that we have to deliver to our servers is outpacing even Moore's Law," Vahdat said. Over the past six years, it's grown by a factor of 50. In addition to keeping up with computing power, the networks will need ever higher performance to take advantage of fast storage technologies using flash and non-volatile memory, he said.

Back when Google was using traditional gear from vendors, the size of the network was defined by the biggest router the company could buy, and when a bigger one came along, the network had to be rebuilt, Vahdat said. Eventually, that approach stopped working altogether.

"We could not buy, for any price, a datacentre network that would meet the requirements of our distributed systems," Vahdat said. Managing 1,000 individual network boxes made Google's operations more complex, and replacing a whole datacentre's network was too disruptive.

So the company started building its own networks from generic hardware, centrally controlled by software. It adopted a Clos topology, a mesh architecture with multiple paths between devices, and equipment built with merchant silicon, the same commodity chips that white-box switch vendors use. The software stack that controls the fabric is Google's own, but it communicates with the switches through the open-source OpenFlow protocol.
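
Google's control stack itself is proprietary, but OpenFlow is open, and open-source controllers exist. As a rough illustration of the centralised-control idea, and not of Google's software, here is a minimal app for the open-source Ryu controller that installs a default "send unknown traffic to the controller" rule on every switch that connects:

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TableMissInstaller(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_features(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Lowest-priority rule: packets matching nothing else go to the
        # controller, which can then push more specific flow entries.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                             actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```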

Google started with a project called Firehose 1.0, which it never put into production but learned from, Vahdat said. At the time there were no good routing protocols that could use multiple paths between destinations, and no good open-source networking stacks, so Google developed its own. The company is now on its fifth generation of homegrown network, called Jupiter, which uses 40-Gigabit Ethernet links and a hierarchy of top-of-rack, aggregation and spine switches.
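
The multipath problem Vahdat mentions is commonly handled with equal-cost multipath (ECMP) forwarding: a switch hashes each flow's headers to choose one of several equally good uplinks, so a single flow stays on one path while different flows spread across the fabric. A minimal sketch of the idea, with hypothetical uplink names:

```python
# Illustrative ECMP uplink selection: hash the five-tuple so packets from
# one flow always take the same path, while flows spread across uplinks.
import hashlib

UPLINKS = ["spine-0", "spine-1", "spine-2", "spine-3"]  # hypothetical

def pick_uplink(src_ip, dst_ip, src_port, dst_port, proto):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return UPLINKS[int.from_bytes(digest[:4], "big") % len(UPLINKS)]

print(pick_uplink("10.0.0.1", "10.0.1.9", 51512, 443, "tcp"))
```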

The design lets Google upgrade its networks without disrupting a datacentre's operation, Vahdat said. "I have to be constantly refreshing my infrastructure, upgrading the network, having the old live with the new."

Google is now opening up the network technology it took a decade to develop so other developers can use it.

"What we're really hoping for is that the next great service can leverage this infrastructure and the networking that goes along with it, without having to invent it," Vahdat said.

Stephen Lawson covers mobile, storage and networking technologies for The IDG News Service. Follow Stephen on Twitter at @sdlawsonmedia. Stephen's e-mail address is stephen_lawson@idg.com

