Space exploration is hard, not least because of how difficult it is to communicate. Astronauts need to talk to mission control, ideally by video, and space vehicles need to send back the data they gather, preferably at high speed and with as little delay as possible. At first, space missions designed and carried their own distinct communications systems; that worked well enough, but it wasn’t exactly a paragon of efficiency. Then one day in 1998, the internet pioneer Vinton Cerf imagined a network that could offer a richer capacity to serve the growing number of people and vehicles in space. The dream of an interplanetary internet was born.
But extending the internet to space isn’t just a matter of installing Wi-Fi on rockets. Scientists have novel obstacles to contend with: The distances involved are astronomical, and planets move around, potentially blocking signals. Anyone on Earth who wants to send a message to someone or something on another planet must contend with often-disrupted communication paths.
“We started doing the math for the [internet standards] which had worked perfectly well here on Earth. However, the speed of light was too slow,” Cerf said of his early work with colleagues in the InterPlanetary Networking Special Interest Group. Overcoming that problem would be a major undertaking, but this American computer scientist and former Stanford professor is used to helping make big things happen.
Decades ago, Cerf and Robert Kahn — the “fathers of the internet” — developed the architecture and protocol suite for the terrestrial internet known as Transmission Control Protocol/Internet Protocol (TCP/IP). Anyone who has ever surfed the web, sent an email or downloaded an app has them to thank, though Cerf is quick to push back on the fancy title. “A lot of people contributed to the creation of the internet,” he said in his usual measured voice.
To transfer data on Earth’s internet, TCP/IP requires a complete end-to-end path of routers that forward packets of information through links such as copper or fiber optic cables, or cellular data networks. Cerf and Kahn did not design the internet to store data, partly because memory was too expensive in the early 1970s. So if a link along a path breaks, a router discards the packet, and the source must resend it. This works well in Earth’s low-delay, high-connectivity environment. However, networks in space are more prone to disruptions, requiring a different approach.
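The end-to-end assumption can be sketched in a few lines of Python. This is an illustrative toy, not real TCP: the point is simply that delivery succeeds only when every link on the path is up at the same moment, and that on failure the only recourse is retransmission all the way from the source, because intermediate routers keep no copy.

```python
import random

def send_over_path(link_up_probabilities, max_retries=5):
    """Toy model of end-to-end delivery (not real TCP): the packet
    gets through only if every link on the path is up at once.
    On failure, the SOURCE retransmits the whole way; routers in
    the middle hold nothing."""
    for attempt in range(1, max_retries + 1):
        path_intact = all(random.random() < p for p in link_up_probabilities)
        if path_intact:
            return attempt  # delivered on this end-to-end attempt
    return None  # gave up after max_retries

# Three reliable terrestrial links: retransmission-from-source works fine.
random.seed(0)
print(send_over_path([0.99, 0.99, 0.99]))
```

At interplanetary distances the same strategy breaks down: with long outages and minutes of delay per attempt, retransmitting from the source over and over is hopeless, which is the gap the bundle protocols fill.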
“TCP/IP doesn’t work at interplanetary distances,” Cerf said. “So we designed a set of protocols that do.”
In 2003, Cerf and a small team of researchers introduced bundle protocols. Bundling is a disruption/delay-tolerant networking (DTN) protocol with the ability to take the internet (literally) out of this world. Like the protocols that underlie Earth’s internet, bundling is packet-switched. This means that packets of data travel from source to destination by way of routers that switch the direction in which the data moves along the network’s path. However, bundling has properties the terrestrial internet does not have, such as nodes that can store information.
A data packet traveling from Earth to Jupiter might, for example, go through a relay on Mars, Cerf explained. However, when the packet arrives at the relay, some 40 million miles into the 400-million-mile journey, Mars may not be oriented properly to send the packet on to Jupiter. “Why throw the information away, instead of hanging on to it until Jupiter shows up?” Cerf said. This store-and-forward feature allows bundles to navigate toward their destinations one hop at a time, despite large disruptions and delays. His most recent paper on the subject highlights the applicability of Loon SDN — technology capable of managing a network that moves around in the sky — to NASA’s next-generation space communications architecture.
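The store-and-forward behavior Cerf describes can be sketched as a toy relay node. This is illustrative Python, not the actual Bundle Protocol; the class and method names are invented for the example. The essential difference from the terrestrial model is that the relay takes custody of a bundle and holds it until a contact with the next hop opens, rather than discarding it.

```python
from collections import deque

class BundleNode:
    """Toy DTN node (illustrative, not the real Bundle Protocol):
    bundles are stored while no onward link exists, then forwarded
    one hop when orbital geometry allows a contact."""
    def __init__(self, name):
        self.name = name
        self.stored = deque()  # bundles held in custody

    def receive(self, bundle):
        self.stored.append(bundle)  # take custody; never throw away

    def contact_open(self, next_hop):
        """Called when a link to next_hop becomes available."""
        while self.stored:
            next_hop.receive(self.stored.popleft())

# Earth -> Mars relay -> Jupiter, with Jupiter initially out of view.
mars = BundleNode("Mars relay")
jupiter = BundleNode("Jupiter orbiter")

mars.receive({"src": "Earth", "dst": "Jupiter", "data": b"hello"})
# ... hours later, the geometry lines up ...
mars.contact_open(jupiter)  # the stored bundle moves one hop onward
```

Real DTN routing adds contact schedules computed from orbital mechanics, but the custody-and-wait step above is the core idea.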
Beyond the interplanetary internet, Cerf, now in his 70s, also focuses on his day job as chief internet evangelist for Google. This is a fancy title he embraces, brimming with a preacher’s eagerness to spread the internet, via global policy development, to the billions of people around the world without it. He is at once ambitious with serious ideas, while maintaining a playful side. Even though he typically sports a well-trimmed beard and three-piece suit — some say he’s the inspiration for the god-like Architect in the Matrix movies — he once started a keynote speech by unbuttoning his jacket and shirt, Superman-style, to reveal a T-shirt that read: “I P ON EVERYTHING!”
Quanta Magazine caught up with Cerf shortly after he recovered from COVID-19 and just before his participation in the Virtual Heidelberg Laureate Forum. The interview has been condensed and edited for clarity.
What first brought about the idea of an interplanetary internet?
In the spring of 1998, nine of us got together at Jet Propulsion Laboratory to ask: What should we do in anticipation of what we might need for space exploration 25 years from now? Adrian Hooke, who was at JPL and then also served at NASA headquarters, was the guy who really got behind this and pushed. He passed away a few years ago, but he held this team together.
We’d been exploring the solar system for decades, but the exploration — both manned and robotic — has typically involved radio communication, either direct point to point or through what’s called a bent pipe: a radio relay that picks up the signal and rebroadcasts it to improve the likelihood that it reaches Earth.
Our group asked: Could we do better? Could we use the internet’s technology to improve space communication, especially as the number of spacecraft increases over time, or as we start putting settlements on the moon or Mars?
So, a couple decades after conceiving of bundle protocols, is the interplanetary internet up and running?
We don’t have to build the whole thing and then hope somebody uses it. We sought to get standards in place, as we have for the internet; offer those standards freely; and then achieve interoperability so that the various spacefaring nations could help each other.
We’re taking the next obvious step for multi-mission infrastructure: designing the capability for an interplanetary backbone network. You build what’s needed for the next mission. As spacecraft get built and deployed, they carry the standard protocols that become part of the interplanetary backbone. Then, when they finish their primary scientific mission, they get repurposed as nodes in the backbone network. We accrete an interplanetary backbone over time.
Has this repurposing already started?
In 2004, the Mars rovers were supposed to transmit data back to Earth directly through the Deep Space Network — three big 70-meter antennas in Australia, Spain and California. However, the channel’s available data rate was 28 kilobits per second, which isn’t much. When they turned the radios on, they overheated. They had to back off, which meant less data would come back. That made the scientists grumpy.
One of the JPL engineers used prototype software — this is so cool! — to reprogram the rovers and orbiters from hundreds of millions of miles away. We built a small store-and-forward interplanetary internet with essentially three nodes: the rovers on the surface of Mars, the orbiters and the Deep Space Network on Earth. That’s been running ever since.
And it’s just gotten bigger since then, right?
We’ve been refining the design of those protocols, implementing and testing them. The latest protocols are running back-and-forth relays between Earth and the International Space Station. We’ve done some other really cool tests. One spacecraft, EPOXI, that was off to visit a couple of comets was about 81 light-seconds away from Earth when we were told, “It’s OK with us if you upload your protocols and test them on that spacecraft.” So we did that too.
We did another test at the ISS where the astronauts were controlling a little robot vehicle in Germany. Normally, you wouldn’t do that: If you’re trying to steer a vehicle on Mars and it takes 20 minutes for your signal to get there, you might turn the wheel and, 20 minutes later, the car turns and goes over a cliff. Then, 20 minutes after that, you discover that you just lost your $6 billion vehicle. It worked between the ISS and Earth because it’s only a few hundred miles. It’s not totally crazy to imagine a mission where the astronauts don’t actually land on the planet. They simply orbit around it and deploy remote equipment on the surface in real time.
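The contrast Cerf draws is just light-travel time. A quick back-of-the-envelope calculation (the distances are illustrative round numbers, not mission figures) shows why real-time teleoperation works from the ISS but not from Earth to Mars:

```python
C_KM_S = 299_792.458  # speed of light in km/s

def one_way_delay_s(distance_km):
    """Signal delay at the speed of light, in seconds."""
    return distance_km / C_KM_S

# ISS orbits roughly 400 km up: the delay is a few milliseconds,
# so steering a robot on the ground in real time is feasible.
iss_delay_s = one_way_delay_s(400)

# Mars near its farthest, roughly 360 million km away: about 20
# minutes each way, matching the scenario Cerf describes.
mars_one_way_min = one_way_delay_s(360e6) / 60
```

With a 20-minute one-way delay, a command and its visible consequence are separated by some 40 minutes, which is why surface rovers are driven by queued command sequences rather than a steering wheel.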
What about the user experience? Do the bundle protocols for the interplanetary internet feel the same as TCP/IP does for Earth’s internet?
It doesn’t take long before you’re no longer in interactive mode. You’re either in over-and-out mode, or you’re in hi-this-is-a-nice-video-recording-I-recorded-several-hours-ago mode, which is like email. The protocols were oriented around the recognition that the delay removes the possibility of interaction. That puts constraints on the protocol designs.
It seems as if you’ve solved the main problems, but are there any issues left to work out?
It’s one thing to get agreement on the technical design and to implement the protocols. It’s something else to get them in use where they’re needed. There’s a lot of resistance to doing something new because “new” means: “That might be risky!” and “Show me that it works!” You can’t show that unless you take the risk.
We’re working hard to convince the people designing space missions that the stuff is adequately tested. That’s been an uphill battle, and there’s still much to be done. We have to get the commercial companies that support space exploration to have off-the-shelf capability. And we have to get scientists who design missions to say: “This is what we’re capable of now.”
That way, you can be more ambitious and take advantage of the assumption that we have an interplanetary backbone. If you start out on the presumption that you don’t, then you’ll design a mission with limited communications capability.
Will the interplanetary internet allow for new approaches to space exploration that may yield new discoveries?
The interplanetary internet is an infrastructure that’s intended to support interplanetary activity, which could be research but someday could also be commercialization. It’s an infrastructure in the same way that the internet is an infrastructure. The internet doesn’t invent or discover anything. It’s simply the medium through which people can do collaborative work and can discover new things.
What about on Earth? Could DTN protocols be useful here, too?
An engineer in Sweden had the idea of trying out DTN protocols to track reindeer in Lapland, where the Sami tribe has been herding for 8,000 years. Reindeer wander around and are in and out of radio contact. It’s an unpredictable environment, which is very different from an environment in which you can compute orbital mechanics and predict the likely contacts that might occur. This opportunistic capability, when communications are less predictable, is what’s being tested in Lapland.
Also, in oceanic research, you have instruments generating and accumulating data on the ocean’s surface or the sea floor, but you don’t necessarily have continuous connectivity to them. For Earth observations, sensors scattered through a forest could be read out intermittently rather than broadcasting continuously. In a fully connected internet environment, you get rid of the data as you produce it. But that won’t work in an environment with intermittent connectivity. You need a protocol that says: “Don’t panic! It’s OK, just hang onto it.” Battery-driven devices, too, are more efficient when they don’t transmit constantly.
Store-and-forward, intermittent capability is quite useful terrestrially, especially after a major disaster when you may not have much communications capability. One could use DTN in a rapid recovery mode where resources are not sufficient to provide TCP/IP-style coverage.
So would it make sense to switch from TCP/IP to DTN for all of Earth’s internet?
We’ve shown that you can make the DTN protocols work at high speed, even though they have more overhead than the traditional TCP/IP protocols. But it would be very hard to introduce DTN everywhere, because look at how much TCP/IP there is. There’s also been an evolution on the internet side to another set of protocols called QUIC that achieves not only faster data rates but also faster recovery from failures or disconnects. However, this evolution is not in the direction of DTN.
On the other hand, for mobiles, where connectivity is still iffy, the DTN functionality might be pretty good. We’re now looking at implementation of DTN in the mobile environment.
As a father of, and evangelist for, the internet, do you have any concerns about your creation?
The abuse of the internet. Misinformation and malware. Harmful attacks. Phishing attacks. Ransomware. It’s painful and distressing to realize that people will take an infrastructure like this, which has so much promise and has delivered so much, and do bad things with it. Unfortunately, that’s human nature.
We need additional governance in the online environment, but that’s hard. The Chinese built a big firewall and then put all their people inside it under surveillance. That’s not necessarily a society that the rest of us want to live in, and yet we still have to cope with the problem. How do we cope without going to that extreme? I don’t have a good answer. I wish I did.
That’s very much on my mind right now. In fact, the interplanetary stuff is a refreshing shift away from that because there we’re thinking almost purely about scientific results.