Space epics are in the news, what with J.J. Abrams taking over the “Star Wars” franchise and with recent happenings in the world of online gaming. On the evening of Jan. 26, in the 3D space role-playing game Eve Online, 2,800 players (and their 2,800 spaceships)—purely by happenstance—engaged in a massive stellar shoot-out.
Imagine the final space battle of the first Star Wars trilogy (in “Return of the Jedi”), increase the number of ships by orders of magnitude, put a real person interacting in real time in every vessel, and you can begin to imagine what the event looked like. Battlestar Galactica’s tense battle scenes, Star Trek’s Borg incursions into human space, even the size of the real-world U.S. Navy pale in comparison to the size of this event. And it all took place in real time online.
It’s the largest single online battle ever to take place in a real-time multiplayer game. The entire affair took place simply because a few player-run in-game corporations and their giant war fleets happened to stumble upon each other in the same sector of space.
For system designers, that’s not something you can predict days or even weeks ahead of time. It’s an occurrence for which the software must be permanently ready, even if it takes 10 years to happen. For a software service that has to be online 24×7 and available to tens of thousands of players, all adventuring in space at the same time, the penalty for failure is remarkably high, thanks to the hundreds of other online games available.
Halldor Fannar, CTO of CCP Games, said that it’s Perforce that makes the difference for the distributed teams tasked with building, maintaining and expanding this huge online universe.
Since the Iceland-based company launched Eve Online in 2003, one of its biggest challenges has been keeping pace with growth. “The company has grown quite a bit,” said Fannar. “The complexity has grown as well. With the multiple development sites, we’ve been playing a bit of catch-up with our growth. There are still some methods we started applying in the beginning that have really served us well. One has been having zero manual configuration. We have a sync-and-run policy.
“Any developer that gets hired onto a project is in most cases able to sit at their desk and use a sync command that pulls down the right branch of code for them. The tools don’t require the developers to pull in any extra packages manually. We achieve this by checking a lot of things into Perforce. We have middleware and libraries, and we check those into source control as well. We take time to configure them properly in Perforce. That’s proved invaluable.”
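The sync-and-run policy Fannar describes might look something like the following sketch. The depot paths and build script are illustrative assumptions, not CCP’s actual layout; the point is that one sync pulls code, middleware and libraries alike, with no manual package installs.

```shell
# Hypothetical sketch of a "sync-and-run" setup. Depot paths and the
# build script name are assumptions, not CCP's real configuration.

# One command pulls the right branch, including checked-in third-party
# middleware and prebuilt libraries, into the developer's workspace
p4 sync //depot/eve/main/...

# Everything needed to build is now local; nothing to install by hand
cd workspace/eve/main
./build.sh
```

Checking dependencies into the depot trades some storage for the guarantee that every workspace is reproducible from a single sync.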
And the storing of non-code assets doesn’t end there. Fannar said that CCP uses Perforce to keep track of server environment configurations.
“It was hard to track the exact configuration of some of the servers,” he said. “Configurations used to be stored in local text files. There wasn’t any auditability. Perforce is perfect to do that. The way we use it, you check in changes to the environments through Perforce. When you’re promoting your configuration to the next testing stage, you’re taking the configuration from staging to production. When you’re in the Web page and promoting that, on the back end we’re doing an integration in Perforce and moving that configuration to the next stage. When we spin up new servers, we’re reading this data out of Perforce, and it’s applied to the ‘virtual machine baker’ as we call it.”
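The promotion workflow Fannar outlines, where moving a configuration to the next stage is an integration in Perforce, could be sketched roughly as follows. Stage names, depot paths and the config filename are hypothetical.

```shell
# Hypothetical sketch of promoting environment configuration between
# stages via a Perforce integration. All paths are illustrative.

# Copy the staging configuration forward into the production branch
p4 integrate //depot/config/staging/... //depot/config/production/...
p4 resolve -am                       # accept auto-mergeable changes
p4 submit -d "Promote staging config to production"

# New servers can then read their configuration back out of Perforce
# when they spin up (the "virtual machine baker" step)
p4 print -q //depot/config/production/app.cfg > /etc/eve/app.cfg
```

Because every promotion is a submitted changelist, the full history of which configuration reached production, and when, is auditable for free.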
Fannar also said that keeping multiple sites up to date with the same environments, code and assets is made easier thanks to Perforce.
“We have proxy servers at local offices. But for the brokers and for user management, we invested in getting all that configured for a global development organization,” he said. “We manage all our users on a central server. It doesn’t matter which office I’m with, I always check into the same server. But I am routed to the nearest proxy for the best experience.”
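The arrangement Fannar describes — one central server for users and history, with local proxies for speed — is typically built with Perforce’s proxy daemon. A rough sketch, with hostnames and ports as placeholder assumptions:

```shell
# Hypothetical sketch of a local-office Perforce proxy in front of one
# central server. Hostnames, ports and cache path are illustrative.

# On a machine in the local office: run the proxy, pointing it at the
# central server and caching file content locally
p4p -p 1667 -t perforce-central.example.com:1666 -r /var/p4cache

# Developers point their clients at the nearby proxy; syncs are served
# from the local cache, but users, permissions and submitted history
# all live on the central server
export P4PORT=proxy-reykjavik.example.com:1667
p4 sync //depot/eve/main/...
```

The proxy only caches versioned file content, so a developer checks into the same central server from any office while still getting local-network sync speeds.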
Randy DeFauw, technical marketing manager at Perforce, said that the CCP team is not alone in its use of central repositories as a catchall for solving versioning problems. Non-code assets haven’t just been leaking into repositories, he said: “It’s been gushing for years now.”
“We do have a lot of customers who have been doing this type of thing in different domains for some time, like storing Puppet files. That’s a pretty good fit too. We can provide a really good chain of custody for all the deployed assets as well. Whether we’ll unify this somehow, we don’t have any concrete plans to produce a DevOps product, but we are looking at doing a Puppet integration. We definitely like what our customers are doing, and we’re taking a look at that. I suspect this movement is driven by DevOps.”
But Fannar and his teams have also taken some pages from the enterprise playbook. Instead of running the game as a single large system, CCP has long run Eve as a series of services, each of which can be taken down and updated without disrupting the rest of the universe.
And that’s important when you’ve got thousands of people around the world relying on your game to be their connection to other people. Indeed, in that 2,800-spaceship battle, videos of the event showed players relaying orders across language barriers. Russian, English, Portuguese and Spanish were all being spoken, with the same command or prioritized target relayed over and over in new words each time.
And that’s what online gaming is all about: sending people across the universe to connect with someone across the globe.