Other areas of difficulty came from the sheer scale of the project. Using Amazon’s Simple Email Service was no small decision, as that service allots each account a fixed pool of sending bandwidth and e-mail volume. Those limits can get ugly when a team is sending e-mails to millions of people, as the Obama campaign’s was.
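For a sense of what that constraint looks like in practice, here is a minimal sketch (not the campaign’s actual code) of guarding a large send against the quota; the boto3 get_send_quota call is real, but the batch-size check itself is purely illustrative:

```
import boto3

ses = boto3.client("ses", region_name="us-east-1")

def can_send(batch_size):
    """Return True only if a batch of e-mails still fits within the
    account's remaining 24-hour SES sending quota."""
    quota = ses.get_send_quota()
    remaining = quota["Max24HourSend"] - quota["SentLast24Hours"]
    return batch_size <= remaining
```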

Thus, the team used redundant systems for almost everything. For donation processing, for example, the team used a third-party vendor, and then built an internal API that mimicked the vendor’s, allowing for redundancy in all donation-processing code.
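That mimic-the-vendor pattern is straightforward to sketch. The following Python illustration is hypothetical (the article names neither the vendor nor any interface details): because the internal API exposes the same call shape as the vendor’s, donation code can fail over without changing its call sites.

```
class VendorProcessor:
    """Stand-in for the third-party vendor's donation API."""
    def charge(self, donor_id, amount_cents):
        # Real code would call the vendor over the network here.
        return {"backend": "vendor", "donor": donor_id, "amount": amount_cents}

class InternalProcessor:
    """Internal API built to mimic the vendor's interface."""
    def charge(self, donor_id, amount_cents):
        return {"backend": "internal", "donor": donor_id, "amount": amount_cents}

def process_donation(donor_id, amount_cents, primary=None, fallback=None):
    primary = primary or VendorProcessor()
    fallback = fallback or InternalProcessor()
    try:
        return primary.charge(donor_id, amount_cents)
    except Exception:
        # Same interface, different backend: redundancy is transparent
        # to every caller in the donation-processing code.
        return fallback.charge(donor_id, amount_cents)
```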

And there were quite a few donations to process. Leeper said that the campaign’s websites took in a total of US$670 million from individual donations. Kunesh said the average donation was $55, but Leeper said the site “had crazy bumps. After convention, we were doing about $2 million an hour. We were just like, ‘What’s going on?!’ ”

VanDenPlas said that “The traffic patterns you see in a campaign are crazy. There are 25 million followers on Twitter, 35 million followers on Facebook. This e-mail list is multiple, multiple millions of people. One send through every channel hits 60 million people. The spikes go from 100 people using applications to 500,000.”

While Narwhal (the Obama campaign’s software platform for 2012) performed properly on Election Day, the team said that the campaign’s 2008 software project, Houdini, was a disaster: it failed within 40 minutes of launching on Election Day. The Narwhal team therefore started from square one. They even developed an application called Gordon, named after the man who killed Houdini.

“We were building everything from scratch,” said Gansen. “There was nothing held over from 2008. We started from five servers. The Narwhal guys are building the platform for all the data-integration tools while we were building the tools. They’re building platforms thinking the applications team might use this later on. We were building the airplane while flying. We started with nothing, and we ended with over 200 deployed applications.”

Of that list of applications, only about two dozen were ultimately used by the campaign at large. Those applications consumed a huge chunk of Amazon’s Eastern data center. “We had thousands of nodes,” said VanDenPlas. “We pushed 180TB of traffic with billions and billions of requests. We had 60% of all of Amazon’s medium [instances] in US East.”

Because of that reliance on Amazon, the team ran internal game-day preparation tests to ensure it was ready for prime time. During one of these dry runs, Amazon actually experienced real issues, and the team found itself pivoting from a simulated failure to a genuine one.
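The article doesn’t describe how those drills were staged. One plausible sketch, assuming failures were injected by terminating tagged EC2 instances at random (the Role tag and drill structure here are invented; the boto3 calls are real), might look like this:

```
import random

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def kill_random_instance(tag_value="gameday-target"):
    """Terminate one running instance tagged for the drill, so the
    team can watch the surviving nodes absorb its load."""
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Role", "Values": [tag_value]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instances = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if instances:
        ec2.terminate_instances(InstanceIds=[random.choice(instances)])
```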

And the rest, as they say, is history.