Your enterprise website needs an overhaul, and your development team has been asked to make it happen. Your ability to simultaneously move all four project pillars (technology, infrastructure, team skills and communication) will have a direct bearing on the project’s success. Of all the different IT projects an enterprise can undertake, perhaps none is as challenging as a website deployment.

We have seen these challenges come in different shapes and sizes at Magnolia, ranging from migrating existing content to making sure that developers who are actually implementing the project (you) are adequately trained. Understanding what to expect will help you to create an effective risk-mitigation plan and ensure that your website project doesn’t go down in flames.

Underestimating content migration time
Migrating existing content helps visitors retain their sense of familiarity with your online presence, and it also helps your website maintain its position in search engine rankings. But content migration is a tricky process. It’s not only page content that needs to be migrated; you also have to migrate CSS, JavaScript, page templates, images and media files. Each of these may require different tools and take different amounts of time to process.

For example, a Web-scraping tool might be able to migrate the content of a single page in a few seconds, but this is not representative of the time required to transfer other resource types. Images might need to be resized or reformatted to fit the new page templates, and so will take significantly longer to migrate. You also need to factor in time to manually check and validate each page, to make sure it has been migrated properly and doesn’t break when rendered.
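To make the contrast concrete, here is a minimal page-scraping sketch in Python, using the requests and BeautifulSoup libraries. (The URL and the <main> selector are hypothetical; a real migration would need per-template extraction rules.) Scraping the text really does take seconds; everything the script leaves behind, such as images, templates, scripts and styles, is where the migration time actually goes.

    # A minimal scraping sketch using the requests and BeautifulSoup libraries.
    # The URL and the <main> selector are hypothetical; real migrations need
    # per-template extraction rules for each page type.
    import requests
    from bs4 import BeautifulSoup

    def scrape_page(url):
        """Fetch one page and pull out the pieces a migration job might need."""
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")
        return {
            "title": soup.title.string if soup.title else "",
            # Assumes the old templates wrap body copy in a <main> element.
            "body": soup.find("main").get_text(strip=True) if soup.find("main") else "",
            # Image references are collected so that binary assets can be
            # migrated separately, which is usually the slow part.
            "images": [img.get("src") for img in soup.find_all("img") if img.get("src")],
        }

    page = scrape_page("https://www.example.com/old-site/about")
    print(page["title"], "/", len(page["images"]), "images still to migrate")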

(Caution: The code captured by Web scraping does not transfer variation functionality, such as geolocation and personalization parameters. These may need to be reconfigured or rewritten prior to the launch of the new site.)

Resource duplication is another potential issue. In many cases, the same resource might be referenced from multiple website pages. If your migration tool isn’t intelligent enough to detect this, you’ll end up with multiple copies of the resource in your asset library. Cleaning up these duplicates is usually a manual (and time-consuming) process, especially with libraries containing thousands of assets.
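One way to catch duplicates after the fact is to hash every migrated file and group identical digests, as in this rough Python sketch (the directory path is illustrative):

    # A minimal sketch of hash-based duplicate detection for a migrated asset
    # library. The directory path is illustrative.
    import hashlib
    from collections import defaultdict
    from pathlib import Path

    def find_duplicates(asset_dir):
        """Group files by SHA-256 digest; any group larger than one is a duplicate set."""
        groups = defaultdict(list)
        for path in Path(asset_dir).rglob("*"):
            if path.is_file():
                # read_bytes() loads each file whole; hash in chunks for large media.
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                groups[digest].append(path)
        return {d: paths for d, paths in groups.items() if len(paths) > 1}

    for digest, paths in find_duplicates("/var/migration/assets").items():
        print(len(paths), "copies:", ", ".join(str(p) for p in paths))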

Miscalculating bandwidth and network requirements
Large-scale websites typically run as multi-tiered applications on different server hosts, so an improperly planned network can quickly become a bottleneck. A common mistake is to estimate the bandwidth required to transfer data between server and client, but to forget to also estimate the bandwidth needed to transfer data between the application and its back-end data store. Similarly, in big, distributed cluster systems with multiple nodes, data synchronization between nodes occurs periodically, and failing to account for the impact of these (sometimes large) data transfers across the network is a common error.

These miscalculations matter because, without sufficient bandwidth, your website can’t scale to meet sudden jumps in traffic. You might have powerful servers with many gigabytes of RAM and multicore CPUs, but if your network is choked, incoming requests won’t even reach those servers, leaving website visitors high and dry. Insufficient bandwidth also makes it harder to prioritize certain types of content (like streaming video); put simply, if your bandwidth is fully utilized, there is no reserve available to keep your media streams flowing smoothly.

In most cases, your network equipment vendor will provide tools to perform network and bandwidth analysis. In addition, you can use tools like Wireshark, Nmap and NAST to analyze and monitor your network, identify bottlenecks, and check bandwidth usage at different times. This information goes a long way toward helping you accurately calculate your network and bandwidth needs.
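If you want a quick first pass before reaching for those tools, a small script can sample throughput over time. Here is a rough sketch using Python’s psutil library, which must be installed separately; it measures the whole host rather than individual applications:

    # A rough bandwidth sampler built on the psutil library (installed
    # separately). It measures the whole host, not individual applications,
    # so treat it as a first pass before deeper analysis in Wireshark.
    import time
    import psutil

    def sample_bandwidth(interval_s=5, samples=12):
        """Print average bytes per second sent and received over each interval."""
        last = psutil.net_io_counters()
        for _ in range(samples):
            time.sleep(interval_s)
            now = psutil.net_io_counters()
            sent = (now.bytes_sent - last.bytes_sent) / interval_s
            recv = (now.bytes_recv - last.bytes_recv) / interval_s
            print("out: %.1f KiB/s  in: %.1f KiB/s" % (sent / 1024, recv / 1024))
            last = now

    sample_bandwidth()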

Choosing an inflexible data-storage engine
Your website’s data spans a range of formats (structured, semi-structured and unstructured), and it is important that your data-storage engine is flexible enough to accommodate them all. Some content-management tools require you to define your data types up front, and even to write code to handle those types. This restricts your degrees of freedom, making it harder to leverage your data in new ways in the future. Choose a data-storage option that is schema-flexible and requires minimal hard-coding or up-front definition; this lets you adapt your data to future needs with minimal manual effort.
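To illustrate the idea, here is a deliberately simplified Python sketch of schema-flexible storage, in which each content item is a plain JSON document and new fields or content types require no up-front definitions. (The file-backed store and field names are purely illustrative, not how any particular product works.)

    # A deliberately simplified sketch of schema-flexible storage: each content
    # item is a plain JSON document, so new fields and new content types need
    # no up-front type definitions. Store and field names are illustrative.
    import json
    from pathlib import Path

    STORE = Path("content-store.json")

    def save_item(item):
        """Append a content item; nothing is enforced beyond requiring an 'id'."""
        if "id" not in item:
            raise ValueError("every item needs an 'id'")
        items = json.loads(STORE.read_text()) if STORE.exists() else []
        items.append(item)
        STORE.write_text(json.dumps(items, indent=2))

    # Structured, semi-structured and unstructured items coexist in one store.
    save_item({"id": "press-release-17", "title": "Q3 results", "body": "..."})
    save_item({"id": "video-4", "format": "mp4", "duration_s": 92, "tags": ["launch"]})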

Making security an afterthought
Tight project deadlines often mean less time for discussions of security. However, unless you design security measures in from the start, you run the risk of data theft, downtime and delays. A common mistake is to assume that your development team will put in the obvious safeguards, like access control lists and automated monitoring tools; they might not, so always double-check. Third-party penetration-testing services can help identify gaps in your website security, and resources from the Open Web Application Security Project (OWASP), such as its deliberately vulnerable WebGoat application, can teach you more about improving it.
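As an illustration of the kind of safeguard that is easy to assume into existence, here is a minimal, deny-by-default access-control check in Python. The roles and paths are hypothetical, and a production site should rely on its platform’s security framework rather than hand-rolled checks like this:

    # A minimal, deny-by-default access-control list. Roles and paths are
    # hypothetical; a real site should rely on its platform's security
    # framework rather than hand-rolled checks like this.
    ACL = {
        "/admin": {"admin"},
        "/editor": {"admin", "editor"},
        "/": {"admin", "editor", "visitor"},
    }

    def is_allowed(role, path):
        """Match the longest configured prefix; deny anything unlisted."""
        for prefix in sorted(ACL, key=len, reverse=True):
            if path.startswith(prefix):
                return role in ACL[prefix]
        return False

    assert is_allowed("editor", "/editor/pages")
    assert not is_allowed("visitor", "/admin/users")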

Outsourcing scalability to the cloud
There’s a common misconception that adding hardware on demand (a.k.a. “the cloud”) is sufficient for a website to scale with traffic. This is a fallacy: Web applications won’t fully use available resources, such as multicore CPUs or additional nodes, unless they are programmed to do so from the beginning.
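A small Python sketch makes the point: the same CPU-bound workload benefits from extra cores (or extra cloud vCPUs) only once it is explicitly parallelized. The hashing loop here is just a stand-in for real request processing:

    # A minimal illustration that parallelism must be programmed, not assumed.
    # The CPU-bound hashing loop is a stand-in for real request processing.
    import hashlib
    from concurrent.futures import ProcessPoolExecutor

    def expensive_task(n):
        digest = str(n).encode()
        for _ in range(200_000):
            digest = hashlib.sha256(digest).digest()
        return digest.hex()

    if __name__ == "__main__":
        jobs = range(16)
        # Sequential version: uses one core, however many the server has.
        # results = [expensive_task(n) for n in jobs]
        # Parallel version: only now do extra cores (or cloud vCPUs) help.
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(expensive_task, jobs))
        print("completed", len(results), "tasks")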

Distributed processing frameworks like Apache Hadoop, in-memory distributed systems like Infinispan, and distributed caching engines like Ehcache can help distribute load across multiple servers. Java application servers like WebLogic and WebSphere come with clustering built in; using this feature can improve application scalability, but be aware that clustering always incurs some management overhead.

Forgetting the post-launch plan
You won’t deploy your website once and then forget about it. As time passes, features will be added, pages will be updated and bugs will be squashed. That’s why your post-launch plan should include processes for website backup, restore and upgrade.

When defining a backup schedule, ask yourself the question, “If a website failure occurs right now, how many hours of data can be lost without a critical impact on the business? A week? A day? An hour?” The answer to this question will determine your backup interval. For most companies, a daily backup will usually suffice; however, if you have a large team of editors and near-constant website updates, you might even want to back things up every four to six hours. Remember that cloud backups are typically more complex than regular physical server backups; the shared nature of cloud servers usually means you have to work harder to ensure that your backup contains all your data and only your data.

Equally important is being able to restore from a backup. A common mistake is to perform a daily backup but never test it, simply assuming that it will work when needed. In many cases, this assumption is incorrect: Changes in server infrastructure might render a backup invalid, or a lack of familiarity with the restoration process might result in longer downtimes. As a general rule, plan on testing your backup and restore processes at least monthly to avoid problems during a genuine crisis.
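A restore test can be as simple as restoring into a scratch directory and comparing checksums against the live content. The rough Python sketch below assumes a tar-based backup and uses placeholder paths; substitute your actual backup tooling, and note that the comparison is only meaningful if the site hasn’t changed since the backup was taken:

    # A rough restore-test sketch: restore into a scratch directory, then
    # compare checksums against the live content. Paths and the tar-based
    # restore command are placeholders for your actual backup tooling; the
    # comparison assumes the site hasn't changed since the backup was taken.
    import hashlib
    import subprocess
    from pathlib import Path

    def checksums(root):
        return {
            p.relative_to(root): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in root.rglob("*") if p.is_file()
        }

    def test_restore(backup_file, scratch, live):
        """Restore a backup into a scratch area and verify it matches live data."""
        subprocess.run(["tar", "-xzf", backup_file, "-C", scratch], check=True)
        return checksums(Path(scratch)) == checksums(Path(live))

    if test_restore("/backups/site-latest.tar.gz", "/tmp/restore-test", "/var/www/site"):
        print("restore verified")
    else:
        print("restore mismatch: investigate before you need this backup for real")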

When handling website upgrades, you will need separate environments to test the new version of the website before rolling it out to production. Typically, you will first deploy to a development environment with minimal data, then to a test environment with a larger sample of data, and finally to an integration environment that has a full copy of the production data. This last step also provides useful hints about the amount of time the upgrade will actually take. Finally, if everything works well, you’ll back up the current production website with all its data (as a failsafe against unforeseen failures) and then deploy the new website.

Skimping on developer training (and retraining)
Selecting the right technology is only part of the puzzle; you also need to make sure that your development team has the knowledge to maximize their use of the technology. A Forrester Research article said that 50% of the typical IT budget is spent on ongoing operations and maintenance. By training your team in current best practices and giving them the skills to write more maintainable code, you will reduce your long-term software costs and increase the longevity of your website.

Almost as important as training is retraining. Technology vendors are constantly evolving their products, so it’s important for your development team to “go back to school” after each major release. Retraining helps get developers up to speed on current best practices, improves their ability to identify technical risks faster, and reduces the time they spend on research. Stop thinking of training as a cost item; instead, think of it as the most effective way to reduce your long-term maintenance costs.

Ignoring the vendor’s expertise
Your implementation partners might claim to know what they’re doing when in fact they don’t. Or they might try to lock you in by selling you libraries or modules they “already have” to reduce implementation time, but which only they can maintain.

Adding your software vendor to the conversation helps mitigate these risks, because the vendor has the necessary expertise to identify possible implementation flaws and a vested interest in seeing the project through to a successful conclusion. The vendor will discuss key software architecture issues with you and your implementation partner and ensure that best practices are followed. The vendor can also help scrutinize and monitor your implementation partner’s performance throughout the project, highlight risks and provide ongoing consulting to resolve issues that crop up during implementation.

Doing too much at once
Many enterprise website implementations fail because they try to do too much in the first release phase. We have seen organizations attempt to launch their main site and, within just a few months, regional or flanker brand sites as well. Not only is such a compressed implementation plan more prone to failure (simply because of the larger number of moving parts involved), but it also creates stress for both management and the developers implementing the system.

A baby-step philosophy, which focuses on rolling out small projects first, offers a much better chance of project success. After the initial release, continue with successively more-complex site releases. This approach sets up a positive feedback loop, wherein user feedback steadily improves the system on an ongoing basis. In his article “The Beta Principle: Skip Perfection & Launch Early,” author Scott Belsky discusses the virtues of “launching in beta”:

“On a practical level, you can only get feedback and real user data when the product is released… Rather than spending many months (and lots of money) on the finer details, getting early feedback can lead to priceless realizations.”

A side effect of this incremental approach is that it trains users on the processes and operations of the new system, thereby fostering faster enterprise-wide adoption over the long term. And by allowing stakeholders to define their needs incrementally, so that only the features that are actually needed get implemented, it also makes optimal use of developer time.

Being afraid of change
Developing an enterprise website is as much a political process as a technical one. At the beginning, you must get all stakeholders to agree on the high-level issues: technical and branding requirements, budget, schedule, team, and vision. But as work progresses and meat begins to appear on the website’s bones, expect the lower-level details (icons, fonts, navigation, copy style) to take center stage, as more people come forward with criticism and feedback. This feedback typically builds to a peak in the three months prior to launch, when it will seem as though everyone, from marketing and sales to developers, IT managers and C-level executives, has something to say about every little detail.

To ensure that the large volume of often-conflicting feedback doesn’t derail the project, it’s important to establish some ground rules. Openly communicate progress and important decisions to all stakeholders, both internal and external. Ask for feedback as needed, but weigh it appropriately. And finally, don’t be afraid of change. Following an open communication policy might well result in significant changes to the trajectory of the project; if you expect and embrace this rather than fear it, you will always end up with a better result than the alternative.

A large-scale website deployment is more fraught with risk than the typical IT project, not just because of the time and money involved, but also because of the potential impact on the firm’s reputation if it fails. For many companies and organizations, the website is their most visible face to the market. But don’t let the high stakes discourage you. You can make your website deployment a roaring success simply by educating yourself about the likely issues you’ll encounter and making sure that you’ve planned for and mitigated the important risks.

Bill Beardslee is CEO of Magnolia’s American business unit. Magnolia is a content-management system provider.