In the user experience research space, teams learn how to work in an Agile flow, how to collaborate “across the wall,” and how to stay a cycle ahead of the development teams to ensure they are creating the best possible experience for users.
But what happens when the team moves to continuous delivery? In CD, everything changes fast, so it may seem like there isn’t time to conduct a formal research study. Lauren Stern, user research lead at XebiaLabs, shared some tips for keeping a little science in the pipeline by leveraging the continuous updates that make CD unique, rather than working against them.
SD Times: When a team moves to continuous delivery, what happens to the workflow? How does collaboration change?
LS: When teams move to continuous delivery, the goal is to work in faster cycles with quicker deployment to production. That usually necessitates smaller change sets, and it can also mean adjusting the way teams work to streamline the process, like using more automation or focused, single-feature teams that work in parallel on specific aspects of an application. These changes are also encouraging new integrated teams where designers and developers work closely to facilitate a more efficient pipeline (the heart of CD). But so far, design dominates conversations about the role of UX in a CD world, and it’s easy to lump research in as a secondary part of the designer’s duties, or to let research slide off altogether.
Instead, I want to make the case for research as a valued part of the CD pipeline, from planning all the way to delivery. In fact, many of the strengths of CD (iteration, small change sets, and collaboration) are perfect mirrors of the best practices for scientific research! There are many ways teams can leverage such practices to build a powerful knowledge base and shape feature decisions by grounding them in user evidence. Teams just need to keep in mind the value of research and integrate it into their process. We’ve seen how successful this can be with other UX methods, like design sprints or design studios. Just like these collaborative, idea-generation environments, research can include a diverse group of team members, from developers to designers to outside divisions, like marketing and support. The key is to build common ground—make sure the team is clear on the goals of the research, the current status, and what findings are available for action. By keeping a little science in the pipeline, we can take the best of CD and turn it into user understanding and better software.
Why is user research a necessity when teams switch to CD?
User research is a necessity all the time, not just when the team moves to CD! But it is especially important to build foundational research for CD teams. Foundational research is key in resisting the “spaghetti at the wall” approach: just trying a bunch of things and seeing what sticks is not an efficient use of labor (for dev, ops, or design), and it can be really frustrating for users. But if you can approach a short iteration cycle with a grounded, well-formed understanding of your users’ needs and contexts of use, you can try your best solution and then get feedback faster on whether or not it really works.
Foundational research means a general understanding of your users’ context—the conditions in which they use your software, the other things present in that environment, constraints on their behavior, and how your user functions within the general cultural context of their domain. I think this type of research especially gets sidelined with the move to faster iteration—whether it’s agile, CD, etc.—because it takes time to build depth, and it can be really daunting. But you don’t start by trying to create a huge complete picture of your users, just like you don’t try to build a product with 17 features all at once. These are fundamental topics you can study long-term; every user interaction can add to this knowledge base, and then you have it whenever you start working on a new feature.
For Enterprise DevOps (my users’ domain), we know that information security, auditing, and lack of information and context flow (i.e., stovepiping) are big concerns for many of our users. So when we work on a new feature—any new feature—we start by framing the design within that context. By thinking “how will this feature support collaboration with auditors?” or even just “will people actually be able to use this feature given information-sharing constraints?” we can ensure our ideas are viable. We will still iterate, and issues will still surface in feedback and testing sessions, but we’re starting from a place grounded in our user domain. That isn’t possible without research, and especially not without a long-term plan for collecting and leveraging that type of ethnographic data.
Why is user research not as disruptive to continuous delivery as people think? How is it working for businesses today?
The goal of CD is to deliver quickly and frequently into production, shortening the cycle time between an idea and user feedback. That can be a perfect environment for research, because the goal of UX research is to support product changes and new designs with evidence. I think about it as the way you would design a good scientific study. When you conduct laboratory research, you form a hypothesis and then build experimental conditions (e.g., activities, scenarios) that you will compare to see if your hypothesis is right or wrong. Then you have to recruit participants, run them through your study to collect data, compensate them for their time, conduct your data analysis, and determine your findings. The more conditions in your study, the more participants you need overall to reach statistical significance—which just makes all of that harder and more expensive. So in a well-designed study, you are very precise about what conditions you run based on your hypothesis.
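To put rough numbers on that, here is a minimal sketch of the per-condition sample-size arithmetic in Python with scipy; the success rates, alpha, and power below are illustrative assumptions, not figures from any XebiaLabs study:

```python
# Sketch: participants needed per condition to detect a difference between
# two task-success rates (standard normal-approximation formula).
# All numbers here are illustrative assumptions.
import math
from scipy.stats import norm

def sample_size_per_group(p1: float, p2: float,
                          alpha: float = 0.05, power: float = 0.80) -> int:
    """Participants per condition to detect p1 vs. p2 at alpha and power."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

n = sample_size_per_group(0.60, 0.75)  # roughly 150 per condition
# Ignoring multiple-comparison corrections, which only raise the bill:
print(f"2 conditions: {2 * n} participants; 4 conditions: {4 * n}")
```

Every extra condition multiplies the recruiting bill, which is exactly why a well-formed hypothesis keeps a study small.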
User research in a CD framework can draw on the same best practices. Trying every idea would be labor-intensive and expensive—you can’t do it. But you can make a best guess (a hypothesis) grounded in existing user knowledge (that foundational research!) about what the “right” feature design is. You can focus on that, deliver it, and then iterate based on feedback. Worst case, your hypothesis is totally wrong, and you go back to the idea well—but that can happen regardless of whether you’re a waterfall or CD team. In CD, you just get the answer a lot sooner and have a much greater opportunity to iterate when it happens. And most of the time, it’s not the worst case scenario (especially if you started with a foundational understanding of your users). You can even build on top of that and leverage experimental science further by using methods like A/B testing where you build and release two possibilities. Netflix is famous for this, and having short development cycles makes it feasible. It certainly wouldn’t work in all domains, but if your environment can support it, that can be really useful because it provides an extra layer of evidence—actual statistics that compare use of your two designs.
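The analysis behind a simple two-variant release can be just as small. As a sketch with made-up session counts, a contingency-table test compares adoption of the two designs:

```python
# Sketch: did users adopt design A and design B at different rates?
# Counts are invented for illustration.
from scipy.stats import chi2_contingency

#            used feature  didn't use it
variant_a = [180, 820]   # 1,000 sessions on design A
variant_b = [235, 765]   # 1,000 sessions on design B

chi2, p_value, _, _ = chi2_contingency([variant_a, variant_b])
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The designs differ; keep the stronger performer and iterate.")
else:
    print("No clear difference yet; gather more sessions or rethink.")
```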
But I also want to emphasize that none of this should replace user validation/user testing. It isn’t possible to conduct full usability studies on every single change to an application, but especially when you’re designing new features or making large changes to the UI, it is so key to get users involved in that process. Part of the goal of CD is also to improve quality as you streamline, and from the UX side, a big part of that is making sure things work the way you want them to. Again, this can work within the timeline of your CD cycle, but you really have to plan effectively. For my team, that means we start recruiting users for validation as soon as we decide we need to make a change. That gives me time to find the right people and set up sessions while the design is in progress, and then we can iterate in between sessions I’ve already booked if we need to. It takes time to recruit and schedule users! So plan for that; don’t wait until you’re ready to run sessions to start recruiting. We also leverage our in-house user representatives (our killer Customer Success and Support teams) as the “first line of defense”: we discuss the initial wireframe with them as a reality check, and then iterate before going to users. By building those steps into the plan from the beginning, validation doesn’t have to impact the development cycle. It just enhances the final product, leading to fewer times when we need to roll back or scrap a feature once it has been released.
How can teams keep a little science in the pipeline by leveraging the continuous updates that make CD unique, rather than working against them?
Plan, plan, organize, plan, and organize! Think about precision and control in your research process as quality and efficiency measures that support low-cost, high-value research findings. A fundamental tenet of agile development may be responding to change over sticking to the plan, but you can’t respond to change if there’s no plan to begin with. I think the danger for many teams is treating research as the response to change rather than as part of the initial plan, and that is especially problematic when you are trying to accomplish tasks as efficiently as possible. This all starts with a research plan. Does it take a little bit of time? Yes. Is it worth it? Definitely. It may take 30 minutes to write, but the collaboration it facilitates (getting the whole team on the same page and giving everyone a central place to organize related questions, session notes, and follow-ups) makes it worthwhile. We don’t write research plans for every feature update, but using a structured process for the bigger changes (e.g., new features, beta launches) instills a general vision of what research can be across the team, and that common ground affects all the smaller research efforts too.
By including research in the planning process, you ensure that you’ve thought about what questions you really need to answer, the best research method to get that answer, and the types of people who can provide it. That forethought empowers responsiveness to change. When you book a session, set expectations. For example, at XebiaLabs we frequently say things like: “we’re working on designs for a new feature related to risk management,” or “we’re trying to learn more about workflows at large organizations.” We also let people know about how long we expect a session to take (30 minutes, 1 hour). Is the exact feature design we plan to show set in stone? Absolutely not! We can enter the meeting with the framework we’ve set (e.g., 30 minutes to talk about workflow) and change the specific questions we ask and/or designs we show without giving our participants the impression that we’re doing something different from what we told them. By planning ahead and making sure we include research in the CD pipeline, we ensure that we make space to gather the insights we need without limiting ourselves too much, even if the plan changes.
Organization is also key because it speeds up the recruitment process, lets us keep building on long-term research questions (that foundational knowledge again) in every interaction, and lets us pivot on the fly if needed. First, you don’t need to have open research to start organizing users who might participate in future projects. We have a rocking User Panel (XL UP) that includes users from a diverse group of organizations that we can reach out to whenever we need user input. That panel is open to everyone, so we’re always getting new voices and ensuring we don’t overuse any one member. Because we maintain and track the Panel, it’s easy and fast to set up recruitment for interviews and usability tests or to send out surveys. Just like best practices for CD teams, we’re streamlining and automating where we can. Second, you can also track ongoing research questions—foundational or otherwise—and draw on them whenever you have an extra minute during a scheduled session or find yourself with some impromptu user time. That way, you’re conducting research continuously on topics that matter to your organization without having to take time out for dedicated sessions. Building on that, you also need to organize and track the data from those interactions so that you can actually use it! This doesn’t have to be complicated—we use a page template in Confluence to enter all our user interaction notes and leverage the system’s searchable tags to find topic-specific information when we start any new design. (We track foundational data using domain knowledge tags like “DevOps process” and “workflow.”)
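Any tool with templates and tags can support this. As a hypothetical sketch of the same idea outside Confluence (the note structure and example data are invented for illustration, not XebiaLabs’ actual template):

```python
# Sketch: tag-and-search over research notes, a tool-agnostic version of a
# tagged Confluence template. Structure and example data are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ResearchNote:
    session: str                      # who/when the note came from
    summary: str                      # the finding, in one line
    tags: set[str] = field(default_factory=set)

notes = [
    ResearchNote("User Panel interview #12",
                 "Audit evidence is still exported by hand.",
                 {"DevOps process", "auditing"}),
    ResearchNote("Usability test: release dashboard",
                 "Approval steps get skipped under deadline pressure.",
                 {"workflow"}),
]

def notes_for(tag: str) -> list[ResearchNote]:
    """Pull every note on a topic when starting a new design."""
    return [n for n in notes if tag in n.tags]

for note in notes_for("workflow"):
    print(f"{note.session}: {note.summary}")
```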
These systems work together to enable anyone on the Product Team to be responsive and pivot as needed. On a call with a user and have a few extra minutes? Pull out that list of long-term questions! Have a fire to put out? Search Confluence for relevant tags! Need to revisit a previous question in light of new feedback? Grab the research plan and all the associated documentation! You get the idea.
What are your specific tips for teams to get started?
I’ve thrown out a lot of examples, so I’ll summarize them here.
Involve the whole team: research doesn’t have to be handled by a research or a UX team in a silo. Bring in stakeholders, developers, and other organization members to participate in the research process. This builds a better understanding of your users and ultimately supports buy-in for feature designs.
Set expectations for decision-making: when you’re working on a research effort, make it clear what’s ongoing, what’s an open question, what findings you have so far, and what the plan is. Provide guidance on trends you’ve found that teams can act on, and on where they should hold off if they can. Prioritize changes based on what you’re seeing, and make that visible to the rest of the team.
Develop a continuous research pipeline: there is always more to learn about your users, so keep a list of questions you want to answer (a research backlog of sorts) on hand for any opportunity. Treat research as something to be continuously improved, where you can always ask questions and update your data, rather than as a single study that must run start to finish while everything else is on hold. This framing means more research will happen, and you’ll have an evidence-based foundation on which to grow new features.
Diversify your methods: not all research is usability testing, and not all research even needs to be tied to a specific feature. Grow your domain knowledge base by including foundational research (which is conveniently well suited to running as a continuous background effort).
Prioritize, divide, and iterate: just like dev teams break features into tasks to complete and deliver, you can often break research into stages that accompany each kind of change.
The scientific method is your friend: think about your research as experiments. Your first group is the control; then match an “experimental” group to each updated version of the product, starting a new group whenever an update goes live (see the sketch after this list).
Leverage your assets: it isn’t always feasible to grab real customers for small questions during iteration, but that doesn’t mean you should entirely skip validation. Who else can you reach out to? Who in your organization knows your users (e.g., support teams) and could give you 10 minutes?
Track everything: write a research plan to think through your goals, build common ground on your team, and make using the data you collect easy for everyone. If you can use a professional recruiter, that’s great—but in a lot of domains you can’t. Build your own network of users so you know who to reach out to when you need them.
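For the scientific-method tip above, a minimal sketch of cohort-per-release tracking might look like this in pandas; the metric, cohort labels, and numbers are illustrative assumptions:

```python
# Sketch: each release becomes an experimental cohort measured against the
# pre-update control group. All data here is illustrative.
import pandas as pd

sessions = pd.DataFrame({
    "cohort": ["control"] * 4 + ["v2.1"] * 4 + ["v2.2"] * 4,
    "completed_task": [1, 0, 1, 0,  1, 1, 0, 1,  1, 1, 1, 0],
})

# Task-completion rate per cohort; a new cohort starts at each release.
rates = sessions.groupby("cohort")["completed_task"].mean()
print(rates)
print("Lift vs. control:")
print((rates - rates["control"]).drop("control"))
```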