Early stage prototyping patterns and anti-patterns

Background

Over the last year I've been working on a live data prototype team tasked with finding a repeatable process for sufficiently reducing risk in new design and product ideas. Over time, we've found that certain projects are too complex for paper prototypes or even for live code that uses mocked data. We can engage a small engineering team, but that has problems. For starters, it's tempting for engineers to do what they do best: write great code. That's great when you're marching toward a shippable product, but it severely slows things down when you're still trying to figure out what to build.

We wanted to experiment by building a small team that cares more about speed than quality. Does focusing solely on the defined problems and hypotheses, ignoring code quality and design polish, lead to the correct solution faster? If so, does that result in great product shipping faster?

We're still answering the question of whether final product ships faster. However, we've arrived at a set of patterns and anti-patterns that have had great results in reducing risk enough to be confident in the proposed solution.

These patterns speed up the prototyping process, validate hypotheses efficiently and cheaply, and accurately determine the minimum interactions necessary for new product features. They focus on early stage prototypes of product designs, and they likely won't apply to late stage prototypes or to prototypes intended to validate engineering solutions rather than design solutions.


Patterns

Value speed over quality

In early stage prototyping, it's more important to get to the correct design and required interactions as efficiently and cheaply as possible than it is to have code that is well designed, performant, and readable. A design prototype should never be used to validate an engineering implementation, so ensuring performance at scale is not a priority for a prototype. Further, a prototype can be ugly and buggy as long as the workflow being tested is usable.

To drive this point home, the motto that hangs on our wall is "Write bad code fast."

This is not to say that well engineered products are not important. However, it's important to prove what needs to be built for the first release before it is actually built.

Domain specific applications

Focusing on testing solely the highest risk items is critical in early stage prototyping. Sometimes, to maintain that hyper focus, it's worth building an application to test an idea in isolation from the application the idea is intended for. For example, during one project where we were working on a filtering language, we needed a way to rapidly develop a user friendly language that would cover all the types of queries users needed. We also needed to develop an interface for entering queries. Instead of prototyping the feature in the product it was intended for, which would have taken considerable time, we built a smaller inventory application in three days. When testing, we appeared to be testing the inventory application, but in reality we were focused on how users interacted with the filtering interface. Building a small application greatly reduced the time necessary to iterate on the design.

Follow the evidence

The goal of design prototyping is to sufficiently reduce risk for a design idea as quickly as possible. Since building is expensive, even when you care far less about code quality, it's important to only build prototypes that have evidence they're necessary and correct. Building a product or feature without evidence of its necessity is not a design strategy, it's a product strategy, and a bad one. While trying multiple designs is highly encouraged, each design must have evidence that it may successfully solve a real, documented problem.

Evidence should come directly from users when possible. For example, it can be obtained from discovery interviews or usability tests.

RITE (Rapid Iterative Testing and Evaluation)

RITE is a philosophy that testing should be continuous, rapid, and result in immediate changes. RITE encourages continuous testing of in-flight development and values evaluation immediately after learning from each test. This is in contrast to designing and implementing to a certain point, then conducting multiple tests of the design (without changes between tests), and only then making changes based on the cumulative results. RITE promotes steady iterative changes over a short amount of time.

While it may sound like RITE promotes reacting to superficial data, early stage prototyping ends up finding the low hanging fruit in solutions. Putting a functional early design in front of a user tends to reveal the otherwise obvious, as opposed to nuanced interactions that require complex analysis of large amounts of user data.

Design re-evaluation

Since design overhauls are difficult when using RITE, it's important to have a stopping point to allow time for redesigns, if necessary. This opens the door for design inspiration without waiting around for inspiration to hit.


Cadence

As soon as the prototype can complete a single happy path, it should be tested. Even if the design isn't done being implemented, it's critical to get feedback as early as possible. The ideal cadence is three tests every week.

Purposeful constraints

The natural instinct when designing products is to build out what you believe to be minimally necessary and reduce as you learn. Further, when you're not sure what is needed for a user to accomplish a task, it's natural to take a stab at a solution and test it, hoping to learn more from the test.

The proper response to both of these instincts is omission, not creation. Forcing a user to attempt a task whose affordance you've purposefully omitted causes the user performing the test stress and pain. By intentionally causing that pain, you get them to talk more openly about what they expect to find and how they expect to accomplish the given task. This not only provides less biased insight into their expectations, but also provides the necessary evidence for the next step in design.

Defensibility

Good design is always apparent after the fact, but never before. It's important to show where a design started and the different paths taken, including the failed ones. Failed designs are particularly important to show. It's important for Engineering and Product to understand why the proposed design is necessary and why it should be engineered over other designs that are perhaps easier to build. Defensibility lends credibility to the proposed design.


Anti-Patterns

Anticipating user needs

Anticipating user needs at first glance sounds like it should be a pattern, not an anti-pattern. However, it's important to resist capturing all the different ideas you could implement and test. Only capture the needs you have evidence for. Capturing what you think users will need is making up user needs rather than learning them. If a user need truly is necessary, it will become apparent through following the evidence and applying purposeful constraints.

Peer code review

Since design prototyping values speed over code quality, every moment not spent building a tool to validate a hypothesis is wasted. Therefore, any time spent reviewing code is wasted time. Further, since all code is disposable as soon as the hypothesis is validated, the maintainability, readability, and robustness of the code are irrelevant.

Code branching

Code branching promotes working on a piece of the codebase in isolation until it's ready to be used by the rest of the team or test pilots. It does not promote moving quickly. It is recommended that prototype projects never fork or branch the codebase. Everyone commits to a single master branch in a single central repository. Further, everyone is encouraged to push their changes at a rapid pace, even if the work is incomplete.

What is a user story and what does it represent?

I recently had a conversation with a project manager about what a user story is and when it should be created. His view is that traditionally in Scrum shops, the epics and user stories are created just before being handed to Engineering for implementation. In this model, epics and stories exist solely to prevent Engineering from creating software for software's sake. While I can understand that view, I find it far too limiting.

A user story is a hypothesis of a feature that, if delivered to customers, will provide them with quantifiable value. I say it's a hypothesis because Product can never know if what is delivered to customers will provide the value expected. **Through an extensive process of user testing, a story can turn from a hypothesis to a theory, but it can never be fact.**

What's needed is a way to have a single, small, well defined piece of user value that can be talked about and passed through a series of tests to reduce risk of delivering something other than the highest possible user value. User stories should be the first thing a product manager writes and should guide UX design, marketing messaging tests, UX user interviews, Engineering sprints, and post-ship usage metrics. **Every task during the validation, design, usability testing, building, functional testing, and market feedback measuring of a feature should be tied back to the user story that defines it.**

Once a PM has an idea for something to deliver to users, it should immediately be captured as a story. Capture who the user is, what you think they want, and why you think they want it. Write it entirely from the user's perspective. For example:

As an online shopper
I want to save my payment information on the site
so that I don't have to find my card every time I make a purchase
so that I don't have to always have my card nearby when I want to make a purchase
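
One lightweight way to capture this is as a small data structure that every downstream task can reference. Below is a minimal sketch in Python; the `UserStory` class and its field names are my own illustration, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    """A hypothesis of user value: who the user is, what we think
    they want, and why we think they want it."""
    user: str                 # "As a(n) ..."
    want: str                 # "I want ..."
    so_that: list[str] = field(default_factory=list)  # one entry per reason

story = UserStory(
    user="online shopper",
    want="save my payment information on the site",
    so_that=[
        "I don't have to find my card every time I make a purchase",
        "I don't have to always have my card nearby when I want to make a purchase",
    ],
)
```

Keeping the story in a referenceable form like this makes it easy to tie each discovery interview, usability test, and usage metric back to the specific **so that** clause it's meant to validate.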

Guiding User Discovery Work
-------------------------------------------

Now that the hypothesis has been captured, we can easily construct user discovery interviews. Some example interview questions designed specifically to test the **so that** sections of the above user story:

* What is your typical checkout experience like?
* How much time do you typically spend entering your credit card information during checkout?
* Describe a time when you wanted to make a purchase, but didn't because your credit card wasn't nearby

The answers may surprise you, and you should be open to being surprised. In the above interview questions, what if 80% of the interviewees said they used 1Password and therefore never had to find a credit card or take more than one second to enter their payment information? It would mean the story doesn't represent the user value expected. It also means that if your current company goals are rapid customer acquisition, the user story should go back in the backlog until a later time when product polish might be more important.

Or perhaps several of the interviewees said they never make purchases away from home because they never carry the credit card they use for online purchases outside the house. What the user wants would be the same, but the **reason** would be different from what you expected. The story's **so that** sections would then change.

Guiding Design and Engineering Tests
--------------------------------------------------------

User stories should be a critical piece of UX design. Having a clear definition of *why* a user wants to do something is critical to getting the design of the product correct. During UX development, workflows and visuals are created to fulfill users' needs to perform tasks. Those tasks are derived from the user stories that define the feature requiring the workflows. Therefore, when doing usability testing, ask the user to perform the very tasks outlined in the story. Track changes in the design against the user stories being developed so you can easily see the progression of the design from the perspective of the story.

The benefits for interfacing with Engineering are immense. User stories provide a clean interface through which Engineering has complete freedom over how software is implemented, but little control over what or why something is built. With that said, Engineering should have input in the creation of the stories from the very beginning. Almost every story that is written will change or be thrown out. Engineering's input during this process is crucial to knowing what's technically possible to deliver and to getting early high level estimates of cost. A later blog post will dive into how to determine the cost of a user story.

Acceptance criteria are how Engineering and Product agree the story is done being built. In the story above, how does Engineering know which credit cards to support? Is there a limit on how much can be purchased at one time? How about a minimum? These are business driven behaviors that aren't necessarily relevant to the user and therefore shouldn't be stories themselves.

I've recently developed the belief, though I haven't tried it yet, that the product manager should write the actual acceptance test code. This removes any chance of misinterpretation and gives Engineering a simple interface for gaining confidence that a story is done and done correctly. Once I've tried this over a significant period of time, I'll write a quick post on the results.
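
As a sketch of what PM-written acceptance tests might look like, here's a pytest-style suite in Python for the payment-information story. The `save_card` function is an inline stand-in for the real implementation Engineering would provide, and the supported-network rule is an assumed business rule for illustration:

```python
import pytest

SUPPORTED_NETWORKS = {"visa", "mastercard"}  # assumed business rule, not from the story

class CardRejectedError(Exception):
    """Raised when a shopper tries to save an unsupported card."""

def save_card(network: str) -> bool:
    """Stand-in for the real implementation Engineering will provide."""
    if network not in SUPPORTED_NETWORKS:
        raise CardRejectedError(network)
    return True

def test_supported_card_networks_can_be_saved():
    # Encodes the "which cards do we support?" acceptance criterion
    for network in SUPPORTED_NETWORKS:
        assert save_card(network)

def test_unsupported_card_network_is_rejected():
    # The rejection path is acceptance criteria, not a story of its own
    with pytest.raises(CardRejectedError):
        save_card("amex")
```

When Engineering swaps in the real code behind these tests, the passing suite becomes the agreed-upon signal that the story is done.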

Guiding Marketing Tests
-----------------------------------

As you can imagine, marketing messages should be derived as much as possible from the user stories. Tying marketing campaigns and messages to the stories keeps the messaging grounded in user value, and it has the added benefit of making marketing tests trackable. For example, I recently had an Epic that I wanted to test some marketing messages for. I wrote a blog post outlining the concept I wanted to build (at a very high level) and worked with my product manager and marketing editor to come up with a few messages designed to test different ways of thinking about the feature's benefits. We sent the various messages out at different times across social media and measured which messages brought the most traffic, and which of that traffic read the longest before leaving. That exercise validated the feature and the marketing message in one. It was measurable, defendable, trackable, and even led to a few more ideas to include in the product.

Shipping Begins the Final Test
---------------------------------------------

Only once a feature ships can you begin to test the hypothesis the story represents. There are assumptions made about how the user will use the feature: how often it will be used, expected workflow paths, performance expectations. Write down all of these assumptions, construct tests against the usage metric data you collect (however you collect it), and make the results easily visible through Graphite or something similar.
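
For example, Graphite's Carbon daemon accepts metrics over its plaintext protocol (`metric value timestamp`, one line per data point, typically on TCP port 2003). A minimal sketch in Python, with the host and metric path as assumptions for illustration:

```python
import socket
import time

def record_metric(path: str, value: float,
                  host: str = "graphite.example.com", port: int = 2003) -> None:
    """Send one data point to Carbon using the plaintext protocol."""
    message = f"{path} {value} {int(time.time())}\n"
    with socket.create_connection((host, port)) as sock:
        sock.sendall(message.encode("ascii"))

# Count each completed run of the saved-payment-info workflow,
# keyed by the story it validates.
record_metric("stories.saved_payment_info.checkout_completed", 1)
```

Naming the metric path after the story keeps the post-ship data tied back to the hypothesis it's testing.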

Like all tests involving users, you should hope to be surprised. Surprise means you can dive in to further understand your users and their world. It allows you to further home in on what to build to make their lives easier or more productive.

Guiding Documentation
-------------------------------

Documenting the feature in story format can guide the user through your documentation based on their mental model of how the software works, not on the actual model the software uses. This should also apply to APIs. A great example is how Thoughtworks constructs their documentation layout: http://www.thoughtworks.com/products/docs/go/13.2/help/


I hope I've convinced you to look at user stories as something that should undergo constant testing, even after shipping. User stories are your value and therefore should define everything you do. While sometimes that is impractical, not doing so should be the exception rather than the rule.