Testing Hypotheses: Serious Play

    By: Larry Marturano on May 25, 2012

    In my last few posts, I’ve been drawing an analogy between the scientific method and customer-centered design, and showing how the two are intriguingly close in practice and intent.

    I’ve shown how up-front user understanding in design is just like the scientist’s initial observation and characterization, and how prototypes are actually embodiments of hypotheses about underlying needs and technical solutions. In this post, I’ll look at testing and iteration.

    In the scientific method, once hypotheses are generated, they’re first tested via experimentation, and the data collected is used to support, modify, or eliminate the hypothesis. Hypothesis modification and experimentation are iterated as the process continues. Eventually, enough confirmation turns a hypothesis into a theory.

    In design, the analogy to hypothesis testing via experimentation is prototype testing via field evaluation with users. And, iteration is, well, iteration.

    Let’s unpack that important thought.

    First, what prototypes are we talking about?

    As I wrote about previously, it’s early-stage, low-fidelity prototypes that I’m talking about here: not technical feasibility exercises or early versions of the final product, but prototypes aimed at conveying the core concepts to users. They’re rough, cheap mockups that are easy and quick to produce.

    Okay, so what does it mean to “test” such crude embodiments?

    There are several important aspects. First, when designers test prototypes, we’re looking for user reactions to the concepts themselves. We’re trying to confirm or adjust our own understanding of the core value proposition of the solution—whether it meets the underlying needs that we’ve hypothesized from our observations. In other words, instead of technical feasibility, it’s conceptual feasibility we’re looking for.

    And at the very early stages of development, we’re looking to gather quick, qualitative data. We’re trying to get in the right neighborhood with our ideas, conceptually speaking. We’re not worried, at this stage, about quantitative statistics or design of experiments. That comes later, once the product or service concept and value proposition are validated. Do the quick, cheap, basic evaluation first; do the relatively more expensive and time-consuming quantitative testing later, once qualitative iteration on the concept has improved the odds of success.

    This qualitative-to-quantitative progression is one place where the science/design analogy is imperfect: in science, we usually think only about the quantitative half of the idea. That’s why my engineering brethren sometimes get stuck worrying about sampling theory and statistical significance. We’re just not at the point in the process where those concerns apply yet.

    The second important aspect is that we want to do our evaluation in the field, not in a lab. Customers should be encouraged to “use” the prototype in their work or play settings, retaining the real context of use so as to uncover the actual value in situ. In Contextual Design, we often have customers do a bit of their work practice using their existing tools, and then do the same work using the prototype; the idea is to get as close to the actual context of use as possible.

    Perhaps more important is realizing what we’re NOT doing at the early stages of design. We’re not trying to evaluate the usability or the design of an interface. We’re not trying to choose the color, the materials, or the final form factor. We’re not trying to judge technical performance. Our goal is to gather feedback as quickly as possible on what the user sees as the core values.

    Put another way, with our early prototypes we’re trying to avoid what my late friend and mentor Jim Wilson called our “physics fixation”: our tendency to concentrate on what a product IS rather than on what it DOES. Our first prototypes should establish and validate the DOES-ness of the product or service. Later prototypes deal with the IS-ness.

    A non-obvious corollary here is that all of this activity—from user observation through prototype testing—is in fact an exercise in requirements definition and validation.

    Critics of the “fuzziness” of front-end methods often complain that this activity needlessly lengthens the development process. However, a raft of statistics on the causes of project and product failure indicates that while relatively little time and money are spent in the front end, an inordinate proportion of failures are due to missing or incorrect requirements. A stitch in time does indeed save nine. Our experience with the Agile movement has also taught us that requirements evolution is not necessarily limited to the “front end,” and that customer involvement in design is perfectly consistent with methods like Agile.

    A couple of last thoughts.

    Much has been written about the inherent goodness of design simplicity, as if this were a concept Steve Jobs or Sergey Brin invented. Extending our analogy, scientists will recognize the goodness of simplicity as an application of Occam’s razor: adopt the simplest hypothesis that explains the data. Even in the fourteenth century, Frustra fit per plura quod potest fieri per pauciora (“it is futile to do with more what can be done with fewer”) held water.

    Lastly, I can’t leave this (or indeed, pretty much any other) subject without mentioning the role culture plays. Companies that have the best success with new product introduction have not only processes but also a real culture of experimentation. Michael Schrage calls simulation and experimentation “Serious Play” and spends an entire book describing the culture of prototyping he finds in many successful companies: one marked by a willingness to try new things, and by management, reward, and recognition systems that treat failure as learning rather than as an opportunity for reprimand and punishment. The resulting lack of fear of new things is an underlying cultural value in many innovative organizations.

    So the steps in the scientific method seem to line up very well with the steps involved in customer-centered design. User observation in design is analogous to scientists’ observation and characterization of natural phenomena. Scientists generate hypotheses to explain and predict new observations; designers create prototypes that embody hypotheses about user needs and design solutions. In both disciplines, the essential creative step is inductively recognizing patterns and developing hypotheses; in this regard, neither discipline is more “magical” or “structured” than the other. Both disciplines test and iterate.

    Successful scientists end up with theories that explain a wide range of phenomena and change science. Successful designers end up with products that meet a broad range of needs and change people’s lives.

    - by Larry Marturano, Director, Chicago Office at InContext Design.
    follow us on Twitter @incontextdesign

    Keywords: PDMA Blog | Iteration | Scientific Method

