Creativity Must Guide the Data-Driven Design Process

Collecting data about design is easy in the digital world. We no longer have to conduct in-person experiments to track pedestrians’ behavior in an airport terminal or the movement of eyeballs across a page. New digital technologies allow us to easily measure almost anything, and apps, social media platforms, websites, and email programs come with built-in tools to track data.

Lately, data-driven design has become increasingly popular. As a designer, you no longer need to convince your clients of your design’s “elegance,” “simplicity,” or “beauty.” Instead of those subjective measures, you can give them data: click-through and abandonment rates, install counts, retention and referral numbers, user paths, cohort analyses, A/B comparisons, and countless other analytical riches.

After you’ve mesmerized your clients with numbers, you can draw a few graphs on a whiteboard and begin claiming causalities. Those bad numbers? They’re showing up because of what you told the client was wrong with the old design. And the good numbers? They’re showing up because of the new and improved design.

But what if it’s not because of the design? What if it’s just a coincidence?

There are two problems with the present trend toward data-driven design: using the wrong data, and using data at the wrong time.

The problem with untested hypotheses

Let’s say you go through a major digital redesign. Shortly after you launch the new look, the number of users hitting the “share” button increases significantly. That’s great news, and you’re ready to celebrate the fact that your new design was such a success.

But what if the new design had nothing to do with it? You’re seeing a clear correlation—two seemingly related events that happened around the same time—but that does not prove that one caused the other.

Steven D. Levitt and Stephen J. Dubner, the authors of “Freakonomics,” have built a media empire on exposing the difference between correlation and causation. My favorite example is their analysis of the “broken windows” campaign carried out by New York City Mayor Rudy Giuliani and Police Commissioner William Bratton. The campaign coincided with a drop in the city’s crime rate. The officials naturally took credit for making the city safer, but Levitt and Dubner make a very strong case that the crime rate declined for reasons other than their campaign.

Raw data doesn’t offer up easy conclusions. Instead, look at your data as a generator of promising hypotheses that must be tested. Is your newly implemented user flow the cause of a spike in conversion rates? It might be, but the only way you’ll know is by conducting an A/B test that isolates that single variable. Otherwise, you’re really just guessing, and all that data you have showing the spike doesn’t change that.
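
To make the A/B point concrete, here is a minimal sketch, in Python and with purely hypothetical numbers, of the kind of significance check an isolated test supports. The function name and the session and click counts are illustrative, not drawn from any particular analytics tool.

```python
# Minimal sketch: two-proportion z-test for an A/B comparison of share-button clicks.
# All counts below are hypothetical; substitute numbers from your own analytics.
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the difference in
    conversion rate between variant A (control) and variant B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))    # two-sided, via the normal CDF
    return z, p_value

# Hypothetical run: 12,000 sessions per variant, share-button clicks counted.
z, p = two_proportion_z_test(conv_a=540, n_a=12000, conv_b=636, n_b=12000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Even then, a small p-value only says the lift is unlikely to be chance under the test’s assumptions; it still doesn’t tell you why users behaved differently.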

Data can’t direct innovation

Unfortunately, many designers are relying on data instead of creativity. The problem with using numbers to guide innovation is that users typically don’t know what they want, and no amount of data will tell you what they want. Instead of relying on data from the outset, you have to create something and give it to users before they can discover that they want it.

Steve Jobs was a big advocate of this method. He didn’t design devices and operating systems by polling users or hosting focus groups. He innovated and created, and once users saw what he and his team had produced, they fell in love with a product they hadn’t even known they wanted.

Data won’t tell you what to do during the design process. Innovation and creativity have to happen before data collection, not after. Data is best used for testing and validation.

Product development and design is a cyclical process. During the innovation phase, creativity is often based on user experience and artistry — characteristics that aren’t meant to be quantified on a spreadsheet. Once a product is released, it’s time to start collecting data.

Perhaps the data will reveal a broken step in the user flow. That’s good information because it directs your attention to the problem. But the data won’t tell you how to fix the problem. You have to innovate again, then test to see if you’ve finally fixed what was broken.

Ultimately, data and analysis should be part of the design process. We can’t afford to rely on our instincts alone. And with the wealth of data available in the digital domain, we don’t have to. The unquantifiable riches of the creative process still have to lead design, but applying the right data at the right time is just as important to the future of design.

8 comments

  1. Kind of playing both sides there: do user testing before creating a product, but users really don’t know what they want. Steve Jobs went off his gut, so be creative and let post-release analytics show where the issues are.

    I get that it’s a tough, non-black-and-white kind of thing. Maybe building out the rationale more, with testing as an influence on decision making rather than a mandate for what needs to be done, would relate better to both UX and non-UX professionals.

  2. I’m not sure that any serious designer would say that just because the stats got better after a re-design then it means the improvement was due to the design change alone. But I’m willing to believe some might. They’ll be ex-designers pretty soon so it hardly matters much.

    Be that as it may, I’m not sure what your point really is here. People who don’t understand probability and statistics shouldn’t be using them in the design process. That’s obvious, isn’t it? You may as well write an article about how blind people shouldn’t drive cars.

    I think in general, though, all designers need to understand is that unless they personally know how to calculate a p-value from test data, they should just ask an analyst what they think before making any claims about it. And after that, you need to bear in mind that data can only tell you what happened, not why.

  3. I agree with Jonathan in that I’m not really sure of the point, either. And I especially don’t agree with the premise that we have at our fingertips all the data sources we need through web analytics and numbers, and that this is all about an improper understanding of causation vs. correlation. Of course such methods will not show causation; we have no idea WHY users are doing what they are doing. It’s common sense that these are unreliable, but not because of how people are interpreting them; it’s because of the type of data people are expecting from such tools. So, isn’t that a bit like “throwing the baby out with the bathwater” when it comes to getting user data? What about reexamining the tried and true methods of UCD (user-centered design)? If we do that, it doesn’t really square with this notion to just get creative and wait until the product is released to test. Really?
    Getting user data does not mean you are not using your innovation chops. It is a matter of knowing when to get creative and when to get data to validate your ideas; it’s a juggling act, and an important one if you want success. And user “research” doesn’t have to be a bunch of complicated methods; think “lean.” As Jakob Nielsen has always said, test simple prototypes on just 3-5 users, and if done properly (using think-aloud, keeping questioning open, not leading, all best done with a skilled moderator) you will get a wealth of information, and, I would add, much faster than creating a bunch of magical designs in the dark which then require rework later in development if not accepted. And it’s pretty risky to go without that, prior to release, in today’s customer-driven, usability-focused competitive landscape.

    Steve Jobs is touted as the “hero” of innovation, but let’s be clear about what that means, and not get “correlation” wrong here. 🙂 Innovation in that context was inventing brand new things for mass consumption. So, in a way, it was valid that the whole market was his test bed. But let’s not confuse that notion of innovation with the life cycle of a typical product. 99% of the time these are not “invented” products; they are offshoots of some other familiar functionality. Testing only after release is just counterintuitive to user-centered design and is one of the biggest impediments to success in a world demanding simplicity.
    We just can’t keep singing the praises of user-centered design methods, and then completely ignore them.
    And finally, I cannot let the focus group reference slide. “Focus group” is a term bandied about with user research, but let’s be clear: a focus group might be OK for coming up with a brand, but it has nothing to do with getting interface and interaction data.

  4. I think this article is very valuable — for non-experts. I’ve seen plenty of people claim that “the analytics says this design wins, so it must be the right one” in situations that simply are not that black-and-white.

    We can’t get teams to support user-centered design if they believe that A/B testing on whole-site redesigns is the best determinant for success.

  5. How about calling it “Data-Driven Refinement” instead of “Data-Driven Design”?

  6. There’s an ethical discussion here. Critiques of mHealth apps note various challenges and research gaps in industry user requirements. Of particular concern are ethical issues where technology is favored over interpersonal supports, potentially reducing access to quality health support for people at risk, you know, like old people and kids. That’s good cause for considering industry policy within data design, so you’re informed about your users’ needs. Innovations that actually solve this problem will be focused on how real people use tech, not on how tech can inform us about some people.

  7. Thanks for sharing this post. It answered my question about the data-driven design process.
