Product development: From hypothesis to experiment

2 March 2018

Product development faces the constant challenge of giving the customer a product that exactly meets their needs. In our new blog series, etventure’s product managers provide an insight into their work and approach, with a focus on hypothesis-driven product development. In this second part of the series, they show how to set up an experiment on the basis of a hypothesis.


Formulating good hypotheses is essential for successful product development – as we showed in the first part of our series. And yet it is only the first step in a multi-stage development and testing process: once the team has agreed on a hypothesis, that assumption has to be tested. Experimental design is the crucial factor here.


An experiment serves two aims: it tests what works and what doesn’t, and it collects evidence. That evidence can be crucial for convincing important external and internal stakeholders such as management or investors to invest in further product development (e.g. budget or venture capital). Far too often, far-reaching product decisions are based not on data but on gut instinct. Experiments replace this often subjective viewpoint with evidence.

The right test method

To design a successful test setup, you should first become familiar with the different test methods and choose the appropriate one. The hypothesis or learning objective of the experiment always serves as the starting point: What do I want to learn? How quickly do I need results? Is qualitative data sufficient, do I need quantitative data to convince stakeholders, or do I need both? What resources are available? What data would provide sufficient proof that my hypothesis is valid?

Some of the most common test methods are:

  • User interviews
  • Shadowing: Observation of behaviour in the appropriate context
  • Usability tests with external users or “dogfooding”, i.e. testing your own product yourself
  • A/B and multivariate tests
  • Personal, online or telephone surveys
  • Pretotyping / prototyping, e.g. with interactive click dummies or landing pages that communicate the product promise


For our experiments we often rely on user interviews. They require only short lead times, and the personal conversations frequently produce surprising insights and a deep understanding of the target group’s needs. If we need quantitative rather than qualitative data, we often build landing pages that present the value proposition and drive traffic to them via Google AdWords.

Knowing and avoiding obstacles

Even if the right method is chosen, there are some pitfalls that can be easily avoided if you are aware of them and take the relevant parameters into account right from the start. These are typical mistakes that product developers make in experimental design:

  • Objective missed: The experiment is not tied back to the initial hypothesis, so the findings are not helpful.
  • Unfavourable runtime: The experiment runs forever or is aborted at the wrong time. In both cases we learn nothing and waste time.
  • Long setup time: Building the experiment takes longer than building the intended features themselves.
  • Wrong target group: If the experiment addresses a different target group than the one defined in the hypothesis, the result says nothing about the hypothesis.
  • Paralysis by analysis: The motto “everything should be tested” is taken too literally and even trivial things are questioned unnecessarily. Here too, time is wasted without getting any closer to the goal.

Pragmatic and iterative approach

As with formulating the hypothesis, a pragmatic approach is recommended for experimental design. Working iteratively helps to find the quickest and simplest way to validate or refute the hypothesis and to obtain results that credibly support the product initiative.

One hypothesis we put forward in the first part of our blog series was: the new subject line will increase the open rate among subscribers by 15% after 3 days. The experimental design could look like this:

  • Title: Subject line test with newsletter subscribers
  • Date: 1 March 2018
  • Author: Max Mustermann
  • Objective: Influence of the subject line on the open rate
  • Test method: A/B test, 50% of subscribers in the test group, 50% in the control group
  • Hypothesis: The open rate increases by 15% after 3 days
  • Scheduling: Start at the end of April, expected running time 2 weeks (statistical significance is decisive; see the sketch below)
  • Participants: 1,000 newsletter subscribers
  • Resources: 2 days for 1 developer and 1 marketer
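The note on statistical significance is worth taking literally: whether a 15% uplift shows up as significant depends on the baseline open rate and on the number of subscribers per group. As a minimal sketch of how such a result could be checked (the 20% baseline open rate and the observed opens are purely illustrative assumptions, not data from this experiment), a two-proportion z-test can be computed with nothing more than the Python standard library:

    import math

    def two_proportion_z_test(opens_a, n_a, opens_b, n_b):
        """One-sided z-test: is the open rate in group B higher than in group A?"""
        p_a, p_b = opens_a / n_a, opens_b / n_b
        # Pooled open rate under the null hypothesis of "no difference"
        p_pool = (opens_a + opens_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        # One-sided p-value from the standard normal distribution
        p_value = 0.5 * (1 - math.erf(z / math.sqrt(2)))
        return p_a, p_b, z, p_value

    # Illustrative numbers: 500 subscribers per group, 20% open rate in the
    # control group, 23% with the new subject line (a 15% relative increase)
    p_a, p_b, z, p = two_proportion_z_test(opens_a=100, n_a=500, opens_b=115, n_b=500)
    print(f"control: {p_a:.1%}, test: {p_b:.1%}, z = {z:.2f}, p = {p:.3f}")

With these illustrative numbers the p-value stays above the usual 5% threshold, which is exactly why the design above makes statistical significance, rather than a fixed calendar date, the criterion for ending the test.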

Writing down the test setup has two decisive advantages: the team involved gains clarity about the current status and can adjust the next steps accordingly, and such a list helps in external communication, because important stakeholders get an overview of the key parameters in very little time.

Once the experimental design is complete, it is ready for implementation. Depending on the method, the appropriate preparations have to be made: if you want to interview your users, you should write a short interview guide; for an A/B test, you usually have to set up the appropriate software.
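At its core, an A/B split of a newsletter list amounts to one simple thing: each subscriber is assigned to the test or the control group in a stable, reproducible way before the experiment starts. A minimal sketch of such a 50/50 split (the experiment name and the e-mail addresses are made up for illustration):

    import hashlib

    def assign_group(email, experiment="subject-line-test"):
        """Deterministically assign a subscriber to the test or control group."""
        # Hashing the e-mail address together with the experiment name gives a
        # stable split: the same subscriber always lands in the same group.
        digest = hashlib.sha256(f"{experiment}:{email}".encode("utf-8")).hexdigest()
        return "test" if int(digest, 16) % 2 == 0 else "control"

    for email in ["anna@example.com", "ben@example.com", "carla@example.com"]:
        print(email, "->", assign_group(email))

Dedicated A/B testing tools add reporting and significance calculations on top, but the principle of splitting the audience up front stays the same.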

Drawing the right conclusions

As soon as the experiment is running and the first results are visible, product developers can start the next steps. Is our hypothesis correct or were we wrong? What do we do next? How do we systematically use the new findings? In the third and final part of our blog series, we will address these questions.



Author

Product Manager at etventure
