How we designed Booking.com for Business

It’s no secret that Booking.com has a strong data-driven culture. We validate our work through A/B experimentation, allowing millions of customers to have their say in what works best. But quantitative research is not our answer to everything; we adjust our toolset to the problem at hand.

When we set out to build Booking.com for Business, it immediately felt like we were in a startup. Suddenly, the wealth of experimentation data at our disposal wasn’t enough to start building the application for businesses. To kick-off the design work, we needed to know more about our business users’ needs, motivations, and current frustrations. We needed to get out of the building and talk to them.

Initial research

We performed a series of user interviews in several countries with a significant share of business travel. We met with business travellers of all types (from interns to CEOs), along with the people who organise the business trips for them.

We learned that business users have a unique set of needs. Booking a holiday can be a fun pastime on its own, but booking a business trip is part of a job—it needs to be as efficient as possible. There’s also more to business travel than making the booking itself. Companies need an overview of who is going where and how budgets are spent. Existing business travel solutions either don’t satisfy such needs, or they’re expensive and complex.

These and many other insights became the foundation for our work. They were synthesized into a set of user personas that shaped the design of the product as we went on.

Product vision

Armed with knowledge about potential users, we started brainstorming. Our aim was to create a vision of the product that would help our personas accomplish their goals. The outcome of these brainstorms was a list of high-level requirements that were later visualised as a set of wireframes. This tangible representation of our ideas enabled us to have fruitful conversations with stakeholders. The wireframes were high-level enough that we could temporarily set aside details of technical implementation and visual design, but also detailed enough to convey the purpose of each screen.

Minimum viable product (MVP)

The product vision was exciting, but it was just a hypothesis. We didn’t want to spend months implementing something ultimately not useful for customers, and not beneficial to our business. It was important to get the product out there as soon as possible, and to start learning from real world usage. We knew that we already had a great product—Booking.com itself—and we could build on its strengths. With this in mind, we defined the minimal scope that would be sufficient to validate our ideas, and started mapping out the user journey.

Filling the user journey with designs felt like finishing a puzzle. As we progressed, we could see how complete the whole picture was from the design perspective.

However, soon after we started implementation, we noticed a problem. Separate design mockups didn’t provide a true feeling of the user experience. They were static. It wasn’t always clear how the application would respond to user actions, and how one page would transition into another. We found ourselves figuring out these details along the way.

It was also important to get user feedback on design decisions we had made so far. Unfortunately, the actual product was still at the early stage of development, and putting mockups in front of users gave us limited feedback.

Prototype

In response to these issues, we created an interactive prototype that simulated the end-to-end user flow. For example, it was possible to land on the product page, go through the sign-up process, view transactional emails, experience key application features, sign out, and sign back in again. In this way, we solved two problems at once: we now had a tool that would better guide product development and elicit high-quality user feedback.

We kept it lean and didn’t spend much time creating the prototype. We simply placed mockups in HTML files and connected them with hyperlinks. In-page interactions were triggered by bits of basic JavaScript code that showed images of various UI states on click. For example, when the user clicked on an area of the mockup that had a button, the image changed, simulating the interface’s response.
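A screen in such a prototype can be as simple as the sketch below: a full-page mockup image, a plain hyperlink to the next screen in the flow, and a few lines of JavaScript that swap the image on click. The file and image names here are illustrative, not taken from our actual prototype.

```html
<!-- booking-form.html: one prototype screen (file and image names are illustrative) -->
<!DOCTYPE html>
<html>
  <head><meta charset="utf-8"><title>Prototype – booking form</title></head>
  <body>
    <!-- The page is just a full-size mockup image -->
    <img id="mockup" src="img/booking-form.png" alt="Booking form mockup">

    <!-- A plain hyperlink connects this screen to the next one in the flow -->
    <p><a href="confirmation.html">Continue to confirmation</a></p>

    <script>
      // Simulate an in-page interaction: clicking the mockup swaps in an image
      // of the next UI state (for example, an opened traveller dropdown).
      document.getElementById('mockup').addEventListener('click', function () {
        this.src = 'img/booking-form-dropdown-open.png';
      });
    </script>
  </body>
</html>
```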

Some design solutions that previously looked good as static mockups didn’t work so well when presented in the dynamic prototype. Acknowledging this helped us to fix design issues before they reached the product. Participants in user testing sessions were also more engaged with the prototype because it felt like a real product.

But prototypes are not without limitations. It’s hard to do full-blown usability tests with them. They may look real, but not every possible scenario is supported, so facilitators need to carefully steer participants. Maintaining the prototype also becomes tedious over time. We tried to make it easier by separating reusable parts like header, footer, navigation, etc. into include files. We accomplished this by using Jekyll, a static website generator.
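For illustration, a prototype page might look roughly like the sketch below once the shared chrome has been moved into Jekyll includes; the file names and front matter are illustrative rather than taken from our repository.

```html
---
title: Booking form
---
<!-- Shared parts live in _includes/ and are pulled in with Liquid include tags,
     so changing _includes/header.html updates every prototype screen at once. -->
{% include header.html %}
{% include navigation.html %}

<img src="img/booking-form.png" alt="Booking form mockup">

{% include footer.html %}
```

Running `jekyll serve` then rebuilds the static pages whenever a mockup or an include changes.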

The good news is that we didn’t have to rely solely on the prototype for long. The product quickly took shape and it soon became possible to put the real thing in front of users.

User feedback

After the product reached the MVP state, it became easier to get user feedback. Although the product wasn’t yet ready to be publicly announced, we were able to start gathering usage data and feedback from early adopters. We also continued usability testing, because the usage data told us what was happening, but often left us wondering why.

Even with the working product at our disposal, we continued using prototypes to fill the gaps during usability tests. We seamlessly integrated feature prototypes into the live product and switched them on specifically for usability session participants. This helped us to establish whether mocked-up ideas were worth implementing, and also to test parts of the application that were still in development.

Usability labs were not our only test environment. We visited company offices and observed how the product was performing in the real world. These office visits were highly valuable, providing us with insights from observing users in their natural environment. We had a chance to see what tools they used, what workarounds they developed, and how our product would fit their work process. This was absolute gold. Some things that performed well in the lab set-up failed during office visits.

We saw that our users were working in a very busy environment and were constantly distracted. The time they spent making a decision was very short. Plus, they were sceptical about introducing new tools into their work. All this posed a particular challenge for a crucial step in the user journey: the product page. Our users needed tangible proof that the product would do what it promised, and that it was reliable. They wanted to explore the product before making any commitments.

Product page

The research findings informed new product page designs. But we needed confirmation that they would actually solve the problems we had observed. We opted for remote surveys as a method to gather feedback quickly and on a large scale. This enabled us to cover several markets and bring a quantitative component into the research.

Survey participants were presented with various versions of the page and were asked to click and comment on page elements that stood out to them. Afterwards they were asked a series of questions that helped us gauge how well they understood our offering, and how likely they were to sign up.

It took us a few survey iterations to arrive at a version that proved to work well for our users. Had we gone with one of the new designs without testing, we would have ended up with a sub-optimal page at product launch.

Now that we had the fully-tested user journey in place, we could reveal the product to the world.

Final thoughts

Fast forward to today. The product is up and running. We can now make decisions through A/B experimentation, as there is an established and growing user base large enough to support experiments. If we look back on the process that brought us here, this is what comes to mind:

  • In an environment of uncertainty, it was important to remain open to change. We had to think creatively not just about the product itself, but also about how to get there.
  • We were focused on the user from day one and all the way through. Even when we didn’t yet have the complete product, we used the prototype to get user feedback.
  • By combining quantitative and qualitative research methods, we got the best of both worlds. This was and will remain our recipe to continuously improve the user experience.
