eMetrics Keynote: Optimizing eBay – Improving Customer Experience at the World’s Marketplace
Happy Tuesday. Time for a morning keynote where everyone gets a free coffee mug and is then blinded by strobe stage lights. Yay! Jim Sterne welcomes Elissa Darnell and Deepak Nadig. I think. His voice is really distorted over the microphone. Elissa and Deepak are from eBay.
Up first is Deepak Nadig. He’s going to give us an overview of eBay. The microphone is really bad. This should be interesting.
In the beginning, eBay started out as an experiment. The founder had a broken laser pointer and thought it would be interesting to start an auction to see if anyone wanted to buy it. He set the bidding price at $1. At the end of two weeks, someone had paid $40 for the broken laser pointer. He contacted the guy to make sure he knew it was broken. The guy said he knew and that he collects broken laser pointers. An industry was born!
In 1996, eBay had 41,000 users. In 2008, they have 276,000,000 users with over a billion photos. The site gets more than a billion page views a day. An SUV is sold every five minutes. A sporting good sells every 2 seconds. Over half a million pounds of kimchi are sold every year. It’s in 39 countries, in seven languages.
Elissa jumps in. eBay is really about the people who use it. To optimize the user experience, they need to know who they are and what they do. Users trade in over 50 thousand categories on eBay. They want people to have a fun shopping experience and to find good value. They want to help sellers find buyers. They want to know more about their buyers and sellers – what motivates them, how often do they use the site, why do they go to other sites, etc. They break their buyers and sellers down by experience, frequency of use, and lots of other metrics.
What do they mean by user experience? They’re talking about things like the utility of the site, usability, desirability, and the brand experience.
They use an assortment of research methods, including lab testing, field visits, participatory design, surveys, eye tracking and card sorting. They want the users to help design the user experience.
For lab testing they bring in representative users individually into their usability labs. They observe them perform assigned tasks. They use either prototypes or the live site itself to test. This enables direct observation of target users as they interact with the Web site or a design prototype. They’re able to identify areas that are confusing and potential fixes. Testing is done iteratively through the design process.
Sometimes they’ll do low-fidelity (paper) lab testing, where the designs are shown on paper. The researcher or designer acts as the computer and the participant uses their finger as the mouse.
They also do RITE testing – Rapid Iterative Testing and Evaluation. It forces more rapid testing and retesting of the design based on a very small sample.
They have a program called Visits, also known as Field Study or Ethnography, where eBay employees go to their customers’ homes and watch them use eBay. They’ve taken their CEO, CMO, designers, finance people, etc. into people’s homes. They want everyone to understand the customers.
Visits involve going into people’s homes and spending several hours watching them use eBay. They don’t give them a set of tasks; they just let them do what they would normally do. The eBay rep takes notes or films video. The findings are summarized across participants. It reminds eBay employees that they are not their customers, even though they have experience using the site. They have to get to know their real customers.
When they do Visits, they get to see people use eBay. They see them on their computers, switching between computers, what kind of connection they have, what kinds of equipment they use with the site, etc. They see them with life’s normal interruptions (talking parrots and messy desks). They see that people weigh objects using their bathroom scale. People let their guard down when you go into their homes. They followed people to the post office to see how they ship their items.
Questions they focused on during the Visits:
- What is the larger context of use?
- What issues exist and WHY?
- What can we do to address the issues?
The visits are not about the numbers, or the question “how many users experienced that?”
To optimize the user experience they do research. In the beginning they do strategic research to inspire (field visits, competitive evaluations). Then they do design research to inform and assessment research to track.
Case Study
They redesigned the View Item page this year. They wanted to increase bid/Buy It Now (BIN) efficiency and to improve the user experience by reducing complexity and clutter. They used a combination of qualitative and quantitative techniques, drawing on research from multiple teams.
She shows us what the View Item page used to look like in 2000 and how it’s evolved over the years. A new design will be unveiled soon.
Research Overview
- Understand the User Needs: They conducted a lab study to understand the user experience. They also did participatory design studies to try and come up with the ideal design.
- Concept Testing: Held focus groups.
- Iterative Design: Used Rapid Iterative Testing to gauge user reaction.
- Visual Design Research: Used a desirability study. Tested the tabs, a new visual element they added.
- Implemented a Diary Study: Had users give input about how they’re using the site at home.
How do they know the View Item page will be a success? Research was conducted throughout the product lifecycle to evaluate the current strategy and design at each stage. They had focus groups on the early concepts. They did desirability studies, and used eye tracking to help them pick the best tab design.
Deepak is back up.
Experimentation Lifecycle
- Hypothesis: Idea and learning
- Experimental Design: DOE (design of experiments), define samples, treatments, factors, etc.
- Setup Experiment: Set up experiment samples, treatments, factors, implementation
- Launch Experiment: Serve treatment
- Measurement: Tracking, monitoring
- Analysis & Results: Metrics, reporting
Automation
- Dynamically adapt experience: Choose page modules and inventory which provide the best experience for that user and context. Order results by a combination of demand, supply and other factors.
- Feedback loop enables the system to learn and improve over time: Collect user behavior, aggregate and analyze it offline, deploy updated metadata, decide and serve the appropriate experience.
- Best Practices: “Perturbation” for continual improvement. Dampening the feedback loop.
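Here’s a rough sketch of how “perturbation” and a dampened feedback loop might fit together. Again, this is my own illustration rather than anything shown in the talk; the module names, click-through rates, and constants (EPSILON, ALPHA) are invented.

```python
import random

EPSILON = 0.05   # "perturbation": fraction of requests served a shuffled order
ALPHA = 0.2      # dampening: weight given to the newest batch of behavior data

class ModuleRanker:
    def __init__(self, modules):
        # smoothed engagement score per page module, updated offline
        self.scores = {m: 0.0 for m in modules}

    def update(self, batch_rates):
        """Offline step: blend the latest aggregated click-through rates into the scores,
        dampened so one noisy batch can't whipsaw the ranking."""
        for module, rate in batch_rates.items():
            self.scores[module] = (1 - ALPHA) * self.scores[module] + ALPHA * rate

    def serve_order(self):
        """Online step: usually serve the best-known order, occasionally perturb it
        so the system keeps gathering evidence about other orderings."""
        modules = list(self.scores)
        if random.random() < EPSILON:
            random.shuffle(modules)
        else:
            modules.sort(key=self.scores.get, reverse=True)
        return modules

if __name__ == "__main__":
    ranker = ModuleRanker(["related_items", "seller_info", "shipping", "reviews"])
    ranker.update({"related_items": 0.08, "seller_info": 0.03,
                   "shipping": 0.05, "reviews": 0.02})
    print(ranker.serve_order())
```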
What they think about during an experiment: the fidelity (how representative is it of the product?), the cost (total cost of designing, building, running and analyzing an experiment), the iteration time, the concurrency (how many experiments can be done at the same time?), the signal/noise ratio, and the type/level of the experiment.
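That signal/noise point is why duration matters so much: small lifts need a lot of traffic before they rise above the noise. A back-of-the-envelope sample-size sketch (mine, not from the talk; the rates and lifts are made up):

```python
import math

def users_per_arm(p_base, lift, z_alpha=1.96, z_beta=0.84):
    """Approximate users needed per treatment for a two-proportion test
    at 95% confidence and 80% power."""
    p_new = p_base + lift
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / lift ** 2)

# Detecting a 2-point lift on a 10% conversion rate needs a few thousand users per arm;
# detecting a 0.5-point lift needs tens of thousands, hence longer-running experiments.
print(users_per_arm(0.10, 0.02))    # ~3,800
print(users_per_arm(0.10, 0.005))   # ~57,700
```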
Challenges to Experimentation
- Stickiness to the user
- It gives you the “what”, not the “why”
- Duration and long term effects
- Minor vs Major differences
- Extent of generalization
Qualitative research such as lab tests and field visits gives us rich data about usability problems, discoverability, navigation, terminology, and more complex problems.
Understanding the customer experience requires insights into what customers do, why they do it, their attitudes, motivations, etc. Qualitative and quantitative methods both have their advantages and limits. Using them together helps you gain a holistic understanding of the user experience.