1. Searching for X: Search Interface Usability – Lynn Leitte, User Experience Designer / Information Architect. J. Boye Conference, Philadelphia, PA, May 3-5, 2011
6. Information Need "An information need is a recognition that your knowledge is inadequate to satisfy a goal that you have." "Information seeking is a conscious effort to acquire information in response to a need or gap in your knowledge." Donald O. Case, Looking for Information, 2002
9. Information Need Examples I need a read on Motorola Android development because I am considering whether to invest in an open source mobile app company. « Motorola Android development » and « telecommunications product manager » I need to know how common oil pan failures are on Honda cars approximately 10 years old because I am in a dispute with the service department. « Honda service 10901 » and « oil pan stripped threads »
18. GLG Search Results Before
26. GLG Search Results Layout preferred by Design & Product
27. GLG Search Results Layout preferred by Users
28. GLG Search Results After
29. GLG Search Results After
30. GLG Search Results Later rollout changed the calls to action and added list functionality
32. Thank You! Time for Q&A [email_address] LinkedIn » Lynn Leitte
33. Search Resources Search Patterns by Peter Morville & Jeffery Callender Search User Interfaces by Marti A. Hearst Information-Seeking Behavior in the Digital Age: A Multi-disciplinary Study of Academic Researchers by Xuemei Ge Information Seeking Behavior in New Searching Environments by Colleen Cool, Soyeon Park, Nicholas Belkin, Jurgen Koenemann and Kwong Bor Ng
Editor's notes
This talk is about internal site search: search once someone is already in your site or intranet, and the results-listing data and interactions. It is not about SEO or paid ad words, and not about Google, Bing, or Yahoo.
Quick overview of basic search behaviors Present the idea of an “information need” High level points about what these mean for your UI Case study of some search UI work that I did for Gerson Lehrman Group
Cognitive model (Norman 1988) 1. Execution – entering the query into the search. The cognitive process is the "gulf of execution": the gap between what you did and your goal. 2. Evaluation – evaluating the results. The "gulf of evaluation" is the comparison of the results with what was expected.
Standard model (Salton 1989) 1. Identify information need 2. Query specification 3. Examination of retrieval results 4. Reformulation of query, if needed. Quite a bit was written about this one throughout the '90s, with important work on it by Marchionini and White in 2006. In 1991 Kuhlthau expanded the model into "stages" – the pattern of behavior for information seeking over extended periods of time and for complex information-seeking tasks. This extension also included emotional states.
Berry Picking (Bates 1989). Usually gets thought of as iterative searching and selective choosing of results items. It gets a fair amount of citing in UX circles, but I don't think with much understanding of the three most important contributions of this model: 1. Reading and learning from the results will cause the information need and the queries to shift. 2. Needs are not met by a single, final retrieved set of documents but rather by a series of selections and bits of information along the way. 3. Search results for a goal tend to trigger new goals.
Pearl Growing is a phrase coined to describe the shifting queries of the berry-picking model: users will take terms that they've learned via searching and use them as additional query terms.
Anchoring bias (Hertzum and Frokjaer 1996), aka thrashing (Russell 2006): when the user "hangs onto" their initial keyword query, making only small changes even if the term isn't bringing good results – i.e. "trucking", "truck engine", "diesel truck" rather than switching to "semi-tractor" or "lorry" or "cab chassis".
Pogo Sticking. Users frequently bounce back and forth between the results list and individual results. Some of this is normal.
Lots of it, happening very quickly, indicates a problem with your UI and/or algorithms, or with the expectations your experience sets about your data. Principle of least effort: a user will tend to use the most convenient search means, and they'll stop searching as soon as minimally acceptable results are found. The user will use the tools that are most familiar and easiest to use that find results. This theory holds true regardless of the user's proficiency as a searcher or their level of subject expertise, and it also takes into account the user's previous information-seeking experience. The principle of least effort is analogous to the path of least resistance.
Donald O. Case defined information need as the following: Great, so what does that mean in User Experience parlance?
Information need -- it means context. It's particularly important to have an idea of what people intend to do with what they find: read it, share it, buy it, eat it, follow the instructions, give it as a gift, use it for analysis, amuse their brother... We stop short on the "what to do with it" at these three: print, buy, and now, with the social media explosion, share. Outside of those we really don't spend a lot of time thinking about what users intend to do with a search result. Sometimes there's nothing we can do, because the "what" is completely offline. Other times, there are things we can do with the experience – in search or in features elsewhere in the site – that can improve the experience, satisfaction, and stickiness of the site.
Something Case didn't address in his definition also impacts your UI: the "reason" is all wrapped up in why the user has chosen your site. What they already perceive about your site (right or wrong) drives them to choose it. Their understanding of the problem and of your data will shape what kind of search terms they use and what kind of data they expect to get back.
Needs and keywords aren't always obviously correlated. As users, we translate our need into the keywords we think will bring us results close to what we need. These are a couple of "information need" examples, which I am not going to dissect. The two most important takeaways are: the information need is often an offline driver, outside the context of your site; and keywords are not the information need. Keywords are the user's brain extrapolating terms that he or she thinks will be present in your data and likely to bring back items that are useful.
What does this mean for your UI? As I'd said, the context and the intent for action should drive design decisions. 1. If you can, support their offline/off-site tasks. 2. Let users learn about your data/information. It will help them in "pearl growing" and keep them from "thrashing", and it also supports the changes that the berry-picking model predicts. 3. Things such as filtering by rating or date, thumbnails, or a timeline can help satisfy different user models and interests. Disambiguation helps users distinguish between terms with more than one meaning, multiple people with the same name, company/brand names that are also general terms, and iterations of software. 4. Recognition – it's easier to recognize something than to remember it "cold". Type-aheads, "did you mean", related terms, and concept matches: « alternative fuel » and « wind energy ». 5. Don't forget that search and browse work together. The days of the web being Browse vs. Search are gone; people move pretty seamlessly back and forth between them.
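The "recognition over recall" point above can be sketched in code. This is a minimal, hypothetical prefix-match suggester, not the talk's actual implementation; real type-aheads use tries or engine-side completion rather than a linear scan.

```python
def type_ahead(prefix, vocabulary, limit=5):
    """Return up to `limit` vocabulary terms starting with `prefix`.

    Illustrates recognition over recall: the user only needs to
    recognize a suggested term, not remember it cold. Toy sketch;
    `vocabulary` and the matching strategy are assumptions.
    """
    p = prefix.lower()
    return [term for term in vocabulary if term.lower().startswith(p)][:limit]

vocab = ["wind energy", "alternative fuel", "wind turbine maintenance"]
print(type_ahead("wind", vocab))  # → ['wind energy', 'wind turbine maintenance']
```

A production version would also surface "did you mean" corrections and related terms, but the UI principle is the same: show candidates the user can recognize.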
User research is needed. It's the best way to figure out that context, those offline drivers, and the expectations, and it is invaluable for designing the search experience. Once you understand the "why" of your site and the "what to do", you can prioritize and design features that have a positive impact. In addition to context, through research you will have a better understanding of users' expectations and perception of your data. What users think about your data may surprise you.
Can your search logs help you figure it out? They will give you a lot of useful information, but they won't help you with offline drivers or intent. You can learn what users commonly search for, mine for trending data, and look for places you can improve your thesaurus and synonym list.
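The log-mining step above can be as simple as counting query strings. A minimal sketch, assuming a hypothetical log format where each line is just the raw query text; real logs would need parsing first.

```python
from collections import Counter

def top_queries(log_lines, n=10):
    """Count normalized query strings and return the n most common.

    Useful for spotting trending terms and candidate additions to a
    thesaurus/synonym list. The log format here is an assumption.
    """
    counts = Counter(line.strip().lower() for line in log_lines if line.strip())
    return counts.most_common(n)

log = [
    "oil pan stripped threads",
    "honda oil pan",
    "Oil pan stripped threads",
]
print(top_queries(log, n=1))  # → [('oil pan stripped threads', 2)]
```

Note what this cannot tell you: nothing in a frequency count reveals the offline driver or the intent behind the query, which is why the research is still needed.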
When you work with your search developers, remember the experience is impacted by the algorithm. The UI may not be – the layout and interactions will all work even if your algorithm is poor. Accuracy is important, but it involves the user's perception as much as the technology. Take "oil pan stripped threads", for example: a result with all four words scattered separately around the document will likely not be perceived as relevant, while a result with "oil pan threads" as a contiguous string – which is only matching on three terms – will. Make sure to leverage as much data as you can – and as is valuable – in your search. Using metadata is great, except when it's junk data. Where to present concept matches in your UI – in the results, or as another kind of prompt or "you may also like" – depends on your users and your data.
When I joined GLG we had an enthusiastic product manager over search who was brimming with ideas. He wanted additional features, data visualizations, all kinds of neat stuff. The accepted wisdom was that clients' behavior in our site was inextricably tied to the client's title and level – meaning a Sr. Analyst behaved differently than a Jr. Analyst, or Assistants behaved differently than Partners – and that the point at which they chose to come to our site was the point at which they were ready to engage our consultants' services.
Here's what a search module looked like when I was hired. Lots of data points, classic snippeting – rather full of independent bits of information about a consultant. It wasn't clear what was causing the dissatisfaction.
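"Classic snippeting" means showing a short keyword-in-context excerpt instead of the full field. A minimal sketch of the idea (hypothetical code, not GLG's implementation; production snippeters also handle stemming, multiple terms, and sentence boundaries):

```python
def snippet(text, keyword, window=40):
    """Return ~window characters of context on either side of the
    first keyword hit, with ellipses marking the cut points.
    Falls back to the leading text if the keyword is absent."""
    i = text.lower().find(keyword.lower())
    if i == -1:
        return text[:2 * window] + ("..." if len(text) > 2 * window else "")
    start = max(0, i - window)
    end = min(len(text), i + len(keyword) + window)
    prefix = "..." if start > 0 else ""
    suffix = "..." if end < len(text) else ""
    return prefix + text[start:end] + suffix
```

As the research below showed, this is exactly the treatment that left GLG's clients feeling the bio excerpts were "out of context" – a reminder that a standard technique is not automatically the right one for your data.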
I'd pushed hard to do user research because my experience in UX indicated that what was "known" about clients wouldn't be true. It was also important to me that we design features that fit the needs of the audience, not just build features because they were cool. This was a fairly new endeavor for GLG. The UX group spearheaded it; a colleague of mine made inroads with the business to establish communication and process around recruiting, and built awareness of the practice. I executed contextual inquiry with US clients. We also took some dives into the metrics – the reason I say "caveats" is that the data mining revealed some path and tracking problems with our Unica setup. Our data was rather inconsistent and required a lot of normalizing and cleanup; even then, we had to take parts of it with a grain of salt.
Their own level of sophistication with search systems in general AND, interestingly, their perception of how sophisticated our system was. General attitude about search: "make it like Google" vs. "I want all the structure of LexisNexis." Clients who'd worked heavily with the LexisNexis legal library correlated our fairly simple interface with being less powerful than LN's very structured query builder. Some used our tool specifically for learning about a topic. They came, browsed profiles, job titles, company names, and expertise descriptions. This was fairly far in advance of being ready to talk to a consultant.
Many of the features and functionality that had already been built into the search were not being used, or were considered of no value by the end users. "Show similar" pulled a list of people who were classified in our system under the same job function or company type – low value, because clients considered the lists too generic. They were not tied to the keyword search or any of the facet selections made. "Featured people" were ignored entirely, whether in the featured module at the top or in the right sidebar. For good reason, too, since the code that drove the featured modules was not connected with the query. Great finding: it was quite clear what information clients valued most.
Things that clients found useful. You'll see that the info they valued was pretty spare. We returned either the bio or the Q&A, and we aggressively snippeted them. Users DID NOT say "I don't like that you snippet." They did say "I need to read their biography statement from the beginning, because seeing just a sentence or two I don't get a good picture of what they know about," or "the keywords are really out of context if we can see only part of the question or part of the answer."
What clients didn't value were lots of the little data points. Some of the data (such as where the consultant was physically located) mattered later in the decision-making process: it was more important to find the person with the right knowledge and worry about the timezone for the call later. The only clients who cared about geolocation were inclined to use the city-state-country facets rather than scan the results list.
With all this clear input from clients, I went into wireframing. Through the process I did 4-5 layouts of the search results page, with variations on the consultant info modules and facets. Then I went into design review and product owner reviews, and I got to something a bit perplexing...
The design & product input was that the streamlined results, with all the rich data a single click or mouse hover away, was the right approach. The research was indicating that the information-dense layout – the one that broke a lot of the "rules" about search results – was the one that clients would find useful.
I took these wires – just paper printouts – to sessions with clients and asked them which they preferred. Hands down, every client preferred the blocky, data-rich version. For most of them, it didn't even take a heartbeat to decide.
Where'd we net out? Happy to say, the client input won over a couple of important stakeholders. This is a screen capture of the search results module that we rolled out. The design certainly did "fatten" it up – in particular, more copy from the biography and more of the question-and-answer pairs. We returned more overall, but also dropped the classic "snippeting". Dropping the snippeting caused my search algorithm developer and some others to be really agitated; this just wasn't "done". He thought it would be more confusing not to snippet.
This is what the "show more" Q&A looked like. Kudos to my developers, both the back-end and front-end guys. I gave them a big challenge with this: to return a lot more data and still keep it performant. They did it – in fact they rose to the challenge. When measured on Dynatrace, the new search results page was faster than the old page, even though it returned more data.
A later rollout changed the call to action to be more in keeping with the customer-centric language for the action and added save to list functionality.
Success! Our metrics (normalized) showed a significant success; those who work in eCommerce will recognize these as really good numbers. Clients that customer support spoke to liked the new search results, and that support group asked for similar search results in their own app.
Hearst: academic, comprehensive; search behavior models, user research studies. Morville & Callender: introductory, basics, focused on design patterns.