Contextual inquiry as a research method has gained popularity among user experience practitioners in recent years. As user researchers, we face an excess of user data collected from field studies. Most of us review and analyze the field data by looking for trends in users’ responses and behaviors. For example, the “Affinity diagram” has been commonly used to group and analyze field data and identify trends. However, in many cases, it is not enough to draw conclusions from a few “Aha!” moments. We should also consider the rich, seemingly random data that do not obviously form trends, and abstract hidden implications from them. How to accomplish this, however, remains a challenge.
In this presentation, I will start with a case study from our own work and demonstrate how we found hidden implications in our data. Then we will explore and discuss strategies and techniques from different perspectives.
2. Agenda
Topic Introduction (5 min)
Case Study Presentation (20 min)
Summary of Techniques Learned (5 min)
Group Discussion for Cases and Possible Solutions Using the Techniques (25 min)
Wrap Up (5 min)
3. Topic Introduction
Contextual inquiry is a user research method for collecting field data through inquiry and observation in the real environment where users live or work.
The output of contextual inquiry is not structured data. It does not always show direct links to new user requirements and design issues.
Analyzing and understanding the field data becomes a discovery process, and it is always a challenge to find design implications in such rich, unstructured data.
4. Topic Introduction and Challenges
The “Affinity diagram” has been used as a way to organize notes and insights from field studies.
From an affinity diagram, we can cluster the data into trends.
However, in many cases, it is not enough to draw conclusions from a few “Aha!” moments.
We should also try to abstract hidden design implications from the data that don’t seem to form a trend.
6. Site Visits Study for Designing Credit Management and Collections System
• We visited 3 companies and interviewed 40+ users at their sites.
• User Profiles:
• Credit manager
• Credit analyst
• Collections manager
• Collections analyst
7. Different Data for Credit Analysts
Example: Company 1 (Schreiber Foods Inc). Task flow for a credit analyst releasing an order hold. (Contact: Wei Zhou.)
1. Open the Crystal Report (the order-holds report for the entire team) from email to retrieve the case folder #, credit exposure, and customer name (see screenshot). The Crystal Report was sent periodically: in the morning, then at 3pm, 4:30pm, and so on, and was opened in Excel.
2. Go to the intranet, then to EBS (Oracle Navigator) through the “Application” link (see screenshot).
3. Go to Oracle Collections (IEXH) through the SFI Collections user role (see screenshot).
4. Start retrieving customer information to evaluate before doing anything with the order hold.
5. Search the customer name to find the customer with order holds. Frustration with Oracle Search: the user had to put “%” at the beginning and end of the string to get results; otherwise the search might fail. A Google-style search is desired. The search results also had scrunched columns every time, so the user couldn’t view the whole customer names, and the column splitter was hard to move.
6. Select the customer found in the search results (see screenshot).
7. Go to the Customer Detailed View (the default tab was “Transaction”). The user ignores the Transaction tab and goes to the “General” tab to check credit and WAP (see screenshot).
8. Click the “Trend” button to see trend information such as historical WAPs and balances (see screenshot). Upon closing the Trend dialog, the user may also go to the “Trend” tab for additional information (see screenshot).
9. Have the credit exposure and order behavior or pattern changed? If yes, go to the Order Management apps to check the order. Here the user had to go back to EBS and switch the user role to the OM view; the Collections view and OM can’t be open at the same time!
Each company has its own workflow and steps! We also collected a lot of screenshots from each company.
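The search frustration in step 5 is characteristic of SQL-`LIKE`-style pattern matching, where a bare string must match the whole value and `%` is the wildcard. A minimal Python sketch (hypothetical customer data; not Oracle’s actual search implementation) contrasts it with the substring search the users asked for:

```python
import fnmatch

def like_search(names, pattern):
    """SQL-LIKE-style matching: '%' matches any run of characters,
    but a pattern without wildcards must match the whole string."""
    translated = pattern.lower().replace("%", "*")
    return [n for n in names if fnmatch.fnmatchcase(n.lower(), translated)]

def contains_search(names, query):
    """The 'Google-style' substring search the users wanted."""
    return [n for n in names if query.lower() in n.lower()]

customers = ["Schreiber Foods Inc", "Acme Dairy", "Foods R Us"]

print(like_search(customers, "foods"))      # [] -- no wildcards, whole-string match fails
print(like_search(customers, "%foods%"))    # ['Schreiber Foods Inc', 'Foods R Us']
print(contains_search(customers, "foods"))  # ['Schreiber Foods Inc', 'Foods R Us']
```

This is why users resorted to typing “%foods%”: without the wildcards, a partial customer name returns nothing.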
8. Similarities From the Differences
What is the ultimate goal? Release a held order!
What are the reasons for those different steps?
Review held orders
Check customers’ credit, order history and payment info
Update credit recommendations for customers after obtaining approvals, such as:
Release an order hold
Increase/decrease credit limits
If it is a new customer, check public credit information and set up a new account
9. Analysis of Differences
Credit held orders were reviewed in different tools.
Company 1 users: Crystal Report received by email
Company 2 users: ExpressOS & BPCS system
Company 3 users: GMIL
Credit review was conducted in different third-party applications.
Company 1 users: DNBi, SeaFax and SEC/Edgar
Company 2 users: Global Credit
Company 3 users: DNBi
Held order release was completed in different tools.
10. Analysis of Differences
Different authorities to release a held order:
Company 1 users: Credit analysts have the authority to release orders.
Company 2 users: Credit analysts provide a hard-copy credit review and obtain physical signatures from management before releasing any orders; higher dollar amounts need further approval.
Company 3 users: Credit analysts have the authority to release orders under a certain dollar amount. If an order amount is over the limit, it is escalated to a manager for release.
11. Finding the Design Implications from Differences
Let’s drill deeper into the consolidated data:
Are there any reasons why no trends were seen at all?
Why are they different?
Are these differences actually related to the same topic?
12. Finding the Design Implications from Differences
The reason for the differences is that there were no integrated solutions on the market to help our users!
Users struggled with too many separate applications to complete a single task. It is crucial to provide appropriate, seamless integration within a complete task flow, such as releasing a credit hold.
13. Another Example – Enrollment and Payment for Continuing Education
At one university we visited, we found that different schools and departments each had their own customized or home-grown systems for enrollment and payment.
They said they needed to apply different discounts for students who are government employees or from military families, plus ad hoc, random programs, etc.
The payments became more complicated due to the different discount programs and the different ways to apply the discounts.
We observed four different departments’ current systems.
14. Another Example – Enrollment and Payment for Continuing Higher Education
They are all different, and yet not so different!
Why are they so different? We noticed that the differences are different design solutions, not different underlying problems to solve!
What are the underlying needs? To apply various discounts and to be able to collect payments.
We believe we can provide a simple, streamlined system for Continuing Higher Education.
15. Summary of the Technique
Document and analyze the “differences.”
Find “similarities” among the “differences.”
Drill deeper beyond the surface of the data, and ask:
Are there any reasons why no trends were seen at all?
Why are they different?
Is the underlying problem the same?
Are these differences actually related to the same topic?
Are these differences just different design solutions?
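The grouping step of this technique can be sketched as code. This is only an illustration with hypothetical observation records (the field names and structure are made up for this example): tag each surface difference with the underlying goal it serves, then group on the goal, and the hidden similarity becomes visible.

```python
from collections import defaultdict

# Hypothetical field observations: each records the surface difference seen at
# a site and, after asking "why?", the underlying goal that difference serves.
observations = [
    {"company": "Company 1", "surface": "Crystal Report by email", "goal": "review held orders"},
    {"company": "Company 2", "surface": "ExpressOS & BPCS",        "goal": "review held orders"},
    {"company": "Company 3", "surface": "GMIL",                    "goal": "review held orders"},
    {"company": "Company 1", "surface": "DNBi, SeaFax, SEC/Edgar", "goal": "check customer credit"},
    {"company": "Company 2", "surface": "Global Credit",           "goal": "check customer credit"},
    {"company": "Company 3", "surface": "DNBi",                    "goal": "check customer credit"},
]

# Group by underlying goal: many different surface solutions for one goal
# suggest an unmet need for an integrated solution.
by_goal = defaultdict(set)
for obs in observations:
    by_goal[obs["goal"]].add(obs["surface"])

for goal, solutions in by_goal.items():
    if len(solutions) > 1:
        print(f"{goal!r}: {len(solutions)} different solutions -> integration candidate")
```

The point is not the code itself but the discipline it encodes: record the difference, then record the answer to “why?”, and group on the latter rather than the former.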
16. Group Discussion
What kinds of data are difficult to analyze using affinity diagrams? And why?
What kinds of data have been ignored or discarded during field data analysis? And why?
What are the challenges and strategies for abstracting hidden design implications from field data?
From how many different perspectives could we analyze consolidated field data? By similarities, by differences, and what else?
Is there a method or process we could come up with to guide us, step by step, in finding hidden design implications?
17. Wrap Up
Summary of our discussions
Any concerns?
Contact: wei.x.zhou@oracle.com
Editor's Notes
It is a technique based in anthropology and ethnography that helps user experience practitioners to understand in depth users’ wants and needs within that environment. As part of contextual design process, contextual inquiry has been extensively documented [1, 2] and widely practiced.
In my work experience at Oracle, we have also adopted contextual interviews, in which we interview users and observe their tasks contextually in their working environment.
In some cases, design implications are not even the direct outcome of studies of users and their context.
2nd bullet: A lot of the trends we find actually come from similarity relationships in the consolidated data. Conclusions and design recommendations are then usually drawn from these trends from the similarity perspective. For example: users’ pain points, similar trouble completing a task due to the same reason, etc.
3rd bullet: Because we see the trends of similarities in our affinity diagrams. Very often, the data we collect do not even form clean-cut categories. Therefore, we should also consider the unmatched data that may seem irrelevant to each other and are not similar enough to form any trends.
4th bullet: But how we can accomplish this and successfully discover these implications has remained a challenge.
In this study, I will focus on the seemingly random data that look so different.
This project is for designing credit management and collection system. Credit management is the process for controlling and collecting payments from your customers. A good credit management system will minimize your exposure to bad debts by understanding customers’ credit history to determine the risk of the customers. Collections is a process to collect and recover the amount of money owed by your customers.
Looking at all these differences, we wondered whether we would have to support all these different workflows; it is impossible to cover them all.
At the end: Once we got this layer out, we used this data as the skeleton of the structure. Let’s take a look at the more detailed differences.
Interestingly, some of the tools are even different Oracle tools.
There seemed to be no trend to find. All these differences confused us again, and we couldn’t decide which tool to support, since it is again impossible to support them all. We wished they could all use one tool, so that we could just provide a plug-in, right?!
Then we looked beyond the appearance of the data and asked ourselves again why they are so different. This “why” question finally made us realize that the reason for the differences is that there were no integrated solutions on the market to help our users!
Oracle provided some solutions here and there from different development teams, but they are not all connected! Of course, there were also functionality discoveries as well, such as supporting a hierarchical authority level for approval.
Now, you have another “aha” moment!
1st bullet: Very recently, we visited universities to understand continuing higher-education requirements. It is actually an ongoing project.
Last bullet: Some offer promo codes; some require authentication, phone calls, and different ways of paying.
2nd bullet: we asked ourselves…
After 3rd bullet: Of course, there are a lot of details to take into account during the design, but we believe…
Hopefully, answering these questions will help you discover findings from very rich data more easily.
I also think this technique can be used in usability testing as well!
Now that I am done talking, I would like to invite everyone to have a group discussion, talking about your own experiences analyzing data to discover key findings.
We can be open to talking about any of these suggested questions.