NAACL Tutorial Social Media Predictive Analytics (Shengjing Sun, 孙胜晶)
This document summarizes a NAACL tutorial on social media predictive analytics. The tutorial covers theoretical and practical sessions on batch prediction, online inference, and dynamic learning and prediction of user attributes from social media data. It discusses how to collect and annotate social media data, features and models for user attribute classification, and approaches for predicting from streaming data and incorporating neighbors' content. The tutorial materials include slides, code, datasets and references related to predictive analytics on social networks.
The document discusses methods of automated knowledge extraction from web documents. It describes how knowledge extraction systems work by defining output structures, preprocessing input text, applying extraction methods, and generating output. The key steps involve identifying named entities, their relationships, and addressing issues like ambiguity and accuracy challenges. While automated extraction helps analyze vast online information, limitations regarding accuracy, efficiency and dependency on external resources exist. Combining automated techniques with human oversight may help improve knowledge extraction.
Studying people who can talk back, Meyer 2013 DH at Oxford summer school (Eric Meyer)
This document summarizes a presentation about studying digital transformations in research methods in the humanities. It discusses how digitization has impacted access to sources and research practices. Quantitative and qualitative research methods are examined. Examples of digital projects and their usage statistics, publications, and impact on research are provided. Surveys of researchers show how access to online sources and new skills are changing research practices and enabling new types of analysis.
This document provides information on several classic anthropological studies involving in-person fieldwork, as well as topics related to conducting ethnographic research online. It discusses Bronislaw Malinowski's work with Trobriand Islanders, Margaret Mead's Coming of Age in Samoa, and Napoleon Chagnon's studies of the Yanomamo people. The document also addresses conducting ethnographic research on social media, combining online and historical research methods, framing technology from actors' perspectives, setting research parameters to avoid information overload, recording and referencing online sources, and addressing ethical issues around informed consent and anonymity when studying online communities.
Integrating digital traces into a semantic enriched data (Dhaval Thakker)
The document discusses integrating digital traces from social media into a semantic-enriched data cloud for informal learning. It outlines a processing pipeline that collects digital traces, semantically augments them using ontologies, and allows browsing and interaction through a semantic query service. An exploratory study on job interviews found that authentic examples from digital traces were useful learning stimuli but could be mistaken as norms without context. Semantic technologies provide opportunities to organize digital traces for informal learning but further work is needed to fully realize this potential.
Identifying features in opinion mining via intrinsic and extrinsic domain rel... (Gajanand Sharma)
Existing approaches to opinion feature extraction usually mine patterns from a single review corpus. This presentation describes a novel approach to identifying opinion features in online reviews by exploiting differences in opinion feature statistics across two corpora.
This document discusses semantic augmentation, which is the process of attaching additional semantic information to text to aid automatic interpretation. It describes how existing tools like GATE and services like DBpedia Spotlight can be used to semantically annotate text by linking entities to knowledge bases. Some challenges of semantic augmentation are also outlined, such as ambiguity, unknown entities, and scaling to large amounts of data.
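The semantic annotation step described above typically returns a list of entity mentions linked to knowledge-base URIs. As a minimal sketch, the snippet below parses an annotation response in the JSON shape produced by a DBpedia Spotlight-style /annotate endpoint and filters mentions by confidence; the sample payload is a hypothetical illustration, not output from a live API call.

```python
import json

# Hypothetical sample response in the shape of a DBpedia Spotlight-style
# /annotate endpoint (illustrative payload, not from a live service).
sample = """
{
  "@text": "Berlin is the capital of Germany.",
  "Resources": [
    {"@URI": "http://dbpedia.org/resource/Berlin",
     "@surfaceForm": "Berlin", "@offset": "0", "@similarityScore": "0.99"},
    {"@URI": "http://dbpedia.org/resource/Germany",
     "@surfaceForm": "Germany", "@offset": "25", "@similarityScore": "0.97"}
  ]
}
"""

def extract_entity_links(response_json, min_score=0.5):
    """Return (surface form, knowledge-base URI) pairs above a score threshold."""
    data = json.loads(response_json)
    links = []
    for res in data.get("Resources", []):
        # Spotlight-style responses encode scores as strings.
        if float(res["@similarityScore"]) >= min_score:
            links.append((res["@surfaceForm"], res["@URI"]))
    return links

print(extract_entity_links(sample))
```

Thresholding on the similarity score is one simple way to trade recall for precision when dealing with the ambiguity challenges the summary mentions.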
Google Chromecast Usability Report by Team User Friendly (Reed Snider)
This usability report for the Google Chromecast® was carried out by team User Friendly of the Bentley University Testing & Assessments course in the Human Factors in Information Design graduate program.
The Google Chromecast was given to participants aged 45-80 who were instructed simply to "set up the device". This project was not organized by Google, and all trademarks used are attributed to Google® under Alphabet®.
A usability test was conducted on the Novaspaceart.com website using 3 participants. The tests found that (1) the site's visual design is outdated and unattractive, (2) some labels and terms are unclear, (3) the shopping cart and wish list functions are confusing to use, and (4) there is no direct path to the FAQ page or customer satisfaction guarantee. The report provides detailed findings from tasks given to participants and recommends improvements to the site's visual design, labels, navigation, and key functions.
This usability test report summarizes testing of the One Bus Away web-based bus routing application. Four undergraduate students completed tasks using the application and a satisfaction survey. Results show that successive tasks took longer to complete and produced more errors. None of the participants could locate the bus's current location without assistance. While some functions were rated as well integrated, the overall findings suggest the application has usability issues for novice users, who compared it unfavorably to Google Maps. The report concludes that each task was more difficult than the last and provides recommendations to improve future tests.
This document analyzes the usability of Drupal's administrative tools based on usability guidelines and stakeholder reviews. It finds that Drupal adheres well to usability guidelines in its forms, receiving grades ranging from a B- to an A+. A stakeholder review also concludes that Drupal provides sufficiently usable tools to complete common administrative tasks to maintain a website. While Drupal has had usability issues, many are being addressed in subsequent releases. The overall analysis is that Drupal's tools are usable for common website administration.
Samantha conducted usability tests on documents she redesigned for her aunt, a fourth grade teacher. She tested documents with parents and students separately. Parents wanted more information in the newsletter and handbook, like lunch menus and event calendars. Students enjoyed the colorful graphs but wanted the y-axis to start at 10% instead of 0%. Based on the feedback, Samantha plans to add requested information to the newsletter and change the graph formatting. The tests helped her improve the documents for their intended audiences.
The document summarizes the findings of a usability study conducted on the Surgical Arts Centre website. Key findings include:
1. Users want more information about doctors' credentials, facility details, patient experiences and preparation/recovery processes. They care less about technical medical details.
2. Users highly trust review sites like Yelp over medical facility websites. The site should promote positive Yelp reviews.
3. Users have mixed reactions to the "Surgical Arts Centre" name and diverse service offerings. The branding and messaging may need refinement.
4. Users expect logistical support if traveling for treatment and want anonymity for plastic surgery. Outreach should promote package deals.
Grad1-YuanjingSun-CS5760Evaluation-UtestReport-Apr27 (Yuanjing Sun)
This usability test report documents testing of the Field Form web app (http://www.csl.mtu.edu/classes/cs4760/www/projects/s15/group6/www/hci/). The test was carried out April 14-16, 2015 by Team Justice League. It adapted paper-based USDA agricultural field condition criteria and evaluation methods to the Field Form website. Local farmers can upload weekly reports of weather observations, crop condition assessments, and GPS locations. Field data collected across 3,000 U.S. counties would be of great value to stakeholders.
This document summarizes the results of a usability study conducted on Stephen F. Austin State University's online calendar. 10 participants with varying demographics completed tasks on the calendar and provided feedback. Most found the calendar difficult to navigate due to its lengthy scrolling format without clear month separators. Recommendations include adding color, distinguishing month/day markers, and a more traditional monthly calendar view. The study aims to identify issues and help the university improve its online calendar system.
The document summarizes the results of usability testing conducted on the University of Central Arkansas' Writing Department website. Testing found several issues with consistency, organization, navigation and clarity of information across various pages of the site. Recommendations are provided to improve consistency of design, clarify intended audiences, reorganize the menu structure and content, and add more descriptive information and headings throughout the site.
This document summarizes usability testing conducted on the Roomie platform, which helps university students in Hong Kong find suitable roommates. Two rounds of testing were conducted with 10 participants total. The goal was to understand strengths and weaknesses of the prototype. Participants completed registration and roommate search tasks while thinking aloud. Both quantitative metrics like time spent and qualitative feedback were collected to analyze user experience and identify areas for improvement.
eFolioMinnesota Text-Based Usability Test Findings and Analysis Report (Kevin L. Glenz)
The usability test report summarizes findings from usability testing of the text-based version of the eFolioMN website. Five visually impaired users tested six scenarios on the site. They found issues with content, terminology, and the user interface. Recommendations include improving directions, fixing accessibility issues, and making the interface more intuitive. The report provides details on the methodology, a breakdown of likes and dislikes, and analysis of post-task questions. It aims to help eFolioMN enhance the user experience for blind professionals.
We tested the site www.whirlpool.net.au, performed a detailed analysis, and identified its usability issues. This report presents our analysis and findings, along with recommendations to improve the site's design.
Web Site Usability Test - Client Report - Victorian Deaf Society (Ver 1.... (Di Zhang)
This report summarizes usability testing conducted on the Victorian Deaf Society website. Testing focused on the Auslan course enrollment and online donation pages. Six participants completed tasks for each page and provided feedback. Results showed the donation page was moderately usable while the course page was slightly usable. Recommendations include consolidating course information, reorganizing the course timetable, and adding context for donations to improve usability.
Usability test report for InnoVenture (Brian Gaines)
The usability test summary is as follows:
1) 10 participants completed tasks and surveys to evaluate the usability of the InnoVenture.com website.
2) Participants struggled with navigation, finding information, and completing tasks, averaging a score of 2.41 out of 4 on tasks.
3) The System Usability Scale survey scored the site even lower, with a mean of 28 out of 100, well below the commonly cited benchmark of 68 for acceptable usability.
4) Major issues identified included too much distracting text, difficulty navigating and locating information, and inconsistencies.
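For context on the SUS figure in item 3, a System Usability Scale score is derived from ten 1-5 Likert items using a standard scoring rule. The sketch below shows that standard calculation (the example responses are illustrative, not data from the InnoVenture report):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The summed contributions are multiplied by 2.5 to yield a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A pessimistic respondent: low agreement with the positive items,
# high agreement with the negative items.
print(sus_score([2, 4, 2, 4, 2, 4, 2, 4, 2, 4]))  # 25.0
```

A response pattern like the one above yields a score of 25, in the same range as the site's reported mean of 28, which illustrates how consistently negative answers translate to a very low SUS score.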
The document summarizes a usability evaluation of the U-Haul website conducted in April 2011. Five participants performed common rental and storage tasks while thinking aloud. The evaluation found that participants struggled to understand insurance coverage, estimate needed storage unit sizes, determine appropriate truck sizes, and find rental items. It provides recommendations to address these issues such as clarifying insurance information, helping estimate storage needs, and distinguishing rentals from purchases. The evaluation provided insights to improve the usability of key tasks and shopping experiences on the U-Haul website.
This document summarizes the results of two rounds of usability testing for the Roomie online roommate matching platform. In the first round, 5 problems were identified and addressed. The prototype was updated and 10 new users participated in a second round, identifying 8 additional issues. Key findings included unclear pictures, complex forms, and missing navigation elements. Both quantitative metrics like task completion times and qualitative feedback helped optimize the prototype between rounds to improve the user experience.
The Completed Quests page clearly shows the total points earned for each completed assignment as well as the total points earned for the course so far. Participants found this breakdown of points intuitive and helpful for understanding their progress in the class.
The usability test report summarizes testing of a mobile application. Five individuals used the app and provided feedback via surveys and interviews. Testers were mostly aged 20-25 and used Apple devices. The objectives were to find defects, ensure usability, and check if the app met requirements. Feedback indicated searching could be improved, fonts/colors needed adjusting, and navigation was difficult for older users. Recommendations included updating visual design, search features, and ease of use for all ages.
Usability Testing of the Czech Post Mobile Application (Case study) (ExperienceU)
This case study discusses usability testing of the Czech Post mobile application. Several test participants of varying ages used the app and provided feedback. Younger participants under age 40 were more comfortable exploring the app's features, while older participants age 40 and above were less familiar with their mobile phones in general. The testing uncovered opportunities to improve the intuitiveness and discoverability of features in the Czech Post app.
Primo Usability: What Texas Tech Discovered When Implementing Primo (Lynne Edgar)
This presentation discusses the usability study of Primo, an Ex Libris discovery tool, immediately after its implementation by Texas Tech University Libraries. Problems and potential solutions are explored by four librarians.
The document discusses usability testing of library databases and websites. It defines usability and user experience (UX), and explains why they are important for customer loyalty. It also outlines the key elements of UX design including learnability, efficiency and satisfaction. The document then describes how the University of Connecticut conducted usability testing of its database locator and redesigned it based on the test results to improve the user experience.
Student research eds ugm melbourne presentation (public edit) (Miranda Hunt)
Student researchers presented research on user experiences and behaviors. Primary research methods discussed included contextual inquiry, surveys, interviews, usability testing, video diaries, and card sorting. Research on college students found they begin with "presearch" on Google and Wikipedia to scope their topic before doing "serious research". Student research occurs in "microbursts" with periods of dormancy. Many students are novice researchers who find library websites challenging and don't understand terms like "Boolean". Top search terms were often broad or misspelled, and students focused on results on the first page.
Impact the UX of Your Website with Contextual Inquiry (Rachel Vacek)
A contextual inquiry is a research study that involves in-depth interviews where users walk through common tasks in the physical environment in which they typically perform them. It can be used to better understand the intents and motivations behind user behavior. In this session, learn what’s needed to conduct a contextual inquiry and how to analyze the ethnographic data once collected. We'll cover how to synthesize and visualize your findings as sequence models and affinity diagrams that directly inform the development of personas and common task flows. Finally, learn how this process can help guide your design and content strategy efforts while constructing a rich picture of the user experience.
This is the slide deck for the presentation that was given with Kate Lawrence (VP User Experience EBSCO), Courtney McDonald (Indiana University), and Esther Onega (University of Virginia) at the 2014 Charleston Conference on Thursday Nov 6, 2014.
How students *really* do research - Findings from the Research Confession Booth (Emily Singley)
This document summarizes findings from a study called the Research Confession Booth. The study involved undergraduate and graduate students performing research tasks at a booth while talking through their process and being recorded. Key findings included:
- Students primarily used Google, Google Scholar, and library databases like PubMed for research.
- Navigation of library resources was challenging for many students.
- Students valued features that provided relevant results, full-text access, and citation management support.
- Some found library resources confusing and difficult to use compared to familiar Google.
- Preference for open web was due to quick results, familiar interfaces, and ability to match effort to information needs.
User-Generated Content and Social Discovery in the Academic Library Catalogu... - Steve Toub
1) The document discusses findings from user research on incorporating user-generated content and social discovery features into academic library catalogs.
2) Participants expressed a desire to see what trusted colleagues think of resources and find "gems" they don't know exist. However, few used existing social tools for academic purposes.
3) The strongest motivation for contributing user reviews was helping others find useful resources faster. Ensuring quality would involve authenticating users and exposing more than binary reviews.
Impact your Library UX with Contextual Inquiry - Rachel Vacek
A contextual inquiry is a research study that involves in-depth interviews where users walk through common tasks in the physical environment in which they typically perform them. It can be used to better understand the intents and motivations behind user behavior. In this session, learn what’s needed to conduct a contextual inquiry and how to analyze the ethnographic data once collected. I'll cover how to synthesize and visualize your findings as sequence models and affinity diagrams that directly inform the development of personas and common task flows. Finally, learn how this process can help guide your design and content strategy efforts while constructing a rich picture of the user experience.
Are you interested in learning about text analysis but have little to no experience with programming languages or writing code? These two short courses will introduce you to multiple text analysis methods. We will examine real-world examples and engage in hands-on activities that don’t require running any code. These short courses are ideal for students and researchers in non-technical fields, faculty who would like to incorporate text analysis in their curriculum, or as a precursor to programming with text analysis tools.
In A Gentle Introduction to Text Analysis, I will cover both qualitative and quantitative text analysis methods, bag-of-words techniques, and classification.
Presentation given by Joseph Greene, Research Repository Librarian at University College Dublin Library, at Open Repositories 2016, held at Trinity College Dublin, June 13-16th, 2016.
Qualitative text analysis and sentiment analysis techniques were used to analyze various types of text data and answer research questions. Specifically, thematic coding was used to analyze interview transcripts on the experiences of pediatric speech language pathologists. Bag of words methods and sentiment analysis using lexicons were applied to study public perception of vaccines by examining tweets. Supervised and unsupervised classification identified Jim Crow laws from North Carolina legislation based on racially-based language.
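The bag-of-words and lexicon-based sentiment techniques mentioned above can be sketched in a few lines. This is an illustrative toy, not the study's actual method; the word lists and example sentences are invented for demonstration.

```python
# Toy lexicon-based sentiment scoring over a bag of words.
# The POSITIVE/NEGATIVE word lists are illustrative assumptions,
# not the lexicon used in the study described above.

POSITIVE = {"safe", "effective", "protect", "good"}
NEGATIVE = {"dangerous", "harmful", "risk", "bad"}

def sentiment_score(text: str) -> int:
    """Positive-word count minus negative-word count; word order is ignored."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("vaccines are safe and effective"))       # 2 (positive)
print(sentiment_score("worried about the risk of bad outcomes"))  # -2 (negative)
```

Real lexicon-based analyses use curated lexicons with thousands of weighted entries, but the counting idea is the same.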
This presentation to the 2015 i3 Conference in Aberdeen describes two weeks of ethnographically-inspired, synchronous usability testing conducted on a prototype for a new library search tool at a small university in the United Kingdom. Phase one of testing is complete, and the presentation covers the design process, initial analysis, and reflection on the methods, as well as the demands placed on the research design by the practitioner setting.
This document discusses effective strategies for searching the internet. It emphasizes that boolean searches using operators like AND, OR and NOT can help save time. Primary sources are valuable but should not be relied on alone. Web 2.0 has increased access to primary sources in real-time like photos, maps and documents. Information sources must be verified by considering the authority, independent corroboration, plausibility and professional presentation. RSS feeds allow users to continuously receive updated information from credible sources. Proper filtering techniques can help organize information and ensure only credible sources are used.
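The way AND, OR, and NOT narrow or widen a result set can be illustrated with a small sketch; the documents and query terms here are hypothetical examples, not part of the original presentation.

```python
# Illustrative boolean filtering over a tiny document set.
# Documents and query terms below are hypothetical.

DOCS = {
    1: "primary sources in world war history",
    2: "photo archives and maps online",
    3: "history of cartography and maps",
}

def matches(text, must_have=(), any_of=(), must_not=()):
    """AND: every must_have word present; OR: at least one any_of word
    present (if any_of given); NOT: no must_not word present."""
    words = set(text.split())
    return (all(w in words for w in must_have)
            and (not any_of or any(w in words for w in any_of))
            and not any(w in words for w in must_not))

# Query: history AND (maps OR cartography) NOT war
hits = [doc_id for doc_id, text in DOCS.items()
        if matches(text, must_have=("history",),
                   any_of=("maps", "cartography"), must_not=("war",))]
print(hits)  # [3]
```

Document 1 contains "history" but is excluded by NOT "war", which is exactly the time-saving narrowing the slide describes.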
This document summarizes research on how users interact with and experience online finding aids. It discusses two studies that observed both novice and expert users searching finding aids and identified challenges around terminology, navigation, and display design. The studies found that while experts were more efficient, novices struggled with archival terms and hierarchical structures. Both recommended combining browse and search functions along with clear explanations of terms. The document also discusses issues around hidden collections and approaches to improve their accessibility and intellectual control.
Presentation from Professor Matthew Chalmers from the School of Computing Science at the University of Glasgow who gave a presentation on beacons at the Intelligent Campus Community Event on the 10th April 2018.
Does the field of user-centered design mystify you? Does user research seem like the last thing you have time to think about?
Any team can look at analytics to understand what users are doing and how often they’re doing it. What analytics won’t tell you is *why* users are doing certain things — sometimes you need more context. That’s where user research comes in. This session will map out a framework for incorporating user research into your development cycle.
What do students want from library discovery tools? - Keren Mills
The document summarizes research into what students want from library discovery tools. Key findings from interviews and prototyping with students include that they want a simple search interface with clear indications of what is searchable. Students also want results to open in new tabs and clearly show full text availability. Other desired features include autocomplete, seeing previous searches, and saving items to a personal library shelf. The research helped the university select a new library management system and shape how the discovery tool will be implemented.
Trials by Juries: Suggested Practices for Database Trials - Annis Lee Adams
This panel discussion focused on tools and techniques for gathering feedback on database trials from librarians and library users. The panelists from Golden Gate University, University of Nebraska-Kearney, and Clemson University discussed criteria for selecting database trials, scheduling trials, soliciting and recording trial feedback, and closing the loop with participants and vendors after a trial. Key points included using web surveys to gather feedback, timing trials to maximize participation, and maintaining records of past trials and decisions.
Usability Report - Discovery Tools
1. USABILITY TESTING REPORT:
DISCOVERY (SEARCH) TOOLS OF ENCORE AND SUMMONS
Authors: Heather Mathieson, Debbie Shultz, Nikki Kerber
IDIA 642 - Lucy Holman
May 12, 2010
2. Table of Contents
Executive Summary
Participants
Methodology
Results
Recommendations
Appendices
Heat-map
Wireframes
Consent Form
Faculty Screener
Graduate Student Screener
Script
Task List
3. Executive Summary
This report contains the results of a user research project conducted between April 14 and May 12, 2011, to
develop recommendations for the University System of Maryland and Affiliated Institutions (USMAI) Library
Consortium in their consideration of several commercial discovery tools to purchase for use in USMAI
libraries. Before a product is purchased, it was determined that usability testing should be performed on the
four main tools under consideration: EDS, Primo, Summons, and Encore. Dr. Lucy Holman’s IDIA 642
Research Methods class at the University of Baltimore conducted the testing as their final project.
To gain insight into each tool’s usability in terms of both negative issues and positive attributes, we
conducted usability testing on Summons and Encore using four participants: two graduate students and two
faculty members. Participants selected had to be from a university or college within the state university
system of Maryland. To conduct the test, we used the Tobii T60 Eyetracker and its software, along with Morae, a usability testing and market research application, at the University of Baltimore Usability Research Lab. This report concerns the findings of our usability testing.
What Are Discovery Tools?
Universities within the state university system of Maryland currently use Research Port, developed by the
University of Maryland at College Park as a front end for Federated Search. (Federated Search is an
information retrieval technology that allows the simultaneous search of multiple searchable resources.) The
university system of Maryland is planning to replace Research Port by purchasing a commercial Discovery
tool.
Currently, when a user logs onto a library website to search for information, he or she faces a daunting task: finding what is needed by running independent searches across 75 article databases, local digital content on the library server, the library catalog, and electronic collections across libraries.
4. Executive Summary (cont.)
This is where Discovery tools come in. Discovery tools, middleware used as a searching front end, make searching faster and easier. From a single query, a Discovery tool searches content indexed from a variety of sources such as library catalogs, article databases, and electronic collections. Discovery tools are available as open source, but the most robust ones come from commercial vendors.
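The single-query idea can be sketched as follows; the source names and records are hypothetical stand-ins for a catalog, article databases, and digital collections, not any vendor's actual implementation.

```python
# Toy sketch of a discovery-tool front end: one query runs against several
# pre-indexed sources and the hits are merged into a single result list.
# Source names and records below are hypothetical.

SOURCES = {
    "library catalog": [{"title": "Introduction to Usability Testing"}],
    "article databases": [{"title": "Eye Tracking in Usability Research"}],
    "digital collections": [{"title": "Campus Maps, 1920-1950"}],
}

def discovery_search(query: str):
    """Match the query against every source and merge the results."""
    q = query.lower()
    return [
        {"source": name, "title": rec["title"]}
        for name, records in SOURCES.items()
        for rec in records
        if q in rec["title"].lower()
    ]

for hit in discovery_search("usability"):
    print(hit["source"], "->", hit["title"])
```

The key design point, as in real discovery tools, is that the user issues one query and never has to know which underlying source a result came from.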
Summary of Findings
In general, our tests showed that Summons was easier to use than Encore; both the quantitative and the qualitative data supported this conclusion. The quantitative data, which measured such things as the task
completion rate, the number of searches needed to complete each task, and the ocular fixations and scan
paths, indicate that participants took longer and needed more searches to complete a task in Encore than
they did in Summons. The qualitative data, which primarily included an evaluation of participant comments
and observations of body language and facial expressions, led us to infer that participants were more
confused and frustrated in Encore than they were in Summons.
Both tools were lacking in feedback to the user, especially in cases in which a search fails because of a
misspelled word. Neither tool offers adequate search hints for frustrated users. However, Encore had many
more issues of major or moderate severity, while most of Summons’ issues were minor. Perhaps the most
serious error that occurred in Encore had to do with the links to additional content that were located on the
right-hand side of the page. Not one user saw these links right away, and without clicking on them, search
after search failed. The “Results” section explains the findings in more detail.
5. Executive Summary (cont.)
One of the most surprising findings had to do with one participant’s comments as he was searching. He
spoke about how he tries to use the subject matter terms to get to the right Library of Congress catalog,
indicating that he applies the traditional thought patterns of a library catalog to his search. Discovery tools
are supposed to provide a different search experience for the user than that provided by a traditional library
catalog search. This made us question whether the two tools we tested are providing that experience, or
whether it is a case of users beginning to change old paradigms as they become more familiar with
Discovery tools.
Recommendations
For most issues, our recommendations include suggestions to reorganize screen elements to make
searching less confusing, easier, and faster. Adding more user feedback in the form of search tips and “did you mean” prompts when a word is misspelled would make the user experience less frustrating. In some
cases, a simple cosmetic change might make things clearer.
6. Participant Overview

| | User 1 | User 2 | User 3 | User 4 |
| --- | --- | --- | --- | --- |
| Gender | Female | Female | Male | Female |
| Age | 18-29 | 18-29 | 60+ | 50-59 |
| UB Affiliation | Graduate Student | Graduate Student | Faculty | Faculty |
| Frequency Conducting Research | >10 times per month | 4-10 times per month | >5 classes | 1-2 classes |
| Website Used First in Research | Google | Google | Langsdale library website | Langsdale library website |
| Online Databases Utilized | Academic Search Premier; Psychinfo | Academic Search Premier; ABI/Inform; PsychInfo; EBSCO; Business Source Premier | Research Port | Academic Search Premier; ABI/Inform; Lexis Nexis; Research Port |
7. Participants (cont.)

| | User 1 | User 2 | User 3 | User 4 |
| --- | --- | --- | --- | --- |
| If none, which statement? | -- | -- | -- | -- |
| Statement about online databases ** | #2 | #1 | #2 | #1 |
| Online Research Skills | Good | Good | Good | Excellent |
| Length of Time Researching | 1-2 hours | 31-60 minutes | 0-30 minutes | 0-30 minutes |
| Frequency Visiting UB Library's Website | 1-3 times per week | 1-3 times per week | <1 time per week | <1 time per week |

** Note: The statement numbers, #1 and #2, refer to the appropriate statements on the screeners. Please refer to the screener to read those statements.
8. Methodology
RECRUITMENT
Graduate students and faculty of the University of Baltimore community were contacted about
participating in a usability test of two discovery search tools. Team members, as well as Dr. Lucy
Holman, used a variety of ways to contact prospective participants, including but not limited to e-mail
blasts, individual e-mails, the UB Students Daily Digest, the UB Faculty Daily Digest and word of
mouth. A total of four participants who were all affiliated with the University System of Maryland were
found to participate in the usability test. Two participants were UB faculty members and two
participants were UB graduate students; all were over the age of 18. Participants were tested in
sessions that ran approximately 60 minutes each.
PREPARATION
Prior to the test dates, the team reviewed two discovery tools, Encore and Summons, to determine how
each tool worked. A screener, test-script, and consent forms were written and duplicated for each
participant. Projects were created in each of the testing software tools, Tobii and Morae.
TESTING PROCEDURE
Each testing procedure began with introductions and a description of the project. The participant was
asked to fill out an informed consent form. A screener was also given to the participant to collect user
profile information (as shown in the profile matrix in the previous section). Each of the four
participants was asked to perform five tasks in each of the two discovery tools, Encore and Summons.
The specific tasks included combinations of searching for articles, books, and audio-visual materials.
A Task List is included in the Appendix, pages 48-50.
9. Methodology (cont.)
Participants were met and welcomed into the reception area of the Usability Lab. The members of the group
took turns being the moderator for each participant. Each participant was set up in the testing room and
logged into the system. If Tobii was used, it was calibrated to the user’s eye movements.
During the test, the moderator followed a written script and emphasized two important points at the
beginning and throughout the test: first, that we were testing the discovery tools and not the user or the
user’s ability to succeed; and second, that because of the test environment, participants might not be able to
fully complete a task. Participants were also encouraged to talk aloud and comment on anything they
wanted to during the test.
The other two team members sat in the Observation Room behind a one-way mirror. They took notes,
observed the users’ actions, and made note of users’ comments. They also recorded any quantitative data.
After the test, participants were asked to summarize their reactions to the two Discovery tools and provide
us with any additional feedback that could help us in our evaluations.
10. Methodology (cont.)
CRITERIA OF USABILITY
We used the following criteria as a measure of the usability in Encore and Summons:
Quantitative Measures
Measure of time participants needed to complete each task
Task completion rate
Number of searches needed to complete each task
Average number of searches needed to complete each task
Qualitative Measures
Conversations with participants after the usability test and answers from the screener questionnaires
Comments from participants
Facial expressions
Body language
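As a sketch, the quantitative measures listed above could be computed from session logs like this; the numbers are invented for illustration and are not the study's data.

```python
# Hedged sketch: computing task completion rate, average search count, and
# average task time from hypothetical session logs (not the study's data).

sessions = [
    {"task": 1, "completed": True,  "seconds": 95,  "searches": 2},
    {"task": 2, "completed": False, "seconds": 240, "searches": 5},
    {"task": 3, "completed": True,  "seconds": 130, "searches": 3},
]

completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
avg_searches = sum(s["searches"] for s in sessions) / len(sessions)
avg_seconds = sum(s["seconds"] for s in sessions) / len(sessions)

print(f"task completion rate: {completion_rate:.0%}")    # 67%
print(f"average searches per task: {avg_searches:.1f}")  # 3.3
print(f"average time per task: {avg_seconds:.0f}s")      # 155s
```

In the actual study these figures were tallied per tool (Encore vs. Summons) so the two could be compared directly.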
11. Results
Pages 15-24 provide an extensive results overview. Please refer to
the Task List, located in the Appendix, pages 48-50, to get
acquainted with all of the tasks we asked our users to complete.
12. Results
Features of Encore and Summons that Support Users
Both tools
Good usability practice: Allow users to apply search criteria in a logical way to further narrow the results of a basic search. This paradigm fits users’ mental models of the search process.
Current feature: A left sidebar with various search options is displayed when searching articles in Encore.
13. Results
Features of Encore and Summons that Support Users
Summons
Good usability practice: Allow users to select as many criteria for an advanced search with as few clicks or selections as possible, so that users spend less time searching and more time looking through meaningful search results.
Current feature: The Material Type field on the Advanced Search screen includes an option that combines books and journals.
Encore
Good usability practice: Provide a clean, simple user interface, free of advertisements, banners, or other added features that serve no purpose in the search process.
Current feature: Encore provides a clean, simple user interface that is fairly intuitive to navigate.
14. Results
Features of Encore and Summons that Support Users
Summons
Good usability practice: Give users plenty of search criteria options to help narrow their search from the beginning.
Current feature: Summons offers a nice Advanced Search form.
15. Results
Error Severity
Major — The most critical level; the user was unable to correctly complete the task.
Moderate — Significant problems were caused for the user.
Minor — Minor annoyances that slow the user down during the tasks; the user leaves frustrated.
Good — Indicates that an object is well designed.
16. Results
Encore — Users were blind, in most cases, to finding and/or clicking “Other Sources” (WorldCat and Link+). It took most users anywhere from 5 to 10 minutes to locate this link, and one user never found it. Until the user located and clicked “Other Sources,” searches failed. (Major; frequency: 4)
Encore — The tool does not clearly show how it uses search terms. Users could not understand why certain results were listed: the results list did not highlight keywords to show why data was included, and sometimes data was included that did not appear to contain any of the search terms. (Moderate; 1 user failed to complete the task, 3 users had significant problems)
Encore — Irrelevant search results. Search results sometimes seemed completely irrelevant to the search terms entered, and the “Did you mean” entries often appeared somewhat or very unrelated to what the user was looking for. (Moderate; frequency: 2)
Encore — The tool manipulates keywords. It appeared that the tool treated the whole search string as individual keywords and not specifically as Title, Author, or Subject. (Moderate; frequency: 4)
17. Results (cont.)
Both tools — Unforgiving of spelling errors. When a user misspelled a word, the search failed, and no feedback was provided to bring the misspelling to the user’s attention. Unless the user noticed the misspelling and corrected it, additional searches could also fail. (Moderate; frequency: 2)
Both tools — Neither tool offers search hints for frustrated users. Encore did not provide any assistance or suggestions when a search failed, nor any suggestions for getting an unmanageable list (too many items) down to a manageable one. Summons offered search tips, but they were below the fold of the Advanced Search page and users did not always see them. (Moderate; frequency: 4)
Both tools — Each tool offers multiple ways to search for items. Clicking in various search fields produced search results pages, each with a different look and feel and different filtering options, and navigating through these search options required the user to click multiple times. (Moderate; frequency: 4)
18. Results (cont.)
Encore — Links for Catalog, Images, and Articles are hard to find, and their behavior can be confusing. Placement of these links at the very top of the screen made them hard to locate, and their behavior is sometimes puzzling: for example, when the user clicks Articles and then clicks Advanced Search, the Articles link disappears and only the Catalog link displays, making the user wonder whether the search is actually being performed on the catalog. (Minor; frequency: 2)
Summons — The tool has two Submit buttons on the Advanced Search form. Users appeared to hesitate when clicking, as if unsure which one to use; some clicked the top button, others clicked the bottom. (Minor; frequency: 3)
Summons — It is not obvious when the tool displays “No entries found” at the top of the Advanced Search form. Because users did not see the message, they were unsure whether the search was still processing or had finished. (Minor; frequency: 2)
Encore — Confusing search results. A search displayed “No Catalog Results Found” at the top of the page while displaying a whole series of results at the bottom (in one case, over 40,000 articles). (Minor; frequency: 1)
19. Results (cont.)
Both tools — Screens in both tools were cluttered, with options for searching on the left-hand side, the center, and sometimes the right-hand side of the page. It was not always clear where users should go when starting a search. (Minor; frequency: 3)
Both tools — The discovery tools are not providing a different search experience. One participant tried to use subject-matter terms to get to the right Library of Congress catalog entry, applying the traditional thought patterns of a library catalog to his search. Discovery tools are supposed to provide a different search experience for the user than that provided by a traditional library catalog search. (Minor; frequency: 1)
20. Results
Number of searches for each task (Encore / Summons)
User 1: T1 6/1, T2 7/1, T3 4/2, T4 4/1
User 2: T1 6/2, T2 6/2, T3 4/2, T4 5/1
User 3: T1 10/1, T2 10/2, T3 9/6, T4 3/2
User 4: T1 7/2, T2 5/1, T3 6/4, T4 10/1
Overall, users had the most trouble successfully completing Task 1 in Encore and Task 3 in Summons.
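The per-task averages behind that observation can be checked directly; a quick sketch, with the figures transcribed from the table above:

```python
# Searches per task, transcribed from the table above.
# Rows are users 1-4; columns are tasks 1-4.
searches = {
    "Encore":  [[6, 7, 4, 4], [6, 6, 4, 5], [10, 10, 9, 3], [7, 5, 6, 10]],
    "Summons": [[1, 1, 2, 1], [2, 2, 2, 1], [1, 2, 6, 2], [2, 1, 4, 1]],
}

for tool, rows in searches.items():
    # Average each task's column across the four users.
    averages = [sum(col) / len(col) for col in zip(*rows)]
    hardest = averages.index(max(averages)) + 1  # 1-based task number
    print(tool, [round(a, 2) for a in averages], "-> most searches: task", hardest)
# Encore averages out to task 1 as the hardest; Summons to task 3.
```

This agrees with the summary: Task 1 required the most searching in Encore and Task 3 the most in Summons.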
21. Results
How successful were users in completing tasks?
Complete — Participant completed all of the components of a task successfully. Example: the participant found many articles on Artificial Intelligence that were no more than five years old and were peer-reviewed.
Incomplete — Participant completed only some of the components of a task. Example: the participant successfully found an article but did not save it for retrieval.
Failed — Participant was not able to complete any portion of the task. Example: the participant did not find a known item (Beowulf).
22. Results
How successful were users in completing tasks?
Encore: Completed 45%, Incomplete 20%, Failed 35%
Summons: Completed 60%, Incomplete 25%, Failed 15%
23. Results
Average Time on Task in Encore and Summons
Based on our findings, Task 2 in Encore gave users the most problems: it took an average of 7:50
minutes for users to complete the task or give up and move on to the next one. Similarly, it took
users an average of 5:15 minutes to complete or end Task 3 in Summons.
24. Results
Time on Task in Encore and Summons
User 1 User 2 User 3 User 4
Encore
Task 1 4:31 2:44 5:40 2:51
Task 2 7:39 5:29 14:25 3:49
Task 3 1:30 3:49 9:50 4:45
Task 4 4:43 4:39 4:31 3:27
Task 5 0:14
Summons
Task 1 0:42 0:48 1:44 2:44
Task 2 0:40 0:42 4:03 0:50
Task 3 2:41 2:40 7:18 8:21
Task 4 1:29 2:14 6:16 2:22
Task 5
In general, it took users much more time to complete or end tasks in Encore than in Summons. Task 2 in Encore gave users
the most problems, as their completion times there were the highest.
In Summons, users were much more successful in completing tasks. Task 3 gave users the most trouble, with times varying
between 2:40 and 8:21 minutes; even so, these times do not compare to the problems Task 2 caused in Encore. Comparing
Task 2 across the two tools, users cut their time by more than half.
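The 7:50 and 5:15 averages quoted earlier can be reproduced from the table (tasks 1-4 only, since the Task 5 timings are incomplete); a small sketch:

```python
# Reproduce the average time-on-task figures from the table above
# (tasks 1-4 only; Task 5 timings are incomplete).
def to_seconds(mmss):
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

times = {  # rows are users 1-4; columns are tasks 1-4
    "Encore":  [["4:31", "7:39", "1:30", "4:43"],
                ["2:44", "5:29", "3:49", "4:39"],
                ["5:40", "14:25", "9:50", "4:31"],
                ["2:51", "3:49", "4:45", "3:27"]],
    "Summons": [["0:42", "0:40", "2:41", "1:29"],
                ["0:48", "0:42", "2:40", "2:14"],
                ["1:44", "4:03", "7:18", "6:16"],
                ["2:44", "0:50", "8:21", "2:22"]],
}

for tool, rows in times.items():
    for task, col in enumerate(zip(*rows), start=1):
        avg = sum(to_seconds(t) for t in col) / len(col)
        print(f"{tool} task {task}: {int(avg // 60)}:{int(avg % 60):02d}")
# Encore task 2 averages 7:50 and Summons task 3 averages 5:15,
# matching the figures quoted earlier.
```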
25. Interesting Observations
The observations described below include those things that we felt were important or interesting enough to
make a note of while we were conducting our testing.
• Only one participant found the + button in the Format field of the Advanced Search screen in Encore.
• Only one participant typed in “peer reviewed” as a search term in the advanced search criteria. The other participants were unsuccessful in filtering articles by this criterion because there was no specific option, and it didn’t occur to them to include it as a search term.
• One participant had no successful searches in Encore because this participant never located the “Get More Results in: Link+ and WorldCat” link on the right-hand side of the screen. From our observations, this participant ignored the right-hand side of the screen entirely during the usability testing of this tool.
• Only one participant was successful in saving the article for retrieval in Task 5. The other participants found the article but did not save it.
• Participants who used a basic search and then narrowed their search results using additional search criteria seemed to get better results with both tools (provided that they had already clicked Link+ or WorldCat in Encore) than those who started with Advanced Search.
One of the most surprising findings we came across in our usability testing had to do with one participant’s
comments as he was searching. He spoke about how he tries to use the subject matter terms to get to the
right Library of Congress catalog, indicating that he applies the traditional thought patterns of a library
catalog to his search. Discovery tools are supposed to provide a different search experience for the user
than that provided by a traditional library catalog search. This made us question whether the two tools we
tested are providing that experience.
26. Recommendations
Encore Recommendations
Highlight words in results page to show user how terms are being used.
All results should appear below the primary search field.
Use better labeling such as “Get More Results in: Link+ and WorldCat.” If no search results are found,
this should be located where the results would be. If results are found, move this to the top left sidebar
where there is a higher probability a user would look for more information on the page.
Summons Recommendations
Have only one Submit button on the Advanced Search form.
Display the words “No Entries Found” in a larger, red font below the search fields where results
would normally appear, so that users clearly understand that the search has completed and
returned zero results.
Common Recommendations
Add a Modify Search link below the search field so users can edit or change the search directly
from the Search Results page.
Offer a list of top search-term suggestions.
Provide spelling suggestions so that a misspelled search does not simply end in “No Entries Found”.
Provide a clear call-out button for help or instructions on how to use the discovery tool, perhaps
with a video tutorial or PDF guide on how to use the system. The Help link/button should also be
clearly labeled on all internal web pages.
27. Overall Recommendation
Should a purchasing decision need to be made between Encore and Summons, our group feels that
Summons would be the better discovery search tool for the graduate students and faculty members
of University of Baltimore, and as a whole for the University System of Maryland and Affiliated
Institutions (USMAI). While some changes should be made to Summons before it is integrated into
USMAI, we feel that these changes are minor in comparison to the issues Encore would need to
address. The results show that many users felt Summons was much easier to use than Encore.
Users successfully completed 60% of the five tasks in Summons, versus only 45% of the tasks in
Encore. The overall failure rate shows an even greater disparity: users testing Summons had only
a 15% failure rate, compared to Encore’s 35% failure rate.
We feel that if our recommendations were taken into consideration, the failure rate in Summons
would drop significantly.
29. Heat Map
Heat Maps show how long each part of the screen has been looked at, as well as
the main areas of the website users looked at and focused on.
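Conceptually, a gaze heat map is built by accumulating fixation durations into a grid of screen cells. The sketch below illustrates the idea only; the fixation points, screen size, and cell size are invented and are not actual Tobii export data:

```python
# Sketch of how a gaze heat map is built: each fixation adds its
# duration to the grid cell it falls in. Fixation data is invented.
fixations = [(200, 300, 450), (210, 310, 300), (640, 300, 120), (205, 640, 500)]  # (x px, y px, ms)

WIDTH, HEIGHT, CELL = 1280, 1024, 64  # assumed screen size and cell size
heat = [[0] * (WIDTH // CELL) for _ in range(HEIGHT // CELL)]
for x, y, duration in fixations:
    heat[y // CELL][x // CELL] += duration  # longer looks -> "hotter" cell

# The hottest cell marks where users looked longest.
hottest = max(
    ((r, c) for r in range(len(heat)) for c in range(len(heat[0]))),
    key=lambda rc: heat[rc[0]][rc[1]],
)
print("hottest cell:", hottest, "total ms:", heat[hottest[0]][hottest[1]])
```

Visualization software like Tobii Studio renders this accumulated grid as a color overlay on a screenshot of the page.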
30. Heat-Maps
These heat maps show that users have a right-side “blindness” in both Encore and Summons. In other
words, most users did not notice that there was a portion on the right side bar in Encore that said
“Other Sources”. We assume this “blindness” occurs because advertisements and other banners
normally appear on the right-hand side of webpages. On average, it took participants approximately
1:34 minutes to find “Other Sources” in Encore. User 4 in particular never even found the “Other
Sources” side bar. A similar pattern of “blindness” was also found in Summons. For example, none of
our participants looked to the right sidebar in Summons to see what features or links were available.
User 2 even commented that her field of vision was in the center of the screen and to the left.
32. Wireframes
The following wireframes were created in response to our recommendations, based on both of
the discovery search tools we tested. We envisioned a discovery search tool that incorporates
all of our recommendations. Our wireframes also include the positive features we found in
both systems.
44. Script
Introduction
M: Good <time of day>, thank you for making time in your schedule to participate today. My name is <name>, and I am a part of a group
working on a research study to learn about discovery (search) tools for the University System of Maryland and Affiliated Institutions (USMAI).
We have asked you here <time of day> to help us evaluate two discovery tools through your interaction with them.
At a basic level, a discovery tool is a library search engine that allows you to search multiple databases, along with the library's catalog, in one
search interface. During today’s session, you will be using <name discovery tool> and <name discovery tool> to complete some basic tasks
that a <student/faculty member> might carry out during their search process.
Please realize that we are not testing your ability. Instead, we’re testing the effectiveness of the tool. Do you feel comfortable in proceeding?
P: Participant acknowledges
M: Very good, if you will, please sign our release form here. It says:
“I, the undersigned, agree to be part of a usability study conducted at the University of Baltimore. As a participant, I agree to be videotaped
and to have my activities on the computer recorded. I allow my comments and observations about my experiences to become part of the
findings of the usability study.”
P: Participant signs release form.
M: Thank you. Before we begin with the usability test, we would like you to fill out this basic questionnaire in order to get an idea of your
background in searching online databases and online search engines for resources such as books, articles, or audiovisual items. (Hands the
participant the screener).
P: Participant fills out screener and hands it back to the moderator.
M: Thank you.
45. Script (cont.)
System Introduction and Calibration (Tobii)
M: In order to start the usability test, we need to set up the eye-tracking software. This computer system is equipped with an eye-
tracking device that measures eye positions and eye movements, which is helpful in analyzing how you use the
discovery tools. Before we begin, we will need to set up a session in the software. Would it be OK if I used your first name to
identify this session?
P: Participant responds.
M: O.K. <name of user>.
The next step is to calibrate the eye tracker to your eyes. During this calibration and throughout this session, please relax and
continue to look at the screen. While we will be conversing throughout the session, please try to keep your focus on the screen
rather than looking at me. Do you have any questions?
P: Participant responds.
M: OK, here we go… (start and run through calibration)
FAILURE
M: OK, the system had a little trouble picking up all the information it needed for calibration. I would like to recalibrate the areas
of the screen on which the system had some trouble. Are you ready?
P: Participant responds
M: OK – Here we go
SUCCESS
M: Excellent! I will now open the first discovery tool.
46. Script (cont.)
Alternate paragraph if using Morae:
(Will the Morae session be set up prior to beginning the test, or will we have to set it up before the user starts working with the
tool?)
M: The computer system you will be working on today is equipped with Morae software, which is used for usability testing and
user experience research. It records your interactions with the computer so they can be analyzed after your session. I’m now
going to set up a session in the software. Would it be OK if I used your first name to identify this session?
P: Participant Agrees
Familiarization
M: Here is the first discovery tool you will be working with. As you can see, <name of tool> includes options that allow you to
search library databases for books, articles, and other types of materials.
Overview of Tasks
M: To gauge the usability of the tool, we will be asking you to work your way through some tasks that we would expect a
<student or faculty member> to do. These will be tasks that you may or may not have done before.
Because of the test environment, you may not be able to FULLY complete a task; you may be stopped at a login prompt or by
a link that is unavailable. If you can’t find something or if it doesn’t make sense, just tell us, and we can inform the librarians
who are making decisions about the tools.
Remember: We are testing the discovery tools, not you. There are no right or wrong answers and no right or wrong way to do things.
Every action you take, no matter how you may feel about how it turns out, helps us to evaluate the product. Do you have any
questions before we begin?
P: Participant responds
47. Script (cont.)
Tasks – Discovery Tool 1
Insert Discovery Tool Tasks and Scenarios.
Tasks- Discovery Tool 2
M: Here is the second discovery tool that you will be working with. As you can see, <name of tool> includes options that allow
you to search library databases for books, articles, and other types of materials.
To gauge the usability of this tool, we will be asking you to work your way through the same tasks that you did using the previous
tool. Again, because of the test environment, you may not be able to FULLY complete a task; you may be stopped at a login
prompt or by a link that is unavailable. If you can’t find something or if it doesn’t make sense, just let us know. Do you have any
questions before we begin?
P: Participant responds
Insert Discovery Tool Tasks and Scenarios.
Conclusion
M: Those are all of the tasks we have today. Thank you for your assistance in evaluating the discovery tools; we appreciate your
time completing them for us. Your participation today will help us make recommendations that can make a difference to the UB
community. Do you have any further questions about the study before you leave today?
P: Participant responds.
M: (Answers questions if any are asked). Once again, thank you for spending your (morning/afternoon) with us today. As a thank
you, please accept this gift of appreciation. (Hand over $10 Starbucks gift card)
49. Task List
1. Find known item (book)
Your professor has asked you to read the Seamus Heaney translation of Beowulf. Find out if a copy is available in the library and, if so, where it is located. If a copy is NOT available to check out, are there any other options for obtaining or accessing the book?
Alternate wording for faculty: You are interested in reading Seamus Heaney’s translation of Beowulf. Find out if a copy is available in the library and, if so, where it is located. If a copy is NOT available to check out, are there any other options for obtaining or accessing the book?
2. Find known item (article) & save article citation
Your professor suggested that you consult an article entitled “Modernism and the Harlem Renaissance,” published in American Quarterly, for a paper you’re doing. You want to see if it’s available full-text from the library’s databases. Once you have found the article, can you save that information for later use?
Alternate wording for faculty: There is an article you recall seeing in American Quarterly on “Modernism and the Harlem Renaissance” that you think you might assign as a class reading. You want to see if it’s available full-text from the library’s databases. Once you have found the article, can you save the information to pass on to your students?
(Full citation, do not give to subject: Baker, Houston A. (1987). Modernism and the Harlem Renaissance. American Quarterly, 39(1), 84-97.)
3. Conduct topic search for print or AV materials
You are interested in learning basic Portuguese for an upcoming trip to Brazil. You want to see if the library has any elementary Portuguese language textbooks or audiovisual materials.
50. Task List
4. Conduct a topic search for all types of materials
You have to research Artificial Intelligence for a project, and your professor has asked you to use a variety of resources, including books and journal articles. The articles should be no more than five years old and be peer-reviewed. As a preliminary assignment, she wants you to report on how many relevant books and articles you found.
Alternate wording for faculty: You are researching recent developments in Artificial Intelligence. As a preliminary step, you want to compile a list of relevant books and peer-reviewed articles from the last five years that are available through your library.
5. Retrieve article
You’re ready to use the Modernism article you found earlier, and you want to retrieve it.