Evidence Summary
Usability Study Identifies Vocabulary, Facets, and Education as Primary
Primo Discovery System Interface Problems
A Review of:
Brett, K. R., Lierman, A., & Turner, C.
(2016). Lessons learned: A Primo usability study. Information Technology and Libraries, 35(1), 7-25. https://doi.org/10.6017/ital.v35i1.8965
Reviewed by:
Ruby Warren
User Experience Librarian
University of Manitoba Libraries
Winnipeg, Manitoba, Canada
Email: [email protected]
Received: 1 June 2017   Accepted: 11 Aug. 2017
© 2017 Warren.
This is an Open Access article distributed under the terms of the Creative
Commons Attribution-Noncommercial-Share Alike 4.0 International License
(http://creativecommons.org/licenses/by-nc-sa/4.0/),
which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly attributed, not used for commercial
purposes, and, if transformed, the resulting work is redistributed under the
same or similar license to this one.
Abstract
Objective – To discover whether users can effectively complete
common research tasks in a modified Primo Discovery System interface.
Design – Usability testing.
Setting – University of Houston Libraries.
Subjects – Users of the University of Houston Libraries Ex Libris Primo Discovery System interface.
Methods – The researchers used a think-aloud usability testing
methodology, asking participants to verbalize their thought processes as
they completed a set of tasks. Four tasks were developed and divided into two
task sets (Test 1 and Test 2), with session facilitators alternating sets for
each participant. Tasks were as follows: locating a known article, finding a
peer reviewed article on a requested subject, locating a book, and finding a
newspaper article on a topic. Tests were conducted in front of the library
entrance using a laptop equipped with Morae (screen and audio recording
software), and participants were recruited via an assigned “caller” at the
table offering library merchandise and food as a research incentive. Users
could opt out of having their session recorded, resulting in a total of 15
sessions completed, with 14 recorded. Thirteen of the 15 participants were
undergraduate students, one was a graduate student, one was a post-baccalaureate
student, and there were no faculty participants. Facilitators completed notes
on a standard rubric, coding participant responses into successes or failures
and noting participant feedback.
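To make the protocol concrete, the following minimal sketch (in Python, with hypothetical task names, a Session record, and an alternation scheme inferred from the description above; none of it is taken from the study's actual instrument) models how alternating task-set assignment and success/failure coding might be recorded:

    from dataclasses import dataclass, field
    from itertools import cycle

    # Hypothetical reconstruction of the protocol described above: two task
    # sets alternated between participants, each task coded on a standard
    # rubric as a success or a failure, with facilitator notes alongside.
    TEST_1 = ["locate a known article", "find a peer-reviewed article on a subject"]
    TEST_2 = ["locate a book", "find a newspaper article on a topic"]

    @dataclass
    class Session:
        participant_id: int
        task_set: str                                 # "Test 1" or "Test 2"
        recorded: bool                                # participants could opt out
        outcomes: dict = field(default_factory=dict)  # task -> "success"/"failure"
        notes: list = field(default_factory=list)     # facilitator feedback

    # Facilitators alternated the two task sets for each new participant.
    alternator = cycle([("Test 1", TEST_1), ("Test 2", TEST_2)])
    sessions = [Session(pid, next(alternator)[0], recorded=True)
                for pid in range(1, 16)]              # 15 sessions in total

    print(sum(s.task_set == "Test 1" for s in sessions))  # 8, as in the study
    print(sum(s.task_set == "Test 2" for s in sessions))  # 7, as in the study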
Main Results – All eight participants assigned Test 1 successfully
completed Test 1, Task 1: locating a known article. Participants expressed a
need for an author limiter in advanced search, and had difficulty using the
citation-formatted information to locate materials efficiently. Likewise, all
eight participants found an article on the requested subject in Test 1, Task 2,
but two were unable to determine whether the article met the peer-review requirement.
One participant used the peer-reviewed journals facet, while the rest attempted
to determine this using the item record or with facilitator help. All seven
participants in Test 2 were able to locate the book requested in Task 1 via
title search, but most had difficulty determining what steps to take to check
that book out. Five participants completed Test 2, Task 2 (finding a newspaper
article on a topic) unassisted, one completed it with assistance, and one could
not complete it at all. Five users did not notice the Newspaper Articles facet,
and no participants noticed resource type icons without facilitator prompting.
Conclusions – The researchers, while noting that there were few
experienced researchers and a narrow scope of disciplines in their sample,
conclude that there are a number of clear barriers to successful research in
the Primo interface. Participants rarely used post-search facets, although they
used pre-search filtering when possible, and ignored links and tabs within
search results in favour of clicking on the
material’s title. This led to users missing helpful tools and features. The
researchers conclude that a number of the usability problems with Primo’s
interface are usability problems common to discovery systems generally, and express
concern that vendors have not adequately addressed them. They also note
that a number of usability issues stemmed from misunderstandings of
terminology, such as “peer-reviewed” or “citation.” Finally, they conclude that while
they have been able to make several improvements to their Primo interface, such
as adding an author limiter and changing “Peer-reviewed Journals” to
“Peer-reviewed Articles,” further education of users will be the only way to
solve many of these usability problems.
Commentary
There is, as the authors of this study note,
substantial literature available on the usability of discovery systems, and on
the Primo interface in particular. This study, while not precisely replicating
any previously published usability studies of the Primo interface, does not
seek out or fill any gaps in the literature available; however, it is important
to conduct usability studies periodically to identify needs or issues unique to
an institution’s local context, a purpose this study ultimately serves.
This study scored an 88% overall validity rating on Glynn’s (2006)
critical appraisal tool for library and information science research, with
points deducted for the lack of representative diversity in the study
participant population (as noted by the study authors) and for the impact of
observer bias and observer influence on the results. The authors note that
usability study facilitators provided participants with guidance and prompting
to use certain features, which negatively impacts the face validity of the
study: completions obtained with facilitator assistance cannot tell us whether
the user would ultimately have been successful in independently navigating the
Primo interface, and they should have been recorded as incomplete tasks or
invalid results.
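For readers unfamiliar with the instrument, Glynn’s (2006) tool is, broadly, a checklist whose questions are answered yes, no, or not applicable, with validity expressed as the percentage of “yes” answers among the applicable questions. The minimal sketch below (in Python, with an invented answer sheet; it is not this reviewer’s actual worksheet) shows how an overall rating such as 88% is derived:

    # Illustrative scoring in the style of Glynn's (2006) checklist: the
    # answer sheet below is invented purely to show the arithmetic.
    answers = ["yes"] * 22 + ["no"] * 3 + ["n/a"] * 2

    applicable = [a for a in answers if a != "n/a"]
    validity = 100 * applicable.count("yes") / len(applicable)
    print(f"overall validity: {validity:.0f}%")   # -> overall validity: 88%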
Although study participants do not completely
represent the spectrum of library users at the University of Houston Libraries,
adequate information was collected from undergraduate students to inform design
decisions that would impact them. Although the number of respondents might seem
low for other types of research, for insight-gathering usability studies a
total of 13 undergraduate participants is quite high and more than enough to inform design
decisions (Nielsen, 2012).
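The quantitative argument usually given for this is the problem-discovery model popularized in Nielsen’s writing (and due to Nielsen and Landauer): the proportion of usability problems found by n test users is approximately 1 - (1 - L)^n, where L is the average probability (about 0.31 in their data) that a single user exposes a given problem. A quick computation, sketched below in Python, shows why 13 participants is a generous sample by this standard:

    # Problem-discovery model associated with Nielsen and Landauer:
    # expected proportion of usability problems found by n test users,
    # where L is the chance that one user exposes a given problem.
    def problems_found(n: int, L: float = 0.31) -> float:
        return 1 - (1 - L) ** n

    for n in (5, 13, 15):
        print(f"{n:>2} users -> {problems_found(n):.1%} of problems found")
    # Five users already surface roughly 84% of problems; 13 surface over 99%.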
The authors confirm a number of standard usability
findings, including the value of reducing the amount of jargon and unclear
terminology used in web interfaces. Should they decide to pursue further
research to confirm their hypothesis that instruction is the only way to reduce
interface difficulties caused by a lack of understanding of research
components, reducing observer influence on study results should be a top
priority.
References
Glynn, L. (2006). A critical appraisal tool for library and information research. Library Hi Tech, 24(3), 387-399. https://doi.org/10.1108/07378830610692154
Nielsen, J. (2012). How many test users in a usability study? Nielsen Norman Group. Retrieved from https://www.nngroup.com/articles/how-many-test-users/