Perceived Risk and Self-Efficacy Regarding Internet Security in a Marginalized Community
Abstract
As part of the ongoing CRISP project (Communicating
Risk in Internet Security and Privacy), we conducted a
user study in a marginalized community to better
understand community members’ interactions with
computers and the Internet in terms of security and
privacy. We used the Health Belief Model to understand
what factors affect members’ behavior when a potential
attack is present. In particular, we focused on two
factors, perceived risk and self-efficacy, and
interviewed 44 participants about them. In this paper,
we report our preliminary quantitative and qualitative
findings.
Author Keywords
Urban Computing; Marginalized Community;
Marginalized Population; Underserved Community;
Usable Security; Browser Add-On; Qualitative Methods;
Quantitative Methods; Mixed Methods
ACM Classification Keywords
K.4.m Computers and Society: Miscellaneous
Introduction
The CRISP project at the University of San Francisco
aspires to find a better way of communicating potential
risks on the Internet to users at every proficiency level.
As part of the project, we worked with a non-profit
agency that runs a public computer lab in a low-income
Permission to make digital or hard copies of part or all of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that
copies bear this notice and the full citation on the first page. Copyrights
for third-party components of this work must be honored. For all other
uses, contact the Owner/Author.
Copyright is held by the owner/author(s).
CHI'15 Extended Abstracts, Apr 18-23, 2015, Seoul, Republic of Korea
ACM 978-1-4503-3146-3/15/04.
http://dx.doi.org/10.1145/2702613.2732912
Eunjin (EJ) Jung
Dept. of Computer Science
University of San Francisco
2130 Fulton St.
San Francisco, CA 94117 USA
ejung@cs.usfca.edu
Evelyn Y. Ho
Dept. of Communication Studies
University of San Francisco
2130 Fulton St.
San Francisco, CA 94117 USA
eyho@usfca.edu
Hyewon Chung
Dept. of Education
Chungnam National University
99 Daehak-ro Yuseong-gu
Daejeon, 305-764, Korea
hyewonchung7@gmail.com
Mark Sinclair
University of San Francisco
2130 Fulton St.
San Francisco, CA 94117 USA
mksinclair@dons.usfca.edu
neighborhood of San Francisco. The community
members have low computer proficiency and have
limited access to computers and the Internet. We
adopted the Health Belief Model (HBM) from the
health/risk communication literature to understand
what factors affect members’ behavior regarding
Internet security, and asked community members
about their self-efficacy and the risks they perceive. We
found that despite low proficiency, members were
aware of potential risks on the Internet, and many
employed behavioral strategies to protect themselves.
However, they had low self-efficacy and, because of it,
believed they were vulnerable to attacks on the
Internet, which they saw as a threat to their privacy.
Even though they had high confidence in our software,
low self-efficacy and perceived barriers impeded
adoption.
Why a Marginalized Community?
The CRISP project aims to understand the challenges in
designing the user interfaces of security software for
users at any proficiency level and across the digital
divide, including people with low proficiency, limited
access to computers and the Internet, and socio-
economic hardship.
Community Members
The lab offers walk-in hours for anyone to use
computers and the Internet, and classes on basic and
intermediate computer skills. The lab also has offered a
basic Internet security class intermittently. According to
the director of the lab, most community members live
on social security. They use email and Facebook to stay
connected and may visit YouTube and video game
websites for fun. They may use the Internet to find
resources, such as applying for social security and
finding jobs.
Why Use the Health Belief Model (HBM)?
Internet security behavior and health prevention
behavior have important similarities. Security experts
want users to avert security incidents by practicing
rather inconvenient strategies, such as running
antivirus software even though it can degrade the
performance of their computers; similarly, health
promotion experts want people to avoid fatty foods
even though they are often delicious, cheap, and easily
accessible.
Research in health promotion has long recognized the
importance of theory-based planning for identifying the
variables for targeting particular behaviors, even if the
application in actual program planning may be difficult
[3]. While numerous theories exist and overlap, in the
CRISP project, the Health Belief Model [2,6] had the
best fit. According to the HBM, a target population’s
likelihood to change their behavior is based on:
perceived susceptibility (perception of risk), perceived
severity (seriousness of threat), perceived benefits
(effectiveness of actions to reduce threat), perceived
barriers (negative actions or costs of behavior), cues to
action (stimuli to trigger action), and self-efficacy
(perceived competence in being able to act) [2,7].
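To make the model concrete, the six constructs are typically measured as survey scores and combined to predict the likelihood of action. The sketch below is a purely illustrative linear operationalization in Python; the weights are invented for illustration and are not estimates from this study or the HBM literature.

```python
# Hypothetical linear operationalization of the HBM: each construct is a
# survey score in [1, 5]; the weights are invented for illustration and
# are NOT estimated from this study.
HBM_WEIGHTS = {
    "perceived_susceptibility": 0.2,   # perception of risk
    "perceived_severity": 0.1,         # seriousness of the threat
    "perceived_benefits": 0.2,         # effectiveness of the protective action
    "perceived_barriers": -0.2,        # costs of the behavior (works against action)
    "cues_to_action": 0.1,             # stimuli that trigger action
    "self_efficacy": 0.2,              # perceived competence to act
}

def likelihood_of_action(scores):
    """Weighted sum of construct scores; higher means more likely to act."""
    return sum(HBM_WEIGHTS[k] * scores[k] for k in HBM_WEIGHTS)

# A profile like the one reported in this study: high perceived risk,
# high perceived barriers, low self-efficacy.
profile = {
    "perceived_susceptibility": 4,
    "perceived_severity": 4,
    "perceived_benefits": 4,
    "perceived_barriers": 4,
    "cues_to_action": 2,
    "self_efficacy": 1,
}
print(likelihood_of_action(profile))  # low score despite high perceived risk
```

Under any such weighting, a profile with high perceived risk but high barriers and low self-efficacy yields a low likelihood of action, which is the pattern this study observed.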
Using the HBM in a study of email attachments, Ng et
al. [4] found that perceived susceptibility,
perceived benefits, and self-efficacy affected a
person’s security behavior. Because the study was
of relatively tech-savvy users, Ng et al. conclude that
perceived barriers may not have been as relevant. As in
previous health behavior studies that have found
weaker effects for this construct, perceived severity
was not strongly correlated [2], and the particular cues
to action tested were not significant either [4].

Demographic Information of the Participants
Gender: In our sample of 44 members, 31 (70%) were
male, 10 (23%) were female, and 3 did not share their
gender information.
Ethnicity: Participants were quite diverse. Among 44
participants, 18 reported Caucasian, 13 Black or
African-American, 8 of Hispanic or Latino origin, and 4
Asian; 5 of them also had American Indian or Alaska
Native heritage. Participants could choose multiple
ethnicities, with 4 identifying as multicultural.
Education: 18 (41%) completed high school, 10 (23%)
received some college education, 3 (7%) completed a
2-year college degree, 8 (18%) completed a 4-year
college degree, and 3 (7%) had a graduate degree.
Only 2 participants did not complete high school.
While the HBM has been widely used in health
intervention research, it has been critiqued for its
limited ability to account for community/cultural norms
and structural barriers [1]. In this paper, we also pay
attention to
these aspects of users’ behaviors by inductively coding
the qualitative interviews for examples of any norms or
barriers that might also affect behavior.
User Study
We recruited participants by posting advertisements at
the public lab. Interviews were conducted in a corner of
the lab and generally ran 30-45 minutes. We
interviewed 44 participants, asking questions about
perceived risk (susceptibility) and self-efficacy, and
then showed them 8 websites: 2 financial services, 1
professional association, 1 personal blog, 2 game
websites, and 2 video viewing websites. For each
website, we asked them to guess whether it was safe or
malicious and what criteria they used to decide.
Malicious websites were defined as any websites that
can steal personal information or harm the computer
being used.
In Part 3, we introduced PopJART. This part of the
interview was designed so that the participants would
be exposed to recent types of attacks and to a
countermeasure against these attacks. Malicious
JavaScript is used in 3 types of attacks (#1, #3, #8)
among the 10 most critical web application security
risks in 2013 [5]. We explained how malicious
JavaScript code in recent attacks is invisible to users
and introduced PopJART as an example
countermeasure, which would assist them in identifying
malicious websites. We showed the 8 websites again,
this time with PopJART. 5 out of 8 websites had
specially designed JavaScript code similar to attack
code and PopJART showed warnings. When the
participant saw the warning, he or she could 1) “Play
Safe” and disable the potentially malicious JavaScript
code and interact with the rest of the website, 2) “Take
Risk” and interact with the website while executing the
malicious code, or 3) “Learn More Information.” After
revisiting the 8 websites, we asked the participants
whether they would use PopJART in the future.
Preliminary Findings
We organized our preliminary findings into three
themes: perceived risk (perceived susceptibility), self-
efficacy, and what affected users' adoption of the
security software PopJART.
Perceived Risk
Even though most participants had low computer
proficiency, they cared deeply about privacy and
security. One said: “This stuff, you put your information
in these pages, I think some other people can see it.
That's why I don't like to give any information.” They
were aware that a breach in Internet security could
result in losing control over their private information.
There was an atmosphere of general mistrust towards
the Internet. One said: “I have a friend that repairs
computers and he is convinced that these antivirus
programs have worms built into them so they can
continue business and create more business for them.
He is usually right about things.”
PopJART (Pop-up for
JavaScript Analysis
Research Tool)
PopJART is a Firefox browser
add-on developed by a team
of USF researchers including
Prof. Jung. Its backend is a
JavaScript classifier based on
a statistical language model
and text-based features that
identifies malicious JavaScript
code with high probability. As
part of the CRISP project, we
are in the process of
designing the front-end user
interface to enhance
perception of risk and self-
efficacy.
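As a rough illustration of the kind of text-based, language-model-style classification the backend relies on, the sketch below scores JavaScript snippets with character-trigram language models. This is a toy reconstruction, not the actual PopJART classifier; all training snippets and names here are invented.

```python
# Toy character-trigram language-model classifier for JavaScript snippets,
# in the spirit of the text-feature backend described above. This is NOT
# the actual PopJART classifier; all training snippets are invented.
from collections import Counter
import math

def ngrams(text, n=3):
    """Character n-grams of a string."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def train(snippets, n=3):
    """Count character n-grams over a corpus of code snippets."""
    counts = Counter()
    for s in snippets:
        counts.update(ngrams(s, n))
    return counts

def avg_log_prob(text, counts, n=3):
    """Add-one-smoothed average log-probability of text under a model."""
    total = sum(counts.values())
    vocab = len(counts) + 1
    grams = ngrams(text, n)
    ll = sum(math.log((counts[g] + 1) / (total + vocab)) for g in grams)
    return ll / max(len(grams), 1)

# Invented training data: plain DOM code vs. obfuscated eval-style code.
benign_lm = train(["document.getElementById('menu')",
                   "var total = price * qty;"])
malicious_lm = train(["eval(unescape('%75%6e%65'))",
                      "eval(String.fromCharCode(104,105))"])

def classify(snippet):
    """Label a snippet by whichever language model fits it better."""
    if avg_log_prob(snippet, malicious_lm) > avg_log_prob(snippet, benign_lm):
        return "malicious"
    return "benign"
```

Under these toy models, `classify("eval(unescape('%68%69'))")` leans malicious while ordinary arithmetic code leans benign; the real backend would use far richer features and training data.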
Figure 1 Perceived exposure to malicious websites
As shown in Figure 1, 15 participants (34%) said they
had no idea how often they visit malicious websites.
Nine participants (20%) answered “Once a day”, but
many were uncertain in their answers. For example,
one said, "I come here 3 times a week, so 3 times a
week!", meaning they believed they visited malicious
websites every time they used the Internet. One said:
“I’m extremely confident that I don’t know [how often I
visit malicious websites].” This served as another
indicator of low self-efficacy.
Thirty-one participants (70%) believed that just visiting
malicious websites without interacting could cause
harm. However, most could not explain what can
actually happen. When we showed them websites, most
relied on brand familiarity (“Western Union must be
safe”, “I trust YouTube. Yeah, I’ve been on YouTube
since 2006”) or on whether the website asks for
sensitive information such as social security number or
credit card number (“It has something like information
and they don't ask you for any password”).
Self-efficacy
As expected, participants reported low perceived self-
efficacy. Figure 2 shows that 26 participants (59%) did
not think that they could identify malicious websites,
while 18 participants (41%) believed they could.
Figure 2 Self-efficacy in detecting (and thus protecting
themselves against) malicious websites
Figure 3 Self-efficacy in differentiating between safe and
malicious websites just by the look of websites. If the
participant were to guess whether a given website is safe or
malicious just based on the look of website, how many guesses
out of 8 does he or she think would be correct?
Figure 3 shows that 14 participants (32%) said they
would be able to identify at most 1 out of 8 websites¹.
Twenty-two (50%) believed that they could detect at
least half accurately. The average was 3.3.
The correlation between having had a security and
privacy problem on the Internet and self-efficacy in
differentiating safe and malicious websites was not
statistically significant (r = -.127, p = .495), and
neither was the correlation between having heard of
anyone who had such a problem and self-efficacy
(r = .037, p = .842). Interestingly, Table 1 shows that
the same group of people was slightly more confident
with the help of PopJART.
Figure 4 shows how well the participants could identify
malicious websites. The correlation between
reported self-efficacy and actual performance in
differentiating between safe and malicious websites just
by the look of websites was not statistically
significant (r=.157, p=.361). This indicates that their
self-efficacy is not a good indicator of their actual
proficiency in security.
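For reference, the r values above are Pearson product-moment correlations, which can be computed as follows. The example data are invented for illustration; the study's raw responses are not shown here.

```python
# Pearson product-moment correlation, as used for the r values reported
# above. The example data are invented; the study's raw data are not shown.
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: reported self-efficacy (1-5) vs. correct guesses out of 8.
self_efficacy = [1, 2, 2, 3, 4, 4, 5, 5]
correct_guesses = [3, 5, 2, 4, 3, 6, 2, 5]
print(round(pearson_r(self_efficacy, correct_guesses), 3))
```

A value near zero, as in the study, means reported self-efficacy carries little information about actual performance.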
Security Software Adoption
PopJART was given as an example of software to help
differentiate between safe and malicious websites.
While participants reported high confidence in
PopJART’s accuracy (average 4.16 out of 5) and the
majority said they would install PopJART (31 out of 44
rated 4 or above), the correlation between the
confidence level and the adoption of PopJART was not
statistically significant (r=.239, p=.119).
¹ The scale was given from 1 to 8, and 3 participants wrote in 0.
Figure 4 Actual performance in differentiating safe and
malicious websites just by the look of the websites
Further investigation, however, revealed that those
who said that they would absolutely install the software
(n=22) strongly felt that with PopJART, they could
accurately differentiate between safe and malicious
websites (16 out of 22).
Surprisingly, those who would not install the software
at all (n=6) also strongly felt that with PopJART, they
could tell the difference between safe and malicious
websites (5 out of 6). This indicates that confidence in
PopJART's accuracy does not necessarily lead to its
adoption. In those cases, participants listed reasons
along the lines of "I don't know how to use it" and
"Just because I don't like, like anti-virus and those
kinds of software's because they use up a lot…they
make things run slower" (perceived barrier). Unlike for
the tech-savvy users in [4], perceived barriers may
play a more important role for this low-proficiency
group.
In [4], higher perception of risk and higher self-efficacy
led to better security behavior. 77% of those who
thought malicious websites could cause harm just by
visiting said they would adopt PopJART, while 60% of
those who did not think so said they would. Similarly,
80% of those who thought they could differentiate safe
and malicious websites said they would adopt PopJART,
while 65% of those who did not said they would.

Accurate identification with PopJART
(5-point Likert scale)
                          1    2    3    4    5
Had S&P problem     Yes   0    0    4    3   11
                    No    1    1    4    5   14
Heard of S&P        Yes   0    0    6    5   14
problem             No    1    0    1    1    3

Table 1. Those who have had a security or privacy
problem over the Internet, or have heard of anyone who
had one, are slightly more confident that they can
accurately differentiate between safe and malicious
websites with the help of PopJART. 61% (11 out of 18)
of those who had a problem were absolutely confident
they could, while 56% (14 out of 25) of those who did
not have a problem were. (5-point Likert scale: 1 = not
at all confident, 5 = absolutely confident)
Table 2 shows that those who have had a security or
privacy problem over the Internet, or heard of anyone
who had one, are more likely to install PopJART. Direct and
indirect exposure to security and privacy problems may
have increased their perception of risk and encouraged
security software adoption.
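The adoption shares discussed here can be recomputed from the raw Table 2 counts; a minimal check in Python (row counts copied from the table, ratings 1-5):

```python
# Ratings 1..5 for "will install on personal computer", from Table 2.
had_problem = [2, 0, 2, 1, 13]   # participants who had an S&P problem (n=18)
no_problem = [4, 2, 3, 7, 10]    # participants who had not (n=26)

def share_absolute(counts):
    """Fraction of respondents giving the top rating (5 = will absolutely install)."""
    return counts[-1] / sum(counts)

print(f"had problem: {share_absolute(had_problem):.0%}")  # 13 out of 18
print(f"no problem:  {share_absolute(no_problem):.0%}")   # 10 out of 26
```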
Discussion and Implications
As the HBM predicted, low self-efficacy impedes
security software adoption. However, low self-efficacy
also seemed to increase uncertainty regarding
perceived risk, so the correlation between perceived
risk and adoption was not straightforward. This may be
partially explained by participants' varied
understandings of security risk. It suggests that user
experience designers for security services may need to
focus more on increasing self-efficacy than on
communicating potential risk (especially when the risk
itself is not well defined). Also, gaining users' trust in
the software may matter more than convincing them of
its high performance.
Conclusion and Future Work
The CRISP project aims to illuminate the challenges in
designing user interfaces of security software for users
at any proficiency level and across the digital divide.
We plan to conduct an additional user study with
college students and compare their willingness to adopt
the security software with that of this community.
how to effectively change culturally diverse users’
security behavior in an inclusive manner.
Acknowledgements
We thank all the interview participants and all the
volunteers and staff members at the lab who allowed
us to use their lab for the interviews. We are grateful
to the student interviewers who conducted interviews
and to research assistant Victor Valle for his work.
Finally, authors Profs. Jung and Ho gratefully
acknowledge the Innovation Award from the USF
Provost office, and Prof. Chung gratefully acknowledges
a CNU research grant.
References
[1] Dutta-Bergman, M.J. Theory and practice in health
communication campaigns: A critical interrogation.
Health Communication, 18(2) (2005), 103-122.
[2] Janz, N. K. and Becker, M. H. The Health Belief
Model: A Decade Later. Health Education & Behavior,
11(1) (1984), 1-47.
[3] Langlois, M. A. and Hallam, J. S. Integrating
Multiple Health Behavior Theories Into Program
Planning: The PER Worksheet. Health Promotion
Practice, 11(2) (2010), 282-288.
[4] Ng, B.-Y., Kankanhalli, A. and Xu, Y. Studying
users' computer security behavior: A health belief
perspective. Decis. Support Syst., 46(4) (2009), 815-
825. doi: 10.1016/j.dss.2008.11.010
[5] Open Web Application Security Project. OWASP Top
Ten Project for 2013. (2013)
[6] Rosenstock, I. M. Historical origins of the health
belief model. Health Education Monographs, 2 (1974),
328-335.
[7] Rosenstock, I. M., Strecher, V. J. and Becker, M. H.
Social learning theory and the Health Belief Model.
Health Education Quarterly, 15(2) (1988), 175-183.
Will install on personal computer
(5-point Likert scale)
                          1    2    3    4    5
Had S&P problem     Yes   2    0    2    1   13
                    No    4    2    3    7   10
Heard of S&P        Yes   3    0    3    6   13
problem             No    1    1    2    0    2

Table 2. Those who have had a security or privacy
problem over the Internet, or heard of anyone who had
one, are more likely to install PopJART on their own
computer, if they had one. 72% (13 out of 18) of those
who had a problem said they would absolutely install
it, while 38% (10 out of 26) of those who did not have
a problem said so. (5-point Likert scale: 1 = will not
install at all, 5 = will absolutely install)