Information Systems Implementation Delays and
Inactivity Gaps: The End User Perspectives
Matt Glowatz David Malone Ian Fleming
University College Dublin University College Dublin University College Dublin
School of Business School of Business School of Business
Dublin 4, Ireland Dublin 4, Ireland Dublin 4, Ireland
matt.glowatz@ucd.ie davemaloner@gmail.com flemingi@tcd.ie
ABSTRACT
This paper presents findings of an empirical study into how
delays and inactivity gaps affect end user perception in the
system implementation phase - and subsequently - during the
actual usage phase. The authors’ methods of research included
interviews and online surveys with employees within an
accounts team of a company named DFS. The findings indicate
that inactivity gaps (short delays) have a negative impact on end
user perception during the implementation phase, whereas long
delays have only a minor effect. The data reveals that end users
view longer delays more positively than shorter delays, with
neither type of delay having any bearing on satisfaction levels
during system usage.
Categories and Subject Descriptors
Not Applicable
General Terms
Human Factors
Keywords
Information Systems Implementation, Inactivity Gaps,
Technology Acceptance Model (TAM), End-user Satisfaction
1. INTRODUCTION
“Short cuts make long delays” [34]
1.1 Company background
Dynamo Financial Services (DFS) is the Irish subsidiary of a
global consulting conglomerate. In May 2008, the “Fund
Accounting” division of DFS in Dublin began the process of
developing a new accounting system called Ashton to replace
an obsolete system called Idrone. The system was to be used in
the organisation’s Irish (Dublin and Cork) and India offices.
The decision to implement Ashton was taken without a detailed
plan in place or any assessment of how much work the project
would involve. The implementation process was mismanaged
over the following five years, causing continuous delays and a
twelve-month period of inactivity during which development
halted; the system finally launched in January 2013.
1.2 Statistics on Delays
In its 2013 Chaos Manifesto [31], the Standish Group found that
74% of Information Technology (IT) projects experience time
overruns (see Figure 1). The equivalent figure for 2004 was 84%.
Although this represents a decrease of ten percentage points in
the intervening eight years, a large proportion of IT projects
still experience difficulties involving delays in development and
implementation.
Figure 1. Chaos Manifesto Statistics
In 2012, a study conducted by McKinsey and Company and the
University of Oxford [5] examining 5,400 large scale IT
projects (projects with budgets greater than US$15M) found
that - on average - large IT projects ran 45% over budget and
7% over time, while 56% delivered less value than predicted.
Davis and Venkatesh [9] stated that the success rate of systems
implementation is well below 50%, with a significant proportion
of new system projects encountering delays of various lengths
during their implementation.
1.3 Delays
Delays and how they are perceived are cognitive and therefore
subjective. Researchers have found that delays are, for the
majority of IT projects, a constant factor and a necessary evil.
Chau [8] found in his study that delays had a relatively small
and negative effect on end users’ perception and usage of a
system. However, apart from this one study, there is a dearth of
research on how delays affect end users in the context of IT
implementation. Studies of delays and user perception have been
performed in other fields, notably how transmission delays affect
interaction [29], how download delays in browsers affect users
and retail customers [28], and how delays affect the negotiation
of benefits and burdens in an employment context [25]. Even
though all this research has been conducted on delays,
academics have failed to address the issue of delays in IT
projects. Rose and Straub [28, p.57] stated “that a delay is cited
as a problem by numerous sources in the practitioner
literature”, albeit in varying contexts.
Permission to make digital or hard copies of all or part of
this work for personal or classroom use is granted without
fee provided that copies are not made or distributed for
profit or commercial advantage and that copies bear this
notice and the full citation on the first page. To copy
otherwise, or republish, to post on servers or to redistribute
to lists, requires prior specific permission and/or a
fee.
iiWAS2014, 4-6 December 2014, Hanoi, Vietnam.
Copyright 2014 ACM 978-1-4503-3001-5/14/12
Models of information technology acceptance have touched
upon the effect of delays, but only in the context of
variables influencing acceptance. Two of these academic
models are the Technology Acceptance Model (TAM) and the
Expectation Confirmation Model (ECM).
Delays are difficult to analyse, as one delves into the realm of
perceptions and how end users process information cognitively.
The Ashton project in DFS suffered numerous delays ranging
from long delays to periods of total inactivity. The reasons for
these delays are not dissimilar to reasons for many other
projects, such as poor planning, communication, costing issues
and poorly trained staff. But do these delays and inactivity gaps
affect end users and how they perceive the new system? Are
their cognitive thoughts favouring long delays or are they
indifferent to any type of delay? This study focuses on these
perceptions during the implementation phase and subsequently
the usage phase. In addition, this paper investigates whether a
link exists between implementation and usage with regard to
delays affecting satisfaction levels.
2. LITERATURE REVIEW
2.1 Introduction
The literature review examines articles related to end user
perception, adoption and acceptance during information systems
implementation, end user expectations, implementation gaps
and delays in systems development and the effect that delays
have on end users in different contexts as mentioned in the
introduction.
2.2 Delays and the Technology Acceptance
Model (TAM)
The authors examined the influence of delays in implementation
on users’ Perceived Usefulness of a System (PU) and Perceived
Ease of Use (PEOU) using the Technology Acceptance Model
(TAM) by Davis [9] (see Figure 2). This was researched
because the authors felt the gap in implementation of DFS’s
Ashton information system may have had an effect on users’
perceptions of the system.
Figure 2. The Technology Acceptance Model (TAM)
Chau [8] examines TAM and also the Computer Utilisation
Model and integrates these two models. His research found that
Ease of Use (EOU) had the largest influence on Computer
Aided Software Engineering (CASE) acceptance, and that the
implementation gap was found to have a relatively small and
negative effect on CASE acceptance by end users, through its
influence on ease of use, near-term usefulness and long-term
consequences. A point made about the implementation gap is
that the wider the gap between old and new [procedures, skills
and knowledge], the longer the time likely to be needed for
individuals to learn the new skills and acquire knowledge, and
thus to adapt to the new procedures (Chau [8, p.272]). He
notes that abandonment of CASE is a chronic problem and that
CASE adoption might require unlearning old practices. His
paper sees the implementation gap as an external variable (as in
the Technology Acceptance Model) that negatively affects the
usefulness and ease of use of CASE tools, as perceived by
systems developers. However, its direct effect on long-term
consequences was not significant.
Chau [8] appears to be one of the few academics examining the
effects of an implementation gap using TAM, and the authors
investigate whether his findings regarding CASE play out in a
similar manner with Ashton, with particular reference to end
users.
TAM is not without its critics. The authors were acutely
conscious of criticisms other authors have levelled against the
model. Hsieh and Wang [17, p.218] were scathing of the
following proclivity of information systems researchers, stating
“The independent attempts by several researchers to expand
TAM has led to a state of theoretical chaos and confusion in
which it is not clear which version of the many iterations of
TAM is the commonly accepted one”.
They also make the point that TAM serves as a “diversion of
researchers’ attention away from more important phenomena,”
and that while TAM research constantly stresses the importance
of PU, very little effort goes into investigating what actually
makes a system useful. PU and PEOU have largely been treated
as black boxes that very few academics and practitioners have
tried to pry open.
Venkatesh et al. [7], among the foremost proponents of TAM,
acknowledge that the model is predictive but, being generic, does
not provide a sufficient level of understanding or information to
give system designers a platform for creating user acceptance of
new systems. In response to these criticisms, the authors would
argue that TAM should account for delays, but given the number
of iterations of TAM, proposing a new model would be a better
approach.
Organisations need to be aware of the potential problems which
implementation gaps and delays in development can cause in
regard to end user acceptance of information systems. This
awareness may help increase end user enrolment and eventually
lead to stabilisation.
2.3 Expectation Confirmation Model (ECM)
ECM deals with end user system acceptance (see Figure 3). The
authors studied the work of Brown, Venkatesh, Kuruzovich and
Massey [7]. Their paper examines the three models of: (1)
disconfirmation, (2) ideal point and (3) experiences. Satisfaction
with an information system is calculated as a function of
expectations and experiences. Expectation, experiences and
system satisfaction were all measured using the main
components of the TAM, these being ease of use and
usefulness.
Figure 3. The Expectation Confirmation Model (ECM)
One useful point is that while technology acceptance research
typically uses behavioural intention as the dependent variable,
other research (Brown, Massey, Montoya-Weiss and Burkman,
2002) has suggested that system satisfaction, and not
behavioural intention to use the system, is the appropriate
dependent variable when the system in question is large scale,
integrated and its use is mandated in the organisation [7, p.56]
Ashton, the system at the core of the authors’ study, meets two
of the three criteria above, however, it is used by all the
employees in the “Fund Accounting” team. It could be argued
that system satisfaction is the dependent variable.
The ECM looks at expectations of employees, primarily post
implementation of information systems. In addition, it looks at
post-adoption satisfaction as a function of expectations,
perceived performance, and disconfirmation of beliefs (the
extent to which beliefs are not met).
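The ECM's core relationship can be illustrated with a minimal sketch. The function names and the simple subtraction below are invented for illustration only; ECM itself is a cognitive model estimated statistically (typically via structural equation modelling), not computed this way:

```python
# Illustrative ECM-style sketch: disconfirmation is the gap between
# what a user experienced and what they expected, and post-adoption
# satisfaction moves with the sign of that gap.
# Scores are hypothetical 5-point Likert values.

def disconfirmation(expectation: float, experience: float) -> float:
    """Positive = expectations exceeded; negative = expectations unmet."""
    return experience - expectation

def satisfaction(expectation: float, experience: float) -> str:
    d = disconfirmation(expectation, experience)
    if d > 0:
        return "satisfied (positive disconfirmation)"
    if d < 0:
        return "dissatisfied (negative disconfirmation)"
    return "confirmed (expectations met)"

# A user expected usefulness of 4/5 but experienced only 3/5:
print(satisfaction(4, 3))  # dissatisfied (negative disconfirmation)
```

The sketch only captures the direction of the disconfirmation effect; in the model proper, expectations also shape perceived performance itself.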
In the paper by Hsieh and Wang [17], PEOU, PU and
satisfaction are examined to see how they affect a user’s
extended use of a system, (extended use meaning using more
functionality to support job performance and tasks). Employees
in DFS have used Ashton since January 2013 and this study
examines if delays in information systems implementation
affected their extended use.
2.4 Delays in other fields
Limited research has been conducted into how delays in
development impact on employees’ perceptions, but plenty of
research has been conducted on usage after implementation.
The authors’ research attempts to establish if a link could exist
between time pre and post implementation in regard to user
perception and usage.
Chau [8] is the primary academic work the authors found that
examines delays and implementation gaps. However, while
academics have failed to get to grips with studying delays in an
IT context, they have studied delays in many other fields:
download delay on the internet [16], transmission delay in
teleconferencing [29], delay in receiving benefits and burdens
[25], and delays in playing live online games [26].
The study by Okhuysen, Galinsky and Uptigrove [25] on the
effect time delays can have when distributing benefits (financial
bonuses, pay rises) and burdens (redundancy, pay freezes) was
intriguing. For the majority of individuals, negative events are
more salient, and negative thoughts and information weigh more
heavily on the mind than positive ones. Humans tend to discount
the impact of benefits and burdens when delays occur in
receiving them. Rewards received sooner are often preferred over
future rewards; that is, the subjective value of an outcome is
discounted as a function of the delay.
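This discounting effect is commonly formalised in the behavioural literature as hyperbolic discounting, V = A / (1 + kD), where V is subjective value, A the undelayed value, D the delay, and k an individual discount rate. The sketch below is illustrative only; the discount rate and delay figures are invented, not drawn from this study:

```python
def discounted_value(amount: float, delay_years: float, k: float = 0.5) -> float:
    """Hyperbolic discounting: subjective value falls as delay grows.
    k is an illustrative individual discount rate, not an estimated one."""
    return amount / (1 + k * delay_years)

# A benefit nominally worth 100 "utility units":
print(discounted_value(100, 0))  # 100.0 - delivered immediately, full value
print(discounted_value(100, 5))  # value after a five-year delay (roughly 28.6)
```

Read against Ashton's five-year delay, the model suggests why end users might perceive a long-delayed system as a much-diminished benefit by the time it arrives.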
Renner [27] observed that a delay in receiving a benefit is
discouraging, which causes immediate benefits to be inflated in
value and delayed ones discounted. If these findings are
translated into the context of the authors’ study on Ashton, the
system being implemented is the benefit, but because the system
was delayed continuously over a five year period, the benefits
gets discounted and eroded by the end users.
The delay in implementation can potentially cause a benefit to
become a burden for end users. Implement the system quicker
and the benefit can be realised. Broadband speeds have
improved in the last ten years, but there are still servers that
cause download time delays for internet pages and files. Whilst
these delays are relatively short in nature, Hoxmeier and
DiCesare [16] found that, in browser-based applications, delays
in excess of twelve seconds can raise end users’ intolerance to
such a level that satisfaction with the provider decreases
substantially, which may lead to discontinued use. This shows
that even short delays cause issues for end users.
In 1987 Geist, Allen and Nowacyzk [11] advocated the
importance of response time and proposed that user perception
of computer system response time be studied further and a
model of user perception be designed. To outline further the
relevance and importance of delays and time as variables in
system implementation, Rushinek and Rushinek, [30] analysed
the results of a questionnaire they drafted to determine the
effects of seventeen different variables on user satisfaction. A
total of 15,288 questionnaires were sent to subscribers of
Computerworld magazine and the results showed that the
number one factor relating to user satisfaction was response
time. In the context of this study, response time is similar to
delays. Three of the top five responses in that study also related
to time factors, and whilst the variables concerning time and
delays are cognitive in nature, they cannot be ignored.
2.5 Other research
With regard to the mandatory use of information systems by
employees, Delone and McLean [10, p16] state that “even when
use is required, variability in the quality and intensity of this
use is likely to have a significant impact on the realisation of
system benefits”.
Wilson and Howcroft [36] recount the tale of nurses working at
the Eldersite Hospital in the North of England and how they
resisted enrolment with regard to their new Zenith Nursing
Information System. The nurses, who were the users, consciously
acted to determine their circumstances by rejecting a system they
saw as detrimental to their work. They complained that the
system was so slow and so lacking in user-friendliness that it
interfered with doing their jobs correctly. The nurses, the
‘relevant social group’ in question, consistently reported that
they had not been involved in the design of the Nursing
Information System. In this case the Eldersite nurses could
continue taking care of patients while ignoring the Zenith system
and its time-intensive characteristics. The case shows that
mandatory use of a system does not determine usage and success.
3. RESEARCH AIMS, OBJECTIVES AND
HYPOTHESES
3.1 Aims and Objectives
The aims and objectives of this study are:
§ To examine the effect long delays in system
implementation have on end users’ perceptions of that
system.
§ To examine the effect gaps of inactivity in system
implementation have on end user perceptions of that
system.
§ To examine if satisfaction levels of end users in the usage
phase are affected by the delays in the implementation
phase.
§ To see if the current models of IT acceptance, namely
TAM and ECM, are relevant to this research, or whether a
new model needs to be developed to account for delays.
§ To gain an understanding of other variables that might
cause delays to occur.
§ To see if delays in implementation have any bearing on
perceived success or failure.
The authors’ aim is to research the above objectives in a form
which allows subjective accounts of retrospective events from
employees and which can also accommodate online quantitative
survey tools. There appears to be a lack of empirical data
available on implementation-phase delays in the context of user
perception. The authors’ literature review delved into the
research already carried out, but the novel nature of their
empirical research has the potential to add to a research area that
could be described as neglected.
3.2 Long Delays in Implementation
For the purpose of this research project, a long delay is defined
as a period of five years or more from the time the new system
was initiated until it went live. This incorporates the inactivity
gap mentioned above as well as internal and vendor delays in
developing the system. The authors wanted to see how these
continuous delays over five years affected end user satisfaction
levels with the system and how they perceived these delays,
whether negatively or positively.
3.3 Implementation Gaps and Short Delays
This study defines an implementation gap as a twelve-month
period of non-activity where all development on the project is
stopped. The authors’ aim, while studying the implementation
gap in the Ashton development, was to develop a theory about
whether this twelve-month gap in systems development had any
influence on employees’ perceived levels of satisfaction with the
system, and whether they perceived the gap as a negative or
positive development.
3.4 Models of Information Technology
Acceptance
The Technology Acceptance Model (TAM) as envisioned by
Davis [9], was built upon a foundation laid by the Theory of
Reasoned Action (TRA); which was conceptualised by Ajzen
and Fishbein [2]. TRA is a model for the prediction of
behavioural intention, encompassing predictions of attitude and
predictions of behaviour. TAM is an information systems
theory, which models how end users come to accept and use
technology.
The model suggests that when end users are presented with a
new technology, a number of variables influence their choices
about how and when they will use it; notably:
1. Perceived usefulness (PU): the degree to which a person
believes a particular system would enhance his or her job
performance and
2. Perceived Ease of Use (PEOU): Davis defined this as “the
degree to which a person believes that using a particular
system would be free from effort” [9, p.82].
TAM demonstrates that external variables affect both PU and
PEOU. This meant that the authors could slot in their external
variable, delays, and see how this affected actual system use.
The survey data is used to see if end users’ experiences with
Ashton compared to the old system (Idrone), can affect
Perceived Usefulness. The survey data also studies Perceived
Ease of Use of the Ashton system.
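One simple way of treating delay as an external TAM variable is to correlate a perceived-delay score with PU or PEOU scores across respondents. The sketch below uses invented 5-point Likert data, not the study's actual survey responses, and a hand-rolled Pearson correlation:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical 5-point Likert scores, one pair per respondent.
perceived_delay = [5, 4, 4, 3, 2, 5, 4]        # "the delay felt severe"
perceived_usefulness = [2, 3, 2, 4, 5, 1, 3]   # PU of the new system

r = pearson(perceived_delay, perceived_usefulness)
print(f"r = {r:.2f}")  # a negative r would suggest delays depress PU
```

A full TAM study would of course model delay alongside the other external variables rather than via a single bivariate correlation; this only illustrates the direction of the proposed link.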
TAM purports to predict system success and failure. Adams,
Nelson and Todd [1] claim that – together - TAM and TAM’s
extension, TAM2, account for only 40% of a technological
system's use. The lack of any practical value is a criticism that
has been levelled against TAM.
Legris, Ingham and Collerette [18] suggest that TAM must be
extended to include variables that account for change processes.
TAM does not give delays the attention the authors think they
deserve. The authors attempt to discover whether delays can be
added to TAM to explain why end users accept or reject IT
systems.
The Expectation Confirmation Model (ECM) is a useful model
for the prediction of end user satisfaction with an information
system (see Figure 3). The ECM is a cognitive theory, which
seeks to explain post adoption satisfaction as a function of
expectations, perceived performance and disconfirmation of
beliefs. Satisfaction with an information system can be a
superior measure of a system’s success than use alone, because
use of a system can be mandatory in organisations. A system
could be a failure in employees’ eyes, in terms of their
satisfaction levels, but because it is mandatory and being used, it
is trumpeted by management as a success, which may not be
true.
Ginzberg [12] argues that the most consistent explanation for
why people are satisfied with an information system is that their
expectations are neither too high nor too low, but in reality
individuals’ expectations are wholly subjective. Bhattacherjee
[4] discusses cognitive beliefs and how individual perceptions
of system usefulness can change over time, and how satisfaction
can become the main behavioural determinant of success.
Expectations influence both perceptions of performance of an
information system and disconfirmation of beliefs, and affect
post adoption satisfaction indirectly.
3.5 Hypotheses
The authors defined the following five hypotheses which are
tested for the purpose of this study:
Hypothesis One (H1): Long delays in system implementation
have a negative effect on end users’ perception of the system
Hypothesis Two (H2): Short continuous gaps of inactivity in
system implementation have a negative effect on end users’
perception of the system
Hypothesis Three (H3): Long delays in system implementation
affect end users’ satisfaction negatively during actual usage
Hypothesis Four (H4): Short continuous gaps of inactivity in
system implementation affect end users’ satisfaction negatively
during actual usage
Hypothesis Five (H5): Delays have no impact on whether an
information system succeeds or not (success being defined as
enrolment of end users)
4. RESEARCH METHODS
4.1 Adapted Research Methods
This study utilises qualitative and quantitative research focusing
primarily on the Dublin office as this is where management, the
project team, system testers and the majority of end users
worked.
4.1.1 Qualitative
The retrospective nature of this research meant interviews were
used to question employees. Qualitative research can involve
the study of case studies, stories, experiences and, in the
authors’ case, interviews. This type of research lets the
researchers interpret the data they collect, so it can be defined
as an interpretive study of subjective issues and problems
regarding certain events. The method is described as
person-centred by Moustakas [23], in that personal problems or
situations become real and alive. Heidegger was an advocate of
a retrospective qualitative model called hermeneutic
phenomenology, which studies texts and multimedia. He
believed that understanding is a basic form of human existence:
understanding is not a way we know the world, but rather the
way we are [14].
4.1.2 Quantitative
In the context of examining a multinational organisation for this
research, the authors also decided to utilise online survey tools
to collect quantitative data without imposing adversely on
employees. Surveymonkey.com was used for the first survey.
Surveymonkey.com provides data collection, data analysis,
brand management, and consumer marketing services.
Qualtrics.com was used for the second survey; its statistical
analysis has been cited in many quantitative academic journals
[32].
4.2 Research Design, Structure and
Procedure
As the aim of the authors’ research was to see how delays and
implementation gaps could affect end users over a period of
time, a diary and case study based approach was deemed
appropriate. Case studies are well used in academia and can
ignite further research to develop theories explored.
Longitudinal research can help ascertain if opinions and
perceptions of systems change over time. By conducting
multiple interviews with certain employees, a diary of
interviews could be reviewed. These interviews were carried out
over a twelve month period between March 2013 and March
2014.
All interviewees were given lists of questions before scheduled
meetings, so that they would be comfortable with what was
being asked. Data was collected from interviews, observation
research and online surveys. These were retrospective in nature
using the hermeneutic approach. Online surveys gave the full
employee base a chance to contribute.
The first phase of structured interviews took place in the first
week of March 2013, and the interviewees were the head of
accounts (JF), the project manager (ND) and the main system
tester (WC). These interviews focused on the history behind the
decision process to implement the system, the ongoing
development process and the potential reasons why the system
was taking so long to go live. The interviews contained between
twenty-one and thirty questions and were a mix of open and
closed questions. The authors’ reasoning for this approach was
that they wanted the interviewees to have a forum to explain
their opinion of what the background to the Ashton
implementation had been. These interviews were used as the
basis for this study’s initial online survey sent out to the
employee user base in June 2013. A draft set of fifty-one
questions was first designed, discussed with JF and consolidated
to an agreed thirty-six questions. JF did not want too much of
employees’ time to be taken up by the online survey, hence the
reduction.
A pilot test was carried out using three employees asking ten
questions in an online survey. The purpose of the pilot was to
eliminate any possible duplications or inconsistencies. No data
from this was used in the final analysis. The authors then
proceeded to send the final draft of survey one to employees for
completion. The survey was open for four weeks and it was
agreed that anonymity would be removed in order to track
completion by the employees.
Second interviews took place in September and October 2013
with JF, ND and WC. These interviews were carried out as part
of the authors’ longitudinal research and to see if opinions and
perceptions of Ashton had changed since March 2013. The final
round of structured interviews took place in January and
February 2014 with WC and another system tester. JF and ND
were unavailable for further interviews. Again these interviews
examined the interviewees’ perceptions and changes in thoughts
processes regarding Ashton.
The second and final online survey focused on satisfaction
levels before and after usage, and whether the long delays and
one year gap of inactivity had affected it. As cognitive thoughts
can change over time the findings here were useful in answering
the research questions. The second survey was kept to twenty
questions. These questions were drafted using a five-point
Likert scale with strongly agree and strongly disagree at either
end of the scale. Some questions were open-ended. The
questionnaire was distributed in February 2014.
4.3 Data Set
Eight interviews and two online surveys (thirty-three employees
took part in the first online survey and twenty-five in the
second) were conducted over the period of research. The
majority of questions in the first survey were closed questions,
while the bulk of the questions in the second survey used the
Likert scale. The online survey tools used automatically
generated results for these questions.
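The percentage breakdowns the survey tools generate, and which Section 5 reports, can be reproduced from raw responses with a simple tally. The responses below are invented for illustration (they happen to match the Q2 percentages reported in Section 5.1, but are not the study's raw data):

```python
from collections import Counter

# Five-point Likert scale as used in the second survey.
SCALE = ["Strongly agree", "Agree", "Neither agree nor disagree",
         "Disagree", "Strongly disagree"]

def likert_breakdown(responses):
    """Percentage of respondents choosing each point on the scale."""
    counts = Counter(responses)
    n = len(responses)
    return {option: round(100 * counts[option] / n) for option in SCALE}

# 25 invented responses mirroring the reported Q2 distribution.
responses = (["Agree"] * 3 + ["Neither agree nor disagree"] * 14 +
             ["Disagree"] * 5 + ["Strongly disagree"] * 3)
print(likert_breakdown(responses))
```

This is the whole of the "automatic" analysis the tools perform for closed Likert questions; the open-ended text answers required the separate qualitative coding described next.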
The eight interviews and the text answers to the two surveys
were analysed using two theories. Firstly, a line-by-line
inductive analysis was conducted using coded units of meaning,
as per Glaser [13]. A
coded unit of meaning was defined by Tesch [33, p.116] as a
“segment of text that is comprehensible by itself and contains
one idea, episode or piece of information”. So each line of the
interviews and survey text was defined as per the method
outlined.
Secondly, using Heidegger’s phenomenological theory each
sentence is analysed using the question “what does this
sentence reveal about the phenomenon or experience being
described?” (see [35, p.93]).
5. RESULTS
5.1 Hypothesis One
Hypothesis One: Long delays in system implementation have a
negative effect on end users’ perception of the system.
Q17 on the first survey asked, “How did the length of time
before Ashton went live (approximately five years) affect your
perception of the system?”.
Only 15% of respondents agreed that the long delay in
implementation had negatively affected their perception of the
system. This figure is similar to the result for the same question
on the second survey, where 12% said the five-year delay
negatively affected their perception of the system. In the first
survey, 12% felt the delay positively affected their perception of
the new system. For the second survey, 32% either disagreed or
strongly disagreed that the five-year delay had a negative
impact on their satisfaction with the system. In addition, 73% of
respondents (second online survey) stated that the long delay in
development had little or no effect on their perception of the
system.
Q14 of the second survey asked respondents to choose one of
the following statements:
1. A long delay in development equals a good quality system.
(32%)
2. A long delay in development equals a poor quality system.
(50%)
3. A short delay in development equals a poor quality system.
(9%)
4. A short delay in development equals a good quality system.
(9%)
These answers support H1: half of the respondents believe a
long delay in systems development creates the perception that
the information system will be of poor quality.
Q2 of the second survey asked respondents whether they agreed
or disagreed with the following statement:
“The five year delay in development of Ashton had a negative
impact on my satisfaction with the system”.
Strongly agree 0%
Agree 12%
Neither agree nor disagree 56%
Disagree 20%
Strongly disagree 12%
This result was inconclusive. In a qualitative interview
conducted on 3rd January 2014, WC commented that the long
delay of five years negatively affected his perception of the
system, stating that:
“It was clear that the people developing Ashton were not
capable of doing a good job” and that DFS “should have
bought an [off-the-shelf] accounts package and implemented it
within one month”.
In conclusion, H1 can only be partially supported.
5.2 Hypothesis Two
Hypothesis Two: Short continuous gaps of inactivity in system
implementation have a negative effect on end users’ perception
of the system.
Q19 of the first survey asked:
“Do you think the mothballing (postponement of development
for one year) of the system had a positive or negative effect on
employees’ perception of the system? Please justify your
answer”.
The question was closed, as it had two possible answers,
positive or negative, but the authors allowed respondents to
elaborate further in an optional text box. This question can be
analysed in two ways: the positive/negative answers themselves,
and the reasons given, which can be examined separately to see
if trends appear. The result from the 33 respondents was as
follows:
Positive effect 39%
Negative effect 61%
The majority negative response indicates that the one-year gap had a negative effect on employees’ perception of the system, supporting H2. With regard to the implementation gaps in development, the main reasons given in the Irish office for positive answers were:
1. That the system wasn’t ready
2. Better to have a system that works
The main reasons for negative answers were:
1. Lack of confidence in the system
2. Something must be wrong with the system for such a
delay to happen
Q1 of the second survey asked respondents to give their
reaction to the following:
“The gap of one year in the middle of development negatively
affected my satisfaction levels with Ashton”:
Strongly agree 8%
Agree 16%
Neither agree nor disagree 48%
Disagree 24%
Strongly disagree 4%
This result offered only partial support: roughly a quarter of respondents agreed, a similar share disagreed, and almost half neither agreed nor disagreed.
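The classification of Likert distributions like the one above as supported, opposed or inconclusive can be made explicit. The sketch below is illustrative only: the function name and the simple-majority cut-off are hypothetical choices, not the procedure used in this study.

```python
def classify_likert(dist, majority=0.5):
    """Classify a Likert distribution of percentages (summing to ~100).

    dist: mapping of option -> percentage, e.g. {"Agree": 16, ...}
    Returns 'supported', 'not supported', or 'inconclusive', depending on
    whether agreement or disagreement reaches the (hypothetical) majority
    cut-off.
    """
    agree = dist.get("Strongly agree", 0) + dist.get("Agree", 0)
    disagree = dist.get("Strongly disagree", 0) + dist.get("Disagree", 0)
    if agree / 100 >= majority:
        return "supported"
    if disagree / 100 >= majority:
        return "not supported"
    return "inconclusive"

# Q1 of the second survey, percentages as reported above
q1 = {"Strongly agree": 8, "Agree": 16,
      "Neither agree nor disagree": 48,
      "Disagree": 24, "Strongly disagree": 4}
print(classify_likert(q1))  # inconclusive: 24% agree vs 28% disagree
```

Under this rule, neither camp reaches a majority for Q1, matching the verdict given in the text; a different cut-off would of course yield different classifications.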
In a qualitative interview on 4th January 2014, WC stated that, in his opinion, the one-year delay/implementation gap negatively affected his perception of the system, as it “gave [him] the impression [the Ashton developers] didn’t know what they were doing”.
In conclusion, H2 can be supported.
5.3 Hypothesis Three
Hypothesis Three: Long delays in system implementation affect
end users’ satisfaction negatively during actual usage.
Q13 of the second survey asked respondents to give their
reaction to the following statement:
“The delay mentioned had an impact on my satisfaction level
WHEN I STARTED TO USE Ashton”.
Strongly agree 4%
Agree 24%
Neither agree nor disagree 56%
Disagree 16%
Strongly disagree 0%
Again, this result was inconclusive: there may have been an impact, but it is difficult to quantify, and this question requires further research.
Q16 of the second survey asked respondents to give their
reaction to the following statement:
“Since I began using Ashton my satisfaction levels with the
system have increased”.
Strongly agree 4%
Agree 64%
Neither agree nor disagree 12%
Disagree 16%
Strongly disagree 4%
This result is conclusive with the majority of respondents
agreeing with the statement.
In a qualitative interview on 4th January 2014, when asked whether he believed that the long delay of five years in systems development had affected end users’ satisfaction levels, WC stated:
“No, I don’t believe so, everything that went on before is
irrelevant when you start using the system.”
Therefore, H3 is not supported.
5.4 Hypothesis Four
Hypothesis Four: Short continuous gaps of inactivity in system
implementation affect end users’ satisfaction negatively during
actual usage.
Q4 of the second survey asked:
“Is your user satisfaction with the Ashton system affected by
past delays, actual usage or by both?”
Past Delays 4%
Actual Usage 71%
Both 25%
This would suggest that H4 could not be supported.
Q8 of the first survey examined the following question:
“How would you describe your Information Technology
(specifically, systems development) knowledge”?
The respondents were very confident in their answers, with 88% describing their systems development knowledge as good or very good. Taken at face value, this may go some way towards explaining end-user satisfaction levels with Ashton, and would broadly be in line with Mahmood, Burn, et al.’s [21] hypothesis five: that “there will be a positive relationship between (self-reported) computer skills and user satisfaction.” This finding may contain a self-reporting bias, as one of the authors of this report was not under the impression that high levels of systems development proficiency were present among his co-workers.
Q18 of the first survey asked: “How has your perception of
Ashton changed over time”?
Twenty-one percent felt that their perception of Ashton had not
changed at all over time. One respondent explained that their
perception has not changed, and that they saw no benefit to
using Ashton over the old accounting information system.
Eighteen percent made comments referring to the importance of
actual use of the system. For example, one respondent claimed
“as I am learning to use it, I think it [Ashton] will become more
useful,” while another respondent proclaimed, “I have
welcomed [Ashton’s] implementation in Fund Accounts team in
a positive [way] but was [originally] not in favour of this…but
by using this system over time, I gradually [am coming] to like
it.”
Another respondent’s answer hinted at how far Ashton still had
to evolve in terms of systems development “At the beginning
(about 1-2 years ago) I thought ‘it’s a waste of money’, but now
I think it will work well in a few years’ time.”
Q5 of the second survey asked respondents whether they agreed or disagreed with the following:
“I am satisfied with the training I have received for Ashton”.
Strongly agree 4%
Agree 56%
Neither agree nor disagree 16%
Disagree 16%
Strongly disagree 8%
In a qualitative face-to-face interview with an Ashton systems tester on 3rd January 2014, when asked “Did a lack of training (or no training) impact on your satisfaction with Ashton?”, the individual replied: “Yes. It made me dissatisfied with the system”.
When asked “If further training was provided to you on the Ashton system, would this increase your satisfaction with the system?”, the individual replied: “Yes, it would”.
When asked “If further training was provided to you on the Ashton system in the future, would this make it more likely that you would continue to use the system?”, the system tester replied: “Yes”.
Q19 of the second survey asked respondents to give their
reaction to the following statement:
“Now that I have used the Ashton system, I am willing to learn
and use the extra functions (extended use) of Ashton”.
Strongly agree 20%
Agree 64%
Neither agree nor disagree 12%
Disagree 4%
Strongly disagree 0%
This result is affirmative with the majority of respondents
(84%) either agreeing or strongly agreeing with this statement.
Q3 of the second survey asked respondents whether they agreed
or disagreed with the following statement:
“Do you agree that the events (such as the events mentioned in
the two previous questions) affect my level of satisfaction when
ACTUALLY USING the Ashton system”?
Strongly agree 4%
Agree 24%
Neither agree nor disagree 44%
Disagree 28%
Strongly disagree 0%
This result was inconclusive.
The above data suggests that other variables affected satisfaction once end users started using the system, and that delays had little or no effect.
In conclusion, H4 cannot be supported.
5.5 Hypothesis Five
Hypothesis Five: Delays have no impact on whether an
information system succeeds or not (success being defined as
enrolment of end users).
Q36 of the first survey asked:
In your opinion has Ashton been a success or failure?
Success 42%
Failure 6%
Somewhere in between 52%
Recall that for Question 19 of the first survey, 61% stated that they believed the one-year postponement of the system’s development had a negative effect on their perception of the system; despite this, only 6% here judged Ashton a failure. This result supports Hypothesis Five: that delays have no impact on the success of the information system (where success is defined as enrolment of end users).
Q15 of the second survey asked respondents to give their
reaction to the following statement:
“Considering all delays in development and the fact that I have
used Ashton, I am confident that it will succeed”.
Strongly agree 0%
Agree 68%
Neither agree nor disagree 24%
Disagree 4%
Strongly disagree 4%
This result is conclusive and suggests that once respondents got
to use the Ashton system they became more confident it would
succeed. This finding also supports H5.
It appears that delays have no impact on end-user enrolment if the above definition of success is used. However, under the Standish Group’s definition of success, i.e. a system delivered on time, within budget and with all the required features and functions, Ashton could indeed be considered a failure: it consumed its entire budget of €130,000 (though it did not exceed it), the project was delivered late, and not all features were delivered.
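The Standish definition is conjunctive: all three criteria must hold for a project to count as a success. A trivial sketch makes this explicit, using the Ashton facts reported above (late, within budget, features incomplete); the function is illustrative, not part of either the study or the Standish methodology.

```python
def standish_success(on_time: bool, within_budget: bool, all_features: bool) -> bool:
    """Standish Group definition of success: a project succeeds only if it
    is delivered on time, within budget, and with all required features."""
    return on_time and within_budget and all_features

# Ashton as reported: delivered late, budget fully spent but not exceeded,
# and not all features delivered.
print(standish_success(on_time=False, within_budget=True, all_features=False))
# False -> a failure by the Standish definition, despite end-user enrolment
```

This is why the same project can be a "success" under an enrolment-based definition and a "failure" under the Standish one: the two definitions measure different things.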
In conclusion, H5 is supported.
6. DISCUSSION
6.1 Introduction
Previous academic literature did find that delays in implementing a system had a negative, albeit small, effect on user acceptance [8]. Research on delays in other fields found that long delays could soften the effect of burdens in a work environment and reduce the positive effect of benefits [25], and that even small delays of twelve seconds can leave individuals disillusioned and annoyed with browsers [28]. So whilst some research is available, the authors’ study makes a distinct contribution to the field.
6.2 Discussion of Results
From the authors’ research, gaps of inactivity did have a negative effect on end users’ perception. The gap in question was one year in duration and was due to staff turnover, budget issues and other matters taking precedence, such as regulatory compliance reporting. The empirical analysis showed that employees’ perceived ease of use and usefulness diminished in the implementation phase because of the inactivity gap, which occurred three years into development, when employees were already wondering whether the system would ever go live. Whilst employees looked unfavourably on the inactivity gap, the five-year delay before the system went live had a neutral to positive effect on their perception: only 12% of respondents viewed it as a negative development.
The idea that shorter delays and gaps of inactivity of roughly one year might have a greater negative impact on employees’ perception of an information system than longer delays of approximately five years is worth considering. Though it may seem counterintuitive, the longer system implementation takes, the less employee perception seems to be affected negatively. From the data the authors collected, 82% of respondents initially perceived Ashton as a good idea; thus, the majority of employees backed the new system implementation project. After five years of delays, when the authors asked employees whether they perceived Ashton positively or negatively, 85% gave positive replies, a figure that deviates only marginally from the initial 82%. From this it appears that smaller delays and gaps in implementation affect employees’ perception more negatively.
A tipping point may exist where, after a certain period of delay, a general sense of apathy sets in: employees may come to believe that the system will never actually be delivered. This could well have been the case with Ashton, with almost 25% of employees reportedly feeling completely indifferent towards the new system. This finding could have profound consequences for employee motivation concerning learning about and adopting an information system as part of daily working practices.
This finding also led the authors to question whether the timeframes involved in delays (after an initial period) actually matter. The empirical research suggests that shorter delays and inactivity gaps start off having a strong negative influence on end users’ perception of a system’s usefulness, but that this negative perception erodes as the period of delay lengthens. Again, the reason may be either that employees cease caring, or that they begin to believe management’s assertions that the information system will be delivered shortly. Given the lack of academic research on delays, this point would have to be teased out further in different scenarios to see if it holds true.
Gaps of inactivity, or shorter delays, do seem to affect employees’ perception more negatively than prolonged delays of five years. Gaps remain in the literature on delays, and academics need to address the issues raised here by studying them in larger settings.
The qualitative interviews tried to tease out what had happened in DFS for these delays to occur. Whilst these questions were not explicitly part of this study’s hypotheses, they are an important aspect of the delays variable and are now discussed. Could delays be reduced by better communication, planning and research practices? In interviews with management, there were admissions that communication and planning practices with the vendor could have been better. DFS, while admitting some fault, was unwilling to take responsibility for the delays; JF stated:
“People in DFS can’t be blamed for the delays. I blame poor communication with the vendor for problems… There was a knowledge gap on both sides.”
The interviews established that no hard research had been done on the functionality of Ashton; the majority of the research carried out was based on the Ashton model used in the UK.
Senior management admitted that the initial requirements and specifications for the information system were not properly communicated to the vendor. Several practices could be improved on for future implementation projects, keeping the following in mind:
• The importance of clear communication
• Set timelines and the importance of estimates and costs
• Agreeing expectations on both sides
While employees’ opinions on best-practice implementation guidelines differ, they seem broadly in line with what DFS implemented. The authors propose additional consultations with employees in order to tap into their ideas on information system implementation issues. Only 27% of end users were consulted when developing Ashton. In the authors’ opinion, this figure should be increased to at least 70% of end users, and to 100% if possible, especially when the user base is relatively small. Interviews can be useful tools for gathering ideas from employees, and online survey tools can be used without adversely affecting working schedules, as they can be completed outside working hours. DFS employed a lazy and meandering approach to employee consultation and communication, which caused a lack of clarity about what exactly the company wanted to get out of the Ashton project. Poor project management meant delays were inevitable.
More frequent, face-to-face contact with the vendor is necessary too. The vendor was located in the UK, so physical distance added a further potential barrier to communication; if addressing this required more travel to and from the UK, then so be it. There should also have been increased use of online collaborative tools, such as Skype and Google Hangouts. The globalisation of technology should mean that geographic distance is not a barrier to completing projects in a timely manner. JF stated that he did not think the vendor being based in the UK had any bearing on communication; however, this study suggests that it did hinder proper and frank discussion.
The knowledge gap between finance and IT is large, and skilled project managers need to be in place. When the authors initially began researching the Ashton implementation, they asked the IT department in DFS if they were willing to participate in qualitative interviews. They agreed, but subsequently reneged on this agreement (after a draft set of questions had been sent for the proposed interview), citing a conflict of interest and the fact that the project was still deemed to be ongoing. Whilst the real reason for this refusal may never be known, the authors can only speculate that they were unwilling to divulge sensitive information, or possibly felt at fault for the long delays in Ashton’s implementation. In the authors’ opinion, internal confusion and a lack of planning between departments was one of the root causes of delays; this poor relationship only added to the chaos of the implementation process and fuelled the ongoing delays.
6.3 Validity of IT Acceptance Models
The authors believe that, in this context and business environment, TAM failed to predict the acceptance of the Ashton information system. Using the criteria of TAM, the authors would have predicted that the Ashton system would be rejected and fail in terms of user enrolment. Benbasat and Barki [3] claim that TAM and TAM2 predict only 40% of system use, which leads the authors to question TAM’s predictive capabilities. The authors agree with Benbasat and Barki’s [3] assertion that researchers might be better off devoting their time and energy to new and more important strands of research; this is what the authors have attempted with their research on delays. Delays are too important a variable to be ignored and dismissed.
The authors found the Expectation Confirmation Model (ECM)
to be useful and relevant for academics and IT practitioners.
They believe that end user satisfaction with an information
system is a more suitable method of judging the success of an
information system, rather than TAM’s concentration on ‘actual
use’ of the system. In the authors’ opinion, actual use is a poor
measurement of success when use of the system is mandatory.
Although the authors’ research was primarily focused on the
implementation phase, and how delays affect end users, they did
question if research carried out previously on satisfaction during
usage would hold true.
H3 and H4 followed on from H1 and H2, asking whether satisfaction levels in the usage phase would be affected by the inactivity gap and the long delay. From the survey analysis and interviews, neither hypothesis was supported. Whilst it could be argued that these results were predictable, the authors still wanted to see whether delays would alter satisfaction at the usage stage. The authors’ study confirms previous research that actual system usage is a primary driver of satisfaction, while delays have a cognitive impact on employees’ perceptions in the implementation phase.
6.4 “The Implementation Delay Model”
The authors propose a new model called “The Implementation Delay Model” (IDM). This model aims to show how short gaps of inactivity and long delays, as defined in the authors’ research, affect how end users perceive a system, and ultimately whether they perceive it to be a success or failure before usage. The findings will need to be examined further in different environments and on a larger scale. The model is deliberately simple, so as to isolate what happens during the two types of delay: the inactivity gap of one year caused a negative perception, which can lead to a perception of system failure even before usage, whereas long delays of five years appear to elicit a neutral to positive perception from end users and, in that sense, have less of an impact.
7. CONCLUSION
As a proviso, the authors would like to stress that the following
conclusions are specific to the context of the working
environment at DFS, which can be classified as a unique case.
The first conclusion the authors can draw from the results is that there is strong evidence that implementation gaps (short delays) have a negative effect on employees’ satisfaction with an information system.
Secondly, longer delays in systems implementation seem to
have a neutral to positive effect on employee perception.
Thirdly, no clear relationship between implementation delays and their effect on usage of the Ashton information system could be identified. It appears that no matter how long the delay or how
botched the implementation, employees will still use the system
if it can be used.
Finally, the authors firmly believe that the area of delays is an
understudied area of information systems, which can reveal
hidden treasures for those undaunted by the challenge of
studying such an ephemeral subject.
8. FURTHER RESEARCH
To overcome the limitations of this research, the authors propose the following research projects to be carried out:
• Conduct a similar study covering different industries
• Conduct a similar study focusing on different types of organisations, such as small, medium, large, nationally and internationally operating ones.
• Conduct a study elaborating on the causes of delay
• Investigate a time range for the point at which apathy
sets in for end users experiencing delays in system
implementation
REFERENCES
[1] Adams, D. A; Nelson, R. R.; Todd, P. A., Perceived
usefulness, ease of use, and usage of information technology: A
replication, MIS Quarterly 16, (1992): 227–247
[2] Ajzen, I., and Fishbein, M. Understanding attitudes and
predicting social. Behaviour. Englewood Cliffs, NJ: Prentice-
Hall, (1980).
[3] Benbasat, I. and Barki, H. Quo vadis TAM?. Journal of
the association for information systems, 8,4 (2007): 7.
[4] Bhattacherjee, A. Understanding information systems
continuance: an expectation-confirmation model. MIS
quarterly (2001): 351-370.
[5] Bloch, M.; Blumberg, S; and Laartz, J. Delivering large-
scale IT projects on time, on budget, and on value. McKinsey
and Company. (2012) Available at:
http://www.mckinsey.com/insights/business_technology/deliver
ing_large-scale_it_projects_on_time_on_budget_and_on_value
[6] Bokhari, R.H. The relationship between system usage and
user satisfaction: a meta-analysis. Journal of Enterprise
Information Management, 18, 2, (2005): 211-234.
[7] Brown, S.A.; Venkatesh, V.; Kuruzovich, J.; and Massey, A.
P. Expectation confirmation: an examination of three competing
models. Organisational Behaviour and Human Decision
Processes, 105, (2008) 52-66.
[8] Chau, P. An empirical investigation on factors affecting the
acceptance of CASE by systems developers. Information and
Management, 30, (1996) 269-280.
[9] Davis, F.D., and Venkatesh, V. Toward preprototype user
acceptance testing of new information systems: implications for
software project management. Engineering Management, IEEE
Transactions on, 51, 1 (2004): 31-46.
[10] Delone, W.H., and McLean, E.R. The DeLone and
McLean model of information systems success: a ten-year
update. Journal of Management Information Systems, 19,4
(2003): 9-30.
[11] Geist, R.; Allen, R.; and Nowaczyk, R. Towards a model of user perception of computer systems response time. Proceedings of CHI+GI ’87, (1987).
[12] Ginzberg, M. J. Early diagnosis of MIS implementation
failure: promising results and unanswered
questions. Management Science, 27, 4 (1981): 459-478.
[13] Glaser, B.G., 1992. Basics of Grounded Theory Analysis:
Emergence Vs. Forcing. Mill Valley, CA: Sociology Press.
[14] Heidegger, M., 1962, Being and Time, New York: Harper
[15] Holt, N., 2003. Coping in professional sport: A case study
of an experienced cricket player. Athletic Insight, 5(1), pp. 1-11.
[16] Hoxmeier, J. A., and DiCesare, C. System response time
and user satisfaction: An experimental study of browser-based
applications. AMCIS 2000 Proceedings (2000): 347.
[17] Hsieh Po-An, J.J. and Wang, W. Explaining employees’
Extended Use of complex information systems. European
Journal of Information Systems, 16, (2007) 216-227.
[18] Legris. P,; Ingham. J,; and Collerette, P. Why do people
use information technology? A critical review of the technology
acceptance model. Information and Management, 40, 3, (2001)
191-204.
[19] Levy, Y. and Ellis, T. A Systems Approach to Conduct an Effective Literature Review in Support of Information Systems Research. Informing Science Journal, 9, (2006): 181-212.
[20] Malone, S, Career Transitions in Sport: A Psychological
Case Study (2013): 18 – 20
[21] Mahmood A.M.O, Burn, J. M., Gemoets, L. A., and
Jacquez, C. Variables affecting information technology end-user
satisfaction: a meta-analysis of the empirical
literature. International Journal of Human-Computer
Studies 52, 4, (2000): 751-771.
[22] Miscione, G. Telemedicine in the Upper Amazon: Interplay
with Local Medical Practices. MIS Quarterly, 31, 2, (2007) 403-
425.
[23] Moustakas, C., 1994. Phenomenological research methods.
Thousand Oaks, CA: Sage Publications
[24] Munn, P. and Drever, E. Using Questionnaires in Small
Scale Research. A Teachers’ Guide. Scottish Council for
Research in Education, (1990).
[25] Okhuysen, G. A., Galinsky, A. D., and Uptigrove, T. A.
Saving the worst for last: The effect of time horizon on the
efficiency of negotiating benefits and burdens. Organizational
Behavior and Human Decision Processes, 91, 2 (2003): 269-
279.
[26] Pantel L and Wolf L.C, On the impact of Delay on real-
time multiplayer games, ACM, May 2002
[27] Renner, K. E. Temporal integration: An incentive approach
to conflict resolution. In B. A. Maher (Ed.), Progress in
experimental personality research, 4, (1967) New York:
Academic Press.
[28] Rose, G.M., and Straub, D.W. The effect of download time
on consumer attitude toward the e-service retailer. E-service
Journal, 1,1 (2001): 55-76.
[29] Ruhleder, K and Jordan B, 1999, Meaning-Making Across
Remote Sites: How Delays in Transmission Affect Interaction,
Conference on Computer Supported Cooperative Work
[30] Rushinek, A. and Rushinek, S.F. What Makes Users Happy? Communications of the ACM, 29, 7, (1986): 594-598.
Standish Group International. The Chaos Manifesto 2013. Available at: www.versionone.com/assets/img/files/CHAOSManifesto2013.
[31] Strutz, M.L. (2008). A Retrospective Study of Skills,
Traits, Influences, and School Experiences of Talented
Engineers. ASEE North Central Section
Conference Ilin.asee.org
[32] Tesch, R., 1990. Qualitative Research: Types and software
tools: New York: Falmer Press.
[33] Tolkien J.R.R. (1954), The Fellowship of the Ring
[34] Van Manen, M., 1997. Researching the Lived Experience: Human Science for an Action Sensitive Pedagogy, 2nd ed. London, Ontario: The Althouse Press.
[35] Wilson, M., and Howcroft, D. Power, politics and
persuasion in IS evaluation: a focus on ‘relevant social
groups’.The Journal of Strategic Information
Systems, 14,1(2005):17- 43
[36] Wittmann, M. and Paulus, M.P. Decision making, impulsivity and time perception. Trends in Cognitive Sciences, (2007).
 
A SURVEY ON ACCURACY OF REQUIREMENT TRACEABILITY LINKS DURING SOFTWARE DEVELO...
A SURVEY ON ACCURACY OF REQUIREMENT TRACEABILITY LINKS DURING SOFTWARE DEVELO...A SURVEY ON ACCURACY OF REQUIREMENT TRACEABILITY LINKS DURING SOFTWARE DEVELO...
A SURVEY ON ACCURACY OF REQUIREMENT TRACEABILITY LINKS DURING SOFTWARE DEVELO...ijiert bestjournal
 
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMSAN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMSijseajournal
 
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMSAN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMSijseajournal
 
CRESUS: A TOOL TO SUPPORT COLLABORATIVE REQUIREMENTS ELICITATION THROUGH ENHA...
CRESUS: A TOOL TO SUPPORT COLLABORATIVE REQUIREMENTS ELICITATION THROUGH ENHA...CRESUS: A TOOL TO SUPPORT COLLABORATIVE REQUIREMENTS ELICITATION THROUGH ENHA...
CRESUS: A TOOL TO SUPPORT COLLABORATIVE REQUIREMENTS ELICITATION THROUGH ENHA...cscpconf
 
The impact of usability in information technology projects
The impact of usability in information technology projectsThe impact of usability in information technology projects
The impact of usability in information technology projectsCSITiaesprime
 
Analysis of the User Acceptance for Implementing ISO/IEC 27001:2005 in Turkis...
Analysis of the User Acceptance for Implementing ISO/IEC 27001:2005 in Turkis...Analysis of the User Acceptance for Implementing ISO/IEC 27001:2005 in Turkis...
Analysis of the User Acceptance for Implementing ISO/IEC 27001:2005 in Turkis...IJMIT JOURNAL
 
A Conceptual Design And Evaluation Framework For Mobile Persuasive Health Tec...
A Conceptual Design And Evaluation Framework For Mobile Persuasive Health Tec...A Conceptual Design And Evaluation Framework For Mobile Persuasive Health Tec...
A Conceptual Design And Evaluation Framework For Mobile Persuasive Health Tec...Kate Campbell
 
Validity of a graph-based automatic assessment system for programming assign...
Validity of a graph-based automatic assessment system for  programming assign...Validity of a graph-based automatic assessment system for  programming assign...
Validity of a graph-based automatic assessment system for programming assign...IJECEIAES
 
Monitoring and Visualisation Approach for Collaboration Production Line Envir...
Monitoring and Visualisation Approach for Collaboration Production Line Envir...Monitoring and Visualisation Approach for Collaboration Production Line Envir...
Monitoring and Visualisation Approach for Collaboration Production Line Envir...Waqas Tariq
 
A SECURITY EVALUATION FRAMEWORK FOR U.K. E-GOVERNMENT SERVICES AGILE SOFTWARE...
A SECURITY EVALUATION FRAMEWORK FOR U.K. E-GOVERNMENT SERVICES AGILE SOFTWARE...A SECURITY EVALUATION FRAMEWORK FOR U.K. E-GOVERNMENT SERVICES AGILE SOFTWARE...
A SECURITY EVALUATION FRAMEWORK FOR U.K. E-GOVERNMENT SERVICES AGILE SOFTWARE...IJNSA Journal
 
A SECURITY EVALUATION FRAMEWORK FOR U.K. E-GOVERNMENT SERVICES AGILE SOFTWARE...
A SECURITY EVALUATION FRAMEWORK FOR U.K. E-GOVERNMENT SERVICES AGILE SOFTWARE...A SECURITY EVALUATION FRAMEWORK FOR U.K. E-GOVERNMENT SERVICES AGILE SOFTWARE...
A SECURITY EVALUATION FRAMEWORK FOR U.K. E-GOVERNMENT SERVICES AGILE SOFTWARE...IJNSA Journal
 
235429094 jobportal-documentation
235429094 jobportal-documentation235429094 jobportal-documentation
235429094 jobportal-documentationsireesha nimmagadda
 
USER REQUIREMENTS MODEL FOR UNIVERSITY TIMETABLE MANAGEMENT SYSTEM
USER REQUIREMENTS MODEL FOR UNIVERSITY TIMETABLE MANAGEMENT SYSTEMUSER REQUIREMENTS MODEL FOR UNIVERSITY TIMETABLE MANAGEMENT SYSTEM
USER REQUIREMENTS MODEL FOR UNIVERSITY TIMETABLE MANAGEMENT SYSTEMijseajournal
 
Transitioning IT Projects to Operations Effectively in Public Sector : A Case...
Transitioning IT Projects to Operations Effectively in Public Sector : A Case...Transitioning IT Projects to Operations Effectively in Public Sector : A Case...
Transitioning IT Projects to Operations Effectively in Public Sector : A Case...ijmpict
 
IRJET- Speech and Hearing
IRJET- Speech and HearingIRJET- Speech and Hearing
IRJET- Speech and HearingIRJET Journal
 

Ähnlich wie p346-glowatz (20)

Understanding User’s Acceptance of Personal Cloud Computing: Using the Techno...
Understanding User’s Acceptance of Personal Cloud Computing: Using the Techno...Understanding User’s Acceptance of Personal Cloud Computing: Using the Techno...
Understanding User’s Acceptance of Personal Cloud Computing: Using the Techno...
 
The impact of user involvement in software development process
The impact of user involvement in software development processThe impact of user involvement in software development process
The impact of user involvement in software development process
 
REGULARIZED FUZZY NEURAL NETWORKS TO AID EFFORT FORECASTING IN THE CONSTRUCTI...
REGULARIZED FUZZY NEURAL NETWORKS TO AID EFFORT FORECASTING IN THE CONSTRUCTI...REGULARIZED FUZZY NEURAL NETWORKS TO AID EFFORT FORECASTING IN THE CONSTRUCTI...
REGULARIZED FUZZY NEURAL NETWORKS TO AID EFFORT FORECASTING IN THE CONSTRUCTI...
 
Generating a Domain Specific Inspection Evaluation Method through an Adaptive...
Generating a Domain Specific Inspection Evaluation Method through an Adaptive...Generating a Domain Specific Inspection Evaluation Method through an Adaptive...
Generating a Domain Specific Inspection Evaluation Method through an Adaptive...
 
A SURVEY ON ACCURACY OF REQUIREMENT TRACEABILITY LINKS DURING SOFTWARE DEVELO...
A SURVEY ON ACCURACY OF REQUIREMENT TRACEABILITY LINKS DURING SOFTWARE DEVELO...A SURVEY ON ACCURACY OF REQUIREMENT TRACEABILITY LINKS DURING SOFTWARE DEVELO...
A SURVEY ON ACCURACY OF REQUIREMENT TRACEABILITY LINKS DURING SOFTWARE DEVELO...
 
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMSAN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS
 
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMSAN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS
 
CRESUS: A TOOL TO SUPPORT COLLABORATIVE REQUIREMENTS ELICITATION THROUGH ENHA...
CRESUS: A TOOL TO SUPPORT COLLABORATIVE REQUIREMENTS ELICITATION THROUGH ENHA...CRESUS: A TOOL TO SUPPORT COLLABORATIVE REQUIREMENTS ELICITATION THROUGH ENHA...
CRESUS: A TOOL TO SUPPORT COLLABORATIVE REQUIREMENTS ELICITATION THROUGH ENHA...
 
The impact of usability in information technology projects
The impact of usability in information technology projectsThe impact of usability in information technology projects
The impact of usability in information technology projects
 
Analysis of the User Acceptance for Implementing ISO/IEC 27001:2005 in Turkis...
Analysis of the User Acceptance for Implementing ISO/IEC 27001:2005 in Turkis...Analysis of the User Acceptance for Implementing ISO/IEC 27001:2005 in Turkis...
Analysis of the User Acceptance for Implementing ISO/IEC 27001:2005 in Turkis...
 
A Conceptual Design And Evaluation Framework For Mobile Persuasive Health Tec...
A Conceptual Design And Evaluation Framework For Mobile Persuasive Health Tec...A Conceptual Design And Evaluation Framework For Mobile Persuasive Health Tec...
A Conceptual Design And Evaluation Framework For Mobile Persuasive Health Tec...
 
Validity of a graph-based automatic assessment system for programming assign...
Validity of a graph-based automatic assessment system for  programming assign...Validity of a graph-based automatic assessment system for  programming assign...
Validity of a graph-based automatic assessment system for programming assign...
 
STUDY OF AGENT ASSISTED METHODOLOGIES FOR DEVELOPMENT OF A SYSTEM
STUDY OF AGENT ASSISTED METHODOLOGIES FOR DEVELOPMENT OF A SYSTEMSTUDY OF AGENT ASSISTED METHODOLOGIES FOR DEVELOPMENT OF A SYSTEM
STUDY OF AGENT ASSISTED METHODOLOGIES FOR DEVELOPMENT OF A SYSTEM
 
Monitoring and Visualisation Approach for Collaboration Production Line Envir...
Monitoring and Visualisation Approach for Collaboration Production Line Envir...Monitoring and Visualisation Approach for Collaboration Production Line Envir...
Monitoring and Visualisation Approach for Collaboration Production Line Envir...
 
A SECURITY EVALUATION FRAMEWORK FOR U.K. E-GOVERNMENT SERVICES AGILE SOFTWARE...
A SECURITY EVALUATION FRAMEWORK FOR U.K. E-GOVERNMENT SERVICES AGILE SOFTWARE...A SECURITY EVALUATION FRAMEWORK FOR U.K. E-GOVERNMENT SERVICES AGILE SOFTWARE...
A SECURITY EVALUATION FRAMEWORK FOR U.K. E-GOVERNMENT SERVICES AGILE SOFTWARE...
 
A SECURITY EVALUATION FRAMEWORK FOR U.K. E-GOVERNMENT SERVICES AGILE SOFTWARE...
A SECURITY EVALUATION FRAMEWORK FOR U.K. E-GOVERNMENT SERVICES AGILE SOFTWARE...A SECURITY EVALUATION FRAMEWORK FOR U.K. E-GOVERNMENT SERVICES AGILE SOFTWARE...
A SECURITY EVALUATION FRAMEWORK FOR U.K. E-GOVERNMENT SERVICES AGILE SOFTWARE...
 
235429094 jobportal-documentation
235429094 jobportal-documentation235429094 jobportal-documentation
235429094 jobportal-documentation
 
USER REQUIREMENTS MODEL FOR UNIVERSITY TIMETABLE MANAGEMENT SYSTEM
USER REQUIREMENTS MODEL FOR UNIVERSITY TIMETABLE MANAGEMENT SYSTEMUSER REQUIREMENTS MODEL FOR UNIVERSITY TIMETABLE MANAGEMENT SYSTEM
USER REQUIREMENTS MODEL FOR UNIVERSITY TIMETABLE MANAGEMENT SYSTEM
 
Transitioning IT Projects to Operations Effectively in Public Sector : A Case...
Transitioning IT Projects to Operations Effectively in Public Sector : A Case...Transitioning IT Projects to Operations Effectively in Public Sector : A Case...
Transitioning IT Projects to Operations Effectively in Public Sector : A Case...
 
IRJET- Speech and Hearing
IRJET- Speech and HearingIRJET- Speech and Hearing
IRJET- Speech and Hearing
 

p346-glowatz

process over the following five years caused continuous delays, including a twelve-month period of inactivity in which development halted, before the system finally launched in January 2013.

1.2 Statistics on Delays
In its 2013 Chaos Manifesto [31], the Standish Group found that 74% of Information Technology (IT) projects experience time overruns (see Figure 1). The equivalent figure for 2004 was 84%. Although this represents a ten-percentage-point decrease over the intervening eight years, a large proportion of IT projects still experience difficulties involving delays in development and implementation.

Figure 1. Chaos Manifesto Statistics

In 2012, a study conducted by McKinsey and Company and the University of Oxford [5] examining 5,400 large-scale IT projects (projects with budgets greater than US$15M) found that, on average, large IT projects ran 45% over budget and 7% over time, while 56% delivered less value than predicted. Davis and Venkatesh [9] state that the success rate of systems implementation is well below 50%, with a significant proportion of new system projects encountering delays of various lengths during their implementation.

1.3 Delays
Delays, and how they are perceived, are cognitive and therefore subjective. Researchers have found that delays are, for the majority of IT projects, a constant factor and a necessary evil. Chau [8] found in his study that delays had a relatively small and negative effect on end users’ perception and usage of a system. Apart from this one study, however, there is a dearth of valuable research on how delays affect end users in the context of IT implementation.
Studies have been performed in other fields on system delays in the context of user perceptions, notably how delays in transmission affect interaction [29], how download delays in browsers affect users and retail customers [28], and how delays affect the negotiating of benefits and burdens in an employment context [25]. Despite all this research on delays, academics have largely failed to address the issue of delays in IT projects. Rose and Straub [28, p.57] stated “that a delay is cited as a problem by numerous sources in the practitioner literature”, albeit in varying contexts.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
iiWAS2014, 4-6 December, 2014, Hanoi, Vietnam.
Copyright 2014 ACM 978-1-4503-3001-5/14/12
Models of information technology acceptance have touched upon the effect of delays, but only in the context of variables influencing acceptance. Two of these academic models are the Technology Acceptance Model (TAM) and the Expectation Confirmation Model (ECM). Delays are difficult to analyse because they fall within the realm of perceptions and of how end users process information cognitively.

The Ashton project in DFS suffered numerous delays, ranging from long delays to periods of total inactivity. The reasons for these delays are not dissimilar to those of many other projects: poor planning, poor communication, costing issues and poorly trained staff. But do these delays and inactivity gaps affect end users and how they perceive the new system? Do their cognitive processes favour long delays, or are they indifferent to any type of delay? This study focuses on these perceptions during the implementation phase and, subsequently, the usage phase. In addition, this paper investigates whether a link exists between implementation and usage with regard to delays affecting satisfaction levels.

2. LITERATURE REVIEW
2.1 Introduction
The literature review examines articles related to end user perception, adoption and acceptance during information systems implementation, end user expectations, implementation gaps and delays in systems development, and the effect that delays have on end users in different contexts, as mentioned in the introduction.

2.2 Delays and the Technology Acceptance Model (TAM)
The authors examined the influence of delays in implementation on users’ Perceived Usefulness of a system (PU) and Perceived Ease of Use (PEOU) using the Technology Acceptance Model (TAM) by Davis [9] (see Figure 2). This was researched because the authors felt the gap in the implementation of DFS’s Ashton information system may have had an effect on users’ perceptions of the system.

Figure 2. The Technology Acceptance Model (TAM)

Chau [8] examines TAM together with the Computer Utilisation Model and integrates the two. His research found that Ease of Use (EOU) had the largest influence on Computer Aided Software Engineering (CASE) acceptance, and that the implementation gap had a relatively small and negative effect on CASE acceptance by end users, through its influence on ease of use, near-term usefulness and long-term consequences. As Chau [8, p.272] notes of the implementation gap, the wider the gap between old and new [procedures, skills and knowledge], the longer the time individuals are likely to need to learn the new skills, acquire the new knowledge and thus adapt to the new procedures. He notes that abandonment of CASE is a chronic problem and that CASE adoption might require unlearning old practices. His paper treats the implementation gap as an external variable (as in the Technology Acceptance Model) that negatively affects the usefulness and ease of use of CASE tools as perceived by systems developers; however, its direct effect on long-term consequences was not significant. Chau [8] appears to be one of the few academics examining the effects of an implementation gap using TAM, and the authors investigate whether his findings regarding CASE play out in a similar manner with Ashton, with particular reference to end users.

TAM is not without its critics, and the authors were acutely conscious of criticisms other authors have levelled against the model. Hsieh and Wang [17, p.218] were scathing of the following proclivity of information systems researchers, stating: “The independent attempts by several researchers to expand TAM has led to a state of theoretical chaos and confusion in which it is not clear which version of the many iterations of TAM is the commonly accepted one”.
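As a concrete illustration of treating a delay as an external variable in the TAM sense, the following minimal Python sketch (not from the study; the respondent ratings and the "perceived delay severity" item are invented for illustration) correlates a hypothetical delay-severity rating with a Perceived Usefulness rating from Likert survey responses:

```python
# Minimal sketch: does a hypothetical "perceived delay severity" rating
# (1 = no delay, 5 = severe delay) move with Perceived Usefulness (1-7)?
# All respondent data below is invented for illustration only.

def pearson(x, y):
    """Pearson correlation coefficient for two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

delay_severity = [1, 2, 3, 4, 5]        # hypothetical survey responses
perceived_usefulness = [6, 6, 5, 4, 3]  # paired PU ratings

r = pearson(delay_severity, perceived_usefulness)
print(round(r, 2))  # -0.97: worse perceived delays, lower PU
```

A strongly negative coefficient on such an item would be consistent with Chau's finding that the implementation gap depresses perceived usefulness, though a real analysis would need the study's actual instrument and sample.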
They also underpin the point that TAM serves as a “diversion of researchers’ attention away from more important phenomena”: TAM constantly stresses the importance of PU, yet very little research effort goes into investigating what actually makes a system useful. PU and PEOU have largely been treated as black boxes that very few academics and practitioners have tried to pry open. Venkatesh et al. [7], among the foremost proponents of TAM, acknowledge that the model is predictive but, being generic, does not provide a sufficient level of understanding or information to give system designers a platform for creating user acceptance of new systems. In response to these criticisms, the authors would make the point that TAM should account for delays; given the number of iterations TAM has already undergone, however, proposing a new model would be a better approach. Organisations need to be aware of the potential problems that implementation gaps and delays in development can cause for end user acceptance of information systems. This awareness may help increase end user enrolment and eventually lead to stabilisation.

2.3 Expectation Confirmation Model (ECM)
ECM deals with end user system acceptance (see Figure 3). The authors studied the work of Brown, Venkatesh, Kuruzovich and Massey [7], whose paper examines three models: (1) disconfirmation, (2) ideal point and (3) experiences. Satisfaction with an information system is calculated as a function of expectations and experiences. Expectations, experiences and system satisfaction were all measured using the main components of TAM, namely ease of use and usefulness.

Figure 3. The Expectation Confirmation Model (ECM)
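To make "satisfaction as a function of expectations and experiences" concrete, the following minimal Python sketch (illustrative only; the paired Likert ratings are invented, not the study's data) computes a simple per-item disconfirmation score, i.e. experience minus expectation, where negative values indicate the system fell short of what the user expected:

```python
def disconfirmation(expectations, experiences):
    """Per-item disconfirmation: experience rating minus expectation rating.
    Negative = the system fell short of what the user expected."""
    return [after - before for before, after in zip(expectations, experiences)]

# Hypothetical respondent: four paired Likert items (1-7), e.g. ease of use,
# usefulness, speed, reliability -- rated before and after go-live.
expected = [6, 5, 7, 4]
experienced = [4, 5, 6, 5]

scores = disconfirmation(expected, experienced)
print(scores)                     # [-2, 0, -1, 1]
print(sum(scores) / len(scores))  # -0.5: on balance, expectations unmet
```

In ECM terms, a negative mean disconfirmation would be expected to depress post-adoption satisfaction, which is why expectation management during a delayed implementation matters.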
One useful point is that, while technology acceptance research typically uses behavioural intention as the dependent variable, other research (Brown, Massey, Montoya-Weiss and Burkman, 2002) has suggested that system satisfaction, and not behavioural intention to use the system, is the appropriate dependent variable when the system in question is large scale, integrated and its use is mandated in the organisation [7, p.56]. Ashton, the system at the core of the authors’ study, meets two of the three criteria above; moreover, it is used by all the employees in the “Fund Accounting” team. It could therefore be argued that system satisfaction is the appropriate dependent variable.

The ECM looks at the expectations of employees, primarily post implementation of information systems. In addition, it looks at post-adoption satisfaction as a function of expectations, perceived performance, and disconfirmation of beliefs (the extent to which beliefs are not met). Hsieh and Wang [17] examine how PEOU, PU and satisfaction affect a user’s extended use of a system (extended use meaning using more functionality to support job performance and tasks). Employees in DFS have used Ashton since January 2013, and this study examines whether delays in information systems implementation affected their extended use.

2.4 Delays in other fields
Limited research has been conducted into how delays in development impact employees’ perceptions, though plenty of research has been conducted on usage after implementation. The authors’ research attempts to establish whether a link could exist between the pre- and post-implementation periods with regard to user perception and usage. Chau [8] is the primary academic piece the authors found that examines delays and implementation gaps.
However, while academics have failed to get to grips with studying delays in an IT context, they have studied delays in many other fields: download delay on the internet [16], transmission delay in teleconferencing [29], delay in receiving benefits and burdens [25], and delays in playing online live games [26].

The study by Okhuysen, Galinsky and Uptigrove [25] on the effect time delays can have when distributing benefits (financial bonuses, pay rises) and burdens (redundancy, pay freezes) is intriguing. For the majority of individuals, negative events are more salient, and negative thoughts and information weigh more heavily on the mind than positive ones. Humans tend to discount the impact of benefits and burdens when delays occur in receiving them: rewards received sooner are often preferred over future rewards; that is, the subjective value of an outcome is discounted as a function of the delay. Renner [27] observed that a delay in receiving a benefit is discouraging, which causes immediate benefits to be inflated in value and delayed ones discounted. Translated into the context of the authors’ study on Ashton, the system being implemented is the benefit, but because the system was delayed continuously over a five-year period, the benefit was discounted and eroded by the end users. A delay in implementation can potentially turn a benefit into a burden for end users; implement the system more quickly and the benefit can be realised.

Broadband speeds have improved in the last ten years, but there are still servers that cause download delays for internet pages and files. Whilst these delays are relatively short in nature, Hoxmeier and DiCesare [16] found that, in browser-based applications, delays in excess of twelve seconds can raise end users’ intolerance to such a level that satisfaction with the provider decreases substantially, which may lead to discontinued use.
This shows that short delays cause issues for end users. As early as 1987, Geist, Allen and Nowacyzk [11] advocated the importance of response time and proposed that user perception of computer system response time be studied further and a model of user perception be designed. To further underline the relevance of delays and time as variables in system implementation, Rushinek and Rushinek [30] analysed the results of a questionnaire they drafted to determine the effects of seventeen different variables on user satisfaction. A total of 15,288 questionnaires were sent to subscribers of Computerworld magazine, and the results showed that the number one factor relating to user satisfaction was response time. In the context of this study, response time is analogous to delay. Three of the top five responses in the above study also related to time factors, and whilst the variables concerning time and delays are cognitive in nature, they cannot be ignored.

2.5 Other research
With regard to the mandatory use of information systems by employees, Delone and McLean [10, p.16] state that “even when use is required, variability in the quality and intensity of this use is likely to have a significant impact on the realisation of system benefits”. Wilson and Howcroft [36] recount the tale of nurses working at the Eldersite Hospital in the North of England and how they resisted enrolment with regard to their new Zenith Nursing Information System. The nurses, who were the users, consciously acted to determine their circumstances by rejecting a system that they saw as detrimental to their work. They complained that, because the system was so slow and not user-friendly, it interfered with their doing their jobs correctly. The nurses, the ‘relevant social group’ in question, consistently reported that they had not been involved in the design of the Nursing Information System.
In the above case the Eldersite nurses could continue taking care of patients while ignoring the Zenith system and its time-intensive characteristics. This case shows that mandatory use of a system does not determine usage and success.

3. RESEARCH AIMS, OBJECTIVES AND HYPOTHESES
3.1 Aims and Objectives
The aims and objectives of this study are:
§ To examine the effect long delays in system implementation have on end users’ perceptions of that system.
§ To examine the effect gaps of inactivity in system implementation have on end users’ perceptions of that system.
§ To examine whether satisfaction levels of end users in the usage phase are affected by delays in the implementation phase.
§ To see whether the current models of IT acceptance, namely TAM and ECM, are relevant to this research, or whether a new model needs to be developed to account for delays.
§ To gain an understanding of other variables that might cause delays to occur.
§ To see whether delays in implementation have any bearing on perceived success or failure.

The authors’ aim is to research the above objectives in a form which allows subjective accounts of retrospective events from employees and which can also accommodate online quantitative survey tools. There appears to be a lack of empirical data available on implementation-phase delays in the context of user perception. The authors’ literature review delved into the research that has already been carried out, but the novel nature of the authors’ empirical research has the potential to add to a research area that could be described as neglected.

3.2 Long Delays in Implementation
For the purpose of this research project, a long delay is defined as a period of five years or more from the time the new system was initiated until it went live. This incorporates the inactivity gap as well as delays, both internal and from the vendor, in developing the system. The authors wanted to see how these continuous delays over five years affected end user satisfaction levels with the system, and whether end users perceived these delays negatively or positively.

3.3 Implementation Gaps and Short Delays
This study defines an implementation gap as a twelve-month period of non-activity in which all development on the project is stopped. The authors’ aim, while studying the implementation gap in the Ashton development, was to develop a theory about whether this twelve-month gap in systems development had any influence on employees’ perceived levels of satisfaction with the system, and whether they perceived this gap as a negative or positive development.

3.4 Models of Information Technology Acceptance
The Technology Acceptance Model (TAM), as envisioned by Davis [9], was built upon a foundation laid by the Theory of Reasoned Action (TRA), conceptualised by Ajzen and Fishbein [2].
TRA is a model for the prediction of behavioural intention, encompassing predictions of attitude and predictions of behaviour. TAM is an information systems theory which models how end users come to accept and use technology. The model suggests that when end users are presented with a new technology, a number of variables influence their choices about how and when they will use it, notably:
1. Perceived Usefulness (PU): the degree to which a person believes a particular system would enhance his or her job performance; and
2. Perceived Ease of Use (PEOU): defined by Davis as “the degree to which a person believes that using a particular system would be free from effort” [9, p.82].

TAM holds that external variables affect both PU and PEOU. This meant that the authors could slot in their external variable, delays, and see how it affected actual system use. The survey data is used to see whether end users’ experiences with Ashton, compared to the old system (Idrone), affect Perceived Usefulness; the survey data also examines Perceived Ease of Use of the Ashton system.

TAM purports to predict system success and failure. Adams, Nelson and Todd [1] claim that, together, TAM and its extension TAM2 account for only 40% of a technological system’s use, and a lack of practical value is a criticism that has been levelled against TAM. Legris, Ingham and Collerette [18] suggest that TAM must be extended to include variables that account for change processes. TAM does not give delays the respect the authors think they deserve, and the authors attempt to discover whether delays can be added to TAM to explain why end users accept or reject IT systems.

The Expectation Confirmation Model (ECM) is a useful model for the prediction of end user satisfaction with an information system (see Figure 3). The ECM is a cognitive theory which seeks to explain post-adoption satisfaction as a function of expectations, perceived performance and disconfirmation of beliefs.
This study regards satisfaction with an information system as a superior method of measuring a system's success, as opposed to mere use of a system, because use of a system can be mandatory in organisations. A system could be a failure in employees' eyes, in terms of their satisfaction levels, yet because it is mandatory and being used, it is trumpeted by management as a success. Ginzberg [12] argues that the most consistent explanation for why people are satisfied with an information system is that their expectations are neither too high nor too low, though in reality individuals' expectations are wholly subjective. Bhattacherjee [4] discusses cognitive beliefs, how individual perceptions of system usefulness can change over time, and how satisfaction can become the main behavioural determinant of success. Expectations influence both perceptions of the performance of an information system and disconfirmation of beliefs, and affect post-adoption satisfaction indirectly.

3.5 Hypotheses
The authors defined the following five hypotheses, which are tested for the purpose of this study:

Hypothesis One (H1): Long delays in system implementation have a negative effect on end users' perception of the system.
Hypothesis Two (H2): Short continuous gaps of inactivity in system implementation have a negative effect on end users' perception of the system.
Hypothesis Three (H3): Long delays in system implementation affect end users' satisfaction negatively during actual usage.
Hypothesis Four (H4): Short continuous gaps of inactivity in system implementation affect end users' satisfaction negatively during actual usage.
Hypothesis Five (H5): Delays have no impact on whether an information system succeeds or not (success being defined as enrolment of end users).

4.
RESEARCH METHODS

4.1 Adapted Research Methods
This study utilises qualitative and quantitative research, focusing primarily on the Dublin office, as this is where management, the project team, system testers and the majority of end users worked.
4.1.1 Qualitative
The retrospective nature of this research meant interviews were used to question employees. Qualitative research can involve the study of case studies, stories, experiences and, in the authors' case, interviews. This type of research lets researchers interpret the data they are collecting, so it can be defined as an interpretive study of subjective issues and problems relating to certain events. The method is described as person-centred by Moustakas [23], in that personal problems or situations become real and alive. Heidegger was an advocate of a retrospective qualitative model called hermeneutic phenomenology, which studies texts and multimedia. He believed that understanding is a basic form of human existence: understanding is not a way we know the world, but rather the way we are [14].

4.1.2 Quantitative
In the context of examining a multinational organisation for this research, the authors also decided to utilise online survey tools to collect quantitative data without imposing adversely on employees. Surveymonkey.com was used for the first survey; it provides data collection, data analysis, brand management and consumer marketing services. Qualtrics.com was used for the second survey; its statistical analysis has been cited in many quantitative academic journals [32].

4.2 Research Design, Structure and Procedure
As the aim of the authors' research was to see how delays and implementation gaps could affect end users over a period of time, a diary- and case-study-based approach was deemed appropriate. Case studies are well established in academia and can prompt further research to develop the theories explored. Longitudinal research can help ascertain whether opinions and perceptions of systems change over time. By conducting multiple interviews with certain employees, a diary of interviews could be reviewed. These interviews were carried out over a twelve-month period between March 2013 and March 2014.
All interviewees were given lists of questions before scheduled meetings, so that they would be comfortable with what was being asked. Data was collected from interviews, observational research and online surveys; these were retrospective in nature, using the hermeneutic approach. Online surveys gave the full employee base a chance to contribute. The first phase of structured interviews took place in the first week of March 2013; the interviewees were the head of accounts (JF), the project manager (ND) and the main system tester (WC). These interviews focused on the history behind the decision to implement the system, the ongoing development process and the potential reasons why the system was taking so long to go live. The interviews contained between twenty-one and thirty questions and were a mix of open and closed questions. The authors' reasoning for this approach was to give the interviewees a forum to explain their view of the background to the Ashton implementation. These interviews formed the basis for this study's initial online survey, sent to the employee user base in June 2013. A draft set of fifty-one questions was first designed, discussed with JF and consolidated to an agreed thirty-six questions; JF did not want too much of employees' time to be taken up by the online survey, hence the reduction. A pilot test was carried out with three employees answering ten questions in an online survey. The purpose of the pilot was to eliminate any possible duplications or inconsistencies; no data from it was used in the final analysis. The authors then sent the final draft of survey one to employees for completion. The survey was open for four weeks, and it was agreed that anonymity would be removed in order to track completion by employees. Second interviews took place in September and October 2013 with JF, ND and WC.
These interviews were carried out as part of the authors' longitudinal research and to see whether opinions and perceptions of Ashton had changed since March 2013. The final round of structured interviews took place in January and February 2014 with WC and another system tester; JF and ND were unavailable for further interviews. Again, these interviews examined the interviewees' perceptions and changes in thought processes regarding Ashton. The second and final online survey focused on satisfaction levels before and after usage, and on whether the long delays and the one-year gap of inactivity had affected them. As cognitive thoughts can change over time, the findings here were useful in answering the research questions. The second survey was kept to twenty questions, drafted using a five-point Likert scale with "strongly agree" and "strongly disagree" at either end of the scale. Some questions were open-ended, and the questionnaire was distributed in February 2014.

4.3 Data Set
Eight interviews and two online surveys (thirty-three employees took part in the first online survey and twenty-five in the second) were conducted over the period of research. The majority of questions in the first survey were closed; the bulk of the questions in the second survey used the Likert scale. The online survey tools automatically generated results for these questions. The eight interviews and the text answers to the two surveys were analysed using two approaches. The first was a line-by-line inductive analysis using coded units of meaning, as per Glaser [13]. A coded unit of meaning was defined by Tesch [33, p. 116] as a "segment of text that is comprehensible by itself and contains one idea, episode or piece of information". Each line of the interviews and survey text was coded as per this method.
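The line-by-line coding step just described can be sketched in code. This is only an illustration of the mechanics: the authors coded manually, and both the keyword-to-code table and the sample transcript below are hypothetical.

```python
# Sketch of line-by-line inductive coding (after Glaser [13]): each line
# of a transcript is treated as a unit of meaning and tagged with codes.
# The code table and transcript are hypothetical; the study's actual
# coding was done manually by the researchers.

CODE_KEYWORDS = {
    "delay": "DELAY",
    "training": "TRAINING",
    "vendor": "VENDOR",
    "satisf": "SATISFACTION",
}

def code_transcript(text: str) -> list[tuple[str, list[str]]]:
    """Split a transcript into line units and attach matching codes."""
    units = []
    for line in text.splitlines():
        line = line.strip()
        if not line:  # skip blank lines between answers
            continue
        codes = [c for kw, c in CODE_KEYWORDS.items() if kw in line.lower()]
        units.append((line, codes))
    return units

sample = """The five year delay worried the team.
More training would have increased my satisfaction."""
for unit, codes in code_transcript(sample):
    print(codes, "-", unit)
```

In practice, grounded-theory coding is interpretive rather than keyword-driven; the sketch only shows how each line becomes a discrete, separately coded unit.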
Secondly, using Heidegger’s phenomenological theory each sentence is analysed using the question “what does this sentence reveal about the phenomenon or experience being described?” (see [35, p.93]). 5. RESULTS 5.1 Hypothesis One Hypothesis One: Long delays in system implementation have a negative effect on end users’ perception of the system. Q17 on the first survey asked, “How did the length of time before Ashton went live (approximately five years) affect your perception of the system”. Only 15% of respondents agreed that the long delay in implementation had negatively affected their perception of the system. This figure is similar to the result for the same question on the second survey, where 12% said the five-year delay negatively affected their perception of the system. In the first
survey, 12% felt the delay positively affected their perception of the new system. For the second survey, 32% either disagreed or strongly disagreed that the five-year delay had a negative impact on their satisfaction with the system. In addition, 73% of respondents (second online survey) stated that the long delay in development had little or no effect on their perception of the system. Q14 of the second survey asked respondents which of the following statements they agreed with (choose one):

1. A long delay in development equals a good quality system. (32%)
2. A long delay in development equals a poor quality system. (50%)
3. A short delay in development equals a poor quality system. (9%)
4. A short delay in development equals a good quality system. (9%)

The above supports H1, as half of respondents believe a long delay in systems development manifests itself as a perception that the information system will be of poor quality. Q2 of the second survey asked respondents whether they agreed or disagreed with the following statement: "The five year delay in development of Ashton had a negative impact on my satisfaction with the system."

Strongly agree 0%
Agree 12%
Neither agree nor disagree 56%
Disagree 20%
Strongly disagree 12%

This result was inconclusive. In a qualitative interview conducted on 3rd January 2014, WC commented that the long delay of five years negatively affected his perception of the system, stating: "It was clear that the people developing Ashton were not capable of doing a good job" and that DFS "should have bought an [off-the-shelf] accounts package and implemented it within one month". In conclusion, H1 can only be partially supported.

5.2 Hypothesis Two
Hypothesis Two: Short continuous gaps of inactivity in system implementation have a negative effect on end users' perception of the system.
Q19 of the first survey asked: "Do you think the mothballing (postponement of development for one year) of development of the system had a positive or negative effect on employees' perception of the system? Please justify your answer." The question was closed in that it had two answers, positive or negative, but respondents could elaborate further in an optional text box. This question can be analysed in two ways: the positive/negative answers, and then the reasons given, can be examined separately to see if trends appear. The result from the 33 who answered was as follows:

Positive effect 39%
Negative effect 61%

The majority negative response indicates the one-year gap had a negative effect on employee perception of the system, which supports H2. With regard to the implementation gaps in development, in the Irish office the main reasons for positive answers were:
1. The system wasn't ready.
2. Better to have a system that works.
The main reasons for negative answers were:
1. Lack of confidence in the system.
2. Something must be wrong with the system for such a delay to happen.

Q1 of the second survey asked respondents to give their reaction to the following: "The gap of one year in the middle of development negatively affected my satisfaction levels with Ashton."

Strongly agree 8%
Agree 16%
Neither agree nor disagree 48%
Disagree 24%
Strongly disagree 4%

This result offers only partial support, with roughly a quarter of respondents agreeing, a quarter disagreeing, and half neither agreeing nor disagreeing. In a qualitative interview on 4th January 2014, WC stated that in his opinion the one-year delay/implementation gap negatively affected his perception of the system, saying it "gave [him] the impression [the Ashton developers] didn't know what they were doing". In conclusion, H2 can be supported.
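The Likert distributions reported throughout this section are straightforward percentage tallies of raw responses. As a hedged sketch of that arithmetic (the raw response list below is hypothetical, sized only to reproduce the Q1 figures for n = 25):

```python
# Sketch of tallying a five-point Likert question into the percentage
# breakdowns reported in the text. The responses list is hypothetical,
# constructed to match the Q1 distribution (8/16/48/24/4 for n = 25).

from collections import Counter

SCALE = ["Strongly agree", "Agree", "Neither agree nor disagree",
         "Disagree", "Strongly disagree"]

def likert_percentages(responses: list[str]) -> dict[str, int]:
    """Return rounded percentage of respondents per scale option."""
    counts = Counter(responses)
    n = len(responses)
    return {opt: round(100 * counts[opt] / n) for opt in SCALE}

responses = (["Strongly agree"] * 2 + ["Agree"] * 4 +
             ["Neither agree nor disagree"] * 12 +
             ["Disagree"] * 6 + ["Strongly disagree"] * 1)
print(likert_percentages(responses))
```

The survey tools used in the study (Surveymonkey.com and Qualtrics.com) generate these breakdowns automatically; the sketch simply makes the underlying calculation explicit.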
5.3 Hypothesis Three
Hypothesis Three: Long delays in system implementation affect end users' satisfaction negatively during actual usage. Q13 of the second survey asked respondents to give their reaction to the following statement: "The delay mentioned had an impact on my satisfaction level WHEN I STARTED TO USE Ashton."

Strongly agree 4%
Agree 24%
Neither agree nor disagree 56%
Disagree 16%
Strongly disagree 0%

Again, this result was inconclusive; there may have been an impact, but it is difficult to quantify, and the question requires further research. Q16 of the second survey asked respondents to give their reaction to the following statement:
"Since I began using Ashton my satisfaction levels with the system have increased."

Strongly agree 4%
Agree 64%
Neither agree nor disagree 12%
Disagree 16%
Strongly disagree 4%

This result is conclusive, with the majority of respondents agreeing with the statement. In a qualitative interview on 4th January 2014, after being asked whether he believed that the long delay of five years in systems development had affected end users' satisfaction levels, WC stated: "No, I don't believe so, everything that went on before is irrelevant when you start using the system." Therefore, H3 is not supported.

5.4 Hypothesis Four
Hypothesis Four: Short continuous gaps of inactivity in system implementation affect end users' satisfaction negatively during actual usage. Q4 of the second survey asked: "Is your user satisfaction with the Ashton system affected by past delays, actual usage or by both?"

Past delays 4%
Actual usage 71%
Both 25%

This would suggest that H4 cannot be supported. Q8 of the first survey asked: "How would you describe your Information Technology (specifically, systems development) knowledge?" The respondents were very confident in their answers, with 88% describing their systems development knowledge as good or very good. If the results are taken at face value, this may go some way towards explaining end user satisfaction levels with Ashton, and would broadly be in line with Mahmood, Burn et al.'s [21] fifth hypothesis: that "there will be a positive relationship between (self-reported) computer skills and user satisfaction". This finding may contain a self-report bias, as one of the authors of this report was not under the impression that high levels of systems development proficiency were present among his co-workers. Q18 of the first survey asked: "How has your perception of Ashton changed over time?" Twenty-one percent felt that their perception of Ashton had not changed at all over time.
One respondent explained that their perception had not changed and that they saw no benefit to using Ashton over the old accounting information system. Eighteen percent made comments referring to the importance of actual use of the system. For example, one respondent claimed, "as I am learning to use it, I think it [Ashton] will become more useful", while another respondent stated, "I have welcomed [Ashton's] implementation in the Fund Accounts team in a positive [way] but was [originally] not in favour of this…but by using this system over time, I gradually [am coming] to like it." Another respondent's answer hinted at how far Ashton still had to evolve in terms of systems development: "At the beginning (about 1-2 years ago) I thought 'it's a waste of money', but now I think it will work well in a few years' time." Q5 of the second survey asked respondents whether they agreed or disagreed with the following: "I am satisfied with the training I have received for Ashton."

Strongly agree 4%
Agree 56%
Neither agree nor disagree 16%
Disagree 16%
Strongly disagree 8%

In a qualitative face-to-face interview with an Ashton systems tester on 3rd January 2014, when asked "Did a lack of training (or no training) impact on your satisfaction with Ashton?", the individual replied: "Yes. It made me dissatisfied with the system." When also asked whether further training on the Ashton system would increase their satisfaction with it, the individual replied: "Yes, it would." When asked whether further training on the Ashton system would make it more likely that they would continue to use the system, the system tester replied: "Yes." Q19 of the second survey asked respondents to give their reaction to the following statement: "Now that I have used the Ashton system, I am willing to learn and use the extra functions (extended use) of Ashton."
Strongly agree 20%
Agree 64%
Neither agree nor disagree 12%
Disagree 4%
Strongly disagree 0%

This result is affirmative, with the majority of respondents (84%) either agreeing or strongly agreeing with the statement. Q3 of the second survey asked respondents whether they agreed or disagreed with the following statement: "Do you agree that the events (such as the events mentioned in the two previous questions) affect my level of satisfaction when ACTUALLY USING the Ashton system?"

Strongly agree 4%
Agree 24%
Neither agree nor disagree 44%
Disagree 28%
Strongly disagree 0%

This result was inconclusive. The above data suggests that other variables affected satisfaction once end users started using the system, and that delays had little or no effect. In conclusion, H4 cannot be supported.
5.5 Hypothesis Five
Hypothesis Five: Delays have no impact on whether an information system succeeds or not (success being defined as enrolment of end users). Q36 of the first survey asked: "In your opinion has Ashton been a success or failure?"

Success 42%
Failure 6%
Somewhere in between 52%

Keep in mind that for Q19 of the first survey, 61% stated that they believed the one-year postponement in development of the system had a negative effect on their perception of the system. This result supports Hypothesis Five: that delays have no impact on the success of the information system (where success is defined as enrolment of end users). Q15 of the second survey asked respondents to give their reaction to the following statement: "Considering all delays in development and the fact that I have used Ashton, I am confident that it will succeed."

Strongly agree 0%
Agree 68%
Neither agree nor disagree 24%
Disagree 4%
Strongly disagree 4%

This result is conclusive and suggests that once respondents got to use the Ashton system, they became more confident it would succeed. This finding also supports H5. It appears that delays have no impact on end user enrolment, using the above definition of success. However, under the Standish Group's definition of success, i.e. a system delivered on time, within budget and with all the required features and functions, Ashton could indeed be considered a failure: it spent its entire budget of €130,000 (though it did not exceed it), was delivered late, and not all features were delivered. In conclusion, H5 is supported.

6. DISCUSSION

6.1 Introduction
Previous academic literature found that delays in implementing a system had a negative, albeit small, effect on user acceptance [8]. Research on delays in other fields found that long delays could soften the effect of burdens in a work environment and reduce the positive effect of benefits [25].
Even small delays of twelve seconds can make individuals disillusioned and annoyed with browsers [28]. So, whilst some research is available, the authors' research makes a contribution to the field.

6.2 Discussion of Results
In the authors' research, gaps of inactivity did have a negative effect on end users' perception. The gap in question was one year in duration and was due to staff turnover, budget issues and other variables taking precedence, such as compliance with regulatory reports. The empirical analysis showed that employees' perceived ease of use and usefulness diminished in the implementation phase because of the inactivity gap. This gap occurred three years into development, when employees were already wondering whether the system was ever going to go live. Whilst employees looked unfavourably on the inactivity gap, the five-year delay before the system went live had a neutral to positive effect on their perception; only 12% of respondents viewed it as a negative development. The notion that shorter delays and gaps of inactivity of roughly one year might have a greater negative impact on employees' perception of an information system than longer delays of approximately five years is worth considering. Though it may seem counterintuitive, the longer system implementation takes, the less employee perception seems to be affected negatively. From the data the authors collected, 82% of respondents initially perceived Ashton as a good idea; thus, the majority of employees backed the new systems implementation project. After five years of delays, when the authors asked employees whether they perceived Ashton positively or negatively, 85% gave positive replies. This figure deviated only marginally from the 82% at the beginning. From this, it appears that smaller delays and gaps in implementation affect employees' perception more negatively.
A tipping point may exist where, after a certain period of delay, a general sense of apathy sets in among employees, who may come to believe that the system will never actually be delivered. This could be construed to have been the case with Ashton, with almost 25% of employees reportedly feeling completely indifferent towards the new system. This finding could have profound consequences for employee motivation when it comes to learning about and adopting an information system as part of daily working practices. It also caused the authors to question whether the timeframes involved in delays (after an initial period) actually matter. The empirical research suggests that shorter delays and inactivity gaps start off having a strong negative influence on end users' perception of the usefulness of a system; however, this negative perception begins to erode as the period of delay grows longer. Again, the reason may be either that employees cease caring, or that they begin to believe management's assertions that the information system will be delivered shortly. Because of the lack of academic research on delays, this point would have to be teased out further in different scenarios to see if it holds true. Gaps of inactivity, or shorter delays, do seem to affect employees' perception more negatively than prolonged delays of five years. There are gaps in the literature on delays, and academics need to address the issues raised here by studying them on a larger scale. The qualitative interviews tried to tease out what had happened in DFS for these delays to occur. Whilst these questions were not explicitly part of this study's hypotheses, they are an important part of the delays variable and are now discussed. Could delays be reduced by better communication, planning and research practices?
In interviews with management, there were admissions that communication and planning practices with the vendor could have been better. DFS, while admitting some fault, was unwilling to take responsibility for the delays; JF stated: "People in DFS can't be blamed for the delays. I blame poor communication with the vendor for problems… There was a knowledge gap on both sides."
The interviews conducted ascertained that no hard research had been done on the functionality of Ashton, and that the majority of research carried out was based on the Ashton model used in the UK. Senior management admitted that the initial requirements and specifications for the information system were not properly communicated to the vendor. Several practices could be improved for future implementation projects, keeping the following in mind:
• The importance of clear communication
• Set timelines and the importance of estimates and costs
• Agreeing on expectations on both sides

While employees' opinions on best-practice implementation guidelines differ, they are generally in line with what was implemented by DFS. The authors propose additional consultations with employees in order to tap into their ideas on information system implementation issues: only 27% of end users were consulted when developing Ashton. In the authors' opinion this figure should be increased to at least 70% of end users, and to 100% if possible, especially when the user base is relatively small. Interviews can be useful tools for gathering ideas from employees, and online survey tools can be used without adversely affecting working schedules, as they can be completed outside working hours. DFS employed a lazy and meandering approach to employee consultation and communication. This approach caused a lack of clarity about what exactly the company wanted to get out of the Ashton project, and poor project management meant delays were inevitable. More frequent, face-to-face contact with the vendor is necessary too. The vendor was located in the UK, meaning physical distance added a further potential barrier to communication; if this required more travelling to the UK and back, then so be it. There should also have been increased use of online collaborative tools, such as Skype and Google Hangouts.
The globalisation of technology should mean that geographic distance is not a barrier to completing projects in a timely manner. JF stated that he did not think the vendor being based in the UK had any bearing on communication; however, this study suggests that it did hinder proper and frank discussions. The knowledge gap between finance and IT is large, and skilled project managers need to be in place. When the authors initially began researching the Ashton implementation, they asked the IT department in DFS if they were willing to participate in qualitative interviews. They agreed, but subsequently reneged on this agreement, citing conflict of interest and the fact that the project was still deemed to be ongoing (after a draft set of questions had been sent for the proposed interview). Whilst the real reason for this refusal may never be known, the authors can only speculate that the department was not willing to divulge sensitive information, or possibly felt at fault for the long delays in Ashton's implementation. In the authors' opinion, internal confusion and lack of planning between departments was one of the root causes of the delays. This poor relationship only added to the chaos in the implementation process and added fuel to the fire of the ongoing delays.

6.3 Validity of IT Acceptance Models
The authors believe that, in this context and business environment, TAM failed to predict the acceptance of the Ashton information system. Using the criteria of TAM, the authors would have predicted that the Ashton system would be rejected and would fail in terms of user enrolment. Benbasat and Barki [3] claim that TAM and TAM2 predict only 40% of system use; the authors therefore question TAM's predictive capabilities. The authors agree with Benbasat and Barki's [3] assertion that researchers might be better served devoting their time and energy to new and more important strands of research.
This is what the authors have attempted to do with their research on delays. Delays are too important a variable to be ignored and dismissed. The authors found the Expectation Confirmation Model (ECM) to be useful and relevant for academics and IT practitioners. They believe that end user satisfaction with an information system is a more suitable method of judging its success than TAM's concentration on actual use of the system; in the authors' opinion, actual use is a poor measure of success when use of the system is mandatory. Although the authors' research was primarily focused on the implementation phase and how delays affect end users, they did question whether research carried out previously on satisfaction during usage would hold true. H3 and H4 followed on from H1 and H2, asking whether satisfaction levels would be impacted by the gaps of inactivity and the long delay in the usage phase. From the survey analysis and interviews, neither of these hypotheses was supported. Whilst it could be argued that these results were predictable, the authors still wanted to see whether delays would alter satisfaction at the usage stage. The authors' study confirms previous research that actual system usage is a primary driver of satisfaction, but shows that delays do have a cognitive impact on employees' perceptions in the implementation phase.

6.4 "The Implementation Delay Model"
The authors propose a new model called "The Implementation Delay Model" (IDM). This model aims to show how short gaps of inactivity and long delays, as defined in the authors' research, affect how end users perceive a system, and ultimately how they perceive a system to be a success or failure before usage. The authors' findings will need to be examined further in different environments and at a larger scale. The model is deliberately simple in its approach, so as to isolate what happens during the two types of delay.
The inactivity delay of one year caused a negative perception, which can lead to a perception of system failure even before usage. Long delays of five years appear to elicit a neutral to positive perception from end users and, in a sense, do not have as much of an impact.

7. CONCLUSION
As a proviso, the authors would like to stress that the following conclusions are specific to the context of the working environment at DFS, which can be classified as a unique case. The first conclusion the authors can draw from the results is that there is strong evidence that implementation gaps (short delays) have a negative effect on employees' perception of an information system. Secondly, longer delays in systems implementation seem to have a neutral to positive effect on employee perception. Thirdly, no clear relationship between implementation delays and their effect on usage of the Ashton information system could be identified. It appears that no matter how long the delay or how
botched the implementation, employees will still use the system if it can be used. Finally, the authors firmly believe that delays are an understudied area of information systems, one which can reveal hidden treasures for those undaunted by the challenge of studying such an ephemeral subject.

8. FURTHER RESEARCH
To overcome the limitations of this research, the authors propose the following research projects:
• Conduct a similar study covering different industries.
• Conduct a similar study focusing on different types of organisations: small, medium, large, nationally and internationally operating.
• Conduct a study elaborating on the causes of delay.
• Investigate a time range for the point at which apathy sets in for end users experiencing delays in system implementation.

REFERENCES
[1] Adams, D. A.; Nelson, R. R.; Todd, P. A. Perceived usefulness, ease of use, and usage of information technology: a replication. MIS Quarterly, 16 (1992): 227-247.
[2] Ajzen, I., and Fishbein, M. Understanding Attitudes and Predicting Social Behaviour. Englewood Cliffs, NJ: Prentice-Hall, 1980.
[3] Benbasat, I., and Barki, H. Quo vadis TAM? Journal of the Association for Information Systems, 8, 4 (2007): 7.
[4] Bhattacherjee, A. Understanding information systems continuance: an expectation-confirmation model. MIS Quarterly (2001): 351-370.
[5] Bloch, M.; Blumberg, S.; and Laartz, J. Delivering large-scale IT projects on time, on budget, and on value. McKinsey and Company (2012). Available at: http://www.mckinsey.com/insights/business_technology/delivering_large-scale_it_projects_on_time_on_budget_and_on_value
[6] Bokhari, R. H. The relationship between system usage and user satisfaction: a meta-analysis. Journal of Enterprise Information Management, 18, 2 (2005): 211-234.
[7] Brown, S. A.; Venkatesh, V.; Kuruzovich, J.; and Massey, A. P. Expectation confirmation: an examination of three competing models.
Organisational Behaviour and Human Decision Processes, 105 (2008): 52-66.
[8] Chau, P. An empirical investigation on factors affecting the acceptance of CASE by systems developers. Information and Management, 30 (1996): 269-280.
[9] Davis, F. D., and Venkatesh, V. Toward preprototype user acceptance testing of new information systems: implications for software project management. IEEE Transactions on Engineering Management, 51, 1 (2004): 31-46.
[10] DeLone, W. H., and McLean, E. R. The DeLone and McLean model of information systems success: a ten-year update. Journal of Management Information Systems, 19, 4 (2003): 9-30.
[11] Geist, R.; Allen, R.; and Nowaczyk, R. Towards a model of user perception of computer system response time. Proceedings of CHI+GI '87 (1987).
[12] Ginzberg, M. J. Early diagnosis of MIS implementation failure: promising results and unanswered questions. Management Science, 27, 4 (1981): 459-478.
[13] Glaser, B. G. Basics of Grounded Theory Analysis: Emergence vs. Forcing. Mill Valley, CA: Sociology Press, 1992.
[14] Heidegger, M. Being and Time. New York: Harper, 1962.
[15] Holt, N. Coping in professional sport: a case study of an experienced cricket player. Athletic Insight, 5, 1 (2003): 1-11.
[16] Hoxmeier, J. A., and DiCesare, C. System response time and user satisfaction: an experimental study of browser-based applications. AMCIS 2000 Proceedings (2000): 347.
[17] Hsieh, J. J. Po-An, and Wang, W. Explaining employees' extended use of complex information systems. European Journal of Information Systems, 16 (2007): 216-227.
[18] Legris, P.; Ingham, J.; and Collerette, P. Why do people use information technology? A critical review of the technology acceptance model. Information and Management, 40, 3 (2003): 191-204.
[19] Levy, Y., and Ellis, T. A systems approach to conducting an effective literature review in support of information systems research. Informing Science Journal, 9 (2006): 181-212.
[20] Malone, S. Career Transitions in Sport: A Psychological Case Study (2013): 18-20.
[21] Mahmood, A. M. O.; Burn, J. M.; Gemoets, L. A.; and Jacquez, C. Variables affecting information technology end-user satisfaction: a meta-analysis of the empirical literature. International Journal of Human-Computer Studies, 52, 4 (2000): 751-771.
[22] Miscione, G. Telemedicine in the Upper Amazon: interplay with local medical practices. MIS Quarterly, 31, 2 (2007): 403-425.
[23] Moustakas, C. Phenomenological Research Methods. Thousand Oaks, CA: Sage Publications, 1994.
[24] Munn, P., and Drever, E. Using Questionnaires in Small-Scale Research: A Teachers' Guide. Scottish Council for Research in Education, 1990.
[25] Okhuysen, G. A.; Galinsky, A. D.; and Uptigrove, T. A. Saving the worst for last: the effect of time horizon on the efficiency of negotiating benefits and burdens. Organizational Behavior and Human Decision Processes, 91, 2 (2003): 269-279.
[26] Pantel, L., and Wolf, L. C. On the impact of delay on real-time multiplayer games. ACM, May 2002.
[27] Renner, K. E. Temporal integration: an incentive approach to conflict resolution. In B. A. Maher (Ed.), Progress in Experimental Personality Research, 4. New York: Academic Press, 1967.
[28] Rose, G. M., and Straub, D. W. The effect of download time on consumer attitude toward the e-service retailer. E-Service Journal, 1, 1 (2001): 55-76.
[29] Ruhleder, K., and Jordan, B. Meaning-making across remote sites: how delays in transmission affect interaction. Conference on Computer Supported Cooperative Work, 1999.
[30] Rushinek, A., and Rushinek, S. F. What makes users happy? Communications of the ACM, 29, 7 (1986): 594-598.
[31] Standish Group International. The Chaos Manifesto 2013. www.versionone.com/assets/img/files/CHAOSManifesto2013.
[32] Strutz, M. L. A retrospective study of skills, traits, influences, and school experiences of talented engineers. ASEE North Central Section Conference (2008). Ilin.asee.org
[33] Tesch, R. Qualitative Research: Analysis Types and Software Tools. New York: Falmer Press, 1990.
[34] Tolkien, J. R. R. The Fellowship of the Ring. 1954.
[35] Van Manen, M. Researching the Lived Experience: Human Science for an Action Sensitive Pedagogy, 2nd ed. London, Ontario: The Althouse Press, 1997.
[36] Wilson, M., and Howcroft, D. Power, politics and persuasion in IS evaluation: a focus on 'relevant social groups'. The Journal of Strategic Information Systems, 14, 1 (2005): 17-43.
[37] Wittmann, M., and Paulus, M. P. Decision making, impulsivity and time perception. Trends in Cognitive Sciences (2007).