WELCOME TO THE BLOG!

I would like to define the topic of this presentation as the Use and Interpretation of Quasi-Experimental Studies. I find this methodology intriguing and hope you will share my excitement over the possibilities it presents. There is a broad range of quasi-experimental designs, each suited to particular situations. In the background section, I will briefly introduce the more widely used quasi-experimental designs and outline their benefits and limitations.

For the analysis and appraisal of this methodology, I have chosen an article that I consider relevant for all of us, regardless of our varied work settings and professional interests. I hope you will find it appealing, because it examines graduate nursing students in different types of learning environments, including an internet-based one.

I believe that by discussing the specifics of quasi-experimental study design, we will achieve a deeper understanding of the application of this interesting methodology that can be used to investigate various complex interventions.
Background

Definition: Quasi-experiments are studies that aim to evaluate interventions but that do not use randomization. Similar to randomized trials, quasi-experiments aim to demonstrate causality between an intervention and an outcome. These studies can use both preintervention and postintervention measurements as well as nonrandomly selected control groups (Harris et al., 2006). 

Because quasi-experiments were developed by researchers with psychological orientations, they have been published more often by such researchers (Grant & Wall, 2009). Currently, however, they are widely used in organizational research, in business and educational settings, in medical informatics, and in hospital and public health settings; in short, in situations where researchers choose not to randomize the intervention for one or more of the following reasons:

1. Ethical considerations;
2. Difficulty of randomizing subjects; 
3. Difficulty of randomizing by location (e.g., by ward);
4. Small available sample size;
5. A need to intervene quickly (Harris et al., 2004). 
Categories of Quasi-Experimental Study Designs:

A. Quasi-experimental designs without control groups
B. Quasi-experimental designs that use control groups but no pretest
C. Quasi-experimental designs that use control groups and pretests
D. Quasi-experimental designs that use control groups, pretests and posttests
E. Interrupted time-series designs

In the research literature, there is a relative hierarchy within these categories of study designs, with the last two being considered the higher-rated categories in terms of establishing causality (Harris et al., 2006). The main use of the design with pretests and posttests, or controlled before-and-after studies, as Muir Gray (2009) defines them, is “to assess the impact of changes in health service organization or policy” (p. 164). An interrupted time-series design is used to investigate complex interventions immediately and over time when randomization is not possible or practical, such as a change in policy: “guideline implementation strategies in primary care or a mass media campaign” (Muir Gray, 2009, p. 167).
                                    

Benefits: Grant and Wall (2009) identify five key benefits of quasi-experimentation:

1. Strengthening causal inference when random assignment and controlled manipulation are not possible or ethical;
2. Building better theories of time and temporal progression;
3. Minimizing ethical dilemmas of harm, inequity, paternalism, and deception;
4. Facilitating collaboration with practitioners;
5. Using context to explain conflicting findings.

Limitations: Harris et al. (2004) identify potential methodological flaws of quasi-experiments conducted in medical disciplines. The lack of random assignment is the major weakness of the quasi-experimental study design. Associations identified in quasi-experiments meet some requirements of causality: the intervention precedes the measurement of the outcome, and the outcome can be shown to vary statistically with the intervention. The question, then, is whether there are credible alternative explanations for the apparent causal association; if there are, the evidence is less than convincing.

The methodological principles that most often result in alternative explanations in quasi-experimental studies include the following: 

1. Difficulty in controlling for important confounding variables;
2. Results that are explained by the statistical principle of regression to the mean;  
3. Maturation effects (Harris et al., 2004).

In an interrupted time-series, maturation effects, for example, occur if the intervention being evaluated is a technique requiring training and a set of skills (Muir Gray, 2009).
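To make the second limitation concrete, here is a small, hypothetical Python simulation (not from any of the cited studies) showing how regression to the mean can mimic a treatment effect when subjects are selected for extreme pretest scores:

```python
import random

random.seed(42)

# Hypothetical illustration: each subject has a stable "true" score,
# but any single measurement adds random noise.
true_scores = [random.gauss(50, 10) for _ in range(10_000)]
pretest = [t + random.gauss(0, 10) for t in true_scores]

# Suppose a program enrolls only subjects with extreme (low) pretest scores.
selected = [i for i, p in enumerate(pretest) if p < 30]

# Posttest with NO intervention at all -- just a second noisy measurement.
posttest = [true_scores[i] + random.gauss(0, 10) for i in selected]

pre_mean = sum(pretest[i] for i in selected) / len(selected)
post_mean = sum(posttest) / len(selected)

print(f"selected pretest mean:  {pre_mean:.1f}")
print(f"selected posttest mean: {post_mean:.1f}")
# The selected group's mean drifts back toward the population mean (50)
# even though nothing happened between the two measurements.
```

With no intervention at all, the selected group's mean "improves" on the posttest, simply because extreme pretest scores partly reflect measurement noise. A quasi-experiment that enrolls subjects precisely because their baseline scores are extreme must rule out this explanation before claiming causality.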

Future Opportunities for Quasi-Experimentation
Bridging the Positivist-Interpretivist Divide

Despite the weaknesses mentioned above, quasi-experimentation is positioned to flourish in a methodologically diverse universe, one in which the research question, not an ideological or methodological commitment, guides the investigator’s choice of method.

Historically, quasi-experimentation has been a tool embraced by positivists favoring quantitative methods, whereas interpretivists have preferred qualitative methods such as narrative inquiry and case studies. However, recent developments in both quasi-experimentation and case study research suggest that the potential exists for a synthesis of these two methods (Grant & Wall, 2009). Although many case studies are post hoc explorations of an interesting development or phenomenon, there is a suggestion that case studies can qualify as quasi-experiments if they meet certain criteria (Campbell & Stanley as cited in Grant & Wall, 2009).

In ‘The Effect of Type of Learning Environment on Perceived Learning Among Graduate Nursing Students,’ Wells and Dellinger (2011) describe a quasi-experimental study conducted to examine the effect of type of learning environment (Internet-only, compressed video remote-site, and compressed video host-site) on perceived learning among graduate nursing students. A quasi-experimental posttest-only design was used. The Internet-only and compressed video remote-site groups comprised the experimental groups, and the compressed video host-site group served as the control.

Conceptual Framework: According to the Learner-Interaction Model used to guide the study, learning is a function of different types of interactions, specifically, interactions between learner and instructor (learner-instructor) and learner and learner (learner-learner). While these types of interactions occur in every course, they are critical in asynchronous learning environments. Both types of relationships provide students with social, emotional, and academic support, but learner-system interaction promotes or constrains the quantity and quality of the interactions (Swan as cited in Wells & Dellinger, 2011).

The researchers’ findings showed no differences in perceived learning and final course grades among students enrolled in the three sections of the research course. 

According to the Checklist of Factors (Muir Gray, 2009), the questions to consider when appraising the results of a quasi-experiment are as follows:

1. Was the control group comparable to the intervention (here: the experimental) group(s) in terms of population characteristics, performance and setting? 
Although the design used by the researchers did not include a pretest, I think the checklist questions are applicable to the study presented in the article. Thus, the answer to the first question is yes. According to Wells and Dellinger (2011), no differences were found in the average age or computer skills of students enrolled in the three types of learning environments.

2. Was data collection contemporaneous in the intervention and control groups? 
The students participating in the study were enrolled in the research course during their first or second semester in the graduate program, and data collection occurred at midsemester in all three groups, so the data collection was contemporaneous. 

3. Was the same method of data collection used in the intervention and control groups? 
Demographic data were collected via a researcher-generated questionnaire in all three groups. A Learner-Interaction Tool was also used for all the study participants to measure learner-instructor interaction, learner-learner interaction, perceived learning, and learner-system interaction. A seven-point Likert-type scale was used with responses ranging from 1 (strongly disagree) to 7 (strongly agree).

4. Were follow-up data collected for 80-100% of participants? 
There was no follow-up performed, which is considered one of the study limitations. The researchers suggested that if this study were replicated with students who had been enrolled in a graduate program for four semesters or more, there would be more opportunities for interaction with the technology and with other learners, and the results might support the conceptual model and validate that perceived learning is also influenced by learner-learner and learner-system interactions.

5. Was the assessment of outcomes blinded? 
A three-digit coding number was recorded on each instrument to ensure student anonymity. The lead faculty member was not present in the classroom during data collection.

6. Is it unlikely that the control group received the intervention? 
Although the host-site students were physically located in the same classroom as the instructor, the setting did not fully represent a traditional classroom. Students had to remember to engage the desk microphones when speaking in order to be heard by students at the remote site, and the instructor needed to adjust the cameras to focus on the host-site classroom, the remote-site classroom, or the PowerPoint presentation; none of these requirements is typical of a traditional classroom. Students at the host site may therefore have been more similar to students at the remote site than to students in a traditional classroom setting, so the setting itself can be viewed as a form of intervention.

Conclusion 

Summing up the assessment, I can say that despite the weaknesses of the study design used by the researchers (Wells & Dellinger, 2011), this study has important implications for future research: The findings suggest that quality of instruction is more important than the medium by which course content is delivered. A significant effect of learner-instructor interaction on perceived learning underscores the impact of faculty on the learning process. As the Internet and other asynchronous teaching modalities become more prevalent in higher education, with universities competing for students nationally and internationally, teaching quality must be maintained. The finding that type of learning environment did not affect perceived learning is, in that light, a desirable outcome.


Questions to the Class 


1. Several factors limited the generalizability of the study findings, according to the researchers (Wells & Dellinger, 2011).

Do you think differences in age and computer skills may affect students’ perceived learning when they are enrolled in Internet-based courses? What other demographic variables, or other factors, can affect perceived learning?

2. How could your practice setting or profession engage in quasi-experimental research? What would be the benefits and potential limitations of such a quasi-experimental study?

Please provide your answers in the appropriate forum thread, so that we can all take part in the discussion.



              THANK YOU!

References

Fleiss, J. L., Levin, B. A., & Paik, M. C. (2003). Statistical methods for rates and proportions (3rd ed.). Hoboken, NJ: Wiley.

Grant, A. M., & Wall, T. D. (2009). The neglected science and art of quasi experimentation: Why-to, when-to, and how-to advice for organizational researchers. Organizational Research Methods, 12(4), 653-686.

Harris, A. D., Bradham, D. D., Baumgarten, M., Zuckerman, I. H., Fink, J. C., & Perencevich, E. N. (2004). The use and interpretation of quasi-experimental studies in infectious diseases. Clinical Infectious Diseases, 38(11), 1586-1591.

Harris, A. D., McGregor, J. C., Perencevich, E. N., Furuno, J. P., Zhu, J., Peterson, D. E., & Finkelstein, J. (2006). The use and interpretation of quasi-experimental studies in medical informatics. Journal of the American Medical Informatics Association, 13(1), 16-23.

Levy, Y., & Ellis, T. J. (2011). A guide for novice researchers on experimental and quasi-experimental studies in information systems research. Interdisciplinary Journal of Information, Knowledge, and Management, 6, 151-160.   

Merriam-Webster Dictionary. (n.d.). An Encyclopaedia Britannica Company. Retrieved from http://www.merriam-webster.com/dictionary/

Muir Gray, J. A. (2009). Evidence-based healthcare and public health: How to make decisions about health services and public health (3rd ed.). New York, NY: Churchill Livingstone.

Psychology Glossary. (n.d.). Confounding Variable. Retrieved from http://www.alleydog.com/glossary/definition.php?term

Wells, M. I., & Dellinger, A. B. (2011). The effect of type of learning environment on perceived learning among graduate nursing students. Nursing Education Perspectives, 32(6), 406-410.