According to the Checklist of Factors (Muir Gray, 2009), the questions to consider when appraising the results of a quasi-experiment are as follows:
1. Was the control group comparable to the intervention (here: the experimental) group(s) in terms of population characteristics, performance and setting?
Although the design used by the researchers did not include a pretest, I think the questions of the checklist are applicable to the study presented in the article. Thus, the answer to the first question is yes. According to Wells and Dellinger (2011), no differences were found in the average age or computer skills of students enrolled in the three types of learning environments.
2. Was data collection contemporaneous in the intervention and control groups?
The students participating in the study were enrolled in the research course during their first or second semester in the graduate program, and data collection occurred at midsemester in all three groups, so it was contemporaneous.
3. Was the same method of data collection used in the intervention and control groups?
Demographic data were collected via a researcher-generated questionnaire in all three groups. The Learner-Interaction Tool was also administered to all study participants to measure learner-instructor interaction, learner-learner interaction, learner-system interaction, and perceived learning. Responses were given on a seven-point Likert-type scale ranging from 1 (strongly disagree) to 7 (strongly agree).
4. Were follow-up data collected for 80-100% of participants?
No follow-up data were collected, which is considered one of the study's limitations. The researchers suggested that if the study were replicated with students who had been enrolled in a graduate program for four or more semesters, there would be more opportunities for interaction with the technology and with other learners, and the results might support the conceptual model and validate that perceived learning is also influenced by learner-learner and learner-system interactions.
5. Was the assessment of outcomes blinded?
A three-digit coding number was recorded on each instrument to ensure student anonymity, and the lead faculty member was not present in the classroom during data collection. These measures kept responses anonymous, so the assessment of outcomes can be considered blinded to the extent that results could not be linked to individual students.
6. Is it unlikely that the control group received the intervention?
Although the host-site students were physically located in the same classroom as the instructor, the setting did not fully represent a traditional classroom. Students were aware of the need to engage the desk microphones whenever they spoke in order to be heard by students at the remote site. Also, the instructor needed to adjust the cameras to focus on the host-site classroom, the remote-site classroom, or the PowerPoint presentation; none of these requirements is typical of a traditional classroom. There is therefore a possibility that students at the host site were more similar to students at the remote site than to students in a traditional classroom setting, which can be viewed as a partial exposure to the intervention.