Trouble in River City: The Social Life of Video Games




Research Design


The research design included a pre-test, assignment to condition, and a post-test. There were two conditions: a treatment condition, in which the subject received a free copy of Asheron’s Call 2 and was asked to play for one month (at least twice a week), and a control condition, in which subjects were promised similar prizes through a raffle. The recruitment and measurement procedures were carried out over the Internet between late January and early March 2003 (see Appendix C for correspondence and instruments). After completing the final survey, subjects were debriefed via a web page. The design allowed for efficient use of resources, for filtering out ineligible subjects, and for stratified random assignment to condition.

The recruitment script informed potential subjects that, by promising to take two surveys and being willing to try a new game, they would have a 50% chance of receiving a free copy. This meant that every potential subject had to complete the study’s first wave before anything else happened. The wave was timed to minimize the gap between the pre-test measure and receipt of the stimulus. It was presupposed (correctly) that a large number of people could complete the survey near the end of a work week, so that the pool could be analyzed, sorted, filtered and assigned to condition on the following Sunday and Monday. With the first wave of data in hand, this window allowed for random assignment to condition based on stratifications drawn from the first survey wave. Stratified random assignment is a strong control over variables that may be especially risky sources of variation (Westley, 1989). For example, the relatively few female participants were divided evenly between the two conditions.

In that two-day interim, the pool was also checked for eligibility. The primary tests of eligibility were answers to questions about the potential subjects’ Internet access, home computer specifications, and valid US postal address, plus a check-box promising to complete the second survey a month later. To filter out bogus entrants, email addresses were tested with a mass mailing, and any non-working accounts were eliminated from the study. Questionable or incomplete phone numbers were checked, and subjects were dropped when the numbers could not be verified.
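The stratified assignment step described above can be sketched in a few lines of code. This is a minimal illustration, not the study’s actual procedure: it assumes a single stratifying variable (gender) and an even split within each stratum, whereas the real assignment used multiple stratifications and produced unequal group sizes (378 vs. 430).

```python
import random

def stratified_assign(subjects, strata_key, seed=2003):
    """Shuffle subjects within each stratum, then split each stratum
    evenly between treatment and control."""
    rng = random.Random(seed)
    strata = {}
    for s in subjects:
        strata.setdefault(strata_key(s), []).append(s)
    assignments = {}
    for members in strata.values():
        rng.shuffle(members)
        half = len(members) // 2
        for s in members[:half]:
            assignments[s["id"]] = "treatment"
        for s in members[half:]:
            assignments[s["id"]] = "control"
    return assignments

# Hypothetical pool: 6 women and 4 men. Because each stratum is split
# separately, a rare group (here, women) cannot end up concentrated
# in one condition by chance.
pool = [{"id": i, "gender": "F" if i < 6 else "M"} for i in range(10)]
groups = stratified_assign(pool, strata_key=lambda s: s["gender"])
```

Whatever within-stratum rule is used, the point is that randomization happens only inside each stratum, guaranteeing balance on the stratifying variable.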

With a clean sample in place, copies of Asheron’s Call 2 were mailed with a cover letter and time diary to 378 subjects; another 430 subjects were retained for the control group. The experimental group mailing was timed so that the stimulus packages would arrive nearly simultaneously, with addresses farther from Michigan mailed on Tuesday, February 3, 2003, and closer addresses mailed on Wednesday, February 4. A second benefit of this timing was that the stimuli mostly arrived on Thursday, Friday, or Saturday, the best days of the week for most people to begin a leisure activity.

Software & Procedures


The online recruitment and measurement required a series of software steps. First, a website was created for the survey questionnaire (see Appendix D). The site consisted of a front page with instructions, eligibility requirements and a statement of consent, plus eight pages of survey questions. The pages were created through a combination of hand-coded HTML and automated page generation in Macromedia Dreamweaver, and the site was then moved online using Fetch FTP software. The “back-end” of the survey interface was controlled by a cgi script called “Survey 1.0,” operated by the University of Michigan Information and Technology Division. Survey 1.0 took the form-based results from the survey site and wrote them as a tab-delimited flat file to a secure server space in the University’s Institutional File Space (IFS). A new row was appended to this file as each subject completed the survey. Once the survey window had closed, the file was downloaded and imported into the SPSS statistics program for analysis. This process was completed once for the pre-test and once for the post-test.
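Once downloaded, a tab-delimited flat file of this kind can be read with standard tools. The sketch below uses Python rather than SPSS, and the column names and the presence of a header row are hypothetical assumptions; the actual fields are those of the instrument in Appendix D.

```python
import csv
import io

# Hypothetical excerpt of the flat file: one tab-delimited row appended
# per completed survey. Field names here are illustrative assumptions.
flat_file = io.StringIO(
    "subject_id\temail\tage\tprior_play\n"
    "1001\ta@example.com\t27\tnewbie\n"
    "1002\tb@example.com\t31\tveteran\n"
)

reader = csv.DictReader(flat_file, delimiter="\t")
rows = list(reader)  # each completed survey becomes a dict keyed by column name
```

In practice the same two lines of parsing would be pointed at the downloaded IFS file rather than an in-memory string.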

Recruitment


Deciding whom to solicit, and how, was a question of balancing the hypotheses against the practical constraints of the real world. In this case, the goal of the sampling was not to gather a group that resembled the general population, but one that resembled people who play, or might play, online MMRPGs. Because these kinds of game experiences are becoming a broader cultural phenomenon, it was also important to sample groups that might try online games in the future, broadening the power of the findings. Therefore, women, minorities, and people from a range of socioeconomic backgrounds were recruited. Additionally, the sample needed to include variation related to the hypotheses, some of which concern the difference between first-time players and hoary veterans. To test these differences, the sample had to include variation in prior play.

The sample was recruited through a series of news releases and message board posts across a variety of online sites, with the intention of drawing a sample that yielded the greatest generalizability. The solicitation scripts are listed in Appendix C. The study incentive was a free copy of an online game, and the script highlighted the need for first-time players, who conceivably might be more interested in dabbling with such a game if it were perceived to be both valuable (the retail value of each copy was $50) and free. The sample was collected starting Wednesday, January 29, 2003 and ending on Sunday, February 1, 2003; 82% of the sample was collected on Thursday and Friday. Because the goal was to gather a group of eligible subjects with a range of prior play experience, subjects were solicited on both game and non-game web sites. Additionally, subjects had to be able to run the game, which requires a better-than-average 3D graphics card. People with access to such machines were more likely to be long-term game players, making it important to over-solicit less experienced or first-time players.

This web-based solicitation had distinct advantages and disadvantages. The costs were considerably lower than for other means, and the specialization of existing web sites allowed for precision in targeting particular populations based on their age, gender and interests (e.g. female action game players). The varying success rates of the different sites also allowed for more concentrated recruiting on the sites that were yielding more subjects. Still another benefit was the word-of-mouth recruiting enabled by a networked world: the solicitation scripts asked for personal referrals if the reader knew someone who might try a game for free, and based on self-reports from the first survey, 42% of those who completed it had heard about the study through a friend. Lastly, because the solicitations themselves appeared online, anyone who saw them necessarily had the online access the study required, so no one was automatically ineligible on that criterion. These advantages came at the cost of control over precisely who self-selected to sign up. For example, a few potential subjects signed up more than once, and many lived in foreign countries, which made them ineligible. However, the screening process allowed such cases to be filtered out.

The posting process allowed for an additional check on the quality of the sample. Although some subjects reported that they learned about the study through the news releases, many reported that they saw the messages posted on discussion boards. These boards allow for discussion threads, so conversations about the study’s legitimacy or the eligibility requirements could be followed and, where appropriate, responded to. As noted earlier, an over-reliance on young subjects has limited researchers’ ability to generalize their findings, and the typical player population for an MMRPG skews older than even college age. For this reason, adolescent-oriented message boards and sites were avoided. Only 4.6% of the sample pool was under 18.



Final Sample


The second and final wave of responses was collected beginning Thursday, March 6, one month after the initial mailing reached the subjects, again targeting a late-week/weekend period. The survey site was left open for one week, and the majority of subjects completed the questionnaire within the first three days. A series of three reminder emails was sent to non-respondents.

The initial plan was to collect a sample containing equal portions of Newbies, Veterans and Elders across the treatment and control conditions. However, Veterans and Elders willing to cease all play for a month proved nearly impossible to find. In addition, the University’s Institutional Review Board raised legal difficulties relating to the potential lottery licensing of the raffle, and mentions of a raffle had to be dropped from all correspondence (see Appendix E for IRB approval). This caused the retention rate in the control group to decrease markedly among experienced players, effectively eliminating the control groups for the Veteran and Elder subjects. Overall, the final retention rates for the study were 91.8% for the experimental group (347/378) and 40.5% for the control group (174/430). As Table 3 illustrates, most of the loss in the control group came from subjects with prior play experience, invalidating treatment vs. control tests for those groups. The Veteran and Elder sub-populations had extremely high retention rates in the treatment condition, and so remained in the study for the pre- versus post-test measures described below.
Table 3

Final Distribution of Prior Play Across Design Cells

             Newbies   Veterans   Elders
Control          138         11       25
Treatment         75        133      139

For first-time “Newbie” players, there were sufficient subjects to proceed with treatment vs. control tests. For this group, it is fair to ask whether the control and treatment groups were in fact equivalent. A key concern in field experiments is keeping the groups under study as equivalent as possible; nonequivalence is problematic to the extent that it covaries with the variables under study. Given the differing retention rates, this is best checked by testing for differences between the groups at the pre-test that might affect the variables under study (Cook & Campbell, 1979; Culbertson, 1989). Fortunately, when the Newbies were tested for differences with independent samples t-tests, there were no significant differences between the experimental and control groups on basic demographic variables or on the variables under study (see Table 4 for a selection).


Table 4

Pre-Test Differences Between Newbie Control and Experimental Groups, Selected Variables

                                           Control (n = 138)   Treatment (n = 75)
Measure                                      Mean      SD        Mean      SD
Age                                          27.7      8.23      27.2      8.21
Online Bridging                             38.61      6.08     37.99      6.92
Online Bonding                              27.19      9.78     28.98      9.69
Offline Bridging                            36.69      6.69     35.65      5.77
Offline Bonding                             42.12      6.43     40.88      6.12
Loneliness                                  13.10      4.83     13.67      4.34
Depression                                  22.95      7.87     22.92      6.77
Normative Beliefs in Aggression (NOBAGS)    10.75      3.25     11.49      4.04
Education                                    4.1       1.72      3.79      1.34
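The equivalence checks can be reproduced from Table 4’s summary statistics alone. The sketch below uses Python rather than the SPSS employed in the study, computing pooled-variance independent-samples t statistics for two of the measures; with 138 + 75 − 2 = 211 degrees of freedom, the two-tailed .05 critical value is roughly 1.97.

```python
import math

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Independent-samples t statistic using the pooled variance estimate."""
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / df
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se, df

# Summary statistics from Table 4 (control n = 138, treatment n = 75)
t_age, df = pooled_t(27.7, 8.23, 138, 27.2, 8.21, 75)      # t ≈ 0.42
t_lonely, _ = pooled_t(13.10, 4.83, 138, 13.67, 4.34, 75)  # t ≈ -0.85

# Both statistics are far below ~1.97 in absolute value, so neither
# difference is significant at the .05 level.
```

The remaining rows of Table 4 can be checked the same way, consistent with the conclusion that the groups were equivalent at the pre-test.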


