Moves in Mind: The Psychology of Board Games


Part 1: Psychology lite, with neuroscience sprinkles




If gaming can be considered an addiction, it would most likely fit the mold of a behavioral addiction. First, we're going to talk about the common-sense psychology side of addiction: behavioral addiction. Then we'll talk about the neuroscience side of addiction: brain chemistry and dopamine. What you should get from the following section is a very basic sense of what addiction is. After this section we'll explore how addiction might or might not relate to games.



Theoretical Behavioral Addiction

Dr. Mark Griffiths is one of a growing body of psychologists advancing the idea that "excessive behaviors of all types" – addictions to shopping, gambling, or sex, for instance – are addictive in very similar ways. These addictions don't have to involve drugs, yet even drug addiction shares features with them. The features Griffiths cites hearken back to the theoretical models of Iain Brown, and may even represent a psychology-based foundation for all addiction.

Everyone is vulnerable to becoming addicted, according to Brown, but to different extents. Some people have had an excessively rough life; others have had too easy a life, or are simply bored. Specifically, culture, economics, social circumstances, personality, and a low tolerance for stress are some of the factors that can make one person more susceptible to addiction than another. You might call particularly vulnerable people "addictive personalities," simply because they are more at risk.

While certain personalities are more susceptible, behavioral addiction still requires a behavior. Most people have a number of activities they regularly turn to in order to feel excited, relaxed, or what have you. Yet people are drawn to some things over others – for most people, a huge gambling win is more attractive than cleaning a toilet. When the soon-to-be addict finds that special activity, they can have what Brown calls an "aha" moment. As this especially alluring behavior becomes more prominent in a person's life, other activities disappear completely from that person's repertoire. At its most extreme, such a behavioral addiction dominates a person's life. They need the activity, and they'll sacrifice nearly anything – long-term plans, the company of other people, even work – in order to have it.



Dopamine and Brain Chemistry

Back in the forbidden caverns of hard science, addiction is usually attributed to genetic factors and dopamine. The National Institutes of Health states that genetic factors are significant in addiction. Some brains are simply more susceptible to dopamine, a neurotransmitter. The research of neuroscientists Depue and Collins reflects this, stating that individual differences in dopamine processing can predispose individuals to be more or less likely to develop addictions. They also assert that motivation is based on two major factors: "the availability of reward, and the effort required to obtain it…"

Enter such a nefarious behavior. Some behaviors release so much dopamine that neurons, our basic brain cells, get accustomed to having that dopamine around. These neurons stimulate the nucleus accumbens, a part of the brain's reward circuitry. As the brain gets used to this stimulation, it requires more and more dopamine for the same effect. When the dopamine-producing behavior finally stops, the brain isn't used to the lowered dopamine levels. At this point, craving and addiction enter the picture.

Regardless of the perspective you like best – psychology, neuroscience, or any of the many humanities-based theories out there – there seems to be good backing for the idea that addiction has a lot to do with personality. If a person has an addictive personality, then the activity itself isn't the problem; such people are liable to get hooked on whatever they try, be it games, running, eating peanuts, or even work. If a person thinks they have a problem, then they need to take responsibility, seek treatment, and modify their behavior. The research described next has actually suggested that some gamers may be addicted.



Research

German researchers Sabine Grüsser and Ralf Thalemann have suggested that some gamers exhibit signs traditionally associated with addiction: susceptibility to triggers and a diminished startle reflex. Grüsser also echoes our earlier discussion of addiction, saying that as one activity comes to be used exclusively to deal with adversity, it becomes the only behavior that can activate the brain's dopamine system, and that such chemical monopolization is common to all addictions.

Is this great news for journalists, unpopular politicians, or groups such as Online Gamers Anonymous and EverQuest Widows? First, keep in mind that much of the research in this field is preliminary. Moreover, so far research has only suggested that, at most, people are becoming addicted to games, not that games themselves are responsible for addicting people. The difference is subtle, yet significant. It also helps to remember that individual works in psychology and the humanities are rarely definitive.

This is not to discount addicted gamers. Some people do play to a point where gaming negatively affects their lives, and we need to be sensitive to that. The most populous country in the world, China, wouldn't have passed a law regulating massively multiplayer online (MMO) gameplay without at least some reason. Who knows? Dazzling new research might, hypothetically, prove that games addict in ways that television and gambling may never hope to rival. But for now we don't know exactly what's happening. Research into games is new.



Part 2: Trouble in paradise.

There are problems with some of the most influential articles studying games and their relation to addiction. The most notable problems involve conceptual confusion, reliance on self-assessments, sampling techniques, and differences between games. All of these slow the advancement of gaming knowledge.



Conceptual Confusion

Conceptual confusion occurs when an author takes two or more important terms and mixes them up. Usually this happens when one researcher describes another researcher's work a little carelessly. In Internet addiction research, which originally served as a foundation for computer-related and gaming-related addiction research, major works have been accused of conceptual confusion. This matters because that research continues to be used by new studies, including studies involving games. New researchers entering the field must therefore critically analyze any addiction criteria they plan to use. Additionally, the foundational authors could gain a great deal of credibility by revisiting and defending their methods.



Self Assessments

Size matters. While Dr. Kimberly Young's criteria for Internet addiction have recently grown in size, her criteria for 'obsessive online gaming' still consist of eight questions. According to Young, the test taker "may be addicted to online gaming" if they answer yes to just one of the eight questions. Psychologist John Charlton has argued that attempting to diagnose addicts using checklists is likely to drastically overestimate the number of people who are actually addicted.
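
To see why Charlton worries about overestimation, consider a quick back-of-the-envelope sketch. The 5% per-question figure below is purely an assumption for illustration, not a number from Young's or Charlton's work: if even a small fraction of ordinary players answer "yes" to any single question by chance or honest quirk, a "yes to at least one of eight" rule flags a surprisingly large share of them.

```python
# Illustrative only: how a "yes to any one of N questions" rule inflates
# positive rates. The 5% per-question rate is an assumed figure, not one
# taken from Young's or Charlton's work.
def prob_flagged(per_question_yes_rate: float, n_questions: int) -> float:
    """Chance a non-addicted respondent says 'yes' to at least one question,
    assuming the questions are answered independently."""
    return 1 - (1 - per_question_yes_rate) ** n_questions

print(f"{prob_flagged(0.05, 8):.0%}")  # roughly 34% of non-addicted players flagged
```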



Sampling

Many studies, especially the more humanities-based studies of gaming in general, suffer from major problems when it comes to something called sampling. A researcher can have the greatest survey of all time, but it won't matter if the right people don't fill it out. The goal here is to make sure that the 50 people who take that survey perfectly represent the 5000 people that you want to talk about. Gaming research so far has distributed most surveys through online forums, college classrooms, and websites. The problem is that the people who visit online forums, or go to college, or visit gaming websites, probably don't represent the whole gamer population. Some people get their games at Blockbuster. Some MMO players aren't very likely to take boring academic surveys when “rolling on epic drops” is also an option for their evening. Very little of the gaming research out there represents all gamers.
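
A toy simulation makes the point about representativeness concrete. Everything below is invented for illustration (the population sizes and play-time figures are not from any study): a survey that only reaches forum regulars measures forum regulars, not gamers in general.

```python
# Invented numbers: a convenience sample of forum regulars vs. a random
# sample of the whole player base. Neither figure comes from real data.
import random
from statistics import mean

random.seed(1)

casual_players = [random.gauss(8, 4) for _ in range(4500)]   # hours/week
forum_regulars = [random.gauss(20, 6) for _ in range(500)]   # heavier players who post
all_gamers = casual_players + forum_regulars

forum_survey = random.sample(forum_regulars, 50)   # survey posted on a fan forum
ideal_survey = random.sample(all_gamers, 50)       # drawn from the whole population

print(f"True average hours/week:   {mean(all_gamers):.1f}")
print(f"Random-sample estimate:    {mean(ideal_survey):.1f}")
print(f"Forum-only estimate:       {mean(forum_survey):.1f}")
```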



Differences Between Games

One last distinction is sometimes overlooked by psychologists and other practitioners: the differences between games. We have to be careful not to use studies of single-player, online, or especially MMO games interchangeably. There are real differences between these games. For example, if one study examines single-player and online FPS players and finds that they enjoy greater visual acuity, we should not assume that MMORPG players will necessarily enjoy the same benefit. It's possible, even probable, that players of other types of games will reap these ocular benefits, but it is not guaranteed.



Part 3: Are games addictive?

To revisit part one: so far research has only suggested that, at most, people are becoming addicted to games, not that games themselves are responsible for addicting people. Some people do seem to be addicted, yet games may not be the real culprit. Nevertheless, research that directly examines whether certain games are addictive should not be shunned; it should be welcomed. There are two reasons for this, neither of which is completely obvious. First, it is entirely possible that games are in no way addictive. If research can demonstrate this, it would inspire a huge amount of confidence in parents, legislators, and gamers. The second, less obvious, advantage to studying links between addiction and games is understanding. If games can be linked to addiction, then knowing how and why could show us what a "healthier" game would look like.

To keep this article in perspective, we're talking about games. Recreation. Stuff that people do for fun. Even if it were possible to remove the proverbial nicotine, or addictive ingredient, would we want to? If it takes the fun out of games, then the answer is probably no. We still have responsible players who count on us for quality entertainment. But who knows? Perhaps laborious, calculated efforts to create that “healthier” game will help one developer to produce the most exciting game ever. In any case, there are people who do seem to have serious problems with gaming, but there are also people who watch too much TV, or spend too much time reading. Do these other media forms face criticism, or a looming threat of legislation? Not really.

Addiction is complicated. To revisit the introduction's caveat: this article isn't intended to transform you into a trained clinician. Instead, it's meant to shed some light on the very basics of addiction. It also shows why some of the research deserves to be viewed with a critical eye. Some people do have problems with games; that's getting harder to discount. What we can do, as game creators, is understand that a problem exists, and try to understand research advances as they occur.



Resource List

Brown, I. (1997). A theoretical model of behavioral addictions – applied to offending. In Hodge, J. E., McMurran, M., & Hollin, C. R. (Eds.), Addicted to Crime? New York, NY: John Wiley & Sons. (p. 13).

Charlton, J.P. (2002). A factor-analytic investigation of computer ‘addiction' and engagement. British Journal of Psychology, 93, 329-344.

Depue, R. & Collins, P. (1999). Neurobiology of the structure of personality: Dopamine, facilitation of incentive motivation, and extraversion. Behavioral and Brain Sciences, 22, 491-569.

Griffiths, M. (2005). A 'components' model of addiction within a biopsychosocial framework. Journal of Substance Use, 10, 191-197.

Kimberly Young's website (the source of the surveys cited above).

National Institutes of Health website.

Press release discussing Sabine Grüsser and Ralf Thalemann's research, presented at the 2005 Annual meeting of the Society for Neuroscience.






Beyond Psychological Theory: Getting Data that Improves Games 


By Bill Fulton
Gamasutra
March 21, 2002
URL: http://www.gamasutra.com/gdc2002/features/fulton/fulton_01.htm

How can I make my game more fun for more gamers?

This is the question for those who want to make games that are popular, not just critically acclaimed. One (glib) response is to "design the games better." Recently, the idea of applying psychological theories to improve game design has become an increasingly popular topic in various industry publications and conferences. Given the potential of applying psychological theory to game design, I expect these ideas to become more frequent and more developed. While I think using psychological theories as aids for thinking about games and gamers is certainly useful, psychology has much more to offer than theory. An enormous part of the value of psychology to games lies in psychological research methods (collecting data), not in the theories themselves.

I should clarify some terms here. When I talk about "psychology," I do not mean the common perception of psychology--talking to counselors, lying on the Freudian couch, mental illness, etc. In academia, this kind of psychology is called "clinical psychology." In this paper, "psychology" refers to experimental psychology, which employs the scientific method in studying "normal populations functioning normally."

But before I talk about how psychological research methods can help improve games, I need to first explain more about how psychological theories are helpful, and the limitations they have with respect to game design.




Psychological theories can be useful, but data are more useful
All designers think about what people like, hate and want. Some designers may be consciously using theories from psychology as part of the process to evaluate what people want, but most designers probably just rely on their intuitive theories of what they perceive gamers want.

The risks of relying on intuitive psychology. What I call "intuitive psychology" is the collection of thoughts, world views, 'folk wisdom,' etc. that people use to try to understand and predict others. Some examples might make this clearer. One common intuitive psychological belief about attraction is that "opposites attract." However, many people (often the same people) also believe the opposite, that "birds of a feather flock together." Both of these ideas have some merit and are probably true in some ways for most people. But given that they are clearly conflicting statements, it is unclear which one to believe and act on. Which statement is true? Or, more likely, when is each statement more likely to be true? Does the degree of truth vary by person? By situation? By both? The problem with intuitive psychology is that many intuitions disagree with each other, and it is unclear which world view is more likely to be right, if either of them is at all. You're simply trusting that the designer's theories are close enough to reality that the design will be compelling.

The insufficiency of formal theories of psychology. Formal theories of psychology have been subjected to rigorous testing to see when they map onto reality, and when they do not. In order for a theory of psychology to gain any kind of acceptance, the advocates have to have battled with some success against peers who are actively attempting to show it to be incorrect or limited. This adversarial system of determining "truth" and reliable knowledge employs the scientific method of running experiments and collecting data. Because of this adversarial system, formal theories of psychology are more trustworthy than intuitive theories of psychology--you know that they are more than just one person's unsubstantiated opinion about what people want.

But while theories of psychology from academia can be quite useful as a lens through which to examine your game, their limitation is that they are typically too abstract to provide concrete action items at the level designers need. This lack of specificity hasn't really hurt designers too much, because for the most part designers (and people in general) have a decent enough idea of how to please people without needing formal theories. I think very few people had light bulbs go on when they learned that Skinner's theory of conditioning stipulates that people will do things for rewards. The work that Skinner and others did on how to use rewards and punishers well, in terms of acquiring and maintaining behaviors, can be enlightening. But academic theories of psychology don't get granular enough to tell us whether gamers find the handling of the Ferrari a bit too sensitive.

An example of why academic theories of psychology aren't enough is in order. Skinner's behaviorism is probably one of the best-defined and best-supported theories, and the easiest to apply to games. (In fact, John Hopson wrote an excellent article in Gamasutra in April 2001 demonstrating how to analyze your game through behaviorism's lens.) One of Hopson's examples concerns how players in an RPG behave differently depending upon how close they are to reinforcement (e.g., going up a level, getting a new item, etc.). He points out that if reinforcers are too infrequent, the player may lose the motivation to go for that next level. But how often is not often enough? Or too often? (Who wants to level up every five seconds?) Both "too often" and "not often enough" will de-motivate the player. Designers need to find a sweet spot between the two that provides the optimal (or at least a sufficient) level of motivation for the player to keep trying to level up. Theory may help designers ask the more pertinent questions, but no theory will tell you exactly how often a player should level up in three hours of play in a particular RPG.
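
To make that last point concrete, here is a rough sketch of the kind of arithmetic a designer ends up doing anyway. Every number in it – the XP curve, the kill rate, the XP per kill – is an invented knob, which is exactly the point: behaviorism tells you a sweet spot exists, but only data from your own game tells you where it is.

```python
# Hypothetical pacing sketch. The geometric XP curve, kill rate, and XP
# per kill are invented parameters a designer would tune via playtesting.
def xp_for_level(level: int, base: int = 100, growth: float = 1.4) -> int:
    """XP needed to advance from `level` to `level + 1`."""
    return int(base * growth ** (level - 1))

def level_reached(minutes: float, kills_per_minute: float = 2.0,
                  xp_per_kill: int = 25) -> int:
    """Level a player reaches after `minutes` of play at a steady kill rate."""
    xp = minutes * kills_per_minute * xp_per_kill
    level = 1
    while xp >= xp_for_level(level):
        xp -= xp_for_level(level)
        level += 1
    return level

print(level_reached(180))  # level reached after three hours of play
```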


Beyond Theory: the value of collecting data with psychological methods
So I've argued that psychological theories (both intuitive and academic) have limitations that prevent them from being either trustworthy or sufficiently detailed. Now I'm going to talk about what IS sufficiently trustworthy AND detailed: collecting data with psychological methods. Feedback gleaned via psychological testing methods can be an invaluable asset in refining a game design.

As I said at the beginning of this paper, the central question for a designer who wants to make popular games is "how do I make my game more fun for more gamers?" and that a glib response is to "design the games better." Taking the glib answer seriously for a moment, how do you go about doing that? Presumably, designers are doing the best they can already. The Dilbertian "work smarter, not harder" is funny, but not helpful. The way to help designers is the same way you help people improve their work in all other disciplines--you provide them feedback that helps them learn what is good and not so good about their work, so that they can improve it.



Of course, designers get feedback all the time. In fact, I'm sure many designers sometimes feel that they get too much feedback: it seems that everyone has an opinion about the design, that everyone is a "wannabe" designer (disguised as an artist, programmer, publishing exec, etc.), along with everyone's brother. But the opinions from others often contradict each other, and sometimes go against the opinions of the designer. So the designer is put in the difficult situation of knowing that their design isn't perfect, wanting feedback to improve it, and encountering feedback that makes sense yet often contradicts both itself and the designer's own judgment. This makes it difficult to know which feedback to act on. So the problem for many designers is not a lack of feedback, but an epistemological problem: whose opinion is worth overruling their own judgment for? Whose opinion really represents what more gamers want?


Criteria for good feedback and a good feedback delivery system
Before launching into a more detailed analysis of common feedback loops and my proposed "better" one, I need to make my criteria explicit for what I consider "good" feedback and a good feedback delivery system. The addition of "delivery system" is necessary to provide context for the value (not just accuracy) of the feedback. The criteria are:

  1. The feedback should accurately represent the opinions of the target gamers. By "target gamers," I mean the group of gamers that the game is trying to appeal to (e.g., driving gamers, RTS gamers, etc.) If your feedback doesn't represent the opinion of the right group of users, then it may be misleading. This is absolutely critical. Misleading feedback is worse than no feedback, the same way misleading road signs are worse than no signs at all. Misleading signs can send folks a long way down the wrong road.

  2. The feedback should arrive in time for the designer to use it. If the feedback is perfect, but arrives too late (e.g., post RTM, or after that feature is locked down), the feedback isn't that helpful.

  3. The feedback should be sufficiently granular for the designer to take action on it. The information that "gamers hate dumb-sounding weapons" or that "some of the weapons sound dumb" isn't nearly as helpful as "Weapon A sounds dumb, but Weapons B, C, and D sound great."

  4. The feedback should be relatively easy to get. This is a pragmatic issue--teams won't seek information that is too costly or too difficult to get. Teams don't want to pay more money or time than the information is worth ($100k and 20 person hours to learn that people slightly prefer the fire-orange Alpha paint job to the bright red one is hardly a good use of resources.)

The first criterion is about the accuracy of the feedback, which is critical; the rest are about how that feedback needs to be delivered if it is going to be useful, not merely true.


Common game design feedback systems and their limitations

There are many feedback systems that designers use (or, in some cases, have been subjected to). Most designers, like authors, recognize that they need feedback on their work in order to improve it; few authors have reason to believe their work is of publishable quality without some revision based on feedback. I'm going to list the feedback systems of which I am aware and discuss how good each is as a feedback delivery system. There are two main categories of feedback loops: feedback from professionals in the games industry, and feedback from non-professionals (i.e., gamers). While these sources obviously affect each other, it is easier to talk about them separately.



Feedback from Professionals in the games industry
There are two main sources of this kind of feedback:

  1. Feedback from those on the development team. This is the primary source of feedback for the designer: people working on the game say things like "that character sucks" or "that weapon is way too powerful." This system is useful because it ably meets criteria two through four (the feedback is very timely, granular enough, and easy to get), but it still leaves the designer with a question mark on criterion one: how many gamers will agree that the weapon is way too powerful?

  2. Feedback from gaming industry experts. Game design consultants ("gurus"), management at publishers, game journalists, etc. can also provide useful feedback. While their feedback can often meet criterion three (sufficiently granular), criterion two (timely) is sometimes a problem: long periods can go by between rounds of feedback, and recommendations can arrive after you can use them. And the designer is still left with questions about criterion one (accurately represents gamers), although some could argue that these experts represent gamers more accurately because they have greater exposure to more games in development.

So while feedback from professionals is the current bread and butter for most teams and definitely nails criteria two, three and four, it operates a great deal on faith and hope on criterion one--that the feedback from industry professionals accurately maps onto gamers' opinions. The reason this assumption is questionable is perhaps best illuminated by a simple thought experiment--how many games do you think a typical gamer tries or sees in a year? How many do you think a gaming industry professional tries or sees? They are probably different by a factor of ten or more. Gaming industry professionals are in the top 1 percent in knowledge about games, and their tastes may simply be way more developed (and esoteric) than typical gamers' tastes. While some professionals in the industry are probably amazingly good at predicting what gamers will like, which ones are they? How many think they are great at it, when others disagree?

So while feedback from industry professionals is necessary when designing the game, they may not be the best at evaluating whether gamers will like something. In the end, they can only speak for themselves.



Feedback from Non-professionals
Game teams are not unaware of the problem of their judgment not always mapping onto what most gamers really want. Because of this, they often try to get feedback from those who are more likely to give them accurate feedback, and the obvious people to talk to are the gamers themselves. Some common ways this is done are listed below, along with some analysis of how good each is as a feedback system according to the four criteria.

  1. News group postings/beta testing/fan mail. This means reading the message boards to see what people say about the game. The main problem with this as a feedback system is criterion two (timeliness). The game has to be fairly far along (at least in beta, if not shipped) in order to get it into people's hands; typically, that feedback arrives too late to make any but the most cosmetic of changes. The feedback also often isn't sufficiently granular to act on ("The character sucks!"). But at least this kind of feedback is relatively cheap in both time and money.

  2. Acquaintance testing. This is where you try to get people (typically relatives, neighbor's kids, etc.) from outside the industry to play your game and give you feedback. This feedback is often sufficiently granular and may be relatively accurate, but it is often not that timely due to scheduling problems, and can be costly in time.

  3. Focus groups/focus testing. This kind of feedback system is typically run by the publisher and involves talking to small groups (usually four to eight gamers) in a room about the game. They may get to see or play demos of the game, but not always. One typical problem with focus groups is that they tend to happen very late in the process, when feedback is hard to act on (not timely) and not sufficiently granular. The costs for focus groups can also be quite high.

This approach has the potential to be useful, in that it involves listening to gamers who aren't in the industry. However, there are many pitfalls. It is often unclear how accurately the feedback represents gamers, due both to the situations themselves (only certain kinds of people post messages, people feel pressured to say positive things, the people running the test often lack sufficient training in how to avoid biasing the participants, etc.) and to the relatively small number of people involved. How to minimize these concerns and create a feedback system that works on all four criteria is discussed in the next section.


Designing a better feedback system

Up to this point, I've mostly been criticizing what is done. Now I need to show that I have a better solution. I'm going to outline some of the key factors that have allowed Microsoft to develop a feedback system that we think meets the four criteria that I set up for a "good" feedback system. We call this process of providing designers with feedback from real users on their designs "user-testing," and the people who do this job "user-testing specialists."

The importance of using principles of psychological testing. Experimental psychology has been studying how to get meaningful, representative data from people for over 70 years, and the process we use adheres to the main principles of good research. This is not to say that all psychological research is good research any more than to say that all code is good code; researchers vary in their ability to do good research the same way that not all programmers are good. But there are accepted tenets of research methodology that have been shown to yield information worth relying on, and our processes have been designed with those in mind. (For the sake of not boring you senseless, I'm not going to attempt to summarize 70 years of research on how to do research in this paper.) What I'm going to do instead is describe the day-to-day work that the user-testing group at Microsoft does for its dev teams (both first and third party).

The actual testing methods we use. The user-testing group provides three major services: usability testing, playtesting, and reviews. These services are described in detail below.



1. Usability research is typically associated with small-sample observational studies. Over the course of 2-3 days, 6-9 participants come to Microsoft for individual 2-hour sessions. In a typical study, each participant spends some unstructured time exploring the game prior to attempting a set of very specific tasks. Common measures include comments, behaviors, task times, and error rates. Usability is an excellent method for discovering problems the dev team was unaware of, and for understanding the thoughts and beliefs of the participant and how they affect their interaction with the game. This form of testing has been a part of the software industry for years and is a staple of the HCI (Human-Computer Interaction) field more than of psychology. However, the methods used in HCI can be traced back to psychological research methods, and HCI can essentially be characterized as a field of applied psychology.

2. Playtest research is typically associated with large, structured questionnaire studies that focus on the first hour of game play. The sample sizes are relatively large (25-35 people) in order to be able to compute reliable percentages (a rough sketch of that calculation follows this list). Each person gets just over 60 minutes to play the game and answer questions individually on a highly structured questionnaire. Participants rate the quality of the game and provide open-ended feedback on a wide variety of general and genre-specific questions. Playtest methods are best used to gauge participants' attitudes, preferences, and some kinds of behavior, such as difficulty levels. This form of testing has a long history in psychology, in the fields of attitudinal research and judgment and decision-making.

3. Reviews are just another version of feedback from a games industry professional. However, these reviews are potentially more valuable because the reviewers are user-testing specialists, who arguably have more direct contact with real gamers playing games than other games professionals. Their entire job is to watch users play games and listen to their complaints and praise. Furthermore, teams often repeat mistakes that other games have made, so experienced user-testing specialists can help teams avoid "known" mistakes.
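
As mentioned in the playtest description above, here is a rough sketch of the "reliable percentages" arithmetic behind playtest sample sizes. It uses the standard normal approximation for a proportion; the 60% example result is invented. With far fewer participants the interval would be much wider, which is part of why percentages from a handful of testers are hard to act on.

```python
# Margin of error on a simple playtest percentage (normal approximation).
# The 60% "rated the track fun" figure is an invented example.
import math

def proportion_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval for a proportion from n responses."""
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - margin), min(1.0, p_hat + margin)

low, high = proportion_ci(0.60, 30)   # e.g., 18 of 30 playtesters liked a track
print(f"60% of 30 players -> roughly {low:.0%} to {high:.0%} of the target audience")
```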

The result of each of these services is a report, sent to the team, which meticulously documents the problems along with recommendations on how to fix them. Our stance is that the development teams are the ones who decide if and how to fix the problems.

One noticeable absence in our services is "focus groups." Our belief (supported by research on focus groups) is that focus groups are excellent tools for generation (e.g., coming up with new ideas, processes, etc.), but are not very good for evaluations (e.g., whether the people like something or not). The group nature of the task interferes with getting individual opinions, which is essential for the ability to quantify the evaluations.

How this feedback system fares on the four criteria for a good feedback system. So, how does the way we do user-testing at Microsoft stack up to the four criteria? Pretty well (in my humble opinion). A recap of the criteria, and my evaluation of how we do on them is given below.



  1. The feedback should accurately represent the opinions of the target gamers. We supply reasonably accurate, trustworthy feedback to teams, because:

    a. We have a large database of gamers (~12,000) in the Seattle metro area, who play every kind of game. So we can almost always bring the right kind of gamers for each kind of game.

    b. We hire only people with strong backgrounds in experimental or applied psychology in order to minimize the biases of the user-testing specialist. We also have a rigid review process for all materials that get presented to the user.

    c. We thoroughly document our findings and recommendations, and test each product repeatedly, which allows us to check the validity of both our work and the team's fixes over multiple tests and multiple participants.



  2. The feedback should arrive in time for the designer to use it. We are relatively fast at supplying feedback. The entire process takes about six days to get some initial feedback, and about 11-14 days for a full report. If the tests are well planned, they can happen at key milestones to maximize the timeliness of the feedback.

  3. The feedback should be sufficiently granular for the designer to take action on it. The level of feedback in the reports is extremely granular, because the tests are designed to yield granular, actionable findings. The user-testing specialist typically comments at the level of which cars or which tracks caused problems, or what wording in the UI caused problems. The recommendations are similarly specific. Usability tests typically yield more than 40 recommendations, whereas playtest tends to have anywhere from 10-30 items to address.

  4. The feedback should be relatively easy to get. The feedback is relatively easy for the dev team to get: they have a user-testing lead on their game, and that person sets up tests for them and funnels them the results. The feedback is also relatively inexpensive when compared to the multi-million dollar budgets of modern games. The total cost of our operation is "substantial," but economies of scale make the cost per game relatively small.


Vital statistics on the user-testing group at Microsoft


Group history: the usability portion of the user-testing group has been around in a limited fashion since Microsoft entered the games business in earnest in 1995. Funding was at a very low level (one usability contractor supporting 30+ titles) until the Games Group began investing more heavily in 1998 with the introduction of the playtest group. The usability and playtest groups merged to form the user-testing group in 2000. The current user-testing processes have been relatively stable since 1997 (usability) and 1998 (playtest).

Current composition of the user-testing group: 15 FT user-testing specialists, 3-5 contract specialists, and 3 FT support staff. Almost all user-testing specialists have either two or more years of graduate training in experimental psychology or equivalent experience in applied psychology, and all are gamers. All four founding members of the user-testing group are still with the group.



Amount of work: In 2001, we tested approximately 6500 participants in 235 different tests, on about 70 different games. 23 of those games were non-Microsoft products. In 2002, we expect to produce about 50 percent more than we did in 2001. From 1997 to Jan 2002, the group has produced 658 reports on 114 products (53 Microsoft, and 61 non-Microsoft products) representing the opinions of more than 15,000 hours of consumer reactions to games prior to their release.

Special thanks to Randy Pagulayan and Ramon Romero for their help editing this article.








The Psychology of Choice


By John Hopson
Gamasutra
February 6, 2002

URL: http://www.gamasutra.com/features/20020204/hopson_01.htm

The play of any computer game can be described as a series of choices. A player might choose the left-hand or right-hand tunnel, decide to skip this target and save ammunition, or play a fighter rather than a mage. The total path of a player through the game is the result of a thousand little choices, leading to success or failure in the game and to enjoyment or dislike of the game itself. The principles underlying the choices players make, and the ways a designer can shape those choices, are a key component of game design.

As in my previous article, the kind of psychology discussed here is often called behavioral psychology. This sub-field of psychology focuses on experiments and observable actions, and is a descriptive rather than normative field of study. Instead of looking at what people should do, it studies and tries to explain what they actually do. By understanding how people react to different kinds of choices, we can design games that help them make the kind of choices that they'll enjoy, and understand how some game designs can unintentionally elicit bad choices.

Maximizing

The most obvious thing to do when confronted with multiple options is to pick the choice or pattern of choices that maximizes reward. This is the sort of solution sought by game theory, one that mathematically guarantees the greatest level of success. While most players don't try to work out the exact algorithms behind weapon damage, they will notice which strategies work better than others and tend to approach maximal reward.

Usually, participants maximize when the choices are simple and deterministic. The more complex the problem, the more likely they are to engage in exploratory actions, and the less likely they are to be sure that they are doing the optimal thing. Finding the optimum is easiest when the contingency is deterministic. If the pit monster attacks every time the player gets to a certain point, players will quickly pick this up and learn the optimal point at which to jump over it. If it attacks probabilistically, the player will take longer to work out what rules govern the pit monster's attack.

While maximizing is the best thing for the player, it's probably not a good thing for the designer. If the player is doing as well as it's possible to do, it implies that they've mastered the game. It also means that the game has become perfectly predictable and most likely boring. A contingency with an element of randomness will maintain the player's interest longer and be more attractive. For example, subjects will generally prefer a 30 second variable interval schedule (rewards being delivered randomly between zero and sixty seconds apart) to a 30 second fixed interval schedule (rewards being delivered exactly 30 seconds apart), even though both provide the same overall rate of reward.
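
A minimal simulation of those two schedules makes the comparison concrete. This is my own sketch, not taken from the experiments themselves: both schedules deliver one reward per 30 seconds on average, and only the predictability differs.

```python
# Minimal sketch: fixed-interval 30s vs. variable-interval 30s reward gaps.
# Both average one reward per 30 seconds; only the variability differs.
import random

random.seed(7)

def fixed_interval_gaps(n: int) -> list[float]:
    """Rewards delivered exactly 30 seconds apart."""
    return [30.0] * n

def variable_interval_gaps(n: int) -> list[float]:
    """Rewards delivered at random gaps between 0 and 60 seconds (mean 30)."""
    return [random.uniform(0, 60) for _ in range(n)]

fi, vi = fixed_interval_gaps(1000), variable_interval_gaps(1000)
print(f"FI-30 mean gap: {sum(fi)/len(fi):.1f}s, VI-30 mean gap: {sum(vi)/len(vi):.1f}s")
# Equal overall rates of reward; the VI schedule just keeps the timing of
# the next reward unpredictable.
```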

There is another, subtler problem with maximizing. As discussed in the previous article, sharp declines in the rate of reward are very punishing for players and can result in quitting. If the player has learned to maximize their reward in one portion of the game, creating a high and consistent level of reward, moving to another part or level of the game will most likely result in a drop in reward. This contrasting low level of reward is extremely aversive and can cause the player to quit. It may even be an effective punishment for exploring new aspects of the game, as the transition from the well understood portion to the unknown marks an inevitable drop in rewards.

To avoid maximizing, there are two basic approaches. First, one can make sure that the contingencies are never so simple that a player could find an optimal solution. The easiest way of doing this is to make the contingencies probabilistic. Massive randomness isn't necessary, just enough to keep players guessing and engaged. Second, the more options there are within the game, the more things there are to compare, the less likely it is that there will be a clear ideal strategy. If all the guns in the game work the same but do different levels of damage, it's easy to know you have the best one. If one gun is weaker but does area damage and another has a higher rate of fire, players can explore a wider variety of strategies. Once there is a clear best way to play the game, it ceases to be interesting in its own right.



Matching

Once there are multiple options producing rewards at different rates, the most common pattern of activity observed in humans and animals is matching. Essentially, matching means that the player is allocating their time to the various options in proportion to their overall rate of reward. More formally, this is referred to as the Matching Law, and can be expressed mathematically as the following equation:

B1 / (B1 + B2) = R1 / (R1 + R2)

where B1 and B2 are the amounts of behavior (time or responses) allocated to two options, and R1 and R2 are the rates of reward those options provide.
Let's say our player Lothar has two different areas in which he can hunt for monsters to kill for points. In the forest area, he finds a monster approximately every two minutes. In the swamp area, he finds a monster every four minutes. Overall, the forest is a richer hunting ground, but the longer Lothar spends in the forest the more likely it is that a new monster has popped up in the swamp. Therefore Lothar has a motive to switch back and forth, allocating his time between the two alternatives. According to the Matching Law, our player will spend two-thirds of his time in the forest and one-third in the swamp.
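
Plugging the forest and swamp numbers into the equation above confirms the two-thirds/one-third split. Here is that arithmetic as a small sketch:

```python
# The Matching Law applied to the Lothar example: one monster per 2 minutes
# in the forest, one per 4 minutes in the swamp.
def matching_shares(*reward_rates: float) -> list[float]:
    """Predicted share of time on each option, proportional to its reward rate."""
    total = sum(reward_rates)
    return [r / total for r in reward_rates]

forest_rate = 1 / 2   # monsters per minute
swamp_rate = 1 / 4

forest_share, swamp_share = matching_shares(forest_rate, swamp_rate)
print(f"Forest: {forest_share:.0%}, Swamp: {swamp_share:.0%}")  # Forest: 67%, Swamp: 33%
```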

The key factor in matching is rate of reward. It's the average amount of reward received in a certain period of time that matters, not the size of an individual reinforcer or the interval between reinforcers. If the swamp has dragons that give Lothar 100 points, while the forest has wyverns that give him only 50 points but appear twice as often as the dragons, the overall rates of reward are the same and both areas are equally desirable.

Now that I've set up a dichotomy between matching and maximizing, let me confuse things a bit. Under many circumstances, matching is maximizing. By allocating activity according to rate, the player can receive the maximal amount of reward. In particular, when faced with multiple variable interval schedules, matching really is the best strategy. What makes matching important to our understanding of players is that matching appears to be the default strategy when faced with an ongoing choice between multiple alternatives. In many cases, experiments show subjects matching even when other strategies would produce higher rates of reward.

Matching (and switching between multiple options in general) also has the helpful property of smoothing out the overall rate of reward. If there are several concurrent sources of reinforcement, a dip in one of them becomes less punishing. As one source of points falls off, a player can smoothly transition to others. A player regularly switching back and forth between options also has a greater chance of noticing changes in one of them.

Overmatching, Undermatching, and Change-Over Delays

At its discovery, matching was hailed as a great leap forward, an example of a relatively complex human behavior described by a mathematical equation, akin to physics equations describing the behavior of elementary particles. However, it was quickly discovered that humans and animals often deviated from the nice straight line described by the Matching Law. In some situations, participants overmatched, giving more weight to the richer option and less to the leaner option than the equation would predict. In others, the participants undermatched, treating the various contingencies as more equal than they actually were.

Neither of these tendencies is especially bad for game design, in small quantities. As long as the players are exploring different options and aren't bored, we don't usually care how much time they spend on each. Extreme undermatching implies the player isn't really paying attention to the merits of each option. Overmatching can mean that the player has chosen an option for reasons other than merit, such as enjoyment of the graphics.

Fortunately for behavioral psychology, these deviations can be predicted and controlled. One important factor in determining how closely participants match is the amount of time and/or effort required to change between options. The farther apart the options are, or the more work is required to switch between them, the more players will tend towards overmatching. For example, imagine a typical first-person shooter in the vein of Quake or Unreal. If switching from the current gun to a different one imposes a delay of 20 seconds during which the player can't fire, they'll switch from one to another less often than they would otherwise. Even if the current gun isn't perfect for the situation, the changeover cost might keep the player from switching. If the delay is long enough, switching can stop entirely, as the costs outweigh any possible benefits.
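
Here is a toy version of that changeover trade-off. The damage numbers and the 20-second lockout are invented; the point is simply that a fixed switching cost only pays off when enough of the fight remains to amortize it.

```python
# Toy changeover-delay calculation with invented damage numbers.
def worth_switching(current_dps: float, better_dps: float,
                    switch_delay_s: float, fight_seconds_left: float) -> bool:
    """Switch only if the damage gained afterwards beats the damage lost
    while locked out during the changeover."""
    if_staying = current_dps * fight_seconds_left
    if_switching = better_dps * max(0.0, fight_seconds_left - switch_delay_s)
    return if_switching > if_staying

print(worth_switching(50, 80, switch_delay_s=20, fight_seconds_left=30))  # False
print(worth_switching(50, 80, switch_delay_s=20, fight_seconds_left=90))  # True
```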

At the other end of the spectrum is the case where changeover is instantaneous. Consider a massively multiplayer game where monsters spawn periodically in various locations. Switching between multiple spawning sites normally takes time, but suppose a player could teleport instantly from one to another with no cost. The best strategy would be to jump continuously back and forth, minimizing the time between the appearance of a monster and the kill. That makes sure the player gets as many points as possible in a given period of time.

Obviously, neither of these extremes is really desirable for game designers. Ideally, we want to be able to adjust the time, difficulty, or expense of changing strategies to strike just the right balance between exploration and exploitation. What that balance is has to be an individual choice; the concept of a change-over delay is just a tool for achieving it.



Risk

Another important factor players consider in choosing between alternatives is risk. Game theory says that players should weigh the options such that they'll maximize overall reward in the long term. For each alternative, they should multiply the possible reward by the odds of receiving that reward and choose the best option.

However, this article is concerned with what players actually do, not what they mathematically should do. Psychologists generally use two terms to describe how subjects react to risky situations: subjects are risk-prone when they prefer the more uncertain alternative and risk-averse when they tend towards safer options. In one experiment, pigeons were offered a choice between two keys to peck. The left key provided 8 pieces of food every time; the right provided 16 pieces half the time and no food the other half. The pigeons consistently preferred the more reliable schedule, and were therefore risk-averse. In a later study, the left key produced 3 pieces of food every time while the right key produced 15 pieces one-third of the time. In this study, the pigeons preferred the riskier alternative.
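
The expected-value arithmetic behind those two pigeon studies is worth spelling out, since it shows why only the second choice favors the risky key in the game-theoretic sense:

```python
# Expected value per peck for the two pigeon experiments described above.
def expected_value(outcomes: list[tuple[float, float]]) -> float:
    """Sum of probability * payoff over all possible outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

# Experiment 1: both keys average 8 pieces of food, yet pigeons preferred the safe key.
safe_1 = expected_value([(1.0, 8)])               # 8.0
risky_1 = expected_value([(0.5, 16), (0.5, 0)])   # 8.0

# Experiment 2: the risky key now pays more on average, and pigeons preferred it.
safe_2 = expected_value([(1.0, 3)])               # 3.0
risky_2 = expected_value([(1/3, 15), (2/3, 0)])   # 5.0

print(safe_1, risky_1, safe_2, risky_2)
```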

So far, this is perfectly in accord with game theory, with subjects taking risks when those risks offer a greater overall payoff. But what about the example mentioned earlier in this article, where subjects preferred a variable interval schedule to a fixed interval schedule? Even when the two options provided equal rates of overall reward, subjects preferred the probabilistic option. The difference lies in the expected outcome of each individual response. In the pigeon experiment we just described, each choice was discrete: a peck, an outcome, and then the subject was presented with a fresh choice. Each choice contained the totality of possible outcomes, so the subjects' behavior reflected the total contingency.

In the fixed-interval / variable-interval experiment, one could respond any number of times on the fixed interval option but would not receive the reward until the interval had elapsed. On the variable interval schedule, every single response had a small chance of being rewarded. Therefore, there was always a reason to try the variable schedule, but only occasionally a reason to respond on the fixed schedule. The subjects were responding to the proximate outcomes, rather than the overall outcomes. This is an example of how subtle changes in the schedule can cause drastic changes in behavior. Whenever we provide players with rewards, we're creating a schedule of reinforcement that will influence them to behave in particular ways. Because we can't avoid these effects, we have to understand them so that they can be made to work for us, rather than against us.

Odysseus' Choice

One factor we haven't addressed yet is when the decisions are made. Many of the choices we make in games don't have immediate effects, only helping or harming the player minutes or hours down the line. A character might have to choose whether to take a potion that gives them extra strength now or save it for later play. A player in a tank combat game might choose a fast, lightly armored tank rather than a slower, better protected one. Not all choices are followed by immediate consequences, and this delay often distorts the player's perception of their options.

Take the situation where a person has two possible options, each with a different level of reward. For example, a person might choose between receiving one piece of candy or two pieces of candy. If the delays are equal, the person would naturally choose the one with the larger reward. However, as the delay to the lesser reward decreases, the relative value of that reward starts to rise. If someone is offered one piece of candy right now compared to two pieces next year, most people would probably choose the more immediate reward.
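
One standard way psychologists model this kind of preference is hyperbolic discounting, where a reward of size A delayed by D is valued at A / (1 + kD). The article doesn't name a specific model, so treat this as an illustrative sketch; the discount rate k below is an invented value.

```python
# Hyperbolic discounting sketch: V = A / (1 + k * D). The discount rate
# k = 0.5 per month is an invented value for illustration.
def discounted_value(amount: float, delay_months: float, k: float = 0.5) -> float:
    return amount / (1 + k * delay_months)

one_now = discounted_value(1, delay_months=0)          # 1.00
two_next_year = discounted_value(2, delay_months=12)   # ~0.29
print(one_now, two_next_year)
# One piece of candy now "feels" worth more than two pieces a year away,
# matching the preference described above.
```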




[Image caption: Because he wanted to hear the Sirens but also make it home alive, Odysseus ordered his crew to tie him to the mast and to plug their ears.]

This kind of decision making is often studied in children, who tend to be more strongly affected by these delays. However, its effects can be seen throughout life, from decisions about saving money to the relative addictive qualities of recreational drugs. A drug which takes effect faster will generally be more addictive than a slower one of equivalent strength.

A practical question arising from this research is under what circumstances people tend to make more accurate decisions. One of the answers psychologists have discovered has a parallel in an ancient Greek myth, that of Odysseus and the Sirens. Odysseus knew his boat was about to sail near the place where the Sirens were singing, and that anyone who heard them would throw themselves into the sea in a vain attempt to reach them. Because he wanted to hear the Sirens but also make it home alive, he ordered his crew to tie him to the mast and to plug their ears with beeswax so they would not hear the call. In this way, his ship sailed safely past, his crew deaf to both the Sirens and his pleas to be untied.

Because he made the decision at a long delay from both outcomes, his choice was a good one. If he'd waited until the Sirens were right there to choose, his decision would have maximized the short-term happiness of listening to their song over the longer-term reward of making it home alive.

More generally, the more distant all of the outcomes are, the more people's choices tend to maximize long-term success. Of course, you may not want players doing deep long-term thinking. It's up to the designer what's best for his or her game, whether to skew the players towards one option or another, towards one strategy or another. Delays between action and outcome are just one of the tools available to influence how players choose.



Conclusion

To explain every choice a real human being makes would take a model as complex as the human mind. Psychology cannot offer us that yet, but it can give us rules of thumb and general patterns of choice that describe a generous portion of what we do when presented with multiple options. Every game offers its players a sequence of choices, each with attendant consequences for choosing wisely or poorly. By understanding some portion of the rules that govern how human beings react to those choices, we can design games that elicit the kinds of choices that make the game a more enjoyable experience for the player.





Designing for Motivation


By David Ghozland

The importance of a game's experience depends on how much general interest it can generate. Creating and keeping the player's interest is how we manage his motivation, and his motivation is the factor that determines whether a player will continue playing after a few minutes, how long he will play, and whether he will finish the game.

As game creators, we have the advantage of knowing that the player is motivated when he starts a game, because he has already taken the first steps: buying the game and launching it on his PC or console (that motivational work has been done by marketing).

This is where we step in, seizing those invaluable moments when the player starts playing. These are the very first minutes, when we must deploy a maximum of ingenuity and design. First contact happens at that critical moment, when everything begins and when everything can end as well.

The player's actions respond to a paramount need which we should never forget or undermine – that is, HAVING FUN (everything else just serves to increase the intensity of the game experience). The essence of our work is to answer this need while maintaining or even increasing his initial motivation throughout the game. If the player loses his motivation, it is because he is not having fun anymore. He will first "switch off," and then stop playing.

Managing the player's motivation means meeting his needs. If the general need is to have fun (the reason for purchasing the game), then the specific needs a game designer must answer are created beforehand by the game design itself and are also accepted by the player.



