Published as: Sugrue, B., & Clark, R. E. (2000). Media selection for training. In S. Tobias & D. Fletcher (Eds.), Training & Retraining: A Handbook for Business, Industry, Government and the Military. New York: Macmillan.




Trainees can control access to information, or the system can decide when information should be presented. However, the extent to which a trainee should have control over access to information during training is unclear. There is much debate in the literature on the topic of learner control (Chung & Reigeluth, 1992; Hannafin & Sullivan, 1995). Research in this area has not produced consistent results regarding the effects of different combinations of system and learner control on learning and performance. The source of this inconsistency lies in the use of study designs which have not permitted the examination of the effects of different levels of learner control over different elements of instruction, or the effects of increased delegation of control on learners with different ability and affective characteristics. Aptitude-treatment interaction research indicates that learners with low ability, high anxiety, and low self-efficacy may not be capable of assuming control of their learning (Lohman, 1986).

The ideal approach to controlling the type, amount and timing of information during training is an approach that is flexible enough to adapt the level of system control depending on trainee performance. To implement such an approach, the system must monitor a trainee's level of performance under the initial/default control-sharing arrangement. If a trainee's performance is errorless, then the balance of control can be shifted more toward the trainee; as soon as a trainee begins to make errors, then the system can assume more control over the type and timing of information presented until the trainee's performance resumes a less errorful pattern.
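To make this control-sharing rule concrete, the sketch below shows one way the balance could be adjusted after each block of practice. It is a minimal illustration of the approach described above, not an implementation from the literature; the threshold and step size are assumptions chosen for the example.

```python
def adjust_control(trainee_share, errors, attempts, error_threshold=0.2, step=0.1):
    """Shift the balance of control between system and trainee.

    trainee_share: fraction of control currently held by the trainee (0.0 to 1.0).
    errors, attempts: counts from the most recent block of practice.
    Returns the updated trainee share; the remainder is system control.
    """
    error_rate = errors / attempts if attempts else 0.0
    if error_rate == 0.0:
        # Errorless performance: shift more control toward the trainee.
        trainee_share = min(1.0, trainee_share + step)
    elif error_rate > error_threshold:
        # Frequent errors: the system takes back control of type and timing
        # until performance resumes a less errorful pattern.
        trainee_share = max(0.0, trainee_share - step)
    return trainee_share

# Example: starting from an even split of control.
share = 0.5
share = adjust_control(share, errors=0, attempts=5)   # errorless -> 0.6
share = adjust_control(share, errors=3, attempts=5)   # errorful  -> 0.5
```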

Instructional methods for practice. The training designer's job is to find out what behaviors and thought processes constitute the ability to perform the targeted tasks (using cognitive task analysis procedures) and to create a set of practice and information resources that will enable novices to acquire knowledge structures and thought processes similar to those of experts. Trainees should ideally learn to perform a task by performing the task, or simplified versions of it, in contexts similar to those in which they will be expected to perform it on the job. Practice activities can vary in the extent to which they mirror the physical context of the real task (contextual authenticity) and in the extent to which they require the trainee to engage in the cognitive processes employed by expert performers (cognitive authenticity). All practice activities should have high cognitive authenticity, but the level of contextual authenticity can vary. An example of a task with low contextual authenticity but high cognitive authenticity would be one where trainees are given written descriptions of a number of customers and are asked to select or suggest a sales technique that might work with each customer. Although the trainees are not meeting the customers in a live situation, they are engaging in thought processes similar to those they would exercise in a more realistic situation.

The most contextually authentic practice is, of course, on the job itself. Even if one conducts training away from the job setting, when a trainee returns to the job, he or she is still in "training". The trainee is now performing tasks with perhaps less external monitoring, and has access to whatever declarative and procedural knowledge he or she remembers, plus on-line or paper-based job aids, or more expert workers on the job. In terms of contextual authenticity, the current trend is to make each practice problem during training as realistic as possible (Norman & Schmidt, 1992; Collins, 1994), the theory being that actions and decisions are linked to the contextual cues that are expected to trigger them on the job. However, cognitive authenticity and opportunity to practice in varied contexts appear to be more critical than replication of superficial conditions (Schmidt & Bjork, 1992).

Practicing a task facilitates the process of compiling task-relevant procedures. Practice methods with high contextual authenticity include problems, cases, or scenarios in simulated environments similar to those encountered on the job. Practice methods with low contextual authenticity usually involve written or oral exercises where scenarios are described, and students make decisions by selecting from lists of options, or describing what they would do next, either orally or in writing.

The amount of practice provided will depend on (a) the novelty and variety of contexts to which the trainee is expected to transfer the skills being learned (the more novel and varied the contexts, the more practice on a variety of tasks), and (b) the speed and accuracy of a trainee's current performance. Training can provide a fixed amount of practice for all trainees, or can vary the amount of practice for different trainees based on the trainees' performance or trainees' requests. The timing of practice and its intermingling with information remain arbitrary decisions. Some theorists advocate providing information before any practice activities (Anderson, 1993); others advocate making practice the first training event a trainee encounters, with information given or accessible during that practice (Barrows, 1989; Schank, 1994). Novices in a domain may need more information prior to the first practice task than do trainees with more expertise.

In the context of practice, as was the case with information, the ideal control condition would be one where the system can share with trainees control over the amount and timing of practice, and can adjust the balance of control depending on trainee performance on practice tasks. Trainees should always have as much control as they indicate (through accurate task performance) they can handle, and a secondary goal of training should be to increase the extent to which they can direct their own learning. However, one can have training programs where the type, amount, and timing of practice are totally controlled by the system, or where the trainee is given total control.

Instructional methods for monitoring. There are two main types of methods through which the external training environment can provide support for monitoring:

  • data collection on aspects of trainees' performance and perceptions, and

  • guidance and tools to help trainees do their own monitoring or monitor each other's performance.

Guidance can be in the form of descriptions or demonstrations of strategies for monitoring one's own or a peer's performance. An example of providing guidance for self-monitoring in description form would be telling trainees to review their own performance on a practice activity (which might have been videotaped so they can play it back), and to count the number of instances of commonly-made errors. Trainees could be given some chart or form to help them keep track of time, errors, problems, or questions that arise during practice. Trainees could also take turns monitoring each other's performance using similar tools.

Alternatively, the system can record aspects of performance such as the following:


  • trainees' actions and selections during practice,

  • products that are generated as a result of task performance,

  • verbalizations during task performance,

  • time on task, and/or

  • amount and type of information accessed before and during task performance.

Anderson's (1993) cognitive tutors record every action taken by a learner as he or she works on exercises related to, for example, LISP programming. These data are used to calculate the probability that a student knows particular rules essential to expertise in the domain. The computer-based problem-solving environments developed by Stevens, McCoy, & Kwak (1991) record the sequence of information items/options a student accesses, and the time spent in each information item. This facilitates detailed analysis of the problem-solving activity of individuals and groups of students. The analysis is displayed visually, revealing, for example, that students who performed poorly on a problem were consistently accessing a particular set of irrelevant information options.
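As a concrete, if simplified, illustration of this kind of monitoring, the sketch below records an audit trail of actions during practice and keeps a running success proportion for each rule. The running proportion stands in for the probabilistic estimates Anderson's tutors actually compute; the class, field names, and example data are our own.

```python
import time
from collections import defaultdict

class PracticeMonitor:
    """Record trainee actions during practice and keep a rough per-rule score."""

    def __init__(self):
        self.audit_trail = []  # every recorded action, in order
        self.rule_stats = defaultdict(lambda: {"correct": 0, "attempts": 0})

    def record(self, trainee_id, action, rule, correct):
        # Append to the audit trail and update the per-rule tallies.
        self.audit_trail.append({
            "trainee": trainee_id,
            "action": action,
            "rule": rule,
            "correct": correct,
            "timestamp": time.time(),
        })
        stats = self.rule_stats[rule]
        stats["attempts"] += 1
        stats["correct"] += int(correct)

    def mastery(self, rule):
        # Crude estimate: proportion of correct applications of the rule so far.
        stats = self.rule_stats[rule]
        if stats["attempts"] == 0:
            return None  # no evidence yet
        return stats["correct"] / stats["attempts"]

monitor = PracticeMonitor()
monitor.record("t01", "defined a function", rule="defun-syntax", correct=True)
monitor.record("t01", "called the function", rule="argument-order", correct=False)
print(monitor.mastery("argument-order"))  # -> 0.0
```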

Since poor performance can stem from a lack of effort as well as from a lack of relevant declarative and procedural knowledge related to the task, the system can also monitor trainees' perceptions of the value and demands of the task, and relate these to time on task. To monitor trainees' perceptions of the task one can ask direct questions and record trainees' oral, written or selected responses to the questions. If a trainee performs poorly, spends less time on the practice task than most other trainees, and also indicates that the task has little value or is either too easy or too difficult, then the system can target intervention at the value or demands of the task, rather than at task-related declarative and procedural knowledge.
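The targeting rule just described can be stated compactly, as in the sketch below. The thresholds and scales are illustrative assumptions (scores and perceived value on 0 to 1, perceived difficulty on 1 = far too easy to 5 = far too hard); the text does not prescribe exact values.

```python
def choose_intervention(score, time_on_task, median_time,
                        perceived_value, perceived_difficulty):
    """Decide whether to target perceptions or task knowledge first."""
    low_effort = time_on_task < median_time
    maladaptive_perception = (perceived_value < 0.5
                              or perceived_difficulty <= 1
                              or perceived_difficulty >= 5)
    if score < 0.7 and low_effort and maladaptive_perception:
        # Poor performance plus low effort plus maladaptive perceptions:
        # address the value or demands of the task first.
        return "target perceptions of task value and difficulty"
    if score < 0.7:
        # Otherwise treat poor performance as a knowledge gap.
        return "target declarative/procedural knowledge (information, practice)"
    return "no intervention needed"

print(choose_intervention(score=0.4, time_on_task=6, median_time=15,
                          perceived_value=0.2, perceived_difficulty=1))
```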

The amount of monitoring (either data collection or guidance for self-monitoring) can be fixed (anywhere from low to high) or can vary. One can start with a high degree of monitoring support and gradually reduce it, or one can start with a low amount of monitoring and increase it if a trainee is making many errors during practice, so that one has enough data to identify the source(s) of those errors. One can choose to monitor every practice activity or to record performance and perceptions on a sample of tasks. For example, one could allow trainees to attempt the first practice activity unmonitored, and then monitor their performance and perceptions during a second similar task. The timing of monitoring can be fixed or flexible. The system can control all aspects of monitoring, or trainees can be given control over when and what is monitored by the system or how much guidance they want for self-monitoring. Control can also be shared with the trainee, the amount of system control increasing or decreasing depending on the accuracy of trainee performance on practice activities.

Monitoring is treated here as a distinct category of support, separate from the analysis of the data recorded, because (a) the media used to record data on performance and perceptions can be different from the media used to analyze the data or to adapt the level of support for other processes based on that analysis; and (b) external support for learning could stop at this point. Data on performance might simply be reflected back to the trainee, who would then have complete responsibility for interpreting the data and correcting errors in performance. According to Collins and Brown (1988), by reflecting back to trainees the process by which they carried out a task, and also providing a model of accurate task performance, trainees can themselves discover elements that need improving. In addition, trainees learn to monitor and diagnose sources of errors in their own performance.



Instructional methods for diagnosis. The external training environment can provide two types of support for error diagnosis. The system can analyze data gathered during monitoring, or it can offer guidance for trainees to do their own (or each other's) analysis and diagnosis of sources of errors. Providing guidance for self-diagnosis and peer-diagnosis might involve giving trainees checklists that describe the most likely causes of particular errors in performance. Another example of guidance for self-diagnosis would be to replay the trainee's performance alongside an expert's performance on the same task, and have the trainee note differences between his or her own and the expert's performance.

Performance data can be analyzed in conjunction with data on trainees' interaction with information resources and data on trainees' perceptions. A range of quantitative and qualitative/logical analysis procedures can be applied to the data that have been gathered on trainee performance and perceptions. The goal is to link patterns of errors in performance to gaps in goal interpretation or in declarative and procedural knowledge, so that the trainee can be directed to the goal elaboration, information, or practice most likely to help him or her correct the error. For example, if a trainee practicing a technique for conflict resolution consistently omits the final step, in which the persons in conflict agree on their responsibilities as part of an action plan to solve the problem that is causing the conflict, then we have a clear indication that this trainee needs additional information on the final part of the technique, and perhaps also a separate practice activity that focuses on this step of the procedure. If a trainee consistently misclassifies examples of one of ten components of an engine, then the trainee needs to review information on that component. If a trainee's pattern of errors seems random, then it may indicate that the trainee was generally not investing effort and needs more elaboration on the value of the goal and the demands of the task. Analysis of perception data can confirm this hypothesis. Analysis of perceptions is important because, if a trainee's perception of the value of a task is low, or if the perception of difficulty is too high or too low, then the first line of intervention to improve performance should target these perceptions rather than particular pieces of declarative and procedural knowledge.
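One simple way to operationalize this linkage is a lookup from diagnosed error patterns to prescriptions, as sketched below using the examples from the text. The pattern labels and prescription wording are invented for illustration; a real system would derive them from cognitive task analysis.

```python
# Hypothetical mapping from diagnosed error patterns to prescriptions.
PRESCRIPTIONS = {
    "omits-final-action-plan-step": [
        "review information on agreeing responsibilities in the action plan",
        "complete a practice activity focused on the final step only",
    ],
    "misclassifies-engine-component": [
        "review information on the misclassified component",
    ],
    "random-error-pattern": [
        "elaborate the value of the goal and the demands of the task",
        "check perception data to confirm low effort",
    ],
}

def prescribe(diagnosed_pattern):
    # Fall back to more monitoring when no pattern has been identified.
    return PRESCRIPTIONS.get(diagnosed_pattern,
                             ["collect more performance data before adapting"])

print(prescribe("misclassifies-engine-component"))
```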

Analysis of the manner in which a trainee interacted with available information before and during a practice activity may confirm an initial diagnosis of gaps in declarative knowledge based on errors in performance. For example, it may turn out that a trainee, whose errors all involved selection of incorrect cables when practicing installation of telecommunication networks, spent very little time viewing information on cables. This would increase one's confidence in the diagnosis of the source of the trainee's poor performance and would make the prescription for adaptation clear; the trainee could be directed to review information on cables, or information on cables could be retrieved and displayed by the system.

In addition to analyzing data for individual trainees, data can be aggregated and patterns of errors and perceptions across groups of trainees can be analyzed to identify common sources of those errors. If many trainees are making similar errors or have maladaptive perceptions of a task, then this may indicate a need for a global change in the program; for example, it may be that more information needs to be provided up-front on a particular aspect of the task. If many trainees question the value of a task, then some up-front discussion of the value of the task might be built into the next version of the program, or the intervention to correct errors might involve a group discussion on the value of the task.

The amount of analysis that should be done on data gathered while trainees are performing practice tasks will depend on the resources available to do such analysis and on the extent of errorful performance; the more errors being made by a group of trainees, the greater the need for analysis. However, no more analysis should be done than is necessary to maintain accurate performance for all trainees. The more detailed and individualized the analysis, the more efficient and effective will be the training, since it will be possible to give appropriate support for correction of errors only to those trainees who need the support.

The timing of analysis/diagnosis is related to the timing of system adaptation or guidance for adaptation. Data on performance and perceptions do not have to be analyzed immediately. The training designer must decide whether to provide immediate individualized adaptation, immediate group adaptation, delayed individualized adaptation, or delayed group adaptation. There are advantages and disadvantages to different timings of analysis and adaptation; therefore, the timing of analysis is an arbitrary decision. Recent research indicates that immediate corrective feedback may improve performance during training, but may have a negative effect on retention and transfer because it reduces trainees' depth of processing during learning (Schmidt & Bjork, 1992). However, to avert the development of enduring misconceptions, or an increase in the perceived difficulty of a task, immediate adaptation may be preferable when a trainee is making consistent and frequent errors.

The system can control how much support should be given for diagnosis, and when it should be given, or it can be left to the trainees to ask for guidance or for analysis of their performance. The system can decide how much analysis of performance data it will do, or how much guidance to give trainees to help them self-diagnose. Alternatively, analysis might only be provided if a trainee requests it. If control is shared between system and trainees, then the system might gradually decrease the amount of diagnosis it does or the amount of guidance it gives for self-diagnosis.

Instructional methods for adaptation. Methods can be provided to adapt instruction for trainees based on diagnosis of the sources of errorful performance, or guidance can be given for trainees to do their own adaptation, or to help each other improve their performance. External support can be provided to adapt a trainee's goal interpretation, and task-relevant declarative knowledge and procedural knowledge. Thus, there are three types of adaptation methods: adaptation of goal elaboration, adaptation of information, and adaptation of the practice components of training. When a trainee has performed poorly on a practice task, new information can be presented, or information the trainee has already seen can be presented again. New simpler practice tasks can be prescribed, or more scaffolding can be provided to complete the current task or a similar task. New information on the goal of the task, its value, and difficulty can also be presented.

We suggested earlier that analysis of errors in performance should identify the information or adjusted practice most likely to lead the trainee to correct errors. We also suggested that the source of poor performance may not be a lack of specific declarative or procedural knowledge, but rather a lack of effort resulting from either a perception of the value of the task that is too low, or a perception of the task as too easy or too difficult. If the data indicate maladaptive perceptions, then the first line of intervention or adaptation of the training would be to target the trainee's perceptions of the task goal so that he or she sees value in it and perceives it as a moderate challenge (Keller, 1983; Lepper et al., 1993). Value can be increased by exposing the trainee to a story from or about a person the trainee identifies with and respects. The story should emphasize the high value the "role model" places on this task. Lepper et al. (1993) and Keller (1983) have described strategies for increasing or decreasing the perceived difficulty of a task; for example, to decrease the perceived difficulty of a task, it can be broken down into smaller pieces or more hints can be given during task performance. To increase the perceived difficulty of a task, stories can be told about how difficult other trainees have found the task and how much effort they spent to complete it.

Depending on trainees' error patterns, extra elaboration on the goal of the task can be provided, or a trainee can be instructed to review particular pieces of information, or to complete a simpler task, or to spend more time on the task because "most people find this task very difficult and need to spend longer working on it than they anticipated". The amount of adaptive intervention for a particular trainee will at first be fairly arbitrary, but as a record of the trainee's performance is compiled, then the amount can be tailored more precisely, increasing and decreasing as the trainee's performance improves or declines. The timing of adaptation can be immediate or delayed. Adaptation can occur as soon as an error is detected, or intervention can be delayed until a consistent pattern of errors has been observed and a likely source identified. The system can control the amount and timing of adaptation or trainees can be given the option of requesting "help" when they think they need it.

Summary of Training Methods Selection

We have separated the strategies or methods that are commonly used in training into six categories based on the cognitive activity they support. We have suggested that methods used to externally support any cognitive component of the learning process can vary in type, timing, amount and in terms of who or what is controlling their deployment. As a first step toward selection of media, designers should select the type, amount, timing, and control of methods that will be used in a particular training program. Figure 6 summarizes the sequence of steps a designer would need to follow when implementing the "method-selection" phase of media selection.

----------------------------

Insert Figure 6 here

----------------------------

If a program has already been designed, the designer can use this model to review the design and identify how each of the six cognitive components of learning is being supported in that program. If the final combination of media has already been selected, this activity will serve as a check to confirm (a) that adequate external support for all cognitive processing has been designed into the training, and (b) that the media selected are capable of delivering the selected type, timing, amount, and control of that external support. Essentially, one is asking the following questions about the training:

How will information about the goal of the training be communicated to the trainees?

How will information related to the training task(s) be delivered or made available?

How will opportunities to practice be provided?

How will trainees' performance be monitored?

How will performance data be analyzed and sources of errors diagnosed?

How will the training be adapted for individual trainees so that errors in performance are corrected?

One can make an assumption prior to design that trainees will be able to do much of the processing for themselves, in which case the amount of external support will be less. However, the training should be designed so that performance under conditions of low support is monitored closely initially, to confirm the hypothesis that the trainees are capable of regulating their own attainment of the goals. If monitoring reveals that there are some trainees who are performing poorly in the low-support environment, then there needs to be a way to increase the level of support for those trainees. Alternatively, one can make a decision to begin with a high level of system support for all cognitive processes and withdraw support as trainees indicate (through accurate performance) that they can assume more of the burden themselves. In either case (low support with monitoring and increase if necessary, or high support with monitoring and decrease as necessary), external monitoring of trainee performance will be an essential component of the training. Figure 7 illustrates the flexible nature of a model of training where control is shared between the system and the trainee, and where the amount, type and timing of support can vary.

----------------------------

Insert Figure 7 here

----------------------------

Once methods have been selected, the next stage of the media selection process involves the selection of media attributes that are necessary to supply the selected type, amount, timing, and control of methods. The next section describes how to select media attributes.

Selecting Media Attributes to Facilitate Delivery of Training Methods

The methods specified for a training program will dictate the media attributes or capabilities necessary to deliver that program. Although it may turn out that the same medium will be employed to deliver a number of components of a training program, we recommend that the designer/selector begin by selecting separately the media attributes/capabilities/requirements for each of the six categories of external support (methods). Based on Levie's (1989) and Kozma's (1991) categorizations of attributes, we think that it is useful to classify media attributes into five types: transmission, storage, recording, processing, and retrieval. Transmission attributes are capabilities for transmitting different types of information such as visual, audio, or textual information, and also for transmitting contexts for practice. Storage attributes are capabilities for storing different types and amounts of information. Recording attributes are capabilities for recording different types of trainee inputs. Processing attributes are capabilities for analyzing different types of trainee inputs and manipulating stored resources. Retrieval attributes are capabilities for accessing stored information.

Different attribute categories are more relevant to particular instructional method categories. Figure 8 summarizes the media attributes most closely related to each of the six methods that can be incorporated into training. We will take each category of methods in turn and indicate what media attributes are most important given particular method decisions. Attributes needed for goal elaboration and information methods are similar since both of those methods involve providing either descriptions or demonstrations; therefore we combine our discussion of selecting media attributes for those two methods.

----------------------------

Insert Figure 8 here

----------------------------



Selecting media attributes for goal elaboration and information methods. The categories of media attributes most closely related to the goal elaboration and information components of training are transmission, storage, and retrieval. We identified two main types of methods for supporting goal interpretation and encoding of declarative knowledge about a training task: provision of descriptions about the goal or the task, and provision of demonstrations that illustrate the goal or how the task is performed. If a training program only provides descriptive information, then any medium that can transmit verbal information (written or audio) will be capable of transmitting goal elaboration and task-relevant information. If demonstrations are part of the training design, then media capable of transmitting visual as well as verbal information will generally be required. However, there are situations where a non-visual demonstration would be possible or appropriate, for example, an audio demonstration of a particular style of playing a musical instrument, or a tactile demonstration of a physical procedure for visually-impaired trainees.

The actual information transmitted will be dictated by the performance goal. For example, if the performance goal is that the trainee should be able to select the most appropriate mix of people for project teams, then descriptive information might include definitions and examples of different types of team members, criteria for classifying people as different types, and rules about what mix of types of people work well on different types of projects. Information in the form of demonstrations related to the same task might include some acted-out sample cases showing how managers formed teams for different projects, with perhaps narrated commentary on the thought processes the managers were going through in order to make the selections.

The amount of information to be presented or available during training will require a medium that can store that amount and type of information. If the amount and timing of information to be delivered are to be variable and controlled by the trainee, then a medium capable of processing trainee searches and retrieving appropriate information will be required. On-line databases, books, and user phone hotlines with a human on the end of the phone delivering relevant information in response to user queries are examples of media capable of processing trainee requests for information. An example of a medium with the combined attributes of storage of a large amount of visual information, variable trainee-controlled retrieval of that information, and transmission of visual information would be interactive video. Examples of media capable of storing a large amount of verbal information, processing trainee requests, and applying rules for retrieval based on student responses would be computers or human trainers. Printed materials with indexes to the information in them could store a large amount of both verbal and static visual information, allow trainee-controlled retrieval, and even embody simple rules to direct the trainee to particular parts of the materials depending on their performance (self-monitored) on practice tasks. The user manuals that accompany computer software are an example of such printed materials.

Selecting media attributes for practice methods. The category of media attributes most relevant to the practice component of training is transmission. The medium selected for this component of training must be able to transmit the level of authenticity of context prescribed in the training design. The more contextually authentic the practice tasks, the more they will require media capable of replicating on-the-job conditions. Thinking of media as environments capable of providing contexts for task practice that are more or less authentic requires a broad notion of what constitutes a medium. The real job environment becomes a medium. A room in which trainees can role-play interpersonal situations becomes a medium. A flight simulator becomes a medium. A kit of materials for building a kitchen cabinet and a room that will serve as the simulated kitchen become a medium/environment for authentic practice. The entire environment for practice should be considered a medium, and an important feature of that environment is its contextual authenticity.

Regardless of the level of contextual authenticity of practice, the amount, timing, and control of practice activities will indicate additional media attributes. As was the case with goal elaboration and information, if the amount and timing of practice to be delivered are to be variable and controlled by the trainee, then a medium capable of processing trainee requests for particular practice activities and retrieving those practice tasks will be required.



Selecting media attributes for monitoring methods. The category of attributes most relevant to facilitating the monitoring function of training is the recording attribute. If trainees are expected to do their own monitoring, with external guidance for self-monitoring provided, then any medium capable of storing and transmitting descriptions or demonstrations of self- or peer-monitoring will do. A wide range of media can be used to provide such guidance: for example, print, video, a human trainer, or a computer. If tools for self-monitoring or peer-monitoring are part of the training design (for example, a chart where a trainee can keep track of how many practice tasks he or she has completed, or how many instances of particular errors he or she has made), then media capable of storing and transmitting those tools will also be required.

If the system is to monitor performance, then media capable of recording whatever types and amounts of data were specified in the design will be required. For example, if the design specifies that all trainee actions will be recorded while trainees are working on practice activities, then media capable of recording trainee actions will be required. A video camera could be used or a human trainer might observe and make notes, or a computer might record an "audit trail" as a trainee performs a task. If the products of trainee practice are to be monitored, then a wider range of media can be used. Perception data can be recorded by any combination of media capable of posing questions to trainees and recording their responses. For example, a human trainer might ask trainees to write down some reasons why the task they are learning to perform is important, or to indicate their perceived difficulty of particular tasks on a scale from one to five (either on paper or on a computer screen).



Selecting media attributes for diagnosis methods. The category of attributes most relevant to delivery of diagnosis methods is the processing attribute. As was the case with monitoring, if trainees are to be guided in their own efforts to diagnose sources of error, then any medium capable of storing and transmitting advice (descriptions or demonstrations of how to do self- or peer-diagnosis) may be used. If, in addition to advice, tools to aid in self-diagnosis are specified in the design, then media capable of delivering those tools will be required. If the training design specifies that the system will go beyond guidance to actual analysis of trainees' performance data, then media capable of processing and analyzing those data will be required. Currently, only two media are capable of data processing/analysis, either immediate or delayed: humans and computers. Thus, any training program that plans to be diagnostic and adaptive to trainee needs has to employ computers or humans in the analysis of performance data. Similarly, any training program NOT employing computers or humans, either as the sole delivery medium or as part of a more diverse combination of media, CANNOT provide more than guidance for trainees' own monitoring and diagnostic processes.

Processing of performance data can be either immediate or delayed. An example of delayed processing would be when trainees' work is collected and reviewed some time later by a human trainer. If trainees are given control of delayed analysis, then a variety of media can be used to request such analysis. For example, trainees might use electronic mail or voice mail to request analysis of their work. However, a human or a computer will still be required to perform the analysis.



Selecting media attributes for adaptation methods. The category of attributes most relevant to adaptation is that of retrieval. The design for a training program can specify that trainees be given guidance regarding how to correct their errors, or that the system should intervene and retrieve specific goal elaboration, information, or practice activities for the trainee. If adaptation is to be immediate, then a medium capable of monitoring, analyzing and adapting simultaneously is called for. Thus, the media attributes of recording, processing, and retrieving must all be present. Only computers and humans have this capability. If the design of a training program specifies delayed adaptation, then the range of media capable of delivering adaptation is increased. For example, upon analysis of trainee performance, a set of materials can be retrieved from storage and transmitted to a trainee; these materials should include information and/or practice tasks that are likely to help the particular trainee achieve the goal. In another case, a trainee might simply receive a delayed written critique (perhaps via electronic mail) of his or her performance or a delayed face-to-face consultation with a human trainer during which the trainee is directed to review particular pieces of information and do another practice task.

Trainee control of available adaptation and guidance gives the trainee the option of requesting help, either during practice or some time later. Many computer-based/multimedia training programs have "help" buttons and menus that allow trainees to retrieve information they think they need. If on-demand help, rather than system-generated help, is specified in the training design, then a medium with the ability to retrieve task-relevant information on demand will be needed.



Summary of Media Attributes Selection

We have proposed five categories of media attributes and indicated how different instructional methods are related to these attributes. The media attributes most relevant to goal elaboration and information methods are transmission, storage, and retrieval. The attribute most relevant to practice is transmission. The attribute most relevant to monitoring is recording. The attribute most relevant to diagnosis is processing; and the attribute most relevant to adaptation is retrieval. The amount, timing, and control of methods will dictate the need for other media attributes. Figure 9 lists the steps necessary to match media attributes to the methods specified for a particular training program.

----------------------------

Insert Figure 9 here

----------------------------
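The mapping summarized above can also be expressed as a simple lookup table, as in the sketch below. The structure and function are our own convenience for working through the matching step; only the method-to-attribute pairings come from the text.

```python
# Primary media attributes associated with each category of method.
PRIMARY_ATTRIBUTES = {
    "goal elaboration": {"transmission", "storage", "retrieval"},
    "information":      {"transmission", "storage", "retrieval"},
    "practice":         {"transmission"},
    "monitoring":       {"recording"},
    "diagnosis":        {"processing"},
    "adaptation":       {"retrieval"},
}

def required_attributes(selected_methods):
    """Union of the primary attributes implied by the methods selected for a program."""
    needed = set()
    for method in selected_methods:
        needed |= PRIMARY_ATTRIBUTES[method]
    return needed

# Contains transmission, storage, retrieval, and recording.
print(required_attributes(["information", "practice", "monitoring"]))
```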

Selecting A Final Combination of Media

Once a training designer has identified the media attribute requirements for the training methods selected, then he or she can proceed to the last stage of the media selection process which involves selecting the most economical and accessible mix of media that includes all of the attributes needed to deliver the training program. Earlier we suggested that the selection of training methods influences the extent to which trainees will learn and be motivated to learn, and in some cases, the efficiency (time to learn) of the training. We suggested that the choice of media influences access to training and its cost and efficiency (see Figure 1). When selecting media, as opposed to selecting methods or media attributes, the goal should be to maximize access and efficiency, while minimizing the costs of training.



Required media attributes that restrict media options. There are just a few media attribute requirements that restrict the range of media that can be used to deliver particular components of training. First, a human or computer will be required if a medium capable of processing student actions and retrieving appropriate adaptations to goal elaboration, information, or practice is needed. If the specifications for a training program do not include system diagnosis and intervention to correct errors, or system control of variability in amount or timing of other methods, then the training can be formatted for delivery in virtually any medium or mix of media. If the training will only give guidance and some tools for trainees to do their own diagnosis and adaptation, then a computer or human will not be necessary. If amount and timing of goal elaboration, information, practice, and monitoring are fixed (and controlled by the system), or if trainees are expected to control their own progress through available information and practice activities, then computers or humans will not be necessary. An example of fixed amount and timing of information and self-control of practice would be a training situation where all trainees watch a live one-way video demonstration of a welding technique and then go back to their jobs and practice the technique, using a checklist to monitor their own performance.

The second restriction on media occurs if a medium capable of transmitting a contextually authentic practice environment is required. In that case, either the real job environment, or some "virtually real" environment, or computer-based simulation of the real environment will be needed. The third and final restriction on the choice of media arises if a medium capable of transmission of visual demonstrations is required; in that case, either a human who can perform a live demonstration, or a "visual medium" that can transmit a pre-recorded demonstration will be required.
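These three restrictions amount to a short screening rule that can be applied before cost and access are considered. The sketch below expresses it with invented field names; it is a checklist, not a prescriptive algorithm.

```python
def hard_media_constraints(spec):
    """Return the hard constraints on media implied by a method specification."""
    constraints = []
    if spec.get("system_diagnosis_or_adaptation") or spec.get("system_varies_amount_or_timing"):
        # Processing data and adaptively retrieving resources require a human or a computer.
        constraints.append("must include a human or a computer")
    if spec.get("contextually_authentic_practice"):
        constraints.append("must include the real job environment or a simulation of it")
    if spec.get("visual_demonstrations"):
        constraints.append("must include a live demonstrator or a visual medium")
    return constraints or ["no hard restrictions; choose media by access and cost"]

print(hard_media_constraints({"visual_demonstrations": True}))
```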



Interchangeable media. Any medium or mix of media can provide fixed or optional access to descriptive information about goals and tasks, cognitively authentic practice tasks (or directions for setting up contextually authentic tasks), and guidance and tools for self- or peer-monitoring, diagnosis, and adaptation. All of these methods can be delivered in print, video, audio, or by humans or computers. One might have to be creative about the kinds of practice activities one could deliver/describe, for example, on audiotape, but it would not be impossible to train someone to cook, dance, run a business, or fly an airplane using, for example, only radio, as long as that trainee was able and willing to do the following:

1. translate some of the information given in one medium into a medium that could be consulted later to review information that might help correct errors in performance (for example, recording a live radio broadcast or making written notes during the broadcast),

2. set up the environment necessary to carry out whatever practice tasks are described or demonstrated (for example to practice a cooking technique, the trainee would have to assemble all of the necessary ingredients and utensils),

3. do his or her own monitoring and diagnosis, and seek out the information needed to correct errors, and

4. invest the effort required to master the task.

Schramm (1977) described many instructional programs in operation around the world where single media, such as radio or television, or combinations of media, such as correspondence with home-study kits, were used to teach skills as effectively (for those who completed the courses) as traditional classroom-based programs. However, the number of students who participated seriously and actually completed the "media-extended" programs was generally only a fraction of the total number who enrolled initially. High drop-out rates are a problem in most distance learning programs. We hypothesize that the more support that is given for each of the six processes involved in learning, the more students will complete distance learning programs. Many distance learning programs provide minimal support for monitoring, diagnosis, and adaptation; therefore only students who can perform these processes for themselves are likely to succeed. The most successful distance learning programs have been those that relied heavily on "study groups," in which small groups of trainees meet, listen to or view material, discuss it, and practice (Schramm, 1977). The group provides a mechanism for peer-monitoring, diagnosis, and adaptation.



Cost and ease of access. As the media available for distance learning and on-the-job training become more and more sophisticated, decisions about the "best use" and combination of newer technologies, such as two-way video conferencing, with old-fashioned media, such as print, can be guided by selecting the least expensive and most accessible media that will give the level of external support for cognitive processing selected for the training. If one opts for a newer medium, then one should at least be clear about which functions of training it can and cannot deliver, and use it for the functions that it renders more accessible or less costly. For example, two-way video technology makes access to an expert human tutor easier. The distant human tutor can tune in and monitor trainees located in various places around the world as they practice the same tasks in slightly different job environments, providing immediate diagnosis and adaptation to improve performance. A two-way audio link during a live broadcast might make it easier for students to access relevant information, since they can ask questions of the remote, live trainer.

A computer might make it easier and less expensive to provide a practice environment that simulates real task conditions. A computer might also make it easier to access large amounts of descriptive information and demonstrations related to the tasks being trained. A CD-ROM might make access quicker and easier than access to the same information over local or global computer networks. A CD-ROM or videodisk will make access to segments of video easier than videotape does.

The relative costs of different media combinations will be influenced by the size of the audience for the training and by the extent and efficiency of the development systems and facilities available. If the audience for training is small, then media that require less time-consuming up-front development (for example, human trainers with some print materials) may be preferable to computer-based training. However, if a company has shells or templates for creating computer-based/multimedia training that embody appropriate types and amounts of external support, then computer-based training may always be the least expensive option in that organization. Rather than repeat here procedures for calculating the relative costs of different media for delivering training, we refer the reader to Levin (1983).

Summary of Final Stage of Media Selection

The third stage of media selection boils down to three decisions: whether a computer or human "trainer" will be required for diagnosis and adaptation, whether a visual medium will be required for demonstrations, and whether a medium that can provide contextually authentic practice will be required. Those decisions will be based on the methods and media attributes selected in the previous stages of the media selection process. Once decisions regarding humans, computers, visuals, and practice environments have been made, then almost any medium or mix of media could embody any remaining attributes required. The final selection of media should be based on ease of access for the target audience and on the cost of development and delivery. Figure 10 summarizes the steps and criteria for making final media selections. We will now illustrate how the three-stage process of media selection could be applied to selection of media for a multimedia development project.

----------------------------

Insert Figure 10 here

----------------------------

Applying the Model to Select Media for a Multimedia Training Environment

Multimedia training is training that is delivered via computers, but incorporates almost every other medium, including video, audio, still and animated graphics, and text. Thus, it is possible to find every conceivable combination of media attributes that might be required to deliver a training program in a multimedia system, with the exception of the actual job environment for practice (unless, of course, the tasks being trained happen to be tasks that are performed with a computer).



Assuming that multimedia will be the delivery system for a program to train newly-hired chemical plant workers in plant safety regulations and procedures, the focus turns to more micro-level decisions such as which components of the program should use video, or which should use text. Applying the three-stage media-selection model to this situation, we would first make decisions regarding the type, timing, amount, and control of goal elaboration, information, practice, monitoring, diagnosis, and adaptation methods for this program. Then we would identify the attributes needed to support our method selections, and finally we would select the media components that would be least expensive to develop and deliver, and would permit the easiest access within the multimedia system.

Types of methods. For this training program, we chose descriptive methods to present the goals, a mixture of descriptive and demonstration methods for information, practice methods with low contextual authenticity, collection of product data for monitoring, and guidance methods for diagnosis and adaptation. We assumed that the trainees would be able to do their own diagnosis and adaptation with some guidelines and tools provided by the system. We also assumed that as long as the practice activities engaged cognitive processes similar to those required in real situations, and as long as the trainees were exposed to a representative range of situations, both in demonstrations and in practice scenarios, then transfer of training would occur (Anderson, Reder, & Simon, 1996).

Amount, timing, and control of methods. To verify that the trainees were capable of self-diagnosis and adaptation, we specified a high level of monitoring in the early stages of the program. Those trainees who were not initially succeeding with the level of support provided were to receive additional training in self-regulated learning skills. We decided to provide a fixed amount of practice at fixed times during the program, with optional access to information and goal elaboration during those practice activities. The system was to control the collection of data. Students' final decisions (selected from menus of alternative decisions) in safety-related scenarios and exercises were to be recorded, and the system was to inform the student of the correctness of his or her decisions. No recording of the process by which the student reached a decision was specified (for example, the system was not to monitor what information the trainee consulted while working on a scenario, nor was it to probe the trainee for the reasoning behind a selection). The system was not to analyze a trainee's pattern of responses and was only to give general guidance regarding how to correct errors in performance. For example, trainees could be told at the beginning of the program that when they made an incorrect selection in a practice activity, they should review whatever information they considered relevant before attempting the practice activity again.
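For clarity, the method decisions just described can be captured in a single design record, as sketched below. The field names and shorthand values are our own; they simply restate the choices made above.

```python
# Method decisions for the plant-safety training program (illustrative shorthand).
safety_training_design = {
    "goal_elaboration": {"type": "description", "timing": "fixed", "control": "system"},
    "information": {"type": ["description", "demonstration"],
                    "timing": "optional access during practice", "control": "trainee"},
    "practice": {"contextual_authenticity": "low", "amount": "fixed",
                 "timing": "fixed", "control": "system"},
    "monitoring": {"data": "final decisions only",
                   "intensity": {"early modules": "high", "later modules": "low"},
                   "control": "system"},
    "diagnosis": {"type": "general guidance for self-diagnosis", "control": "trainee"},
    "adaptation": {"type": "guidance for self-adaptation", "control": "trainee"},
}
```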

Selecting media attributes to match methods. Since we opted to simply describe the goal of each safety-related task, we needed a medium or media that could present verbal information (written or oral). For information, we had specified a mixture of descriptive and demonstration methods; therefore, we needed a medium or media that could show visually the safety procedures and their outcomes. Since we wanted to give optional access to information during practice tasks, we needed the system to be able to store information, process trainee queries, and retrieve the appropriate information immediately. For practice, we did not want a high level of contextual authenticity; therefore, we only needed a medium capable of transmitting a description or demonstration of a scenario and a medium capable of allowing the trainee to select from a menu of options what he or she would do or decide regarding safety in the scenario.

For monitoring, we needed the system to record student selections at the end of practice scenarios, and to tell the trainee if the selection was correct or incorrect. We needed the system to also transmit some general advice to students on how to improve their performance, for example, that they should keep track of the errors they made and see if there was a pattern to their errors, or that they should search through the information bank for information that would clarify why a particular decision they made was less than optimal. Such guidance could be delivered verbally, via text or audio. We had specified more intense monitoring of trainees in the first part of the training, and so we would temporarily need the system to record more data on the trainees' actions, and to identify students who need additional training in self-regulation before proceeding with such low-support training.



Selecting media. The only restrictions on our final selection of media were the need for a computer or human to monitor trainee decision-making during practice, and the need for transmission of visual demonstrations. The relative costs of developing, storing, and transmitting visual demonstrations of safety procedures in different media within the multimedia system were calculated, and it was decided that sequences of still photographs, rather than full-motion video or computer-generated animations of procedures, would be the least expensive option. Color, graphics, and text overlays would be used to highlight important elements of a demonstration.

Since we wanted to give optional access to information during practice, we needed a medium with appropriate storage, processing, and retrieval capabilities; the computer had all of the attributes we needed. Furthermore, the cost of repurposing existing paper- and video-based material on safety regulations and procedures for computer storage and retrieval was less than providing all of the print and video materials to each trainee.

Since the training was to be computer-based anyway, we already had a medium with the ability to record student selections. The computer would be programmed to record more data during the first module of the training, and to automatically terminate the program if a trainee was making consistent errors, and not accessing appropriate information before attempting the activity again. A message would appear on the screen telling the trainee that he or she should sign up for some self-regulation/self-directed learning skills training. The use of a multimedia system to deliver training does not rule out the use of a human for some elements of the training. Thus, we could have opted to use human trainers to monitor student progress in the early part of the program. Human trainers could have reviewed the data recorded by the computer and decided which trainees were not capable of learning in such a low-support environment. However, the computerized monitoring was less expensive to deliver and more accessible than human monitors, and so humans were not selected.
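The monitoring and termination rule described above might look something like the sketch below. The threshold of three consecutive errors is an assumption for illustration; the program specification did not fix an exact value.

```python
def check_progress(recent_decisions, accessed_info_between_attempts,
                   max_consecutive_errors=3):
    """Decide whether the trainee may continue in the low-support program.

    recent_decisions: list of booleans (True = correct), most recent last.
    """
    consecutive_errors = 0
    for correct in reversed(recent_decisions):
        if correct:
            break
        consecutive_errors += 1
    if consecutive_errors >= max_consecutive_errors and not accessed_info_between_attempts:
        # Consistent errors without consulting the information bank:
        # stop the program and recommend self-regulation skills training.
        return "terminate: recommend self-regulated learning skills training"
    return "continue"

print(check_progress([True, False, False, False], accessed_info_between_attempts=False))
```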

We had specified a low level of contextual authenticity for practice; therefore, we did not need the real job environment to provide external support for compilation of procedural knowledge. We did not even need the full capabilities of the computer to transmit realistic scenarios; the scenarios and selection options could have been conveyed through print or through a human trainer. However, given that (a) we wanted to record trainees' decisions and give them immediate feedback on their accuracy, and (b) we wanted trainees to access information during practice, we decided to make all of the practice activities computer-based. Video clips depicting practice scenarios were considered, but would have been more expensive to stage, record, edit and store in computer memory than computer-generated graphic images, text, and audio segments representing the same situations. For example, a computer animation could depict a scenario in which there was a spill of a hazardous chemical during the night shift with only one operator on duty. A narrator could describe what the operator did and ask the trainee if the operator had made the correct decision; the trainee would indicate his or her answer by clicking the mouse on a "yes" or "no" button. Alternatively, the trainee could be asked to select (by clicking the mouse on the appropriate box) which of a number of actions the operator should take.
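A practice scenario of this kind reduces to a small content structure, as in the sketch below. The wording, asset name, and feedback messages are invented for illustration.

```python
# One computer-based practice scenario for the safety program (illustrative content).
spill_scenario = {
    "animation": "night_shift_spill.anim",  # hypothetical asset name
    "narration": ("During the night shift, a hazardous chemical spills with only one "
                  "operator on duty. The operator contains the spill and returns to "
                  "the control room without reporting it."),
    "question": "Did the operator make the correct decision?",
    "options": ["yes", "no"],
    "correct": "no",
    "feedback": {
        "no": "Correct: hazardous spills must be reported immediately.",
        "yes": "Incorrect: review the reporting procedure for hazardous spills.",
    },
}
```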

The final media selections for the safety training program did not include video, but did include text, still and animated graphics, and audio, all stored and accessible via the computer. The recording capabilities of the computer were also used to monitor student performance. If the original decision to create multimedia training had not been made, then we would have had more media options, and it might have been less expensive to provide information and practice scenarios on videotape, with print materials for recording student responses and human trainers monitoring trainee performance. The advantage of the multimedia system is that parts of the training (particularly the information bank) could also provide electronic performance support on the job if an employee had access to a computer terminal.

Summary. In the example just described, even though it had been decided a priori that the training would be multimedia, there were still media selections to be made. We were able to go through the three-stage process in order to arrive at a set of media recommendations for each component of the training. We were also in a position to justify each choice by referring to an instructional method decision which, in turn, called for particular attributes, some of which restricted our choices of media. When choices remained, final decisions were made based on ease of access, given the restricted media choices that had already been made, and on the relative costs of development and delivery. This may seem like a laborious process to go through to arrive at what might appear to be fairly obvious and intuitive selections. However, it is a procedure which, like any other procedure, becomes automated with repeated application. At least it has its origin in a cognitive view of learning and instruction, and it is more internally consistent than previous media selection procedures.

Future Scenarios

Training is becoming less and less separated from other aspects of organizational development (Dubois, 1993; Robinson & Robinson, 1995). The role of the training department in an organization is changing from design, development, and delivery of training products to analysis of the environmental conditions and personal competencies that maximize or compromise the performance of humans in the organization. The analysis of conditions and competencies leads more often to prescriptions for the selection of employees and for tools to enhance their performance on the job than to prescriptions for training. Thus, in the future, media selection will occur in the context of designing performance support systems rather than in the context of designing training programs.

If we abandon a traditional view of training, the focus of our media choices shifts to selection of media for monitoring human performance and the conditions in which it occurs, and media for delivering the kinds of resources that support and enhance the development and exercise of human competence. Decisions regarding media for monitoring performance, either for selection or identification of competence on the job, are similar to selection of media for monitoring performance during training. Some mechanism for recording the process and/or the products of task performance will be required.

The kinds of resources normally provided to support human performance on the job include easily accessible information banks and mentoring/coaching. These two types of resources can provide external support for the cognitive processes required to develop expertise. The coach/mentor can serve all the functions of training: goal elaboration, information, assignment of practice tasks (real tasks being treated as practice tasks), monitoring of performance, identification of sources of errors, and suggestions for improving performance. Most online help systems do not provide support for all of the cognitive processes required for performance, although they could do so. A help system could provide a demonstration of task performance, talk the trainee through a practice task, monitor the trainee's performance, diagnose error sources, and prescribe a new task or information search.

Moving from event-based training to embedded performance support does not change the cognitive processes that need to be engaged to progress from novice to expert. Thus, a media selection model that selects media to deliver support for different cognitive processes in the context of training transfers easily to a context in which performance support tools, rather than training materials, are being designed. Designers of performance support systems can consider the following (a sketch of how such decisions might be recorded appears after the list):



  • what type, amount, timing, and control of goal elaboration, information, practice, monitoring, diagnosis, and adaptation should be provided on the job

  • what kinds of transmission, storage, retrieval, recording, and processing capabilities the system will require

  • the most accessible and least expensive options for delivering the type and level of support selected.
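One way to keep the answers to these three questions organized is to record them in a common structure for each support function, from which the required capabilities and the remaining media options can be read off directly. The Python sketch below is hypothetical: the field names and example values are assumptions introduced for illustration, not features of any particular performance support system.

```python
# Hypothetical record of performance-support design decisions for one support
# function; field names and example values are illustrative assumptions.
from dataclasses import dataclass

SUPPORT_FUNCTIONS = ("goal_elaboration", "information", "practice",
                     "monitoring", "diagnosis", "adaptation")

@dataclass
class SupportDecision:
    function: str                # one of SUPPORT_FUNCTIONS
    support_type: str            # e.g. a searchable procedures database
    amount: str                  # e.g. on demand, unlimited
    timing: str                  # e.g. any time during task performance
    control: str                 # "worker", "system", or "shared"
    required_capabilities: set   # transmission/storage/retrieval/recording/processing
    candidate_media: list        # options with those capabilities, ranked by access and cost

decision = SupportDecision(
    function="information",
    support_type="searchable procedures database",
    amount="on demand, unlimited",
    timing="any time during task performance",
    control="worker",
    required_capabilities={"storage", "retrieval"},
    candidate_media=["networked computer", "printed manual"],
)
print(decision.function, "->", decision.candidate_media[0])
```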

If the trend toward more trainee control over learning in a higher-stakes environment (i.e., on the job) continues, then organizations will need to invest in more training or on-the-job support for self-regulation or self-directed learning skills. There is no reason why media selection decisions related to the delivery of programs and tools for self-regulation should differ from the decisions that need to be made for other domains. Learners need self-regulation goals and information about how to, for example, monitor their own performance or correct errors. Learners need opportunities to practice self-regulation skills such as planning, selecting relevant information, and monitoring their own performance. In addition, learners who are just beginning to develop self-regulation skills will need to be monitored, have their weaknesses in self-regulation diagnosed, and have the "training" adapted if necessary.

In the future, media selection decisions will be made in the context of increasingly interconnected and increasingly fast networks of media. The two primary anchors for all delivery systems will be computer networks and humans, with an increasing codependence between the two. If we keep in mind that we need to support multiple cognitive processes with our performance support/training systems, then we will exploit the full range of capabilities of new media systems.


Conclusion

Media selection for training may still be less of a scientific endeavor than we would like. However, rather than following rules with little theoretical or empirical basis, we have suggested that media be selected for their ability to deliver various types and amounts of support for cognitive processing during learning. If many media are capable of delivering one component of a training program, then we advocate choosing the medium that is most accessible to trainees and the least expensive that will do the job. Ideally, one would postpone the final selection of media until one had designed the methods to support whichever of the processes involved in learning are to be supported in a particular program. Then one could list the attributes or capabilities that the delivery media would need in order to implement the training as designed. However, we are not so naive as to think that media selection will not continue to occur at other times and for other reasons in the real world.

In situations where media might be selected even before training is designed, one can at least review the pre-selected media to make sure that they possess all of the attributes that the design requires for implementation. If the initial choice is not capable of delivering some element of the design, then a medium that has the missing attribute can be added. Ultimately, if the design is sound (i.e., provides the right amount of external support for learning), and the selected media can deliver it, then trainees will achieve the performance goals, regardless of how high-tech or expensive the media happen to be.
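The review step described in this paragraph amounts to a simple coverage check: collect the attributes the design requires, subtract the attributes the pre-selected media provide, and add a medium for anything left over. The Python sketch below uses placeholder attribute and media names to illustrate that check.

```python
# Sketch of the review step: do the pre-selected media cover every attribute the
# design requires? Attribute and media names are placeholders for illustration.
def missing_attributes(required: set, selected_media: dict) -> set:
    """Return the required attributes not provided by any pre-selected medium."""
    covered = set().union(*selected_media.values()) if selected_media else set()
    return required - covered

required = {"record_responses", "immediate_feedback", "motion_visuals"}
preselected = {
    "print": {"store_text_and_graphics"},
    "computer": {"record_responses", "immediate_feedback"},
}

gaps = missing_attributes(required, preselected)
if gaps:
    print("Add a medium that supplies:", sorted(gaps))  # e.g. video or animation
```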
References
Ackerman, P.L. (1989). Individual differences and skill acquisition. In P.L. Ackerman, R.J. Sternberg, & R. Glaser (Eds.), Learning and individual differences: Advances in theory and research. New York: Freeman.

Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.

Anderson, J.R. (1993). Rules of the mind. Hillsdale, NJ: Erlbaum.

Anderson, J.R., Corbett, A.T., Fincham, J.M., Hoffman, D., & Pelletier, R. (1992). General principles for intelligent tutoring architecture. In J.W. Regian & V.J. Shute (Eds.), Cognitive approaches to automated instruction (pp. 81-106). Hillsdale, NJ: Erlbaum.

Anderson, J.R., Corbett, A.T., Koedinger, K.R., & Pelletier, R. (1995). Cognitive tutors: Lessons learned. The Journal of the Learning Sciences, 4(2), 167-207.

Anderson, J. R., & Fincham, J. M. (1994). Acquisition of procedural skills from examples. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20(6), 1322-1340.

Anderson, J.R., Reder, L.M., & Simon, H.A. (1996). Situated learning and education. Educational Researcher, 25(4), 5-11

American Telephone and Telegraph Company. (1987). Developing training media. Reading, MA: Addison-Wesley.

Bandura, A. (1993). Perceived self-efficacy in cognitive development and functioning. Educational Psychologist, 28, 117-148.

Bandura, A. (1989). Regulation of cognitive processes through perceived self-efficacy. Developmental Psychology, 25, 729-735.

Barrows, H.S. (1985). How to design a problem-based curriculum for the preclinical years. New York: Springer.

Bloom, B.S. (Ed.). (1956). Taxonomy of educational objectives. Handbook I: Cognitive domain. New York: McKay.

Boekaerts, M. (1987). Situation-specific judgments of a learning task versus overall measures of motivational orientation. In E. De Corte, H. Lodewikjs, R. Parmentier, & P. Span (Eds.), Learning and instruction: European research in an international context (Vol. 1, pp. 169-179). Oxford: John Wiley and Sons.

Brandenburg, D.C., & Binder, C. (1992). Emerging trends in human performance interventions. In H. D. Stolovitch and E. J. Keeps (Eds.). Handbook of human performance technology. New York: Jossey-Bass.

Braby, R. (1973). An evaluation of ten techniques for choosing instructional media. TAEG Report No. 8. Orlando, FL: Training Analysis and Evaluation Group.

Braby, R., Henry, J.M., Parrish, W.F., Jr., & Swope, W.M. (1975). A technique for choosing cost-effective instructional delivery systems (TAEG Report No. 16). Orlando, FL: Training Analysis and Evaluation Group.

Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32-42.

Campbell R., & Monson, D. (1994). Building a goal-based scenario learning environment. Educational Technology, 34, 9-14.

Cantor, J. A. (1988). Research and development into a comprehensive media selection model. Journal of Instructional Psychology, 15(3), 118-131.

Carnoy, M., & Levin, H.M. (1975). Evaluation of educational media: some issues. Instructional Science, 4, 385-406.

Chi, M.T.H., Bassok, M., Lewis, R., Reiman, P., & Glaser, R. (1989). Self-explanations: how students study and use examples in learning to solve problems. Cognitive Science, 13, 145-182.

Chi, M.T.H., Glaser, R., & Farr, M.J. (Eds.). (1988). The nature of expertise. Hillsdale, NJ: Erlbaum.

Chung, J., & Reigeluth, C.M. (1992). Instructional prescriptions for learner control. Educational Technology, 32(10), 14-20.

Clark, R.E. (1982). Antagonism between achievement and enjoyment in ATI studies. Educational Researcher, 17(2), 92-101.

Clark, R.E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53(4), 445-459.

Clark, R.E. (1988). When teaching kills learning: Research on mathemathantics. In H. Mandl, E. De Corte, N. Bennett, H.F. Friedrich (Eds.), Learning and instruction: European research in an international context. (Vol. 2.2, pp. 1-22). Oxford, England: Pergamon.

Clark, R.E. (1990). A cognitive theory of instructional method. Paper presented at the annual meeting of the American Educational Research Association, Boston, MA.

Clark, R.E. (1994). Media will never influence learning. Educational Technology Research and Development, 42(2), 21-29.

Clark, R.E., & Salomon, G. (1986). Media in teaching. In M. Wittrock (Ed.), Handbook of research on teaching, 3rd edition. New York: Macmillan.

Clark, R.E., & Sugrue, B. (1989). Research on instructional media, 1978-1988. In D. Ely (Ed.), Educational media yearbook, 1987-88. Littleton, CO: Libraries Unlimited.

Cognition and Technology Group. (1992). Technology and the design of generative learning environments. In T. M. Duffy & D. H. Jonassen (Eds.), Constructivism and the technology of instruction: A conversation. (pp. 77-89). Hillsdale, NJ: Erlbaum.

Collins, A. (1994). Goal-based scenarios and the problem of situated learning: A commentary on Andersen Consulting’s design of goal-based scenarios. Educational Technology, 34, 30-32.

Collins, A. & Brown, J.S. (1988). The computer as a tool for learning through reflection. In H. Mandl & A. Lesgold (Eds.), Learning issues for intelligent tutoring systems (pp. 1-18). Berlin, Germany: Springer-Verlag.

Collins, D.L., Hernandes, J.M., Ruck, H.W., Vaughn, D.S., Mitchell, J.L., & Rueter, F.H. (1987). Training decisions system: Overview, design, and data requirements. (AFHRL-TP-87-25). Brooks Air Force Base, TX: Air Force Human Resources Laboratory.

Corno, L. & Mandinach, E.B. (1983). The role of cognitive engagement in classroom learning and motivation. Educational Psychologist, 18(2), 88-108.

Dick, W. & Carey, L. (1990). The systematic design of instruction, 3rd edition. Glenview, IL: Scott, Foresman.

Dubois, D. (1993). Competency-based performance improvement: A strategy for organizational change. Amherst, MA: HRD Press.

Dweck, C. S. (1986). Motivational processes affecting learning. American Psychologist, 41, 1040-1048.

Ericsson, K.A., & Smith, J. (Eds.). (1991). Toward a general theory of expertise: Prospects and limits. Cambridge, England: Cambridge University Press.

Ericsson, K. A., & Charness, N. (1994). Expert performance: Its structure and acquisition. American Psychologist, 49(8), 725-747.

Flavell, J.H. (1979). Metacognition and cognitive monitoring: A new area of cognitive developmental inquiry. American Psychologist, 34, 906-911.

Fitts, P.M., & Posner, M.I. (1967). Human performance. Monterey, CA: Brooks/Cole.

Fletcher, J.D. (1990). Effectiveness and cost of interactive videodisc instruction in defense training and education. (IDA Paper P-2372). Alexandria, VA: Institute for Defense Analyses.

Froiland, P. (1993). Who's getting trained. Training, 30(10), 53-60.

Gagne, R. M. (1965). The conditions of learning. New York: Holt, Rinehart & Winston.

Gagne, R.M., & Medsker, K.L. (1996). The conditions of learning: Training applications. Fort Worth, TX: Harcourt Brace College Publishers.

Giardina, M. (1992). Interactivity and intelligent advisory strategies in a multimedia learning environment: Human factors, design issues, and technical considerations. In M. Giardina (Ed.), Interactive multimedia learning environments: Human factors and technical considerations on design issues (pp. 48-66). Berlin, Germany: Springer-Verlag.

Glaser, R. (1992). Expert knowledge and processes of thinking. In D. F. Halpern (Ed.), Enhancing thinking skills in the sciences and mathematics (pp. 63-75). Hillsdale, NJ: Lawrence Erlbaum Associates.

Hannafin, M.J. (1992). Emerging technologies, ISD, and learning environments: Critical perspectives. Educational Technology Research and Development, 40(1), 49-63.

Hannafin, R.D., & Sullivan, H. (1995). Learner control in full and lean CAI programs. Educational Technology Research and Development, 43(1), 19-30.

Head, G.E., & Buchanan, C.C. (1981). Cost/benefit analysis of training: A foundation for change. NSPI Journal, 20(9), 25-27.

Heidt, E. U. (1975). In search of a media taxonomy: Problems of theory and practice. British Journal of Educational Technology, 6(1), 4-23.

Heidt, E.U. (1977). Media and learner operations: The problem of a media taxonomy revisited. British Journal of Educational Technology, 8(1), 11-26.

Heidt, E. U. (1989). Media selection. In M. Eraut (Ed.), The international encyclopedia of educational technology (pp. 393-398). New York: Pergamon.

Industry Report. (1994). 1994 industry report. Training, 31(10), 29-62.

Jonassen, D.H. (1992). Designing hypertext for learning. In E. Scanlon & T. O'Shea (Eds.), New directions in educational technology (pp. 123-130). Berlin, Germany: Springer-Verlag.

Jonassen, D.H., & Wang, S. (1993). Acquiring structural knowledge from semantically structured hypertext. Journal of Computer-Based Instruction, 20(1), 1-8.

Keller, J.M. (1983). Motivational design of instruction. In C.M. Reigeluth (Ed.), Instructional design theories and models. Hillsdale, NJ: Erlbaum.

Kemp, J.E., Morrison, G.R., & Ross, S.M. (1994). Designing effective instruction. New York: Macmillan.

Kozma, R. (1991). Learning with media. Review of Educational Research, 61(2), 179-211.

Kozma, R.B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42(2), 7-19.

Kyllonen, P.C. & Shute, V.J. (1989). A taxonomy of learning skills. In P.L. Ackerman, R.J. Sternberg, & R. Glaser (Eds.), Learning and individual differences: Advances in theory and research. New York: Freeman.

Langer, E. (1994). The illusion of calculated decisions. In R.C. Schank, & E. Langer (Eds.), Beliefs, Reasoning, and Decision Making: Psycho-Logic in Honor of Bob Abelson. Hillsdale, NJ: Erlbaum.

Lepper, M. R., Woolverton, M., Mumme, D. L., & Gurtner, J. (1993). Motivational techniques of expert human tutors: Lessons for the design of computer-based tutors. In S. P. Lajoie & S. J. Derry (Eds.), Computers as cognitive tools (pp. 1-11). Hillsdale, NJ: Erlbaum.

Levie, W.H. (1989). Media attributes. In M. Eraut (Ed.), The international encyclopedia of educational technology (pp. 398-401). New York: Pergamon.

Levin, H.M. (1983). Cost-effectiveness: A primer. Beverly Hills, CA: Sage.

Lohman, D.F. (1986). Predicting mathemathantic effects in the teaching of higher-order thinking skills. Educational Psychologist, 21(3), 191-208.

Main, R.E., & Paulson, D. (1988). Guidelines for the development of military training decision aids. (NPRDC Technical Report 88-16). San Diego, CA: Navy Personnel Research and Development Center.

Mayer, R.E. (1980). Elaboration techniques that increase the meaningfulness of technical text: An experimental test of the learning strategy hypothesis. Journal of Educational Psychology, 72(6), 770-784.

Mayer, R.E., & Sims, V.K. (1994). For whom is a picture worth a thousand words? Extensions of a dual-coding theory of multimedia learning. Journal of Educational Psychology, 86, 389-401.

Merrill, M.D., Tennyson, R.D., & Posey, L.O. (1992) Teaching concepts: An instructional design guide. Englewood Cliffs, NJ: Educational Technology.

Montague, W. E. (1988). Promoting cognitive processing and learning by designing the learning environment. In D. H. Jonassen (Ed.), Instructional designs for microcomputer courseware (pp. 125-149). Hillsdale, NJ: Erlbaum.

McCombs, B. L. (1988). Motivational skills training: Combining metacognitive, cognitive, and affective learning strategies. In C. E. Weinstein, E. T. Goetz, & P. A. Alexander (Eds.), Learning and study strategies: Issues in assessment, instruction, and evaluation (pp. 141-169). San Diego, CA: Academic Press.

Mousavi, S.Y., Low, R., & Sweller, J. (1995). Reducing cognitive load by mixing auditory and visual presentation modes. Journal of Educational Psychology, 87(2), 319-334.

Norman, G. R., & Schmidt, H. G. (1992). The psychological basis of problem-based learning: A review of the evidence. Academic Medicine, 67(9), 557-565.

Nowakowski, A. (1994). Reengineering education at Andersen Consulting. Educational Technology, 34, 30-32.

Park, I., & Hannafin, M.J. (1993). Empirically-based guidelines for the design of interactive multimedia. Educational Technology Research and Development, 41(3), 63-85.

Plass, J.L., Chun, D.M., Mayer, R.E., & Leutner, D. (1996). Supporting visual and verbal learning preferences in a second language multimedia learning environment. Manuscript submitted for publication.

Pintrich, P. R., & De Groot, E. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82, 33-40.

Regian, J.W., & Schneider, W. (1990). Assessment procedures for predicting and optimizing skill acquisition. In N. Frederiksen, R. Glaser, A. Lesgold, & M. Shafto (Eds.), Diagnostic monitoring of skill and knowledge acquisition (pp. 297-323). Hillsdale, NJ: Erlbaum.

Reiser, R. & Gagne, R. (1982). Characteristics of media selection models. Review of Educational Research, 52(4), 499-512.

Reiser, R.A. & Gagne, R.M. (1983). Selecting media for instruction. Englewood Cliffs, NJ: Educational Technology.

Reynolds, A. & Anderson, R.H. (1992). Selecting and developing media for instruction, 3rd edition. New York: Van Nostrand Reinhold.

Robinson, D.G, & Robinson, J.C. (1995). Performance consulting: Moving beyond training. San Francisco, CA: Berrett-Koehler.

Romiszowski, A.J. (1970). Classifications, algorithms and checklists as aids to the selection of instructional methods and media. In A.C. Bajpai, & J. Leedham (Eds.), Aspects of educational technology, Vol 4. London: Pitman.

Romiszowski, A.J. (1981). Designing instructional systems: Decision making in course planning and curriculum design. London: Kogan Page.

Romiszowski, A.J. (1988). The selection and use of instructional media, 2nd edition. London: Kogan Page.

Rumelhart, D.E., & Norman, D. A. (1981). Analogical processes in learning. In J.R. Anderson (Ed.), Cognitive skills and their acquisition. Hillsdale, NJ: Erlbaum.

Rumelhart, D.E. (1980). Schemata: The building blocks of cognition. In R.J. Spiro, B.C. Bruce, & W.F. Brewer (Eds.), Theoretical issues in reading comprehension (pp. 33-58). Hillsdale, NJ: Erlbaum.

Salomon, G. (1979). Interaction of media, cognition and learning. San Francisco, CA: Jossey Bass.

Salomon, G. (1983). The differential investment of mental effort in learning from different sources. Educational Psychologist, 18(1), 42-50.

Salomon, G. (1984). Television is "easy" and print is "tough": The differential investment of Mental effort in learning as a function of perceptions and attributions. Journal of Educational Psychology, 76(4), 647-658.

Salomon, G., Perkins, D., & Globerson, T. (1991) Partners in cognition: Extending human intelligence with intelligent technologies. Educational Researcher, 20(3), 2-9.

Schank, R.C., & Jona, M.Y. (1991). Empowering the student: New perspectives on the design of teaching systems. The Journal of the Learning Sciences, 1(1), 7-35.

Schank, R.C. (1994). Goal-based scenarios. In R.C. Shank, & E. Langer (Eds.), Beliefs, Reasoning, and Decision Making: Psycho-Logic in Honor of Bob Abelson. Hillsdale, NJ: Erlbaum.

Shiffrin, R.M., & Schneider, W. (1977). Controlled and automatic human information processing: I. Detection, search and attention. Psychological Review, 84, 1-66.

Schmidt, R. A., & Bjork, R. A. (1992). New conceptualizations of practice: Common principles in three paradigms suggest new concepts for training. Psychological Science, 3(4), 207-217.

Schramm, W. (1977). Big media, little media. Beverly Hills, CA: Sage.

Schunk, D. H. (1984). Self-efficacy perspective on achievement behavior. Educational Psychologist, 19, 48-58.

Seels, B. B. & Richey, R. C. (1994). Instructional technology: The definition and domains of the field. Washington, DC: Association for Educational Communications and Technology.

Shute, V.J. (1993). A comparison of learning environments: All that glitters... In S.P. Lajoie & S.J. Derry (Eds.), Computers as cognitive tools (pp. 1-11). Hillsdale, NJ: Erlbaum.

Shute, V.J. (1992). Aptitude-treatment interactions and cognitive skill diagnosis. In J.W. Regian & V.J. Shute (Eds.), Cognitive approaches to automated instruction. Hillsdale, NJ: Erlbaum.

Smith, P.L., & Ragan, T.J. (1993). Instructional design. New York: Macmillan.

Snow, R.E. (1994). Abilities in academic tasks. In R.J. Sternberg & R.K. Wagner (Eds.), Mind in context: Interactionist perspectives on human intelligence. New York: Cambridge University Press.

Spiro, R. J., Feltovich, P. J., Jacobson, M. J., & Coulson, R. L. (1992). Cognitive flexibility, constructivism, and hypertext: Random access instruction in ill-structured domain. In T. M. Duffy & D. H. Jonassen (Eds.), Constructivism and the technology of instruction: A conversation (pp. 57-75). Hillsdale, NJ: Lawrence Erlbaum Associates.

Spiro, R. J., & Jehng, J. (1990). Cognitive flexibility and hypertext: Theory and technology for the nonlinear and multidimensional traversal of complex subject matter. In D. Nix & R. Spiro (Eds.), Cognition, education, and multimedia: Exploring ideas in high technology (pp. 163-205). Hillsdale, NJ: Erlbaum.

Stevens, R. H., McCoy, J. M., & Kwak, A. R. (1991). Solving the problem of how medical students solve problems. M. D. Computing, 8(1), 13-20.

Sweller, J., & Cooper, G.A. (1985). The use of worked examples as a substitute for problem solving in learning algebra. Cognition and Instruction, 2(1), 59-89.

Tobias, S. (1989). Another look at research on the adaptation of instruction to student characteristics. Educational Psychologist, 24(3), 213-227.

Vygotsky, L.S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.

Wetzel, C.D., Radtke, P.H., & Stern, H.W. (1994). Instructional effectiveness of video media. Hillsdale, NJ: Erlbaum.

Weiner, B. (1986). An attribution theory of motivation and emotion. New York: Springer-Verlag.

White, B. Y. (1992). A microworld-based approach to science education. In E. Scanlon & T. O’Shea (Eds.), New directions in educational technology (pp. 227-242). Berlin, Germany: Springer-Verlag.
Footnote

1 The third edition of Anderson's (1976) book titled "Selecting and Developing Media for Instruction" appeared in 1992 (Reynolds & Anderson, 1992), but it does not count as a new model.

Figure 1. Aspects of training influenced by media and methods.

                Access    Cost (Development     Efficiency          Learning and
                          and Delivery)         (Time to Learn)     Motivation
    Media          X             X                    X
    Methods                                           X                  X