Published as: Sugrue, B., & Clark, R. E. (2000). Media selection for training. In S. Tobias & D. Fletcher (Eds.), Training & retraining: A handbook for business, industry, government and the military. New York: Macmillan.




In addition to matching media to types of trainee response during practice, Cantor matched media to the form, content, and timing of feedback. Romiszowski did not offer specific guidelines for matching media to feedback variables. Cantor's finer distinctions among feedback variables are confusing; for example, the difference between diagnostic data and correct response data is not clear. Neither is it clear why "branching printed material" would permit feedback based on "correct response data", but would not permit feedback based on "diagnostic data". Could not correct (and by implication, incorrect) response data serve as diagnostic data?

Romiszowski's and Cantor's models are useful in that they represent a first step toward a new generation of media selection models that will focus on matching media to components of training. What they lack is a theory of cognitive processes, instructional methods, and media attributes that could guide and justify links between media and functions of training.



Selecting media based on cost and other practical considerations. In existing media selection models, the final stage occurs once one has narrowed the options based on the requirements of the tasks, trainees, and training events for a particular training program. A final selection is made from the short-list based on the practicalities of development, delivery, and maintenance -- practicalities such as time, resources, and budget. Existing media selection models vary in the level of guidance they give regarding compilation and analysis of information relating to practical factors. Some models (e.g., Reiser & Gagne, 1983; Romiszowski, 1988) merely list questions or factors that should guide final selection. Other models (e.g., Braby, Henry, Parrish, & Swope, 1975; Collins, Hernandez, Ruck, Vaughn, Mitchell, & Rueter, 1987) provide worksheets and computerized tools to aid data compilation and analysis of the relative cost of media mixes on the short-list of candidates. Since monetary value can be attached to development and delivery time as well as resources, the final selection of media boils down to an analysis of the relative cost of each alternative media mix.

Any analysis of practical factors is based on projected rather than actual costs since the analysis occurs before development and implementation. The reliability of estimates of time, resources and costs will increase as an organization gains experience with development and delivery via different media configurations. However, when considering a newer set of media, it is difficult to make reliable estimates, and to make valid comparisons with an older set of media for which mechanisms and facilities are already in place for development and delivery (Carnoy & Levin, 1975). It is also difficult to provide context-independent guidelines on the relative time and costs of development and delivery of different media mixes. Variables such as the content and design of a particular program, size and location of audience, life span of the program, suitability of existing facilities, experience of personnel, and salaries of developers and trainees will influence the cost for a particular program. The cost of producing and delivering the same training program for the same set of media will vary from company to company. The cost of producing and delivering two different programs with the same set of media within the same company may also vary.

Given that costs must be estimated on a case-by-case basis, the most useful approach is Levin's (1983) ingredients method. Ingredients are resources required to develop and deliver a particular training program to a designated trainee audience. This method involves identifying all ingredients in the development and delivery of the training, for each alternative media configuration. Then a cost or value is attached to each ingredient (even those that appear to be "free"). Finally, the costs are summed for each media mix. Levin advocates categorizing ingredients as personnel, facilities, equipment, materials and supplies, and all other resources. The "all other resources" category can include maintenance of hardware, updating of materials, and training of personnel. Equipment costs, Levin suggests, should be amortized over the equipment's projected lifetime. Lost opportunity costs are also included as ingredients; for example, a value should be placed on classroom space since it could have alternative uses were it not being used for training. In addition, the cost of trainees' salaries for the training period should be included as an opportunity cost since trainees are not working during this period.
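To make the arithmetic concrete, the sketch below works through the ingredients method for a single hypothetical media configuration: every ingredient is listed under one of Levin's categories, equipment is amortized over its projected lifetime, opportunity costs are valued, and the totals are summed. All ingredient names and figures are invented for illustration; they are not drawn from Levin (1983).

```python
# Minimal sketch of Levin's (1983) ingredients method: every resource used to
# develop and deliver the training is listed, valued, and summed.
# All ingredient names and figures below are hypothetical.

def annualized(purchase_cost: float, lifetime_years: float) -> float:
    """Spread an equipment purchase evenly over its projected lifetime."""
    return purchase_cost / lifetime_years

# Ingredients for one candidate media configuration, grouped in Levin's categories.
ingredients = {
    "personnel":  [("course developer salary", 30_000), ("instructor time", 12_000)],
    "facilities": [("classroom space (opportunity cost)", 5_000)],
    "equipment":  [("workstations", annualized(purchase_cost=40_000, lifetime_years=5))],
    "materials":  [("printed manuals", 2_500)],
    "other":      [("hardware maintenance", 1_500),
                   ("trainee salaries during training (opportunity cost)", 25_000)],
}

category_totals = {cat: sum(cost for _, cost in items) for cat, items in ingredients.items()}
total_cost = sum(category_totals.values())

for cat, subtotal in category_totals.items():
    print(f"{cat:12s} {subtotal:>10,.0f}")
print(f"{'TOTAL':12s} {total_cost:>10,.0f}")
```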

Head and Buchanan (1981) suggested a method similar to Levin's ingredients method, whereby student costs, instructor costs, facilities costs, administrative costs, and development costs are calculated separately, then totaled to obtain an estimate of the overall cost of a particular training program. One can arrive at a total cost for the program given a particular number of trainees, or one can generate a cost per trainee, as is done by Hewlett Packard in a computerized package that prompts the user to enter development and delivery resource costs for alternative media (D. Blair, personal communication, April 17, 1995).
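A similar sketch, again with invented figures, shows how category totals in the style of Head and Buchanan can be compared across alternative media configurations and converted to a cost per trainee.

```python
# Hypothetical comparison of two media configurations using the cost categories
# of Head and Buchanan (1981); all figures are invented for illustration.

cost_categories = ("student", "instructor", "facilities", "administrative", "development")

alternatives = {
    "classroom + video": dict(student=60_000, instructor=40_000, facilities=10_000,
                              administrative=5_000, development=15_000),
    "computer-based":    dict(student=45_000, instructor=8_000, facilities=4_000,
                              administrative=5_000, development=70_000),
}

n_trainees = 200  # projected audience size

for name, costs in alternatives.items():
    total = sum(costs[c] for c in cost_categories)
    print(f"{name:18s} total = {total:>9,.0f}   per trainee = {total / n_trainees:>8,.2f}")
```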



Summary of Existing Media Selection Models

Existing media selection models divide media selection into two stages. The first stage is based on the assumption that different training tasks, trainees, and training components require different media. The second stage is based on the assumption that different mixes of media will deliver training with equal effectiveness but with different costs. In the first stage of the process, these models have attempted to simplify and mechanize decisions that require complex chains of reasoning. The outputs of the selection process have been confused, and decisions have been made contingent on unanswerable questions. There are many gaps and inconsistencies in the models. These gaps and inconsistencies stem from the impossibility of making direct links between media and tasks, trainees, and training events. Attempts to match media to tasks with different cognitive demands, or to trainees with different cognitive and affective characteristics, assume that it is possible to link media directly to cognitive consequences.

The only reason one medium might be more suitable than another for a particular training task is because the task requires the presentation of some information or practice activity that calls for particular media attributes. The only reason some media might be more appropriate than other media for some trainees is because those media have attributes that can deliver the particular level of external support those trainees need to engage in the cognitive processes necessary for learning. The only way to justify selection of one particular medium or media mix over another (beyond relative costs and practicalities of implementation) is in terms of the media’s ability to support the kind of cognitive activities deemed necessary to attain the targeted task performance.

The best approach to media selection, in our view, is one that advocates selecting media based on their relative abilities to perform different instructional functions. This approach has been obscured in past models by a focus on matching media to task types and trainees. In the next section, we describe a system for media selection which operationalizes a "training function" approach. Functions of training are defined in terms of the cognitive components of learning they support. We recommend that media be selected based on their ability to deliver the level of external support required to compensate for trainees' inability or unwillingness to engage in the cognitive processes necessary for learning. Focusing on support for cognitive activity during training can not only drive the process of media selection, but can also help clarify what exactly is being selected at various stages of the process.


A Three-Stage Cognitive Approach to Media Selection

Our approach to media selection involves three stages:



  1. selection of methods to support the cognitive processes necessary for trainees to acquire the task performance that is the target of the training;

  2. selection of a set of media attributes that can support the type, amount, timing, and control of methods selected for the training; and

  3. selection of the most economical and convenient set of media that possess all of the required attributes.

This three-stage process is depicted graphically in Figure 3. The first stage of this process is the concern of much theory and research in the field of instructional technology -- theory and research focused on the effects of instructional variables on cognition, motivation and subsequent performance. The second stage requires that categories of media attributes be aligned with characteristics of methods.
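The third stage can be read as a small covering problem: from the candidate media, choose the cheapest combination whose pooled attributes include every attribute the selected methods require. The brute-force sketch below illustrates the logic; the media names, attributes, and costs are hypothetical, and a real selection would also weigh convenience and other practical constraints.

```python
# Sketch of stage 3: choose the cheapest combination of candidate media whose
# pooled attributes cover all attributes required by the selected methods.
# Media, attributes, and costs are hypothetical.
from itertools import combinations

required_attributes = {"motion video", "learner-paced text", "performance tracking"}

candidate_media = {          # medium -> (attributes it provides, estimated cost)
    "print workbook":        ({"learner-paced text"}, 2_000),
    "videotape":             ({"motion video"}, 8_000),
    "computer-based module": ({"learner-paced text", "performance tracking"}, 25_000),
    "interactive video":     ({"motion video", "learner-paced text",
                               "performance tracking"}, 40_000),
}

best_mix, best_cost = None, float("inf")
for size in range(1, len(candidate_media) + 1):
    for mix in combinations(candidate_media, size):
        provided = set().union(*(candidate_media[m][0] for m in mix))
        cost = sum(candidate_media[m][1] for m in mix)
        if required_attributes <= provided and cost < best_cost:
            best_mix, best_cost = mix, cost

print(best_mix, best_cost)   # ('videotape', 'computer-based module') 33000
```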

----------------------------

Insert Figure 3 here

----------------------------

One problem with this approach is that our knowledge of the cognitive processes involved in learning and performance is neither stable nor integrated. Research continues to address both macro and micro-level processes. There are, for example, models of the stages of skill acquisition (Anderson, 1983, 1993; Shiffrin & Schneider, 1977); models of the acquisition and structuring of declarative knowledge in memory (Rumelhart, 1980); theories of multiple learning mechanisms (Rumelhart & Norman, 1981; Kyllonen & Shute, 1989); theories of the components and development of expertise (Chi, Glaser, & Farr, 1988; Ericsson & Smith, 1991); models of the acquisition of procedural knowledge (Anderson, 1993; Schank, 1994); models of situated cognition (Brown et al., 1989); models of metacognitive processes that monitor and control the processes involved in knowledge acquisition and performance (Corno & Mandinach, 1983; Pintrich & De Groot, 1990); models that describe mechanisms that underlie motivation (Bandura, 1989; Dweck, 1986; Weiner, 1986); and models that attempt to integrate a variety of processes in a person-situation interactionist paradigm (Snow, 1994).

Any model of the cognitive processes involved in learning that one might adopt, as a basis for the design of instruction and media selection, will be incomplete. However, any attempt toward a theoretically grounded approach to media selection must adopt some model of cognition. Such a model should at least include the three main categories of processes related to learning and performance: motivation, metacognition, and knowledge acquisition/construction. We will now describe a six-part model of the cognitive components of learning, which we will then relate to the selection of instructional methods, media attributes, and media.



A Six-Part Model of the Cognitive Processes Involved in Learning

To drive the selection of methods, media attributes, and media, we suggest a six-part conceptual division of the cognitive processes involved in learning:

  1. Interpretation of the targeted performance goal

  2. Encoding of task-relevant declarative knowledge and/or retrieval of task-relevant declarative and procedural knowledge

  3. Compilation and execution of new procedural knowledge, that is, production rules relating sequences of actions and decisions to task goals and conditions

  4. Monitoring of performance

  5. Diagnosis of sources of error in performance

  6. Adaptation of goal interpretation, retrieval/encoding of declarative knowledge, or retrieval/compilation of procedural knowledge to improve performance.

This model is based on Glaser's (1992) model of the cognitive components of expertise, Anderson's (1993) theory of learning, and theories of the components of self-regulated learning (Corno & Mandinach, 1983; Flavell, 1979; McCombs, 1988; Pintrich & De Groot, 1990; Salomon, 1984). A trainee can perform all six cognitive processes without external aid, or the external environment can compensate for weaknesses in any or all of the processes. When a trainee is able and willing to control his or her own learning, he or she will first make an interpretation of a given training goal, or will select a goal. The trainee will also make an initial estimation of the value and demands of the task and the amount of effort that will be required to achieve it (based in part on a perception of his or her own abilities in relation to the perceived demands of the task). The trainee's interpretation of the goal will drive selection of declarative knowledge, either retrieved from long-term memory or encoded from some external source. Interpretation of the goal will also trigger retrieval of procedural knowledge already in memory. Procedural knowledge cannot be directly encoded from external sources since, by definition, it must have gone through an internal process that has resulted in automation (Anderson, 1993). A list of the steps required to perform a procedure is NOT procedural knowledge; rather, a list of steps is declarative knowledge about the procedure. Thus, declarative knowledge can come from either external sources or from memory.

Having accessed declarative and/or procedural knowledge, a trainee will attempt to compile a new procedure for the current task by attempting to perform the task. The trainee will monitor his or her first attempt at the task, and analyze that performance to diagnose sources of error. Sources of error could be inaccurate estimation of the value or demands of the task, gaps in declarative knowledge, or gaps in knowledge of prerequisite procedures. The trainee will attempt to correct errors in performance by reinterpreting the goal, consulting/retrieving additional pieces of declarative knowledge, and/or practicing a prerequisite procedure before attempting the original task again. The trainee will keep cycling through this process until he or she is satisfied that the goal has been reached.
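The cycle just described can be summarized schematically. The toy loop below is only a sketch of the control flow (attempt, monitor, diagnose, adapt, repeat); the numbers and update rule are arbitrary and make no claim about actual cognitive mechanisms.

```python
# Toy illustration of the cycle: attempt the task, monitor the result against
# the trainee's interpretation of the goal, diagnose the gap, adapt, and retry.
# Numbers and the update rule are arbitrary; this is a schematic of control
# flow, not a model of real cognitive mechanisms.
import random

random.seed(1)

criterion = 0.9   # the trainee's interpretation of "good enough"
knowledge = 0.3   # task-relevant knowledge available for compiling a procedure

for attempt in range(1, 11):
    performance = min(1.0, knowledge + random.uniform(-0.1, 0.1))  # compile and execute
    satisfied = performance >= criterion                           # monitor
    print(f"attempt {attempt}: performance = {performance:.2f}, satisfied = {satisfied}")
    if satisfied:
        break
    gap = criterion - performance                 # diagnose: only a knowledge gap is modeled here
    knowledge = min(1.0, knowledge + 0.5 * gap)   # adapt: consult information, practice prerequisites
```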



Six Categories of Instructional Methods

At a minimum, the external training environment will need to provide a description/representation of the initial task goal, sources of information (from which declarative knowledge can be encoded), and opportunities, either real or simulated, in which the student can practice the task. Such minimal support assumes that the trainee is capable of monitoring his or her own performance, diagnosing the sources of errors, and correcting those errors unaided. Figure 4 illustrates the six-part model of internal cognitive processes involved in learning and the minimum external resources required to support them (goals, information, and practice opportunities).

----------------------------

Insert Figure 4 here

----------------------------

The training environment can also compensate for weaknesses in trainees' ability to execute the other three cognitive processes involved in learning: monitoring, diagnosis, and adaptation. Thus, a fully-supportive training environment would



  • elaborate on the goal of the task and its demands

  • provide information related to the task

  • provide practice tasks and contexts

  • monitor trainee performance

  • diagnose sources of error in performance

  • adapt goal elaboration, information and practice tasks.

Since we defined an instructional method as an external support for an internal cognitive process, these six types of external support become six categories of instructional methods. We label the six categories of instructional methods as follows: Goal Elaboration, Information, Practice, Monitoring, Diagnosis, and Adaptation. We use these labels to refer to the six instructional methods for the remainder of this chapter.

Selecting Training Methods to Support Cognitive Processes

Each of the six instructional methods can vary in type, amount, timing, and locus or distribution of control. Locus of control is the extent to which a method is controlled by the external environment/system and/or is under the trainee's control. For example, different types and amounts of information about a task can be provided, the timing of presentation of that information can vary, and presentation of that information can be controlled by the system or by the trainee, or control can be shared. Depending on the type of information, the amount of it, the timing of its delivery, and the distribution of control, different media attributes will be required.
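One way to keep track of these decisions during design is to record, for each of the six methods, the type, amount, timing, and control chosen. The minimal data-structure sketch below is our own illustration; the enumeration values paraphrase the options discussed in the text and summarized in Figure 5, and the class and field names are hypothetical.

```python
# Minimal sketch of a record for one instructional-method decision, covering the
# four dimensions discussed in the text. Names and values are illustrative only.
from dataclasses import dataclass
from enum import Enum

class Method(Enum):
    GOAL_ELABORATION = "goal elaboration"
    INFORMATION = "information"
    PRACTICE = "practice"
    MONITORING = "monitoring"
    DIAGNOSIS = "diagnosis"
    ADAPTATION = "adaptation"

class Control(Enum):
    TRAINEE = "trainee"
    SYSTEM = "system"
    SHARED = "shared"

@dataclass
class MethodSpec:
    method: Method
    type: str          # e.g. "description" or "demonstration" for information
    amount: str        # fixed (low to high) or flexible
    timing: str        # fixed or flexible; immediate or delayed for diagnosis/adaptation
    control: Control

# Example: task information delivered by demonstration, with shared control.
spec = MethodSpec(Method.INFORMATION, type="demonstration",
                  amount="flexible", timing="before and during practice",
                  control=Control.SHARED)
print(spec)
```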

Before outlining which characteristics of methods call for particular media attributes, we will first discuss media-independent options for types, amount, timing, and control of each of the six methods already defined. We will suggest some criteria for selecting the type, timing, amount, and control of each method, but only if there is a sound theoretical or empirical basis for such criteria. More often than not, we will conclude that there is no empirical evidence that any of the alternatives differ in their effectiveness or efficiency, making the choice arbitrary. Figure 5 summarizes the main options for the type, timing, amount and control of each of the six methods. For all methods, the amount of support can be fixed in a range from low to high, or can be flexible so that different amounts of support could be prescribed for or selected by different trainees. Timing of support can be fixed or flexible; and support for diagnosis and adaptation can be immediate or delayed. For all methods, control of their deployment can rest solely with the trainee or solely with the system, or control can be shared. We will now describe in more detail the options for types of methods.

----------------------------

Insert Figure 5 here

----------------------------



Instructional methods for goal elaboration. The goals of training are tasks that some group of persons (trainees) should be able to perform but currently cannot perform. There are two types of methods for communicating task goals to trainees: description methods and demonstration methods. Description methods include providing a description of the outcome or product of the task, a description of the process of accomplishing the task, or a description of the criteria used to judge attainment of the goal of the task. Demonstration methods involve demonstrating how the task goal is accomplished or showing the outcome or product of the task. For example, one could tell trainees (i.e., describe in written text or orally) that the goal of a training session is that they will be able to use the style function in the word-processing program Microsoft Word. Alternatively, one could show trainees (i.e., demonstrate) a document in which the style function was used, or one could demonstrate how a style is created and used to produce a piece of text. In addition to describing or demonstrating the goal of the task, the value and demands of the goal can also be highlighted/elaborated.

One can provide a fixed high amount of information about the goal of a task, or a fixed low amount, or the amount can be varied for different trainees. Elaboration of the goal can occur before and/or during training, either when requested by a trainee or when deemed necessary by the system (based on analysis of trainee performance). Control over support for goal interpretation can reside entirely with the system, or can be shared between the system and the trainee.



Instructional methods for information. A variety of methods can be used to support the encoding and retrieval of declarative knowledge relevant to a task. Information about the context of the task and analogies to other tasks can activate prior knowledge that is relevant to the task. Each trainee enters a training situation with a different amount and structure of declarative knowledge stored in memory. Thus it is difficult to predict what information will be needed from the external environment. There are two basic types of information that can be stored and accessed externally by trainees either before or during task practice: descriptions and demonstrations. Descriptions of, for example, procedures, rules, processes, definitions, examples, cases, or solutions can be provided. Demonstrations of procedures, processes, worked examples, or cases can be provided. When information is provided during practice, that information is generally referred to as "help", "feedback" or "hints". When information is provided in a form that can be accessed and used on the job it is often called a "job aid", "electronic performance support system", or an "expert system".

Traditional instructional design models make specific recommendations about the type of information appropriate for particular kinds of tasks. For example, for tasks that involve distinguishing among objects or events that belong to different categories (called concept identification tasks), most instructional designers would recommend the presentation of definitions and examples of each category/concept (Merrill, Tennyson, & Posey, 1992). For tasks involving the performance of a sequence of steps, most instructional designers would recommend provision of a list and demonstration of the steps (Smith & Ragan, 1993). For tasks requiring the selection and novel application of a set of rules and procedures, the common instructional design recommendation would be to present rules and worked solutions to a set of prototype problems in which those rules and procedures apply.

Current constructivist approaches to instruction are less prescriptive regarding the types of information that should be presented during training. Schank (1994) is the most explicit about what types of information should be provided. He advocates that information be given in the form of stories from experts who have previously taken the same action or decision that the trainee has just taken. These experts comment on the potential success of that action or decision, and the reasons why it is or is not a good approach in the context of the current scenario. In addition to story-like responses to trainee actions, Schank also builds into his programs help systems in which a trainee can at any time ask for information on what to do next, how to do it, and why it should be done, which effectively separates information about a task into three types.

The amount of information provided/available during training can be fixed (anywhere from low to high) or can vary based on the individual needs of trainees. It may be difficult to anticipate how much information will be needed; therefore, one may need to have more information available than will be used. It is not clear who needs more information: more expert or more novice trainees. Giving novices too much information may make it difficult for them to select what is most relevant for the task at hand; however, by limiting their access to information, one may discourage them from becoming more self-directed learners. Experts may require less information in order to perform a new task in the domain; however, a training task might be made more challenging for more expert trainees by having them select relevant information from a large body of information. Ultimately, the amount of information required by particular trainees will be dictated by the type and number of errors they make during task performance, or by trainees' own interests and perceived needs.

The timing of information presentation can vary. Information can be given before, during, or after a practice activity. Traditional instructional design approaches stress the up-front provision of information. Models of instruction based on theories of situated learning combine both up-front and just-in-time information giving. For example, in the cognitive apprenticeship approach (Brown et al., 1989), the modeling of performance before trainees try it themselves is an up-front provision of information; the coaching that the expert gives trainees as they attempt the task on their own is just-in-time information. Hints and feedback given to trainees during or after practice constitute information that trainees will use to improve their performance. The conditions under which different timing of information may be most effective or efficient are not clear. Like most other instructional variables, the effects of timing of information delivery may depend on the cognitive and affective composition of the trainee at the time of training.

