**THE CONCEPT OF SIMULATION IN ORGANIZATIONAL RESEARCH**
J. Richard Harrison
School of Management
University of Texas at Dallas
P.O. Box 830688
Richardson, TX 75083
USA
Phone: (972)883-2569
Fax: (972)883-6521
E-mail: harrison@utdallas.edu
**Abstract**
This paper examines the concept of computer simulation, with emphasis on its use in studying complex organizational systems. Simulation is presented as a form of science and is formally defined. Several research uses of computer simulation are discussed and illustrated with examples from research in organization theory. Research issues related to the use of simulation are also considered.
Research in organization theory based on computer simulations has a long history, beginning with Cyert and March’s (1963) simulation of firm behavior and Cohen, March, and Olsen’s (1972) simulation of garbage can decision processes. But only in the 1990s has simulation-based research become common in leading organizational journals. With the increasing legitimation of computer simulation as an acceptable research methodology, the rise in simulation-based journal articles, and the expanding number of newly-trained scholars using simulation techniques, computer simulation promises to play a major role in the future of organization theory.
This paper addresses the concept of computer simulation in organization theory: what simulation is, how it is used, and what issues are associated with its use. While experienced simulators will hopefully find this discussion interesting, it is intended primarily to make simulation understandable to researchers without extensive backgrounds in computer programming or mathematical analysis. The paper is based predominantly on my personal views and experiences with computer simulations and draws mostly on my own work, not because my work represents any sort of standard for simulation research, but simply because I am most familiar with it.
My focus is on computer simulations of organizational processes using stochastic simulation models with discrete-time designs. I will not consider simulations of individual behavior, even though much work has been done at this level. The term “stochastic models” refers to models of processes that are probabilistic rather than deterministic, so that the behavior of a model in any particular instance depends to some extent on chance, corresponding to what I believe to be the case for organizations. While it is possible to develop simulations in continuous time, the dominant simulation design is based on discrete time, where the simulation uses predetermined time intervals (e.g., a simulation day, month, or year) with the state of the simulated system updated each time interval as the simulation “clock” advances during the computer run.
**Computer Simulation: A Third Form of Scientific Inquiry**
Historically, scientific progress has been based on two approaches: theoretical analysis or deduction, and empirical analysis or induction. In the deductive form of science, a set of assumptions is formulated and then the consequences of those assumptions are deduced. Often the assumptions are stated as mathematical relationships and their consequences deduced through mathematical proof or derivation. This strategy has led to some extraordinary successes, particularly in physics; the general theory of relativity is the prime example. A major problem with this approach, however, is that derivation can be mathematically intractable – mathematical techniques may be inadequate to determine the consequences of assumptions analytically. This problem seems to be common in the social sciences, perhaps due to the complexity and stochastic nature of social processes, and has led researchers to choose assumptions (such as perfect rationality, perfect information, and unlimited sources of funds) on the basis of their usefulness for deriving consequences rather than because they correspond to realistic behavior. And even when elegant results can be obtained in the form of mathematical equations, sometimes these equations can be solved only for special cases; for example, the equations of general relativity can be solved for the case of spherical symmetry, but no general solutions are known.
The inductive form of science proceeds by obtaining observations or measurements of variables (data) and then examining or analyzing the data to uncover relationships among the variables. This approach has also been highly successful; one example is the development of the periodic table of the elements before atomic structure was understood. A variant of this approach has been used to test the predictions of theoretical analysis. A major problem with empirical work is the availability of data. Variables may be unobservable (e.g., secret agreements) or difficult to measure (e.g., the power of organizational subunits); the problems are compounded by the need for comparable measures across a sample or, in the case of dynamic analysis, across an extended time frame. Consider the prospects for obtaining data on subunit power across a sample of organizations over a period of decades.
Computer simulation is now recognized as a third way of doing science (Waldrop, 1992; Axelrod, 1997). It renders irrelevant the deductive problem of analytic intractability – mathematical relationships can be handled computationally using numerical methods. It also overcomes the empirical problem of data availability – a simulation produces its own data. (Of course, simulation has its own problems, which will be addressed at various points in this paper.) The first well-known computer simulation involved the design of the atomic bomb in the Manhattan Project during World War II. The complex systems of equations used in the design process couldn’t be solved analytically, and data were impractical – besides the unknown risks of attempting to set off atomic explosions, there was not enough fissionable material available at the time for even one test. Over the decades following the war, simulation became an accepted and widely used approach in physics, biology, and other natural sciences. But despite the early work of March and colleagues, the use of computer simulation in the social sciences has lagged behind the natural sciences.
**What Is A Computer Simulation?**
A computer simulation begins with a model of the behavior of some system the researcher wishes to investigate. The model consists of a set of equations or transformation rules for the processes through which the system variables change over time. The model is then translated into computer code and the resulting program is run on the computer for multiple time periods to produce the outcomes of interest. (Actually, the model could consist of a single process, although simulations are usually used to study systems in which multiple processes operate simultaneously. Also, one could use a static model – for example, to generate a probability distribution for some variable, as in Harrison and March, 1984 – but most simulations in organizational research are dynamic.)
__Definition__
Formally, I define a computer simulation as a *computational model* of system behavior coupled with an *experimental design*. The computational model consists of the relevant system components (variables) and the specification of the processes for system behavior (changes in the variables). The equations or rules for these processes specify how the values of variables at time t + 1 are determined, given the state of the system at time t. In stochastic models, these functions may depend partly on chance; the equation for the change in a variable’s value may include a disturbance term to represent the effects of uncertainty or noise, or a discrete process such as the turnover of an organizational member may be modeled by an equation that gives the probability of turnover. Computationally, these stochastic processes are simulated using numbers produced by random number generators. Random numbers with different statistical distributions can be produced using different generators, and it is crucial to choose generators that yield distributions appropriate for the process being modeled. For example, a disturbance term representing noise may be simulated with a generator that yields numbers that are normally distributed with a mean of zero, and organizational foundings can be simulated with a negative binomial generator to match the empirically observed distribution of foundings.
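Concretely, the two kinds of stochastic process just described can be sketched in a few lines of Python; the function names and parameter values here are illustrative, not drawn from any particular published model.

```python
import random

# Hypothetical sketch: two common ways stochastic processes are
# driven by random numbers (all names and values are illustrative).

def noisy_update(value, drift, noise_sd=1.0):
    """Continuous change: a deterministic drift plus a normally
    distributed disturbance term with mean zero."""
    return value + drift + random.gauss(0.0, noise_sd)

def member_leaves(turnover_prob):
    """Discrete event: turnover occurs when a uniform random draw
    falls below the turnover probability."""
    return random.random() < turnover_prob

random.seed(42)                      # fix the random stream
x = 10.0
for t in range(5):                   # five simulated time periods
    x = noisy_update(x, drift=0.5)   # variable evolves with noise
    departed = member_leaves(0.2)    # 20% turnover chance per period
```

Different distributions simply mean calling different generators; for example, NumPy provides a negative binomial generator for count processes such as foundings.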
The model’s functions typically require the investigator to set some *parameters* so that computations can be carried out. For example, in a simulation of cultural transmission in organizations (Harrison and Carroll, 1991), one process is the arrival of new members of the organization at time t + 1. These new members arrive at a certain rate with certain enculturation (fitness) scores. The arrival rate and the mean and standard deviation of the enculturation scores of the pool from which new members are selected are all parameters of the process.
The experimental design consists of five elements: the *initial conditions*, the *time structure*, the *outcome determination*, *iterations*, and *variations*. The computational model specifies how the system changes from time t to time t + 1, but not the state of the system at time 0, so initial conditions must be specified. For example, in the cultural transmission simulation, initial conditions include the number of members in the organization at the beginning of the simulation and their enculturation scores.
The time structure sets the length of each simulation time period and the number of time periods in the simulation run. Once the time period is determined, the number of time periods to be simulated can be set to obtain the desired total duration of the simulation run, or a rule may be established to stop the run once certain conditions (e.g., system equilibrium) are met.
The outcomes of interest are often some function of the behavior of the system, and need to be calculated from system variables. Outcomes may be calculated for each time period or only at the end of the run, depending on the simulation’s purpose. In the cultural transmission simulation, the outcomes of interest were the mean and standard deviation of the enculturation scores of the organizational members and the number of periods it took the system to reach equilibrium.
In stochastic models, the simulation outcomes will vary somewhat from run to run depending on the random numbers generated, so the results of one run may not be representative of the average system behavior. To assess average system behavior as well as variations in behavior, iterations are necessary – that is, the simulation run must be repeated many times (using different random number streams) to determine the pattern of outcomes.
Finally, the entire simulation process described above may be repeated with different variations. Both the parameter values and the initial conditions can be varied. There are two reasons for this. First, the behavior of the system under different conditions may be of interest; the examination of such differences is often a primary reason for conducting simulation experiments. In the cultural transmission simulation, for example, turnover rates of organizational members were varied to examine differences in system behavior under conditions of low turnover and high turnover. The second reason for variations is to learn how sensitive the behavior of the system is to the choices of parameter settings and initial conditions. If the behavior doesn’t change much with small variations in conditions, then the system’s behavior is robust, increasing confidence in the simulation process. This type of variation is called sensitivity analysis.
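The design elements just enumerated (initial conditions, time structure, outcome determination, iterations, and variations) can be made concrete with a deliberately toy model; everything in this sketch, including the model itself, is my own illustration rather than the cultural transmission simulation.

```python
import random
import statistics

# Toy illustration of the five design elements; the model and all
# its numbers are placeholders, not a published specification.

def one_run(turnover_rate, periods=50, rng=None):
    rng = rng or random.Random()
    size = 100                                   # initial condition
    for _ in range(periods):                     # time structure
        exits = sum(rng.random() < turnover_rate for _ in range(size))
        size = size - exits + rng.randint(0, 5)  # exits, then arrivals
    return size                                  # outcome determination

def experiment(turnover_rate, iterations=200, seed=1):
    # Iterations: repeat the run with different random streams.
    outcomes = [one_run(turnover_rate, rng=random.Random(seed + i))
                for i in range(iterations)]
    return statistics.mean(outcomes), statistics.stdev(outcomes)

# Variations: repeat the whole experiment under different conditions.
for rate in (0.01, 0.05):
    mean_size, sd_size = experiment(rate)
```

Reporting the mean and standard deviation across iterations, rather than a single run, is what separates average system behavior from the luck of one random stream.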
After the simulation runs are completed, the results may be subjected to further analysis. Simulations can produce a great deal of data for each variation, including the values of system variables and outcomes for each time period and summary statistics across iterations, as well as the parameter settings and initial condition settings. These data may be analyzed in the same manner as empirical data.
__Example__
A simple example may be instructive at this point. Suppose you desire to use a simulation to find the probability of getting first a head and then a tail in two independent coin tosses. The components of the computational model are coin tosses. The process consists of determining whether a toss is a head or a tail. Computationally, we can define a parameter *p* as the probability of a head and set it to some value between 0 and 1 (not necessarily assuming that the coin is “fair”). The simulation program can then call a uniform random number generator, which will yield any number between 0 and 1 with equal probability, to produce a number. If this number is less than *p*, the program concludes that the toss was a head, otherwise a tail. (To see why this works, say we have a biased coin with *p* = .4; the probability that the generator will produce a number less than .4 is precisely .4, since all numbers between 0 and 1 are equally probable.)
In the experimental design, no initial conditions need be specified since the outcome of the first toss depends only on the parameter *p*. The time structure is two periods, one for each toss (although in this example their length doesn’t matter). The program can determine the outcome by examining the results of the run to see if the first toss was a head and the second a tail. The run can be repeated many times with different random numbers supplied by the generator – say for 10,000 iterations – to determine the percentage of head-then-tail outcomes. Finally, variations can be introduced by changing the parameter *p* and repeating the entire process. Further analysis could consist of plotting the percentage of head-then-tail outcomes for different values of *p* to produce a graph of the relationship.
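The coin-toss simulation is small enough to write out in full. A minimal Python sketch, with an illustrative iteration count, might look like this:

```python
import random

# Minimal sketch of the coin-toss simulation described in the text.

def toss(p):
    """One toss: a uniform draw below p counts as a head."""
    return "H" if random.random() < p else "T"

def head_then_tail_fraction(p, iterations=10_000):
    """Fraction of two-toss runs yielding a head and then a tail."""
    hits = sum(1 for _ in range(iterations)
               if toss(p) == "H" and toss(p) == "T")
    return hits / iterations

random.seed(0)
estimate = head_then_tail_fraction(0.5)  # theory: p * (1 - p) = 0.25
```

Varying *p* over a grid of values and plotting the resulting fractions reproduces the "further analysis" step described above.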
__Comparison__
The three forms of scientific inquiry can also be illustrated with the subject of the above example. The question can be addressed theoretically by using probability theory to derive the answer. It may be addressed empirically by performing a coin-toss experiment with many trials; this procedure is simple for *p* = .5, assuming that a normal coin is fair, but it may be difficult in practice to obtain coins with different *p* values. Or a simulation can be used to address the question numerically.
Simulation is similar to theoretical derivation in a very fundamental way. Both approaches obtain results from a set of assumptions. The results are the logical and inevitable consequences of the assumptions, barring errors. If one accepts the assumptions, then one must also accept the results; put another way, the results are only as good as the assumptions. So a simulation may be thought of as a numerical proof or derivation.
**The Uses of Computer Simulation in Organizational Research**
Computer simulations are usually used in organizational research to study the behavior of *complex systems*, or systems composed of multiple interdependent processes. Each of the individual processes is usually simple and straightforward, and is often well understood from previous research or at least well supported theoretically. But the outcomes of the interactions of the processes are not obvious. Simulation enables the examination of the simultaneous operation of these processes.
The cultural transmission simulation, for example, involves three basic processes. New members enter the organization, current organizational members undergo socialization, and some members exit the organization. Each of these three processes has been researched and is fairly well understood. But research on organizational culture has focused on the socialization of current organizational members. It is reasonable to expect that additional insights into organizational culture would be gained by studying the behavior of a system that includes entry and exit as well as socialization. The simulation made it possible to study this expanded model.
Simulations can be used for a variety of research purposes. Axelrod (1997) identified three research uses for simulations:
1. __Prediction__. Analysis of simulation output may reveal relationships among variables. These relationships can be viewed as predictions of the simulation model, or hypotheses that can perhaps be subjected to empirical testing. Even if some variables in the computational model cannot be easily observed, often the output variables can be. In a simulation of the dynamics of dominant coalitions in organizations (Harrison, 1997), the output revealed a relationship among executive turnover, environmental turbulence, and organizational performance, all of which are readily observable (although the subunit power variables in the model are not). Empirical confirmation of a simulation’s predictions provides indirect support for the simulation model of the underlying (unobserved) processes.
2. __Proof__. Axelrod discussed the issue of proof in terms of “existence” proofs; a simulation can show that it is *possible *for the modeled processes to produce certain types of behavior. For example, a simulation of organizational growth (Harrison, 1998) demonstrated that some growth models are capable of producing industry size distributions consistent with empirical observations. This strategy can be used to examine the feasibility of models, and to demonstrate that the resulting system behaviors meet certain conditions (such as boundary conditions).
3. __Discovery__. Simulations can be used to discover unexpected consequences of the interaction of simple processes. In a simulation of competition between populations of organizations (Carroll and Harrison, 1994), we discovered path-dependent effects that sometimes made it possible for “weaker” populations to win out over populations that were competitively superior. In a related vein, simulations can be used to explore scenarios; the organizational growth simulation explored the consequences of various growth scenarios for industry structure.
Axelrod’s list can be complemented by three additional uses for simulations:
4. __Explanation__. Frequently behaviors are observed but it isn’t clear what processes produce the behaviors. Specific underlying processes can be postulated and their consequences examined with a simulation; if the simulation outcomes fit well with the observed behaviors, then the postulated processes are shown to provide a plausible explanation for the behaviors. A simulation of R&D investment in innovation and imitation (Lee and Harrison, 1998) showed that the process of adaptive firm search over a stochastic landscape for returns to innovation and imitation can explain the emergence of strategic groups in an industry under some conditions. A simulation of organizational demography and culture in top management teams (Carroll and Harrison, 1998) revealed that the strength of the linkage between demography and culture varied by organizational conditions, potentially explaining the inconsistent findings of the research program in organizational demography. The explanatory use of simulations is related to the use of simulation as proof, but typically goes beyond just showing that it is possible for the model to produce certain outcomes by illuminating the conditions under which such outcomes are produced.
5. __Critique__. Simulations can be used to examine the theoretical explanations for phenomena proposed by researchers, and to explore more parsimonious explanations for these phenomena. This is similar to the explanatory use of simulation, except that in this case simulation is used to assess preexisting explanations and possibly to find simpler explanations. In the top management team simulation, we showed that the frequently observed relationship between team demography and turnover is an automatic consequence of duration-dependent turnover rates, so that the social process explanations for team turnover advanced by researchers in this field were unnecessary. In the cultural transmission simulation, we showed that increasing cultural strength in declining organizations results naturally from the three basic processes of entry, socialization, and exit, without the need to invoke psychological aspects of decline as researchers in this field have done.
6. __Prescription__. A simulation may suggest a better mode of operation or method of organizing. Many simulations in operations research – queuing simulations, for example – indicate more efficient ways of organizing the work flow, which sometimes serve as a basis for changes in organizational procedures.
This description of the purposes of simulation provides insight into the various ways that simulation is used to study complex systems in organizational research. Of course, as is clear from some of the examples, it is possible for a simulation to serve multiple purposes.
**Issues in Simulation Research**
The process of conducting simulation experiments was discussed in general terms earlier in this paper. I will now discuss several issues involved in the simulation research process.
#### Model development
Construction of the model to be simulated involves a tension between simplicity and elaboration. When I give talks on my simulations, a frequent question from the audience is, “Why don’t you add variable X to the model?” Undoubtedly, a model can be made more realistic by adding more variables or processes. At the same time, the more complex the model becomes, the more difficult it is to understand what is really driving the results. So the objective is to develop a model based on a simplified abstraction of a system that retains the key elements of the relevant processes without unduly complicating the model (which is easier said than done).
Axelrod suggests the KISS principle – keep it simple, stupid. The simpler the model, the easier it is to gain insight into the causal processes at work. This is sound advice, but is more appropriate for some simulation uses than for others. The downside of this approach is that important elements may be inadvertently excluded from the model, limiting the usefulness of the insights for understanding the system’s behavior. One strategy for addressing this dilemma is to start with a very simple model, and then add components and observe changes in the system’s behavior. If the new component makes a significant difference then its role may be important; otherwise perhaps it can be excluded. Because the system components may interact in a complex manner, however, it is often difficult to be sure that a component is not important or may not produce significant interactions with still other possible components, and at some point the simulated system becomes too complex to infer causality. So a variety of model structures can be tested, but eventually choices must be made.
Probably the best way to keep a model relatively simple is through careful theoretical analysis prior to actual model construction. This can improve the chances of focusing on relevant simplifications, but because of the complexity of the system interactions, there are no guarantees. The possibility of unwittingly omitting important system elements is one limitation of simulation research.
The theoretical rigor introduced by formal modeling is one of the strengths of simulation work. A process may appear to be well understood, but attempting to specify an equation for the operation of the process over time often exposes gaps in this understanding. Organizational culture researchers, for example, address cultural change but usually don’t develop formal models. Perhaps the empirical work from which the operation of a process was inferred was based on cross-sectional data, leaving the dynamic specification unclear. Formalizing processes imposes a discipline on theorizing, forcing researchers to come to grips with thorny issues that have previously been dealt with by “handwaving” or were not even recognized. Whether or not the formalization is “correct” in some sense, at least it promotes scientific advancement by forcing cloudy areas to be addressed, resulting in a clear specification that can be subjected to testing and subsequent refinement.
Model specification also requires parameter values to be set. Sometimes empirical work can provide information on parameter values – for example, extensive empirical work in organizational ecology has resulted in estimates of the parameters of founding and mortality functions for a variety of organizational populations (and also refinement of the functional forms); see Hannan and Carroll (1992). In many cases, however, there is no reliable empirical guidance, so the simulator must enter uncharted territory in determining parameter settings. Various techniques are available to examine how reasonable the settings are for the behavior of the model, and sensitivity analysis can assess their robustness.
#### Experimental design
The length of the simulation time period is important because overly long time periods can obscure fine-grained interactions of interdependent processes, leading to spurious results. In developing the simulation of competition between populations of organizations, we discovered that the interdependent founding and mortality functions behaved improperly when the time periods were too long. The length of the time period is also important because it may be interdependent with some parameter values in the computational model. In the R&D investment simulation, organizations adapt by adjusting their investment mix in the direction of the type of investment (innovation or imitation) from which they are realizing the greatest returns. The proportion of adjustment each time period is a model parameter; the adjustment each period, and hence the adjustment parameter, should be smaller if simulation time is in months rather than years. So the time period may need to be scaled to the behavior of the model, and some parameter values may need to be scaled to the time period.
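As a sketch of this kind of scaling, suppose adaptive adjustment works by closing a fixed fraction of a gap each period; then the per-step fraction for a monthly clock must be chosen so that twelve monthly steps compound to one yearly step. The compounding rule here is an illustrative assumption, not a specification from the R&D investment model.

```python
# Scaling a per-period adjustment parameter to the time period.
# Assumed rule: closing fraction m per step, so after n steps the
# remaining gap is (1 - m)^n; choose m so (1 - m)^n = 1 - a, where
# a is the annual adjustment fraction.

def scale_adjustment(annual_fraction, steps_per_year):
    """Per-step fraction whose compounding over steps_per_year
    steps reproduces the annual adjustment fraction."""
    return 1.0 - (1.0 - annual_fraction) ** (1.0 / steps_per_year)

monthly = scale_adjustment(0.30, 12)
# The monthly fraction is smaller than the annual one, as the text
# requires: shorter time periods need smaller adjustment parameters.
```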
The number of iterations for a given set of conditions must be large enough to reveal the tendencies in system behavior; too few iterations may not yield representative results. The necessary number of iterations may vary with the model, however, so some experimentation may be required to determine what is needed to produce stable outcome patterns.
A major design consideration is what variations to run for parameters and initial conditions. A common design that systematically varies these factors is a *block design*. For example, in the cultural transmission simulation, three values each (low, medium, and high) could be run for entry rates, socialization intensity, and turnover rates, giving 27 conditions to provide information on the range of system behavior and how the behavior varies by condition. This number would go up multiplicatively as more factors, such as initial condition settings, were varied. More complex models have more factors to vary, and the total number of variations in a block design can easily become unmanageable. If a strategy of starting with a simple model and then adding components is followed, the variations should be carried out for each model structure. Although the block design is appealing, the design for a particular simulation project can be tailored to the specifics of the project. Whatever the design, a sensitivity analysis should be performed for any configuration of conditions deemed worthy of emphasis in reporting results.
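A full-factorial grid of this kind can be generated mechanically. In Python, the 27 conditions are one call to `itertools.product`; the level values below are placeholders rather than settings from the cultural transmission simulation.

```python
from itertools import product

# Block design: every combination of three factors at three levels.
# Level values are illustrative placeholders.

levels = {
    "entry_rate":    (0.02, 0.05, 0.10),  # low, medium, high
    "socialization": (0.10, 0.30, 0.50),
    "turnover_rate": (0.02, 0.05, 0.10),
}

conditions = [dict(zip(levels, combo))
              for combo in product(*levels.values())]
# 3 x 3 x 3 = 27 conditions, each run for many iterations.
```

Adding a fourth three-level factor would triple the count to 81, which is why block designs for richer models quickly become unmanageable.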
#### Built-in results
In the processes of model development and experimental design, simulators must be careful to avoid the problem of built-in results. The model and design determine the simulation outcomes, and it is actually pretty easy to construct simulations that guarantee specific outcomes. (I suspect that simulators sometimes do this unintentionally, and are then surprised by their “discoveries.”) In the R&D investment simulation, it would have been possible to model the landscape for investment returns to guarantee that strategic groups would form, and it would also have been possible to specify initial conditions to guarantee strategic group formation (for example, by having half the firms enter the industry as pure innovators and half as pure imitators). For a simulation to be meaningful, the results should emerge from the interaction of the processes in the model rather than from a specific modeling or design choice that overrides the intended operation of the processes and automatically determines the outcome. Avoiding the problem of built-in results requires careful attention in the modeling and design stages; testing is also important – for example, in a well-constructed simulation one should be able to find *some* set of conditions for which the main results disappear.
#### Computer operationalization
The structure and coding of the computer program can be a major source of errors in simulation research. Random number generators need to be tested to be sure that they produce properly random streams with the expected distributions. The code for each process needs to be tested independently to ensure that it is working properly, and the process interfaces within the system model need to be carefully constructed. Bugs in the program may not be obvious and can produce spurious results, so great care is needed to eliminate errors. Various tests can be performed to attempt to uncover errors; some parameters can be set to zero to see if the resulting simplified system behaves as expected, and outcomes can be examined to be sure they fall within meaningful boundaries (e.g., organizational size should never be negative). The ultimate test, however, is whether other simulators can replicate the simulation findings. This requires that sufficient detail of the simulation be provided by the original researchers, which is often not the case, and unfortunately, incentives to attempt replications are lacking.
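Tests of this kind can often be automated as assertions. The following sketch is illustrative; the checks, thresholds, and toy update rule are my own, not a standard test suite.

```python
import random
import statistics

# Illustrative sanity checks for a stochastic simulation program.

rng = random.Random(7)

# 1. Generator check: a large uniform sample should average near 0.5.
sample = [rng.random() for _ in range(100_000)]
assert abs(statistics.mean(sample) - 0.5) < 0.01

# 2. Degenerate-parameter check: with turnover probability zero,
#    no simulated member should ever leave.
assert not any(rng.random() < 0.0 for _ in range(1_000))

# 3. Boundary check: simulated organizational size must never go
#    negative, whatever the random stream (toy update rule).
size = 10
for _ in range(100):
    exits = sum(rng.random() < 0.5 for _ in range(size))
    size = size - exits + rng.randint(0, 2)
    assert size >= 0
```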
#### Analysis
The type of analysis conducted on the simulation outputs depends on the purpose of the simulation and the nature of the output variables. Common forms of analysis include graphical and statistical techniques, and are often directed toward determining how variations in model conditions affect outcomes. Many simulations are analyzed using linear regression analysis. This approach has the benefit of comparability to regression-based empirical analysis, but is somewhat curious in the sense that the outcome variables of complex interactive systems would not be expected in general to exhibit linear relationships. Methods designed for the analysis of outcomes produced by complex interactive systems would be more useful but need to be developed further.
#### Grounding
Since simulation experiments are “artificial” – they are based on computer models and their data are generated by a computer program – the question naturally arises of how the simulation relates to real-world behavior. There are several possibilities. The model’s processes could be based on empirical work, as was the case for the competing population simulation; both the model’s functional forms and their parameter settings were based on empirical studies. The only ungrounded parameters were the competition coefficients, which were systematically varied to demonstrate that the basic findings of the simulation did not depend on the specific settings. In many cases, formal models with empirical estimates are not available, but empirical work can still provide much information for model construction, as in the cultural transmission simulation, and variations and sensitivity analysis can be used to examine the robustness of the results.
Empirical grounding can also be established through the results of the simulation. The results can be compared to empirical work, as was the case with the organizational growth simulation, where the resulting industry size distributions were compared with empirical findings. Alternatively, the simulation results can serve as a basis for subsequent empirical work to assess their correspondence with observable behavior.
The type of grounding may differ with the purpose of the simulation. If the simulation is used for discovery, grounding of the processes can be important, since the results are a logical consequence of the model. For predictive purposes, empirical testing of the results is the appropriate form of grounding. Still other uses may involve grounding of both the processes and the results; for explanatory purposes, the results should fit observations, but the explanation is more convincing if the processes are also grounded.
In my opinion, however, simulation can also be a valuable research tool even when grounding is not possible. Simulations can be used to explore the consequences of theoretically derived processes, for example, even if the outcomes cannot be readily assessed empirically. This may be viewed as a form of discovery, and is characteristic of much theoretical work in both the natural and social sciences. For example, Wolfgang Pauli predicted the existence of the neutrino in 1930 on purely theoretical grounds, although there was no realistic prospect at the time of observing the hypothetical particle. One would hope, of course, that theoretical work would eventually lead to some empirical validation (the neutrino was finally detected in 1956, a quarter-century after Pauli's prediction, once sufficiently sensitive experimental methods became available). Purely theoretical simulation work should not be avoided simply because grounding is not available; it is still a legitimate scientific endeavor.
#### Conclusion
Computer simulation can be a powerful alternative approach to doing science. Simulation makes it possible to study problems that are not easily addressed, or may be impossible to address, with other scientific approaches. Because organizations are complex systems and many of their characteristics and behaviors are often inaccessible to researchers, especially over long time frames, simulation can be a particularly useful research tool for organization theorists.
Simulation analysis offers a variety of benefits. It can be useful in developing theory and in guiding empirical work. It can provide insight into the operation of complex systems and explore their behavior. It can examine the consequences of theoretical arguments and assumptions, generate alternative explanations and hypotheses, and test the validity of explanations. Through its requirement for formal modeling, it imposes theoretical rigor and promotes scientific progress.
Simulation research also has problems and limitations. The value of simulation findings rests on the validity of the simulation model, which frequently must be constructed with little guidance from previous work and is prone to problems of misspecification. Experimental designs are often inadequate. Simulation work is technically demanding and highly susceptible to errors in computer programming. The data generated by simulations are not “real” and techniques for their analysis are limited. So claims based on simulation findings are necessarily qualified.
The role of simulation is not well understood by much of the organizational research community. Simulation is a legitimate, disciplined approach to scientific investigation, and its value needs to be recognized and appreciated. Properly used and kept in appropriate perspective, computer simulation is a useful research tool that opens up new avenues for organizational research. The computer simulations discussed in this paper offer a glimpse of one future direction for the field, and much future organizational research is likely to be simulation-based.
#### REFERENCES
Axelrod, Robert. 1997. “Advancing the Art of Simulation in the Social Sciences.” In Rosaria Conte, Rainer Hegselmann, and Pietro Terna (eds.), *Simulating Social Phenomena*. Berlin: Springer.
Carroll, Glenn R., and J. Richard Harrison. 1994. “On the Historical Efficiency of Competition between Organizational Populations.” *American Journal of Sociology*, 100: 720-749.
Carroll, Glenn R., and J. Richard Harrison. 1998. “Organizational Demography and Culture: Insights from a Formal Model.” Forthcoming in *Administrative Science Quarterly*.
Cohen, Michael D., James G. March, and Johan P. Olsen. 1972. “A Garbage Can Model of Organizational Decision Making.” *Administrative Science Quarterly*, 17: 1-25.
Cyert, Richard M., and James G. March. 1963. *A Behavioral Theory of the Firm*. Englewood Cliffs, NJ: Prentice-Hall.
Hannan, Michael T., and Glenn R. Carroll. 1992. *Dynamics of Organizational Populations*. New York: Oxford University Press.
Harrison, J. Richard. 1997. “Dominant Coalition Dynamics: The Politics of Organizational Adaptation and Failure.” Paper presented at the International Conference on Computer Simulation and the Social Sciences, Cortona, Italy.
Harrison, J. Richard. 1998. “Simulating Organizational Growth in Ecological Models.” Paper presented at the European Group for Organisational Studies Colloquium, Maastricht, The Netherlands.
Harrison, J. Richard, and Glenn R. Carroll. 1991. “Keeping the Faith: A Model of Cultural Transmission in Formal Organizations.” *Administrative Science Quarterly*, 36: 552-582.
Harrison, J. Richard, and James G. March. 1984. “Decision Making and Postdecision Surprises.” *Administrative Science Quarterly*, 29: 26-42.
Lee, Jeho, and J. Richard Harrison. 1997. “Innovation and Industry Bifurcation: The Emergence of Strategic Groups.” Paper presented at the International Conference on Complex Systems, Nashua, NH.
Waldrop, M. Mitchell. 1992. *Complexity: The Emerging Science at the Edge of Order and Chaos*. New York: Simon & Schuster.