
3.4 Land Use/Cover Change Models


Models provide a simplified representation of a complex system (Chang, 2010). In the context of Land Use/Cover Change (LUCC), models open up opportunities to understand the factors and interactions behind land cover change, and can serve as useful tools for projecting the future environmental and economic impacts of land use change (Alig, 1986). Attempts have been made to develop spatially explicit models to predict urban land cover change, including the SLEUTH cellular automata model, which has been applied in studies of Tampa and the Baltimore-Washington DC area (Xian and Crane, 2005; Goetz et al, 2003). In recent years, agent-based models have been increasingly investigated; these models focus on human actions in defining landscape transitions. Parker et al (2003) describe the use of agent-based models in studying LUCC, noting a number of applications in the urban environment: Ligtenberg, Bregt, and van Lammeren (2001) model spatial planning in the Netherlands, and Torrens (2001) examines residential location dynamics.

In terms of predicting urban sprawl specifically, various modelling approaches have been suggested, ranging from landscape metrics (Sudhira et al, 2004) and cellular automata (Xian and Crane, 2005) to neural networks. Artificial neural networks are powerful tools that use a machine learning approach to quantify and model complex behaviour and patterns; they have been applied in Grand Traverse Bay, Michigan (Pijanowski et al, 2001) and Dongguan, China (Li and Yeh, 2002). The common theme amongst the different model types is the choice of 'predictor variables': the elements that interact to drive the change. Given the complexity of urban sprawl, a number of predictor variables have been used, with varying degrees of success. Two common variables, used amongst others by Xian and Crane in the study of Tampa and by Sudhira et al in Mangalore, India, are:



  • Current urban extent/previous urban sprawl

  • Distance to roads

Depending on the complexity of the model used, a number of other variables can be included. Past studies have incorporated factors such as terrain and slope, density of the urban centre, distance from lakes, and annual population growth rates.
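Spatial predictor variables such as distance to roads are typically supplied to a model as raster layers, with one value per grid cell. As a minimal illustration (not drawn from any of the studies cited above), the sketch below derives a distance-to-roads grid from a hypothetical binary road raster using a multi-source breadth-first search; the grid values and function name are invented for this example.

```python
from collections import deque

def distance_to_roads(road_grid):
    """Compute per-cell distance (in cell units) to the nearest road cell.

    road_grid: 2D list where 1 marks a road cell and 0 a non-road cell.
    Uses a multi-source breadth-first search, so distances are counted in
    4-neighbour (rook) steps rather than true Euclidean metres.
    """
    rows, cols = len(road_grid), len(road_grid[0])
    dist = [[None] * cols for _ in range(rows)]
    queue = deque()
    # Seed the search with every road cell at distance 0.
    for r in range(rows):
        for c in range(cols):
            if road_grid[r][c] == 1:
                dist[r][c] = 0
                queue.append((r, c))
    # Expand outwards one ring of cells at a time.
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

# Hypothetical 3x4 raster with a single vertical road in column 1.
roads = [[0, 1, 0, 0],
         [0, 1, 0, 0],
         [0, 1, 0, 0]]
print(distance_to_roads(roads))
# [[1, 0, 1, 2], [1, 0, 1, 2], [1, 0, 1, 2]]
```

In a GIS workflow this layer would normally be produced by a distance operator on the road network and then normalised before being fed to the model alongside the other drivers.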

For this study, the NDVI differencing change detection method was employed, as described in section 3.3. However, to produce a future prediction of land cover change, IDRISI's Land Change Modeler (LCM) was employed. I will now introduce this tool and some of its features for modelling changes in land cover using the Multi-Layer Perceptron neural network.
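As a reminder of the change detection step referred to above, NDVI differencing computes NDVI = (NIR - Red) / (NIR + Red) for each image date and thresholds the per-pixel difference between the two dates. The sketch below uses hypothetical pixel values and an assumed symmetric threshold of 0.2; in practice the threshold would be tuned to the imagery used.

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red)

def ndvi_change(nir_t1, red_t1, nir_t2, red_t2, threshold=0.2):
    """Flag pixels whose NDVI shifted by more than the threshold.

    Returns -1 (vegetation loss), 1 (vegetation gain) or 0 (no change)
    per pixel. The 0.2 threshold is an illustrative assumption, not a
    value from this study.
    """
    flags = []
    for n1, r1, n2, r2 in zip(nir_t1, red_t1, nir_t2, red_t2):
        diff = ndvi(n2, r2) - ndvi(n1, r1)
        if diff < -threshold:
            flags.append(-1)   # e.g. vegetation cleared by urban growth
        elif diff > threshold:
            flags.append(1)
        else:
            flags.append(0)
    return flags

# Three hypothetical pixels: stable, vegetation loss, vegetation gain.
print(ndvi_change(nir_t1=[0.5, 0.6, 0.3], red_t1=[0.2, 0.1, 0.25],
                  nir_t2=[0.5, 0.3, 0.6], red_t2=[0.2, 0.28, 0.1]))
# [0, -1, 1]
```

For urban sprawl detection it is typically the vegetation-loss class (-1) that is of interest, since conversion to built-up surfaces sharply reduces NDVI.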


3.5 IDRISI Land Change Modeler


IDRISI's Land Change Modeler (LCM) provides a unique tool to analyse and predict land cover change. It has its roots in biodiversity and the problems of habitat loss and conservation, although it also has applications in any type of land cover transformation, including urban growth. It allows a user to analyse patterns and trends in past land cover through change detection methods, and also provides a modelling and prediction environment to create future landscape scenarios. This is accomplished by the ability to integrate user-defined drivers of change into a model of how the landscape will change. The three major parts of the model used in this study were tools to analyse past change, to model transition potentials, and to predict future land cover.

These components are separated into a set of tabs within the model, which present the user with operations to be followed in sequence (Eastman, 2009).

3.6 Multi-Layer Perceptron Neural Network

The Land Change Modeler takes land transition information, together with variables that might drive or explain such change, and creates a transition potential: the likelihood that land cover will change in the future (Clark Labs, 2009). Each transition is modelled with either logistic regression or a Multi-Layer Perceptron neural network. The recommended model for use in the LCM is the Multi-Layer Perceptron (MLP) neural network, as this can calculate multiple transitions at a time, whereas logistic regression can model only a single transition at a time. The MLP was therefore selected for this study.
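To make the idea of a transition potential concrete, the sketch below shows the logistic-regression form of the calculation: the driver values for a cell are combined linearly and squashed into a probability between 0 and 1. The weights and driver values here are invented for illustration, not coefficients fitted by the LCM.

```python
import math

def transition_potential(drivers, weights, bias):
    """Logistic-regression transition potential for a single cell.

    drivers: values of the predictor variables for this cell
             (e.g. distance to roads, distance to existing urban area).
    weights, bias: coefficients that would normally be fitted to
                   observed historical transitions.
    Returns a probability in (0, 1) that the cell transitions.
    """
    score = bias + sum(w * d for w, d in zip(weights, drivers))
    return 1.0 / (1.0 + math.exp(-score))

# Hypothetical cell that is close to a road and to the urban fringe.
p = transition_potential(drivers=[0.1, 0.2],    # normalised distances
                         weights=[-3.0, -2.5],  # nearer => more likely
                         bias=1.0)
print(round(p, 2))
# 0.55
```

The MLP replaces this single weighted sum with layers of such sums, which is what lets it capture non-linear interactions between the drivers and model several transitions at once.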

Neural networks are predictive models based loosely on the action of biological neurons (DTREG, 2012). The unit of an artificial neural network is the neuron, which receives inputs and calculates an output on the basis of a simple function. A simplified view of an artificial neural network is shown below:

Figure 2: Neural network diagram, showing the input, hidden, and output layers. Source: neuroph.sourceforge.net

Information is stored in the weights of the connections between neurons. These weights change according to a process of learning or adaptation, using trial and error to observe relationships in the data. Rosenblatt (1958) is credited with developing one of the first artificial neural networks when he created his 'perceptron', which consisted of a single node receiving weighted inputs and categorising the results according to a defined rule (Pijanowski et al, 2001).
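As an illustration of Rosenblatt's rule, the sketch below implements a single-node perceptron learning the logical AND function by trial and error: each misclassification nudges the weights towards the correct answer. The learning rate and training data are illustrative assumptions.

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    """Train a single-node perceptron with Rosenblatt's update rule.

    samples: list of ((x1, x2), target) pairs with targets 0 or 1.
    The node outputs 1 when the weighted sum plus bias is >= 0.
    """
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b >= 0 else 0
            err = target - out          # trial-and-error correction
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b >= 0 else 0

# Learn logical AND, which is linearly separable, so the rule converges.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])
# [0, 0, 0, 1]
```

A single node of this kind can only separate classes with a straight line, which is precisely the limitation the multi-layer perceptron removes.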

The multi-layer perceptron (MLP) is one of the most widely used neural networks. It consists of three layers: input, hidden, and output, and is able to solve problems that are not linearly separable. The network algorithm assigns weights to the connections between nodes and passes the input through in a feed-forward manner, propagating through the hidden layer to the output layer (Pijanowski et al, 2001). The signals travel across the nodes and are modified by the weight on each connection: a receiving node sums the weighted inputs from all of the nodes connected to it in the previous layer, and its output is then computed as a function of this sum. The data thus moves forward from node to node, with multiple weighted summations occurring before reaching the output layer. This type of network is trained with the back-propagation learning algorithm, which randomly selects the initial weights and then compares the calculated output for a given observation with the expected output for that observation, adjusting the weights to reduce the error.
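The feed-forward pass and back-propagation update described above can be sketched in a few lines. This is a generic, illustrative one-hidden-layer MLP with sigmoid activations, not the LCM's implementation; the XOR training data, layer sizes, learning rate, and epoch count are all assumptions chosen for the demonstration.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hidden, w_out):
    """Feed-forward pass: each node sums its weighted inputs from the
    previous layer and applies the sigmoid function to that sum.
    Each weight row carries a trailing bias weight (input fixed at 1)."""
    hidden = [sigmoid(sum(w * v for w, v in zip(row, x + [1.0])))
              for row in w_hidden]
    out = sigmoid(sum(w * v for w, v in zip(w_out, hidden + [1.0])))
    return hidden, out

def mse(samples, w_hidden, w_out):
    """Mean squared error of the network over a sample set."""
    return sum((t - forward(x, w_hidden, w_out)[1]) ** 2
               for x, t in samples) / len(samples)

def train(samples, w_hidden, w_out, epochs=2000, lr=0.5):
    """Back-propagation: compare each output with its expected value and
    propagate the error signal backwards to adjust every weight."""
    for _ in range(epochs):
        for x, target in samples:
            hidden, out = forward(x, w_hidden, w_out)
            delta_out = (target - out) * out * (1 - out)
            for j, h in enumerate(hidden):
                delta_h = delta_out * w_out[j] * h * (1 - h)
                for i, v in enumerate(x + [1.0]):
                    w_hidden[j][i] += lr * delta_h * v
            for j, v in enumerate(hidden + [1.0]):
                w_out[j] += lr * delta_out * v

# XOR: the classic problem a single perceptron cannot solve.
xor = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
       ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]
random.seed(0)  # random initial weights, seeded for reproducibility
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
w_out = [random.uniform(-1, 1) for _ in range(4)]
before = mse(xor, w_hidden, w_out)
train(xor, w_hidden, w_out)
after = mse(xor, w_hidden, w_out)
print(after < before)  # training reduces the error
```

In the LCM the inputs would be the driver variables for each cell and the outputs the transition potentials, but the mechanics of the weighted summations and error back-propagation are the same.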



