In this model, five experimental settings populate agents' preference weights in different ways to represent heterogeneity in those weights.
The first is the Uniform case, in which preference weights are drawn from a uniform distribution. In the Homogeneous case, the mean value of each of the three factors over the population is used as the preference weight for all agents. In the Normal case, the preference weights of the three factors are drawn randomly from a normal distribution parameterized by the population mean and standard deviation (S.D.). In the Group Mean case, preference weights are set to the means of the different groups. In the Group Normal case, preference weights are drawn from normal distributions defined by the means and S.D.s of seven groups. Following Brown and Robinson's (2006) article, all experiments use the same landscape smoothness and sprawl boundary size: smoothness is set to 1, and the sprawl boundary is a 31-cell radius from the initial service center, located at the center of the landscape. In addition, the 'luab' switch must be set to 'off'.
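The five weight-assignment schemes above can be sketched as follows. This is a minimal illustration in Python, not the SOME model's actual code: the factor means, group statistics, and the final normalization step are assumptions made for the example.

```python
import numpy as np

def draw_weights(case, rng, pop_mean, pop_sd, groups):
    """Draw one agent's preference weights for three factors under one of
    the five experimental cases.  `groups` is a list of (frequency, mean,
    sd) tuples, one per cluster (illustrative values only)."""
    if case == "uniform":
        w = rng.uniform(0.0, 1.0, size=3)
    elif case == "homogeneous":
        w = pop_mean.copy()
    elif case == "normal":
        w = rng.normal(pop_mean, pop_sd)
    elif case in ("group_mean", "group_normal"):
        # Assign the agent to a cluster in proportion to cluster frequency.
        freqs = np.array([g[0] for g in groups])
        g = groups[rng.choice(len(groups), p=freqs / freqs.sum())]
        w = g[1].copy() if case == "group_mean" else rng.normal(g[1], g[2])
    else:
        raise ValueError(case)
    w = np.clip(w, 0.0, None)
    return w / w.sum()  # normalize so the weights sum to 1 (an assumption)

# Made-up population and cluster statistics for demonstration.
pop_mean = np.array([0.4, 0.35, 0.25])
pop_sd = np.array([0.1, 0.1, 0.1])
groups = [(0.3, np.array([0.6, 0.2, 0.2]), np.array([0.05, 0.05, 0.05])),
          (0.7, np.array([0.3, 0.4, 0.3]), np.array([0.05, 0.05, 0.05]))]

w = draw_weights("group_normal", np.random.default_rng(42),
                 pop_mean, pop_sd, groups)
```

In the Homogeneous case every agent receives identical weights, so all heterogeneity disappears; the other cases add variability, categorization, or both, which is the contrast the five experiments are designed to expose.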
Our first experiment represents agents who have no knowledge of, or background in, the object of study; it does not introduce any information from survey data. Go to the SOME model or click on an existing browser tab where you have the model open. Once the page opens, click the Exp#1 Uniform Case button, which sets the 'flag' slider to 1, and then click the SETUP button.
The landscape should be initialized with aesthetic quality values, a sprawl boundary, a coarse landscape, luab off, and a sample of fifteen locations (i.e., numTests) for each residential household agent. In this experiment, the residential household agents do not examine many locations before choosing a place to live: they consider only fifteen random locations, which, within a landscape of 5625 locations, represents a sample size of less than 1% (15/5625 ≈ 0.27%). This sample-size limitation reflects the imperfect or incomplete information that residents have when they choose a residence. Since several parameters can be changed, you may want to create a worksheet in Excel to record the different parameter values and model results as you run the model multiple times, so that you can compare among model runs or between experiments.
When you are ready, press the GO button and the model will run. If the model seems to be running too fast to watch, you can either step through it one tick at a time using the NEXT STEP button or move the 'speed' slider at the top of the NetLogo model. Run the model a couple of times by repeating the procedure above to get a feel for how the SOME model works and how the process it represents produces the final outcomes you observe. Run the model at least five times and measure the amount of sprawl by recording the number of developments outside the radius, along with DOR, POR, MRU, and VRU, at the end of each run. You can read the numbers off the graph by pointing your mouse at the appropriate location on it. This will allow you to compare model settings by the degree of sprawl they produce. You can also calculate the average and standard deviation for each model setting for comparison with the other experiments.
Question 1. For one or two typical model runs, describe the behavior of the model and explain why you observe the settlement patterns it produces.
Question 2. Does each run produce the same number of settlements beyond the urban growth boundary? Discuss why the number of settlements beyond the boundary does not change (or does change) each time the model is run. What is required to make a model stochastic or deterministic, and how could we modify the SOME model to be one or the other?
Question 3. Run the model 20 times and record the number of settlements beyond the sprawl boundary. Compute the average and standard deviation. Discuss when you should report the average and standard deviation of model outputs and how this information might be used to determine how many times you should run a model with the same set of parameters.
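As a minimal sketch of the summary statistics Question 3 asks for, the average and standard deviation of repeated runs can be computed as below; the counts are made-up illustrative values, not real SOME output.

```python
import statistics

# Hypothetical counts of settlements beyond the sprawl boundary from
# 20 model runs (made-up values for illustration only).
sprawl_counts = [12, 15, 9, 14, 11, 13, 10, 16, 12, 14,
                 13, 11, 15, 12, 10, 14, 13, 12, 11, 15]

mean = statistics.mean(sprawl_counts)
sd = statistics.stdev(sprawl_counts)  # sample standard deviation (n - 1)
print(f"mean = {mean:.2f}, s.d. = {sd:.2f}")
```

One common rule of thumb is to keep adding runs until the running mean stabilizes relative to the standard deviation, which is exactly the judgment the question asks you to discuss.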
It is easy enough to run and watch the model 20 times for each experiment because it is a simple model. It would be more efficient, however, to run the model multiple times with the same parameter settings and have the software automatically record the results of the individual runs for you. It would be even better if you could have the model iterate a number of times for each of several parameter settings! NetLogo has a tool to do this, called BehaviorSpace. If you are feeling keen and want to set up BehaviorSpace for your experiments, you will need to download and install NetLogo on your computer, export the code for the SOME model from the SOME model page, and open the BehaviorSpace tool under the Tools menu. The openABM initiative has a useful BehaviorSpace tutorial that may be of interest.
Question 4. What mechanisms in the model act as centripetal forces (cause increasing clustering) and which ones act as centrifugal forces (cause more sprawl)?
Our second experiment is the Homogeneous case, in which there is no difference in preference weights among residents. Hence, we can use it as a reference against which to compare the influence of the different forms of heterogeneity. The settings for the smoothness value, the numtests value, and the radius size do not need to be changed. Go to the SOME model or click on an existing browser tab where you have the model open.
Once the page opens, click the Exp#2 Homogeneous Case button, which sets the 'flag' slider to 2, and then click the SETUP button. Again, record the different parameter values and model results as you run the model multiple times so that you can compare among model runs or between experiments. When you are ready, click the GO button and the model will run. Run the model 10-20 times and record the amount of sprawl at the end of each run. Calculate the average and standard deviation for comparison with the other experiments.
Question 5. Describe the behavior of the model and how the parameter settings influenced the settlement patterns that were produced.
Question 6. How do the settlement patterns from this experiment differ from those in Experiment 1? Were the patterns more or less clustered? Support your answer with both quantitative (mean and standard deviation) and qualitative (visual interpretation) evidence.
Question 7. Why is the graph initially flat? Is the number of settlements beyond the urban growth boundary increasing linearly, exponentially, or taking some other form? Can we expect this trend to continue in the modelled landscape (why or why not)?
Question 8. The decision-making approach used by the residential household agents in the model is called bounded rationality. Each residential household agent makes a rational decision to maximize its utility over a subset of study-area locations (i.e., numtests). Because the residential household agents base their rational decisions on a subset of locations, the decision-making process can be referred to as bounded rationality. Describe what we can learn from changing the level of information available to an agent, and relate it to the concepts of bounded rationality and fully rational decision making, the latter sometimes being referred to as homo economicus.
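The bounded choice described in Question 8 can be sketched as follows. This is an illustrative Python stand-in, not the SOME model's actual code: the random utilities and the landscape size are assumptions for the example.

```python
import random

def choose_residence(utilities, num_tests, rng):
    """Boundedly rational choice: sample `num_tests` candidate cells from
    the landscape and settle on the one with the highest utility.  A fully
    rational (homo economicus) agent would instead scan every cell."""
    candidates = rng.sample(range(len(utilities)), num_tests)
    return max(candidates, key=lambda cell: utilities[cell])

rng = random.Random(1)
# Hypothetical utilities for a 75 x 75 = 5625-cell landscape.
utilities = [rng.random() for _ in range(5625)]

bounded = choose_residence(utilities, 15, rng)                  # inspects 15 cells
fully_rational = max(range(5625), key=lambda c: utilities[c])   # inspects all cells
```

With only 15 of 5625 cells inspected, the bounded agent usually settles somewhere other than the globally best cell, which is precisely the imperfect-information effect that varying numtests lets you explore.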
The third experiment is the "Normal" case, in which agent preferences are drawn randomly from a normal distribution described by the overall mean and standard deviation, introducing variability into the agents' preference weights. Go to the SOME model or click on an existing browser tab where you have the model open. Once the page opens, click the Exp#3 Normal Case button, which sets the 'flag' slider to 3, and then click the SETUP button. When you are ready, click the GO button and the model will run. Run the model 10-20 times and record the amount of sprawl at the end of each run. Calculate the average and standard deviation for comparison with the other experiments.
Question 9. Compare and contrast this model behavior to the two previous experiments. Describe the differences or similarities among the experiments using the number of settlements beyond the sprawl boundary, the landscape patterns, and the shape of the graph.
Question 10. "All models are wrong but some are useful" (Box, 1979, p. 2) highlights the fact that, by definition, a model is a simplification of reality and inherently has flaws in its representation of reality within the structure of the model. However, this does not mean a model cannot be useful, and models may ultimately be judged on how useful they are. Conceptualize two or three uses that could demonstrate how or why this model, its results, or the agent-based modelling approach are useful. If you do not think the model is useful, then argue why it is not. In either case, try to provide some literature justification for your argument.
Box, G. E. P. (1979). Robustness in the strategy of scientific model building. In R. L. Launer & G. N. Wilkinson (Eds.), Robustness in Statistics (pp. 201–236). Academic Press.
For the following experiments, the agents are first divided into seven groups in proportion to the frequency of agents in each cluster, as the following figure shows.
The fourth experiment is the "Group Mean" case, in which each agent is assigned the mean preference weights of one of the seven clusters of residents. Go to the SOME model or click on an existing browser tab where you have the model open. Once the page opens, click the Exp#4 Group Mean button, which sets the 'flag' slider to 4, and then click the SETUP button. When you are ready, click the GO button and the model will run. Run the model 10-20 times and record the amount of sprawl at the end of each run. Calculate the average and standard deviation for comparison with the other experiments.
Question 11. This case introduces categorization without variability (Brown and Robinson, 2006). Try to understand what 'categorization' and 'variability' mean with the help of Brown and Robinson's paper. Record the values of all measurements (i.e., DOR, POR, MRU and VRU) for each group so that the spatial pattern of sprawl in this experiment can be compared with that in the next experiment.
The last experiment is the "Group Normal" case, which combines categorization and variability by drawing preference weights randomly from one of seven normal distributions, each described by the mean and standard deviation of the preference weights on each factor for that cluster.
Question 12. Before you run the model make a hypothesis about how you think the pattern of settlement and the amount of sprawl will compare to the previous experiment. Include this hypothesis and your logical reasoning for it as an answer to this question.
Go to the SOME model or click on an existing browser tab where you have the model open. Once the page opens, click the Exp#5 Group Normal button, which sets the 'flag' slider to 5, and then click the SETUP button. When you are ready, click the GO button and the model will run. Run the model 10-20 times and record the amount of sprawl at the end of each run. Calculate the average and standard deviation for comparison with the other experiments.
Question 13. Describe why you think the model behaved as you hypothesized or behaved differently from your hypothesis in Question 12.
Question 14. Compare and contrast this model behavior to the previous experiments. Describe the differences or similarities among the experiments using the mean and standard deviation in the number of settlements beyond the growth boundary and the shape of the graph.
If you download the model and run it in NetLogo, you can add a couple of lines of code to have NetLogo create output files of the spatial pattern. You could then measure those patterns and compare them using landscape pattern indices (LPIs), which are also sometimes referred to as landscape metrics. What would this enable you to do that you cannot do visually?
What is a random seed, how could it be used in the SOME model, and why would it be useful to record a random seed?
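As a minimal sketch of the idea (in Python rather than NetLogo, though NetLogo's own `random-seed` primitive plays the same role), recording the seed makes a stochastic run exactly reproducible:

```python
import random

def run_stochastic_model(seed, n_agents=10):
    """Toy stand-in for a stochastic model run: with the same seed,
    every 'random' draw, and therefore the outcome, is identical."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n_agents)]

first = run_stochastic_model(seed=2006)
replay = run_stochastic_model(seed=2006)  # same seed -> same outcome
```

Recording the seed alongside your parameter values in your worksheet lets you revisit any particular run, for example one with an unusually high sprawl count, and inspect it step by step.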
You could run a regression on your results and model parameters with your dependent variable being the amount of sprawl and your independent variables being the parameter values. If you applied this to all the data you created from the experiments above, what would it tell you?
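Such a regression can be sketched with ordinary least squares via NumPy; the run records below are made-up illustrations, not results from the experiments, and note that the 'flag' variable is categorical, so dummy coding it would be more appropriate in a real analysis.

```python
import numpy as np

# Made-up run records: columns are flag (experiment case), numtests,
# and the observed amount of sprawl (illustrative values only).
runs = np.array([
    [1, 15, 14.0],
    [2, 15,  9.0],
    [3, 15, 12.0],
    [4, 15, 10.0],
    [5, 15, 13.0],
    [1, 30,  8.0],
    [3, 30,  7.0],
])
X = np.column_stack([np.ones(len(runs)), runs[:, 0], runs[:, 1]])  # intercept + parameters
y = runs[:, 2]  # dependent variable: amount of sprawl
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# coef[1] and coef[2] estimate how sprawl changes per unit change
# in each parameter, holding the other constant.
```

The fitted coefficients summarize, across all your experiments, how strongly each parameter is associated with sprawl, which is the kind of sensitivity statement the question is after.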
Experiment 5 asks you to make a hypothesis about the model outcome before running the model. This challenges the model user to think critically about the model and their conceptual understanding of the system being modelled. I find it useful in two ways: 1) when the model behaves as I hypothesized, I typically don't believe I can be correct about these sorts of things, and it forces me to interrogate the model further to be sure the outcome is correct (e.g., that there are no coding errors, incorrectly set parameters, or other modelling artifacts); and 2) if my hypothesis is incorrect, then I get worked up because I was wrong and find myself driven to figure out why. Either way, I end up double- and triple-checking everything, which makes for a robust outcome. This story highlights an important question: can you use this type of model for predictive purposes? Discuss why this model can or should be used for predictive purposes, or why it should not.
Think about the factors driving your preferences for a good location to settle in, or why you like some places more than others. What other drivers of residential location are essential but missing from this model and from the papers that expand on it? How would you collect data to empirically inform and justify the inclusion of these additional drivers?