Quickstart Guide Part 4: Models With More Than One Condition
Part 2 of this guide introduced conditions and states, the structures that define how agents behave during a simulation. You explored a model named alive that had a single condition, ALIVE, made up of two states, Asleep and Awake. Agents moved between those two states over the course of the simulation, depending on the time of day.
In this lesson, you will examine an updated version of the alive model that includes two conditions: ALIVE and WATCH_TV. By introducing multiple conditions, along with a few new features of the FRED modeling language, we can create richer, heterogeneous behaviors for the agents in our models.
By the end of this notebook, you should be able to:
1. Describe how agents are subject to multiple conditions during a simulation.
2. Explain why, at any given time step, agents are in one state (and one state only!) within each condition.
3. Use the if predicate to select agents based on some criterion that you define.
4. Have agents randomly select an item from a list using the sample_with_replacement function.
5. Open a file in the startup block and have agents write information to it using the print_csv command.
As always, start by importing the Python packages that you'll use for this lesson:
import pandas as pd
import time
# Epistemix package for managing simulations and simulation results in Python
from epx import Job, ModelConfig, SynthPop
4.1 Conditions and States Recap
As you first encountered in Part 2, a condition is a set of related rules, behaviors, and actions that agents are subject to during a FRED simulation. These component parts are described by states. Conditions track aspects of individuals and of the population that are of interest to your business need or research question.
Conditions are defined in condition blocks, and each condition can include one or more state blocks. Each condition block identifies a start_state, in which agents are placed at the start of the simulation.
Each state block defines the three types of rules that describe the state: action rules describe what agents should do while in that state, wait rules describe how long they will remain in the state, and transition rules describe which state within the condition each agent should move to next.
In Part 2, we defined an ALIVE condition that looked like this:
condition ALIVE {
    start_state = Asleep

    state Asleep {
        # Action rules
        # Wait rules
        wait(until(7am))
        # Transition rules
        default(Awake)
    }

    state Awake {
        # Action rules
        # Wait rules
        wait(until(10pm))
        # Transition rules
        default(Asleep)
    }
}
In this simple model, agents were placed in the Asleep state when the simulation started at midnight. At 7am, they transitioned to the Awake state. Then, agents remained awake until 10pm, at which point they went back to sleep by transitioning to the Asleep state. This cycle repeated each day of the simulation.
4.2 Models with Multiple Conditions
The agents in the model above aren't doing much of interest - they are simply awake or asleep depending on the time of day. We can build up richer agent behavior in a simulation by specifying multiple conditions in our model.
During each time step of the simulation, agents are assigned a state within each condition defined in the model. Conditions can be completely independent of each other, or they can be written in a way that enables behavior in one condition to influence an agent's state in another condition.
When the simulation begins, agents are placed into the start_state specified in each condition. Agents can only be in one state per condition at any given time. Additionally, agents are subject to all conditions specified in a model at any given time step.
At every time step, an agent's status in the simulation will be described by one state, and one state only, in each condition defined in the model.
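One way to picture this rule is as a mapping from condition name to exactly one current state per agent. Here is a minimal Python sketch; the dictionary layout is our own illustration, not part of FRED or epx:

```python
# One agent's simulation status at a single time step:
# exactly one state per condition.
agent_status = {
    "ALIVE": "Awake",           # the agent's state within the ALIVE condition
    "WATCH_TV": "NotWatching",  # the agent's state within the WATCH_TV condition
}

# Looking up a condition name always yields a single state,
# because an agent is never in two states of the same condition at once.
assert agent_status["ALIVE"] == "Awake"
assert agent_status["WATCH_TV"] == "NotWatching"
```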
Click on the alive_and_tv.fred file to the left to examine the model. Note that we now have two conditions: ALIVE, which is exactly the same as in our previous model, and WATCH_TV, which includes rules that describe how (certain) agents decide whether to watch TV at 5pm each day. Here is what this model looks like in diagram form:
In this diagram, we have represented state transitions that are guaranteed with solid arrows and state transitions that are probabilistic or only apply to a subset of agents with dotted arrows.
You will dig into the details of this new condition in a second, but for now execute the cell below to run the model:
# create a ModelConfig object
tv_config = ModelConfig(
    synth_pop=SynthPop("US_2010.v5", ["Loving_County_TX"]),
    start_date="2022-05-10",
    end_date="2022-05-17",
)

# create a Job object using the ModelConfig
tv_job = Job(
    "alive_and_tv.fred",
    config=[tv_config],
    key="tv_job",
    fred_version="11.0.1",
    results_dir="/home/epx/qsg-results",
)

# call the `Job.execute()` method
tv_job.execute()

# the following loop idles while we wait for the simulation job to finish
start = time.time()
timeout = 300  # timeout in seconds
idle_time = 3  # time to wait (in seconds) before checking status again
while str(tv_job.status) != 'DONE':
    if time.time() > start + timeout:
        msg = f"Job did not finish within {timeout / 60} minutes."
        raise RuntimeError(msg)
    time.sleep(idle_time)

str(tv_job.status)
Now, take a look at some of the outputs of this simulation. For example, you can query the number of new agents entering the starting Asleep state in the ALIVE condition from the job's results.
This output is identical to the output from the single-condition model that you ran in Part 2. Recall that on the first day all agents enter the Asleep state twice: once at midnight when the simulation starts, and then again at 10pm. This is why the new count is 140 on the first day.
Next, examine the agent counts for the start state in the WATCH_TV condition, which is called NotWatching.
You can see here that the pattern of agent counts entering the start state looks very similar for both conditions, except that three agents don't seem to be participating in the WATCH_TV condition (more about that shortly). The simulation tracks which states agents are in for each separate condition at each time step. Agents are subject to both conditions at the same time, unless a rule is written that excludes them. Let's take a detailed look at the WATCH_TV condition to understand what is happening here.
4.3 A Detailed Look at the WATCH_TV Condition
The start state in the WATCH_TV condition is called NotWatching, and looks like this:
state NotWatching {
    # Action rules
    # Wait rules
    wait(until(5pm))
    # Transition rules
    if (age() <= 1) then next(Excluded) # Take babies out. They don't watch TV!
    default(DecideToWatch)
}
All 70 agents in Loving County enter this state at the start of the simulation. The wait rule here specifies that agents will wait until 5pm before transitioning to the next state. Once the simulation clock reaches 5pm, each agent will follow the transition rules to decide which state to move to next.
You might notice something new here. This model uses a conditional statement for the first time to control which agents to include in the condition. Take a closer look at this line of code:

if (age() <= 1) then next(Excluded) # Take babies out. They don't watch TV!

The use of if here tells the FRED simulation engine that the rest of the line should be completed only if the condition inside the parentheses is true, hence the term "conditional." Conditional statements of this form are called predicates in the FRED modeling language.
The age of each agent is evaluated in the predicate statement. If the agent's age is equal to 0 or 1, they are moved to the special state we first encountered in Part 3 called Excluded. Recall that you can think of Excluded as a "does not apply state" that is included in every condition by default and cannot be changed. It is functionally equivalent to an infinite wait state, so agents in the Excluded state no longer participate in that condition for the rest of the simulation.
Here, the model is written to exclude babies of age 0 or 1, because we're assuming that they don't watch TV. This may not be a great assumption, but the nice thing about agent-based modeling is that the assumption is explicit - others can see it, critique it, or even modify the model to rely on different assumptions, if they so choose.
The default transition for all agents not satisfying the predicate statement (i.e., all agents age 2 or older) is to a state named DecideToWatch. Since there are three agents whose age is at most 1 in Loving County, TX, the number of agents moving to the next state is 67. You can see this by examining the counts of agents entering the DecideToWatch state.
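As a rough Python analogue of this predicate (our own illustration, not FRED or epx code), the same age-based routing can be written as an ordinary conditional:

```python
def next_state(age):
    """Mimic the NotWatching transition rules: babies are excluded,
    everyone else takes the default transition to DecideToWatch."""
    if age <= 1:
        return "Excluded"   # take babies out: they don't watch TV
    return "DecideToWatch"  # default transition for everyone else

ages = [0, 1, 2, 35, 80]
print([next_state(a) for a in ages])
# → ['Excluded', 'Excluded', 'DecideToWatch', 'DecideToWatch', 'DecideToWatch']
```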
Once the agents who are at least 2 years old enter the DecideToWatch state, the model includes two additional if statements to introduce different behavior for agents, depending on their age:
state DecideToWatch {
    # Action rules
    # Wait rules
    wait(0)
    # Transition rules
    if (age() < 65) then next(WatchingTV) with prob(0.25)
    if (age() >= 65) then next(WatchingTV) with prob(0.75)
    default(NotWatching)
}
First, it's assumed that there is some chance that each agent will decide not to watch TV on a given day, perhaps because they are doing something else. Second, it is further assumed that agents 65 and over are more likely to watch TV than those under 65. The two predicate statements here introduce a probabilistic state transition. The statement

if (age() < 65) then next(WatchingTV) with prob(0.25)

means that each agent under the age of 65 has a 25% probability of transitioning to the WatchingTV state. Agents 65 and over, in contrast, transition to WatchingTV with a probability of 75%. The default(NotWatching) transition rule means that any agent that doesn't transition to WatchingTV is placed back in the NotWatching state, where they wait until the clock again reaches 5pm. Then, the cycle repeats.
These probabilistic transition rules introduce stochasticity, i.e., randomness, into the model. If you examine the number of agents entering the WatchingTV state, you will see that it differs from day to day.
Allowing for probabilistic behavior of this kind is one of the strengths of agent-based modeling - it allows you to develop models that can explore the range of outcomes that emerge as agents make different decisions.
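The probabilistic transition above can be sketched in plain Python. This is our own analogue, not FRED or epx code; the probabilities match the rules in DecideToWatch, but the function names are ours:

```python
import random

def decide_to_watch(age, rng):
    """Return the next WATCH_TV state, mirroring the DecideToWatch rules."""
    prob = 0.25 if age < 65 else 0.75  # under-65s watch less often
    if rng.random() < prob:
        return "WatchingTV"
    return "NotWatching"  # default transition

rng = random.Random(42)  # fixed seed so the run is reproducible
ages = [30] * 1000 + [70] * 1000
results = [decide_to_watch(a, rng) for a in ages]

young_watching = results[:1000].count("WatchingTV")
old_watching = results[1000:].count("WatchingTV")
print(young_watching, old_watching)  # roughly 250 and 750
```

Running the sketch repeatedly with different seeds shows the same stochastic spread that the simulation outputs display from day to day.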
4.4 Choosing a Channel to Watch
In the WatchingTV state of the WATCH_TV condition, the agents finally take an action: they choose which channel to watch. The code to implement this action is included in the state block for the WatchingTV state, starting on line 72 of the alive_and_tv.fred model file.
Take a closer look at the first action rule defined in this state:

my_channel = sample_with_replacement(channels, 1)

This rule instructs agents to randomly select a channel from the channels list and assign it to their agent variable my_channel. This is achieved using the sample_with_replacement function in FRED, which creates a list of randomly selected values from an input list. The second argument, in this case 1, specifies that the function should return a list containing a single value.
The channels list is defined in the variables block at the start of the model file. It contains 5 channel numbers that are available to the residents of Loving County: 5, 23, 40, 48, and 76. Using sample_with_replacement, each agent will randomly select a channel to watch each time they enter this state.
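In Python terms, sample_with_replacement behaves much like random.choices, which also samples with replacement and returns a list. This comparison is our own, not from the FRED documentation:

```python
import random

channels = [5, 23, 40, 48, 76]  # the same channel list as the model

rng = random.Random(7)  # seeded for reproducibility
my_channel = rng.choices(channels, k=1)  # a list containing one randomly chosen channel

assert len(my_channel) == 1
assert my_channel[0] in channels
```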
The channels.csv data set written during the simulation can now be processed and visualized using standard Python tools. For example, after reading the file into a pandas DataFrame named channels, you can see the total number of times that agents of each race watched each channel during the simulation using the groupby method for pandas DataFrames:
channels[['id','race','my_channel']].groupby(['race','my_channel']).count().rename(columns={"id": "count"})
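If you want to try this groupby pattern without running a simulation, here is a self-contained sketch with made-up rows in the same shape as channels.csv (the values are illustrative only, not simulation output):

```python
import pandas as pd

# Made-up rows shaped like the simulation's channels.csv output:
# one row per agent per TV-watching session.
channels = pd.DataFrame({
    "id":         [101, 102, 103, 101, 104],
    "race":       [1, 1, 2, 1, 2],
    "my_channel": [5, 23, 5, 5, 76],
})

# Count sessions per (race, channel) pair, renaming the surviving
# "id" column to "count" for readability.
counts = (
    channels[["id", "race", "my_channel"]]
    .groupby(["race", "my_channel"])
    .count()
    .rename(columns={"id": "count"})
)
print(counts)
```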
4.5 A note about the startup block
This model introduced a new code block you haven't seen before: the startup block.
The startup block is a powerful feature of the FRED modeling language that can be used to set up agents, variables, and files for use during the simulation. In this case, the model uses the startup block to open the channels.csv file and create column headings to describe the data that will be written to the file during the simulation.
A file must be open in the computer's memory in order to write output to it. Because the startup block is executed at the very start of the simulation, before the agents do anything, opening the file there ensures that it is ready for the agents to access when they enter the WatchingTV state.
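The same open-then-write pattern looks like this in plain Python. This is a rough analogue of the startup block and the print_csv action, not FRED syntax; the column names match the channels.csv file described above:

```python
import csv
import io

# "Startup": open the output and write the column headings once,
# before any agent writes a row.
out = io.StringIO()  # stands in for the channels.csv file on disk
writer = csv.writer(out)
writer.writerow(["id", "race", "my_channel"])

# "Simulation": each agent appends one row when it starts watching TV.
for agent_id, race, channel in [(101, 1, 5), (102, 2, 23)]:
    writer.writerow([agent_id, race, channel])

print(out.getvalue())
```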
You will learn more about startup blocks in a later lesson.
4.6 Lesson Recap
In this lesson, you examined and ran a model with two conditions. You also encountered a few ways to introduce probabilistic agent behavior into your model. The most important takeaways are:
1. Models can define more than one condition to which agents are subject during a simulation.
2. At every time step, an agent's status in the simulation will be described by one state, and one state only, in each condition defined in the model.
3. Predicates are conditional statements that can segment agent behavior based on a property of the agent, like age or sex.
4. Predicates can be used in combination with action, wait, and transition rules to control agent flow in the simulation and to introduce randomness.
5. The FRED modeling language has built-in functions to select items randomly from a list and to have agents record information in a CSV file.
6. The startup block helps you set up aspects of your simulation before it starts to run, like opening a file that agents will write to during the course of the simulation.
In Part 5, you will use a multi-condition model to explore the places that are defined as part of the Epistemix synthetic population. You will also make a visualization that allows you to display these places on a map. Onward!
4.7 Additional Exercises
- Try changing the agent bedtime in the ALIVE condition. How does this affect the WatchingTV state in the WATCH_TV condition?
- Rewrite the alive_and_tv model so that agents can select their bedtime from a list of options (e.g., 9, 10, or 11pm). Ensure that each agent's bedtime behavior is echoed by their TV watching behavior.
Exercise Solutions
- Interacting conditions

Changing the time at which agents transition from the Awake state to the Asleep state has no effect on their behavior in the WATCH_TV condition! Neither the model that we have described using the FRED language, nor the software that actually executes the simulation, would consider an agent that is simultaneously in the ALIVE.Asleep state and the WATCH_TV.WatchingTV state to be contradictory in any way.
It is up to you, the model designer, to account for unrealistic agent behaviors like this.
- No TV after bed

Below is an updated variables block and an updated condition block for the ALIVE condition. This updated model makes use of the set_state function. When agents enter the Asleep state of the ALIVE condition, an action rule changes their state in the WATCH_TV condition.
variables {
    shared list channels
    channels = list(5, 23, 40, 48, 76)

    shared list time_awake
    time_awake = list(14, 15, 16)

    agent list my_channel
    agent list my_awake_hours
    agent numeric race
}
condition ALIVE {
    start_state = Asleep

    state Asleep {
        # Action rules
        set_state(WATCH_TV, WatchingTV, NotWatching)
        # Wait rules
        wait(until(7am))
        # Transition rules
        default(Awake)
    }

    state Awake {
        # Action rules
        my_awake_hours = sample_with_replacement(time_awake, 1)
        # Wait rules
        wait(my_awake_hours[0])
        # Transition rules
        default(Asleep)
    }
}
Note that the variant of the set_state function used above only moves agents who are in the WatchingTV state into the NotWatching state. This prevents babies (agents who are at most 1 year old) from re-entering the WATCH_TV.NotWatching state from the Excluded state. It also prevents agents who were already not watching TV from re-entering the NotWatching state. These re-entries would be inconsequential in this model, but that is not always the case!
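A plain-Python analogue of this three-argument set_state(condition, from_state, to_state) behavior might look as follows. This is our interpretation of the rule described above, not FRED or epx code:

```python
def set_state(agent_status, condition, from_state, to_state):
    """Move the agent to to_state only if it is currently in from_state."""
    if agent_status[condition] == from_state:
        agent_status[condition] = to_state

# An agent currently watching TV, and a baby in the Excluded state.
watcher = {"WATCH_TV": "WatchingTV"}
baby = {"WATCH_TV": "Excluded"}

set_state(watcher, "WATCH_TV", "WatchingTV", "NotWatching")
set_state(baby, "WATCH_TV", "WatchingTV", "NotWatching")

# The watcher is moved; the baby stays Excluded.
print(watcher["WATCH_TV"], baby["WATCH_TV"])  # NotWatching Excluded
```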