Quickstart Guide Part 6: Agent-Agent Interaction¶
An important feature of agent-based modeling is the ability for agents to interact with each other and to make decisions or execute actions based on the outcome of those interactions. As the effects of individual encounters accumulate over time, interesting behavior can emerge at the population level. In this model, you will explore the impact of agents exchanging currency with each other on the overall wealth distribution of the residents of Loving County, TX.
By the end of this lesson you should be able to:
1. Recognize that the FRED modeling language has built-in functions that enable agents to interact with each other.
2. Have agents determine and then alter other agents' agent variables using the ask and tell functions.
3. Make use of the sample_with_replacement function to randomly select another agent from the population.
4. Use predicates alongside the today() function (and a shared numeric variable) to control when agents take an action.
5. Define emergent behavior, a key outcome of agent-based models and often the goal of running the simulation in the first place.
As always, start by loading the required Python modules below by clicking in the cell and hitting 'Shift' + 'Enter':
import pandas as pd
import plotly.express as px
import time

# Epistemix package for managing simulations and simulation results in Python
from epx import Job, ModelConfig, SynthPop

# local Python module that encapsulates a useful helper function
from methods import calculate_wealth_percentiles

# Formatting plotly visualizations using the Epistemix template
import plotly.io as pio
import plotly.graph_objects as go
import requests

# Use the Epistemix default plotly template
r = requests.get("https://gist.githubusercontent.com/daniel-epistemix/8009ad31ebfa96ac97b7be038c014c0d/raw/320c3b0ca3dfbf7946e49c97254fa65d4753aeac/epx_plotly_theme.json")
if r.status_code == 200:
    pio.templates["epistemix"] = go.layout.Template(r.json())
    pio.templates.default = "epistemix"
6.1 The Simple Economy Model¶
First, open the simple_economy.fred file in the navigator bar to the left and take a look at the model you'll be running.
The idea behind the simple_economy model is very straightforward. All agents in the population are given $100 each at the start of the simulation. At each time step, every agent determines whether they have a positive cash balance. If they do, they pick an agent at random and give them a dollar. Then, they update their own balance and record it in their personal daily ledger.
If an agent has no more money, they wait and check again at the next time step to see if they received a dollar from another agent. This continues each day for the duration of the simulation. On the last day, each agent reports their balance history to a shared list_table, which is then recorded in a CSV file.
6.2 Setting Up the Model¶
Let's work through how the steps of the simple_economy model are written in the FRED modeling language.
The initial allocation of cash to each agent is achieved by creating an agent numeric variable called my_balance in the variables block. We then set my_balance to $100 in the agent_startup block:
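# Sketch reconstructed from this walkthrough; see simple_economy.fred
# for the exact code.
variables {
    agent numeric my_balance
    agent list my_balance_history
}

agent_startup {
    my_balance = 100
}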
The agent_startup block is used to initialize agent variables prior to the first time step of the simulation. This guarantees that variables are available to use in the first time step, already set to the correct values.
The variables block also defines an agent list variable called my_balance_history that each agent uses to record their balance on each day of the simulation. You will use this later to make a visualization of the output from this simulation.
The simple_economy model consists of a single condition named DISTRIBUTE_DOLLARS. Agents begin in the Start state by writing their current balance to their my_balance_history list. Then, they go on to check whether it is the last day of the simulation. This is achieved using the predicate on line 44, which compares the current simulation date to the shared last_day variable:
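# Reconstructed from the description in this guide: today() returns the
# current simulation date as a number (e.g. 20420510), the same format
# used for the shared last_day variable.
today() == last_day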
If the predicate evaluates to true (that is, if today's date equals last_day), the model instructs agents to record their balance history in a shared list_table, which is saved in a file as output at the end of the simulation.
6.3 Distributing Money Through Agent Interaction¶
If it is not the last day of the simulation, the agents are sent to the AssessBalance state. There, agents evaluate the predicate on line 55 to determine whether they have any cash to give away. If they have money, they are sent to the GiveADollar state. If their balance is zero, they are sent to a waiting state called WaitOneDay, where they remain until the next day, when they are sent back to the Start state and begin the cycle again.
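In outline, and assuming FRED's usual if/then transition rules, the AssessBalance state looks something like the following sketch (the exact code is in simple_economy.fred):

state AssessBalance {
    # hypothetical sketch: agents with cash go to GiveADollar,
    # everyone else goes to WaitOneDay
    if (my_balance > 0) then next(GiveADollar)
    default(WaitOneDay)
}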
The GiveADollar state is where the exciting action takes place! In this state, you can see your first example of agent-agent interaction.
First, each agent randomly picks another agent from the population using the sample_with_replacement function:
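# Reconstructed from the discussion that follows; see simple_economy.fred
# for the exact line.
other_agent = sample_with_replacement(set_difference(get_population(), list(id())), 1)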
As we discussed in Part 4 of this guide, the sample_with_replacement function takes a list as its first argument and the number of elements to return as its second. Since we want each giving agent to choose one other agent to give to, we pass set_difference(get_population(), list(id())) as the first argument and 1 as the second. The get_population() function returns a list of the IDs of all the agents that were loaded into the simulation from the location files. This list includes the ID of the calling agent, so, to make sure that we pick a different agent to give to, we use the set_difference function to remove the calling agent's ID.
The result of running sample_with_replacement(set_difference(get_population(), list(id())), 1) is a list variable, other_agent, of length 1. The single element of this list is the ID of an agent drawn at random from the simulation population (excluding the calling agent), who is now the receiving agent.
Next, the giving agent interacts with the receiving agent using the tell function. The tell function allows one agent to change the value of another agent's agent numeric variables:
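# Reconstructed from the description below: ask() reads the receiver's
# current balance, and tell() sets it to that value plus 1.
tell(other_agent[0], my_balance, ask(other_agent[0], my_balance) + 1)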
In this case, the giving agent tells the receiving agent to add 1 to the value of their my_balance variable. Note that we use other_agent[0] to pass the ID of the first (and only) agent in the other_agent list.
How does the giving agent know the receiving agent's current balance? Here, the model makes use of a second interaction function, ask, to determine the current value of the receiving agent's my_balance variable. This value is then passed to the tell function with the additional instruction to increment it by 1.
Once this exchange has taken place, the giving agent subtracts 1 from their own balance. Then, they wait 24 hours before returning to the Start state and beginning the cycle again.
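In sketch form, and assuming FRED's wait() rule is measured in hours, the closing lines of the GiveADollar state look something like this (check simple_economy.fred for the exact code):

my_balance = my_balance - 1
wait(24)
next(Start)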
6.4 Running the Model and Examining Output¶
You will notice that this model runs for a long time period of 20 years. This long run time allows enough time steps for an interesting trend to emerge in the data. Start by executing the cell below to run the simulation:
# create a ModelConfig object
econ_config = ModelConfig(
    synth_pop=SynthPop("US_2010.v5", ["Loving_County_TX"]),
    start_date="2022-05-10",
    end_date="2042-05-10",
    model_params={"last_day": 20420510},
)

# create a Job object using the ModelConfig
econ_job = Job(
    "simple_economy.fred",
    config=[econ_config],
    key="econ_job",
    fred_version="11.0.1",
    results_dir="/home/epx/qsg-results"
)

# call the `Job.execute()` method
econ_job.execute()

# the following loop idles while we wait for the simulation job to finish
start = time.time()
timeout = 300  # timeout in seconds
idle_time = 3  # time to wait (in seconds) before checking status again
while str(econ_job.status) != 'DONE':
    if time.time() > start + timeout:
        msg = f"Job did not finish within {timeout / 60} minutes."
        raise RuntimeError(msg)
    time.sleep(idle_time)

str(econ_job.status)
# read in the list_table containing the agent balance history
agent_wealth_history = econ_job.results.list_table_var('agent_balances')[["key", "value"]].rename(columns={'key': 'agent_id'})

# get the dates of the simulation to append to the agent balance history
simdates = econ_job.results.dates()['sim_date']

# add the sim dates to the DataFrame. Each agent's output is date ordered,
# so the full date range repeats once per agent
nagents = len(agent_wealth_history['agent_id'].unique())
sim_date = []
for _ in range(nagents):
    for date in simdates:
        sim_date.append(date)
agent_wealth_history['sim_date'] = sim_date
agent_wealth_history = agent_wealth_history.pivot(index='sim_date', columns='agent_id').astype(int)
The next cell runs a function to calculate the total wealth held by the top 10% and bottom 50% of agents. This code is included in the methods.py file if you'd like to take a look.
# Calculate total wealth of top 10% and bottom 50% at each time step
calculate_wealth_percentiles(agent_wealth_history)
Lastly, run the next cell to create a plot that compares the values we just computed over time:
fig = px.line(
    agent_wealth_history.iloc[90:].reset_index(),  # skipping the first 90 sim days, when the percentiles are not super meaningful
    x='sim_date',
    y=['top_10_rolling_percent', 'bottom_50_rolling_percent'],
    title='Random Redistribution Leads to Highly Unequal Wealth',
)

# update the names of the lines to be easier to read
value_map = {
    'sim_date': 'Date',
    'top_10_rolling_percent': 'Top 10%',
    'bottom_50_rolling_percent': 'Bottom 50%',
}
for trace in fig.data:
    trace.name = value_map.get(trace.name, trace.name)

# update x- and y-axis titles
fig.update_layout(
    xaxis_title='Date',
    yaxis_title='Share of overall wealth (%)',
    legend_title='Agent group',
)

fig.show()
6.5 Wealth Inequality: An Emergent Property of the Simulation¶
The result of this simulation is interesting - over time, an unequal distribution of wealth arises. Despite each agent choosing other agents at random, and despite every agent being eligible to receive money at every time step, some agents accumulate wealth over time.
In fact, at a certain point in the simulation (typically around 15 years, depending on the random seed of the simulation run), the wealth held by the top 10% richest agents is close to (and sometimes exceeds) the wealth held by the bottom 50%.
This perhaps unexpected result is a good example of emergent behavior: an overall trend or structure at the population level that arises even though no rule was written specifically to generate it. Here, all agents are subject to the same rules, and every agent has the same probability of receiving a dollar from another agent at every time step, yet some agents accumulate more wealth than others.
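To convince yourself that the inequality comes from the exchange rule itself rather than from anything FRED-specific, here is a minimal pure-Python sketch of the same dynamics. It is a sequential-update variant with illustrative parameter choices, not a reproduction of the FRED model:

import random

def simple_economy(n_agents=100, n_steps=5000, seed=0):
    """Every agent with a positive balance gives $1 to another agent
    chosen uniformly at random (a sequential-update approximation of
    the simple_economy model)."""
    rng = random.Random(seed)
    balances = [100] * n_agents
    for _ in range(n_steps):
        for giver in range(n_agents):
            if balances[giver] > 0:
                # pick a receiver uniformly from the other agents
                receiver = rng.randrange(n_agents - 1)
                if receiver >= giver:
                    receiver += 1
                balances[giver] -= 1
                balances[receiver] += 1
    return balances

n = 100
balances = sorted(simple_economy(n_agents=n), reverse=True)
total = sum(balances)
top_10_share = sum(balances[: n // 10]) / total
bottom_50_share = sum(balances[n // 2:]) / total
print(f"Top 10% of agents hold {top_10_share:.0%} of all wealth; "
      f"bottom 50% hold {bottom_50_share:.0%}")

Even this stripped-down version tends toward a markedly unequal distribution, echoing the FRED result above.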
Emergent behavior is a primary reason people pursue agent-based modeling in their work. The ability to explore outcomes at the population level that aggregate up from individual behaviors and interactions is a powerful tool. It enables scenario exploration that can empower decision makers to make better, more informed decisions. You'll see additional examples of emergent behavior in the next two lessons.
6.6 Lesson Recap¶
In this lesson, you explored a model that demonstrated agent-agent interaction for the first time. In a simple model of economic exchange between agents, an unequal wealth distribution emerged despite rules that treated all agents identically.
- You instructed agents to interact using the ask and tell functions in the FRED modeling language, which allow agents to query and change (respectively) each other's agent variables.
- You again made use of the sample_with_replacement function to have agents select another agent at random with which to interact.
- You used the get_population function to get a list of the agents that were loaded into the simulation.
- You made a plot of some summary statistics over time that demonstrate the emergent behavior of this simulation.
In the next lesson, we'll see a model of the spread of an infectious disease, which makes use of a built-in agent interaction feature in the FRED modeling language - transmission. See you in Part 7!