covid-19, featured, Notebook

Prediction in complex systems using agent-based models

Guest post by Corinna Elsenbroich & Gary Polhill

Should we ask people to stay at home during a pandemic?
Or just let the disease run its course?

The COVID-19 crisis forced governments to make difficult decisions at short notice that they then had to justify to their electorate. In many cases, these decisions were informed by computer simulations.

An advanced kind of computer simulation, known as agent-based modelling, proved particularly helpful in evaluating different options where it was used. In agent-based models, there is a virtual representation of an artificial population of human beings, each so-called ‘agent’ going about its simulated daily life, and, critically, affecting, and being affected by, other agents.

So, if one agent becomes “infected”, and spends too long near another agent not yet immune, then the computer simulation can “infect” the other agent. Furthermore, agent-based models can simulate social networks, families, friends, work colleagues, and take into account which people are likely to spend too long near another to transmit infections. Agent-based models can also simulate interactions with wider social environments. If one agent not wearing a mask finds themselves in an area where all the other agents are wearing masks, the simulated agent can decide whether to put their mask on (by allowing themselves to be influenced by the social norm), or remain mask-free (because their identity outweighs the norm, or because they cannot wear a mask for medical reasons).

Each agent has their own ‘story’, and the computer can simulate how these stories intertwine to form the narrative of the artificial population’s interaction with a communicable disease and measures to prevent its spread.
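To make the mechanics concrete, the contact-and-transmission logic described above can be sketched in a few lines of code. This is purely a minimal illustration: the agent attributes, the random contact network and the transmission probability are invented for the example, not taken from any published model.

```python
import random

# Illustrative sketch of agent-based infection spread; all names and
# parameters here are invented for the example.
class Agent:
    def __init__(self, agent_id):
        self.id = agent_id
        self.infected = False
        self.immune = False

def simulate_day(agents, contacts, transmission_prob, rng):
    """One simulated day: infection may pass across each contact pair."""
    newly_infected = set()
    for a, b in contacts:  # pairs who spent 'too long' near each other
        for src, dst in ((a, b), (b, a)):
            if src.infected and not dst.infected and not dst.immune:
                if rng.random() < transmission_prob:
                    newly_infected.add(dst)
    for agent in newly_infected:  # synchronous update: today's contacts use this morning's states
        agent.infected = True
    return len(newly_infected)

rng = random.Random(42)
agents = [Agent(i) for i in range(100)]
agents[0].infected = True  # 'patient zero'
contacts = [tuple(rng.sample(agents, 2)) for _ in range(200)]  # random daily mixing
new_cases = simulate_day(agents, contacts, transmission_prob=0.5, rng=rng)
print(new_cases, "new infections today")
```

Real agent-based models layer social networks, norms and decision rules on top of this skeleton, but the core loop is the same: agents interact and change each other's state.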

The pandemic was a vivid example of the challenges of governing complex systems. Complex systems are studied by scholars in various disciplines, including mathematics, physics, economics, sociology, computer science, geography, ecology and biology. They are fundamental to life, from the cellular to international relations levels, and as fascinating as they are challenging. The reasons why they are called ‘complex’ are the reasons that make them difficult to govern. Some of these reasons include:

  • They are ‘nonlinear’. Using some made-up numbers for the purposes of illustration, nonlinearity means that if a government spends £1Bn to save the first 100,000 lives, they might have to spend £5Bn to save the next 100,000, but only £500M for the 100,000 after that. Nonlinearity is challenging mathematically; a lot of ‘classical’ mathematics (including a 200-year-old algorithm now laughably rebranded as ‘machine learning’) assumes linearity. It is from nonlinearity that we get the concept of a ‘tipping point’: the difference in habitability between 1C and 1.5C of global warming is not the same as the difference between 1.5C and 2C.
  • They have ‘fat-tailed’ distributions. A mathematical law called the ‘central limit theorem’ is often used to justify assuming everything has a normal distribution. Because of this, a lot of statistics is focused on working with that distribution. In complex systems, however, the law of large numbers, on which the central limit theorem depends, does not always apply. Distributions can have ‘fat-tails’, meaning that the probabilities of extreme events are higher than if a normal distribution is assumed. Underestimating the probability of an extreme event is risky for a government, and potentially fatal to some of its population.
  • They are sensitive to local circumstances. Mathematicians call this ‘non-Markovian’ or ‘non-ergodic’, and again find themselves unable to rely on a large body of work that can be applied very successfully when there is no such sensitivity. The practical outcome is that a policy that works in one place may not work in another.
  • They are not at equilibrium. Even now, for some ecologists and economists, the assertion that living systems are not at equilibrium is controversial. Systems apparently remaining in similar (or cycling) states is instead referred to in complex systems language as ‘homeostasis’. The important difference with equilibrium is that homeostasis requires energy, and so by definition is not at equilibrium. For example, your body tries to maintain its blood temperature at the same level (around 36.5C), but has different mechanisms to do this depending on whether the weather is hot or cold, and dry or humid. Mathematically, not being at equilibrium means that calculus becomes a less useful tool. For government, it may mean that after a perturbation, a society will not necessarily return to the way it lived before.
  • They are evolutionary. Complex systems can adapt, innovate and learn. This means that a measure that worked historically may not work now. Indeed, even the language used to describe what people do and how they differ can change. In medical circles, we no longer speak of ‘humours’ or ‘miasmas’, but of white blood cells, bacteria and viruses, and their mutations and variants.
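The fat-tail point in particular can be illustrated numerically. The sketch below (an invented illustration, not taken from the article) compares the probability of a ‘5-sigma’ event under a standard normal distribution with the tail of a fat-tailed Pareto distribution:

```python
import math

# Invented numerical illustration of the fat-tail point above; the
# distributions and threshold are chosen for the example.
def normal_tail(z):
    """P(Z > z) for a standard normal variable."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def pareto_tail(x, alpha=2.0):
    """P(X > x) for a Pareto variable with minimum 1 and shape alpha."""
    return x ** (-alpha) if x >= 1 else 1.0

z = 5.0  # a '5-sigma' event
print(f"normal tail: {normal_tail(z):.2e}")  # ≈ 2.9e-07
print(f"pareto tail: {pareto_tail(z):.2e}")  # = 4.0e-02
```

Under the normal assumption such an event is once-in-millions rare; under the fat-tailed alternative it happens a few times in a hundred, which is exactly the kind of underestimate a government cannot afford.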

Agent-based modelling grew out of studying complex systems as a way of helping scientists understand them better. But that has not made the community of practitioners equally willing to use their agent-based models to make predictions. Quite the opposite, in fact. Many practitioners, on the basis of their understanding, regard prediction in complex systems as impossible, and point instead to other important and useful applications of agent-based models.

All these challenges to classical mathematics make prediction in complex systems much harder. Even those who don’t regard prediction as impossible use guarded language like ‘rough forecasting’ or ‘anticipated outcomes’.

However, claiming that prediction is impossible does not help the policy-maker decide what to do about a pandemic, nor justify the expense and curtailment of liberties to the people. Worse, there is still a significant community of researchers quite willing to ignore complexity altogether, and to make predictions, and claim them as such, using methods that rely on assumptions that are false in complex systems. (In some circumstances, over short time periods, these methods can work, because complex systems don’t always behave in complex ways.) Agent-based models have been argued to have an important role in helping people make decisions in complex systems.

It might be that agent-based modellers need to find ways of participating in discussions about governing complex systems, in circles where prediction is part of the narrative, while still being true to their understanding. Rather than remaining a taboo, prediction is something agent-based modellers need to face. In a special issue of the International Journal of Social Research Methodology, we have collected contributions that aim to open up a conversation about prediction with agent-based models. They reflect a diversity of opinion as varied as the backgrounds of people in the community of practitioners.

Our beleaguered global governments, wearily emerging from the pandemic, find themselves facing an escalated war in Europe, polarized societies, economic instability, persistent misinformation spread on social media, a sixth mass-extinction, and ever-more frequent extreme weather events. Each of these issues is complex, multidimensional and multi-scale, and any solution (including doing nothing) has uncertain, unintended, cascading consequences. If agent-based modelling can help with such challenging decision-making, then it should.

The full editorial, Agent-based Modelling as a Method for Prediction for Complex Social Systems, is freely available in the International Journal of Social Research Methodology

Corinna Elsenbroich is Reader in Computational Modelling in Social and Public Health Science at the University of Glasgow. Follow @CElsenbroich on Twitter and read more research via ORCID

J. Gareth Polhill (known as Gary Polhill) is a Senior Research Scientist in the Information and Computational Sciences Department at The James Hutton Institute. Follow @GaryPolhill on Twitter and read more research via ORCID

featured, Notebook

Measuring Measures during a Pandemic

by Paul Romanowich & Qian Chen

The spring 2020 semester started like many others before – frantically preparing class materials, finalizing research proposals, and trying to squeeze in one last getaway trip. However, by mid-March 2020 that normalcy had fallen by the wayside. Like it or not, classes were now all remote, disrupting both data collection and plans for any meaningful travel during the summer. But what about the data that had already been collected? Was it any good, considering what our participants were experiencing? Not surprisingly, little research has focused on the impact major environmental disruptions have on data reliability, given how rare and unpredictable those disruptions are (have you ever experienced a pandemic before 2020?!?). However, we were fortunate to be collecting repeated-measure impulsivity data throughout the spring 2020 semester. Thus, this research note focuses on whether data obtained in the immediate aftermath of the onset of the COVID-19 pandemic is reliable, from a test-retest perspective.

Our original research question centered around whether decreasing one aspect of impulsivity, delay discounting, would have a positive effect on test scores for Electrical and Computer Engineering students. Like many personality traits, delay discounting rates have been shown to be relatively stable via test-retest data (i.e., trait-like). However, there is also a growing literature showing that episodic future thinking (EFT) can decrease delay discounting rates, and as a result decrease important impulse-related health behaviors (e.g., smoking, alcohol consumption, obesity). Thus, delay discounting also shows state-like properties. We hypothesized that decreasing delay discounting rates via EFT would also decrease impulse-related academic behaviors (e.g., procrastination), resulting in better quiz and test scores. To accurately measure temporal aspects of delay discounting, EFT, and class performance, students completed up to 8 short (27-item) delay discounting tasks from January to May 2020. Multiple EFT trainings significantly decreased delay discounting rates relative to a control group (standardized episodic thinking – SET). However, the impact of EFT on academic performance was more modest.
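For readers unfamiliar with the measure, delay discounting tasks of this kind are commonly scored with a hyperbolic model, V = A / (1 + kD), where a larger discounting rate k means steeper devaluation of delayed rewards. The sketch below is illustrative only; the amounts, delays and k values are invented, not the study’s data.

```python
# Sketch of the standard hyperbolic discounting model; all values here are
# invented examples.
def discounted_value(amount, delay_days, k):
    """Subjective present value under hyperbolic discounting: V = A / (1 + k*D)."""
    return amount / (1 + k * delay_days)

# A higher discounting rate k devalues a delayed reward more steeply -- the
# 'impulsive' pattern that EFT training aims to reduce.
patient_k, impulsive_k = 0.005, 0.1
print(discounted_value(100, 30, patient_k))    # 100 in 30 days feels like ~86.96 now
print(discounted_value(100, 30, impulsive_k))  # 100 in 30 days feels like 25.0 now
```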

Although the data did not support our original hypothesis, we still had repeated-measure delay discounting data throughout the semester, including data from March 2020, when classes were switched from in-person to fully remote. This repeated-measure data set up a series of Pearson correlations between delay discounting rates at two points in time (e.g., rates at the beginning of the semester in January 2020 and at the end of the semester in May 2020). Importantly, students in the EFT group completed a delay discounting task on March 22, 2020 – 11 days after the official announcement that all classes would be fully remote for the remainder of the semester. In terms of test-retest reliability, the data collected on March 22, 2020 stood out as not like the others. Whereas delay discounting task test-retest reliability was high throughout the semester (supporting previous studies), most correlations using the March 22, 2020 data were nonsignificant, suggesting poor test-retest reliability. Thus, it appeared that the COVID-19 pandemic had significantly, but only temporarily, decreased test-retest reliability for delay discounting rates.
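The test-retest logic itself is simple to sketch: Pearson’s r is computed between the same measure taken at two time points across participants. The discounting rates below are invented for illustration (they are not the study’s data):

```python
import math

# Sketch of the test-retest calculation: Pearson's r between the same
# measure taken at two time points. The rates below are invented examples.
def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

january_rates = [0.010, 0.025, 0.004, 0.050, 0.015, 0.030]  # hypothetical k values, time 1
may_rates     = [0.012, 0.022, 0.006, 0.045, 0.017, 0.028]  # same participants, time 2
r = pearson_r(january_rates, may_rates)
print(f"test-retest r = {r:.3f}")  # a high r indicates a stable, trait-like measure
```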

The EFT data also afforded us a way to look more qualitatively at changes in behavior before and after March 22, 2020. As part of the EFT trainings, students came up with three plausible positive events that could happen in the next month, 6 months, and one year. We coded these events as either having COVID-19 content or not for all students. Predictably, events containing COVID-19 content did not appear until March 22, 2020. However, this event content changed as the semester progressed. On March 22, 2020, most (6 of 7 events) of the content was for the 1-month event. By May 7, 2020 only two students included COVID-19 content, and this was for the 6-month event. Thus, students were more concerned with COVID-19 in March 2020, when it loomed as a temporally closer disturbance, than in May 2020. Perhaps this focus on COVID-19 in the near future disrupted delay discounting rates. We can’t be sure from this data, but the idea is intriguing.

Although this research note was not a rigorously controlled experiment designed to explicitly examine test-retest reliability for delay discounting, there are still some important points to take from the obtained data. First, it does appear that large environmental disruptions in participants’ lives can significantly change test-retest reliability on standardized measures. Social and behavioral science researchers should be aware of this when interpreting their data. It may also be worthwhile to include a brief measure of significant life events that may be occurring concurrently with participation in the task. Second, the change in test-retest reliability we observed was only temporary. This is actually good news for researchers, in that even significant environmental disruptions seem to have a minimal impact on test-retest reliability one month later. Perhaps we are more resilient as a species than we typically give ourselves credit for. Lastly, we have no doubt that other social and behavioral science researchers collected similar repeated-measure data throughout the spring 2020 semester. One way to be more confident that our results are not an outlier is through replication. Although we can’t (and don’t want to!) replay the beginning of the COVID-19 pandemic, researchers around the world could profitably begin to combine their data for specific well-validated measures to examine how this large environmental disruption may have systematically affected their results. The same could be done for other large environmental events, such as earthquakes or wars. The end result would be a better understanding of how these environmental disruptions impact the measurement tools on which we base many of our theories and treatments.

Read the full article in IJSRM here.

featured, Notebook

Bringing “context” to the methodological forefront

By Ana Manzano & Joanne Greenhalgh

In his latest methodological writings, Prof Ray Pawson (2020) noted that the Covid-19 pandemic:

 “covers everything from micro-biology to macro-economics and all individual and institutional layers in between”.

The current global pandemic could be considered the mother of all contexts. Many will conclude that the impact of Covid-19 on our lives cannot be reduced to a limited number of contextual factors such as disease, bereavement, home working, school closures, travel bans, etc. Covid-19 was and continues to be a force that impacts everything through a complex combination of omnipresent uncertainty, fears, risk management and materiality (masks, PCR tests, and hydroalcoholic gel). Our paper Understanding ‘context’ in realist evaluation and synthesis (Greenhalgh & Manzano, 2021), just published in the International Journal of Social Research Methodology, reflects precisely on how methodologically complex context is and reviews how context is conceptualised and utilised in current realist evaluation and synthesis investigations.

Perhaps the most useful of all the quotes mentioned in our paper is one from the French sociologist Raymond Boudon (2014, p. 43), who reminds researchers that in the social sciences it is impossible to talk about context in general terms, since context is always defined specifically:

The question as to “What is context?” has actually no general answer, but answers specifically adapted to the challenging macroscopic puzzles the sociologist wants to disentangle.

Context is somehow everything and, in some ways, has become “nothing”, with many methodological writings on causality focusing on the more attractive concept of “mechanisms”. Our paper projects context from its eternal background position in peer-reviewed papers, trials and research results to the foreground. Although context is a key concept in developing realist causal explanations, its conceptualisation has received comparatively little attention (with notable exceptions, e.g. Coldwell (2019)). We conducted a review to explore how context is conceptualised within realist reviews and evaluations published during 2018. We purposively selected 40 studies to examine: how is context defined? And how is context operationalised in the findings? We identified two key ‘narratives’ in the way context was conceptualised and mobilised to produce causal explanations: 1) context as observable features (space, place, people, things) that triggered or blocked the intervention, assuming that context operates at one moment in time and sets in motion a chain reaction of events; 2) context as the relational and dynamic features that shaped the mechanisms through which the intervention works, assuming that context operates in a dynamic, emergent way over time at multiple different levels of the social system.

We acknowledge that the use of context in realist research is unlikely to be reduced to these two forms of usage only. However, we argue that these two narratives characterise important distinctions that have different implications for the design, goals and impact of realist reviews and evaluations. Seeing context as a ‘thing’, that is, as a ‘feature that triggers’, suggests that one can identify and then reproduce these contextual features in order to optimise the implementation of the intervention as intended. This reinforces a view that it is possible to isolate ‘ideal’ contexts that determine the success of an intervention.

On the contrary, seeing context as a dynamic interaction between contexts and mechanisms implies that contexts are infinite, embedded and uncontrollable. Knowledge gained about how contexts and mechanisms interact can be used to understand how interventions might be targeted at broadly similar contextual conditions or adapted to fit with different contextual conditions.  This latter approach eschews the idea that there are ‘optimal’ contextual conditions but argues that successful implementation requires a process of matching and adapting interventions to different evolving circumstances. 

Our paper will disappoint those who seek a practical definition to help with the ever-impossible task of distinguishing mechanisms from contexts in causal explanations. We have some sympathy with Dixon-Woods’ claim (2014, p. 98) about distinguishing mechanisms from contexts in realist studies:

 I am inclined towards the view that discussions of what constitutes a mechanism rapidly become unproductive (and tedious), and that it is often impossible, close up, to distinguish mechanism from context.

Since much methodological thinking focuses on mechanisms, and funders are (typically, though not exclusively) interested in outcomes, contexts are, if anything, rather “annoying”. Context, with its symbiotic relationship with mechanisms, confuses and distracts researchers in their search for the ‘holy grail’ of mechanisms. Our paper demonstrates that the answer to that holy grail pursuit lies precisely in that symbiotic relationship, in which contexts are relational and dynamic features that shape the mechanisms through which interventions work. Context operates in a dynamic, emergent way over time at multiple different levels of social systems.

Finally, we are mindful that, in Pawson’s own words when we discussed this paper with him, ‘context’ can mean ‘absolutelybloodyeverything’, and so it is hard to imagine that its usage in realist research could be reduced to the two forms identified in our review.

Read the full IJSRM article here.


Boudon, R. (2014). What is context? KZfSS Kölner Zeitschrift für Soziologie und Sozialpsychologie, 66(1), 17-45.

Coldwell, M. (2019). Reconsidering context: Six underlying features of context to improve learning from evaluation. Evaluation, 25(1), 99-117.

Dixon-Woods, M. (2014). The problem of context in quality improvement. In: Perspectives on context. London: Health Foundation, 87-101.

Greenhalgh, J. and Manzano, A. (2021). Understanding ‘context’ in realist evaluation and synthesis. International Journal of Social Research Methodology.

Pawson, R. (2020). The Coronavirus response: A realistic agenda for evaluation. RealismLeeds Webinar, July 2020.

covid-19, Notebook

Adapting research with men during COVID-19: Experiences shifting to mobile phone-based methods

By Joe Strong, Samuel Nii Lante Lamptey, Richard Nii Kwartei Owoo, and Nii Kwartelai Quartey

It is impossible to understand masculinities without social research methods. Speaking and interacting with men is the fundamental cornerstone of the project Exploring the relationships between men, masculinities and post-coital pregnancy avoidance. Conducting these methods through ‘non-social’, distanced means, as a response to COVID-19, presents new challenges, opportunities and ethical considerations.

The original research sample frame was men aged 16 and over who slept (a proxy for ‘resident’) for at least some of their time in the study area. The research team were predominantly based/resident in the study area [a suburb of Accra], and all were living in Ghana prior to the declaration of a pandemic on 11 March 2020.

Response to COVID-19

The original research design necessitated close contact between respondents and the research team, using a household survey, focus group discussions and in-depth interviews. This proximity was quickly deemed unacceptable when compared to public health best practice (social distancing, limited movement, etc). Such methods endanger the respondents and the research team.

As it became evident that the pandemic was long-term, the team discussed potential mechanisms through which to continue the research in a safe and responsible manner. Mobile phone technology emerged as the only feasible way to ensure that social distancing and limited movement would be required for the research project to continue.

In the study area, mobile phone use is relatively high, reflecting broader trends in Ghana. However, these mobile phones were not all ‘smart’, i.e., it could not be assumed that respondents would have access to data or the internet on their mobile devices. As such, continuing person-to-person survey interviews by phone was the most feasible approach, so as not to limit the sample by a) access to smart technology and b) the ability or desire to navigate an online survey.

Thus, focus group discussions were removed entirely from the research design, as these could not be facilitated meaningfully through non-smart mobile phones. The survey questionnaires and in-depth interview schedule could remain the same, with additional questions on the impact of COVID-19. These had been tested prior to the pandemic in person to check for consistency, comprehension and relevance.


Obtaining equipment for the team in a timely and safe manner was essential – this included a mobile phone and three sim cards for each of the major telecommunication networks in the area. Fortunately, the team each had smart phone technology that allowed for communications to continue over WhatsApp.

Ethical amendments were submitted to account for consent being provided verbally, as written consent would have required inappropriately close contact. A major outcome of the ethical amendment was the exclusion of anyone who could not consent for themselves. This has serious implications for the inclusivity and representativeness of this research. The nature of gatekeeping could not be observed or accounted for over the mobile phone. For example, it would not be clear whether the parent of an adolescent – who required parental consent – was in the room listening in. Critical voices, such as those of adolescents and of people who need assistance with communication (e.g. from sign language interpreters), therefore cannot be incorporated into the survey.

The household listing conducted prior to the pandemic did not collect mobile phone information, as retrieving mobile numbers for each household member would be cumbersome and invasive. Thus, no sampling frame was available for the survey. To mitigate this, the study uses respondent driven sampling, whereby each survey respondent is asked to recruit three people from their personal network to be surveyed next and is compensated per successful recruit as well as for their own survey.
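The recruitment scheme can be sketched as a simple chain-referral process: each surveyed respondent passes up to three ‘coupons’ to peers, who are surveyed in turn. The network, names and recruitment order below are invented for illustration:

```python
import random
from collections import deque

# Illustrative sketch of respondent-driven sampling; the network and names
# are invented for the example.
def rds_sample(network, seeds, coupons_per_person=3, rng=None):
    """Breadth-first chain referral: each respondent recruits up to three peers."""
    rng = rng or random.Random(0)
    surveyed, queue = set(seeds), deque(seeds)
    order = list(seeds)
    while queue:
        person = queue.popleft()
        peers = [p for p in network.get(person, []) if p not in surveyed]
        for recruit in rng.sample(peers, min(coupons_per_person, len(peers))):
            surveyed.add(recruit)
            queue.append(recruit)
            order.append(recruit)
    return order

# Hypothetical personal networks (who knows whom)
network = {
    "seed1": ["a", "b", "c", "d"],
    "a": ["e", "f"],
    "b": ["e"],
    "e": ["g"],
}
print(rds_sample(network, ["seed1"]))  # survey order, starting from the seed
```

In the study itself, each respondent is also compensated per successful recruit as well as for their own survey, which this sketch does not model.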

The experience of new methods

The use of mobile phones allows respondents to decide when and where they want to be surveyed, providing them with greater autonomy than a household survey. In many ways, it empowers the respondent with much more control over the survey. However, it can also make surveys harder to complete, as the lack of physical presence makes distraction, or simply missing a call, much easier.

Moreover, the element of “out of sight, out of mind” hinders the efficiency with which respondents might recruit their friends, and the additional effort of conducting this recruitment through mobile phones might not help. We created regulations – no calling the same person twice in one day if they picked up, no more than three times in one week, end contact if asked – to try and mitigate overburdening respondents with reminders that might feel harassing.

We are finding that some respondents are reluctant to be interviewed over the phone, preferring face-to-face interviews so that they can see the interviewer and build trust through sight. Despite the easing of lockdown in Ghana on 20 April 2020, the decision was made to maintain strict protocols of distancing between data collectors and respondents. This reflects the reasons behind the easing of lockdown, the fact that our research is non-essential, and our duty to avoid putting ourselves and the respondents at risk.

Responses to the lack of face-to-face cues were mixed. The absence makes it harder to use body language, for example, to gauge the respondent’s experience of the survey. On the other hand, it preserves a greater sense of anonymity for the respondent. It is necessary that data collectors “check in” on respondents during an interview to ensure that the interview questions are not causing undue harm or stress, and that respondents are reminded that they are in control of the interviews. We must acknowledge that the mobile phone becomes part of the “context” of the research, and it is essential to reflect on the impact of this.

Such experiences provide important opportunities for learning. Generally, we are finding that men are not afraid to talk to us over the phone. But we must acknowledge how many more men will be excluded through these methods and consider opportunities for their future inclusion. The greater control respondents have in arranging interviews to suit themselves is an important reminder of the need for patience and respect for respondents’ priorities and the (non-)essentialness of research.

At the time of writing (30 July 2020), 73 respondents have completed interviews, not including 22 seeds. For ongoing data visualisations and sneak peeks, visit the project website at:

covid-19, featured, Notebook

Are novel research projects ethical during a global pandemic?

By Emily-Marie Pacheco and Mustafa Zaimağaoğlu

The global pandemic has inspired a plethora of new research projects in the social sciences; scholars are eager to identify and document the many challenges the COVID-19 situation has introduced into our daily lives, and to explore the ways in which our societies have been able to thrive during these ‘unprecedented times’. Given the wide acknowledgement that life during a global pandemic is often more difficult than it was before, researchers must consider whether asking those in our communities to donate their time and energy to participating in our research is acceptable. Does recruitment for research which seeks to explore the psychological wellbeing and adjustment of those living through uniquely challenging circumstances during COVID-19 really reflect research integrity?

There is no simple answer to whether asking people to share their stories and experiences of COVID-19 is ethical or improper. Many would argue that social research has the potential to contribute many vital insights about life during a global pandemic which are unique to the humanistic lens and approach often reserved for the social sciences; such investigations could propel scholarly dialogue and manifest practically in recommendations for building resilient societies. However, social scientists have a responsibility to protect their participants from any undue harm they may experience as a result of their participation in a study. Thus, while social research may be especially important during a global pandemic, traditional study designs need to adapt to the circumstances of the pandemic and be held to higher ethical expectations by governing bodies and institutions.

Ethical social research during a global pandemic is reflected in research methods which demonstrate an awareness that we are asking more of our participants than ever before. Simple adaptations to existing projects can go a long way in bettering the experience of participants, such as by providing prospective participants additional information on what is expected of them if they choose to participate in a study – whether it be an online survey or an interview. Projects which aim to collect data using qualitative or interpersonal methods should be especially open to adaptation. These studies may be more ethically conducted by offering socially distant options, such as online focus groups or telephone interviews; adopting multimethod approaches and allowing participants the opportunity to contribute to projects in a medium which is most suitable for them may also be an ideal approach, such as by allowing participants the option to participate in online interviews or submitting audio-diaries conducted at their own discretion.

Attention should also be given to the various details of the research design which pertain to participant involvement more specifically. Does that online survey really need to include fifteen scales, and does it really need to ask all those demographic questions? Do online interviews really need to exceed thirty minutes, and is it really necessary to require participants to turn their cameras on (essentially inviting you into their homes)? The ‘standard procedures’ for collecting data should be critically re-evaluated by researchers in consideration of the real-world context of those from whom they wish to collect data, with the aim of upholding their commitment to responsible research practices. Ethics boards should also aid researchers in identifying areas of their research designs which may be adapted to protect participants. This additional critical perspective can highlight participation conditions that may be arduous for participants but which may have been overlooked as part of a traditional research design.

Research during unprecedented times should also aim to provide a benefit to participants who generously donate their time and energy despite experiencing various transitions and changes in their own personal lives. While some researchers may need to devise creative solutions to meet this aim, many research methods in the social sciences have the inherent potential to serve as an activity which benefits those who engage in the process. For example, researchers may opt to collect data through methods which have a documented potential for promoting psychological wellbeing, or which are themselves considered therapeutic mechanisms. Such approaches include methods which ask participants to reflect on their own experiences (e.g., audio-diaries, reflective entries, interviews with photo-elicitation) and those which focus on positive thoughts or emotions (e.g., topics related to hope, resilience, progress). Beyond these recommendations, researchers should also consider whether they really need participants at all. There are many options for conducting valuable research with minimal or no contact with participants, such as observational methods, content analyses, meta-analyses, or secondary analyses. Some may argue that research during a global pandemic should only be conducted with either previously acquired or secondary data; others may argue that primary data collected voluntarily from willing participants is entirely ethical. Either way, respecting participants and their role in our research is always necessary. Beyond the requirements of upholding institutional research integrity expectations, it is our individual responsibility to ensure that we, as researchers, are protecting those who make our work possible by assessing vulnerability, minimizing risk, and enhancing the benefits of participation – to the full extent of our capabilities.