
Seeking alternatives: A reflection on conducting online interviews with disabled young people during the COVID-19 pandemic

Angharad Butler-Rees and Stella Chatzitheochari

While scholars have increasingly documented and reflected on their approaches to conducting research during the pandemic, little is still known about the impact of social distancing measures on qualitative research with disabled young people.

Our new paper provides a methodological reflection on undertaking qualitative research with disabled young people as part of the Educational Pathways and Work Outcomes longitudinal study. The study started in March 2021, during the third national lockdown in England. Because of the social distancing measures in place at the time, we had to revise our original plan to conduct face-to-face interviews with disabled young people and move to online interviews instead. We conducted a total of 35 online interviews with autistic, dyslexic, and physically disabled young people aged 15-16.

Ensuring Accessibility

The internet has long been seen as a potentially empowering platform for disabled people, connecting isolated individuals and ensuring access to social, civic and community life. Our focus on young people was particularly useful, as this population tends to be very comfortable with technology. The extensive periods of enforced home learning during the COVID-19 pandemic had further increased young people’s familiarity with online communication platforms, making the idea of online interviews a far less daunting prospect. However, it is worth noting that online tools can also present a number of accessibility barriers, e.g., poor text layout, low colour contrast and limited keyboard functionality. These were important factors to consider when designing online interviews for our project.

Accessibility has to be incorporated into every part of the research process when working with disabled people. To put participants at ease prior to their interview, we sent them participant information packs as well as a short video of the interviewer introducing themselves and the study. Familiarity with the researcher was greatly valued by autistic young people and helped them feel more at ease. Previous literature has suggested that autistic young people may be disconcerted or unresponsive in encounters with strangers, so building a degree of initial trust and rapport was of utmost importance for successful interviewing. In line with this, we also arranged online pre-interview meetings with participants and their parents to build rapport.

Pre-interview meetings also helped us ensure that any accessibility requirements were put in place. We asked participants to choose their preferred communication platform. Several participants opted to use assistive software during their interview, e.g., enabling captioning, magnification or modified volume. Other adjustments included allowing participants to sit off screen or to keep their cameras off while the interviewer remained visible; this made interviews far less intrusive and anxiety-provoking and was greatly valued by autistic participants. Further adjustments included the presence of a guardian who could provide practical assistance or emotional support, simplification of interview questions, and collection of data over several interviews rather than one. Overall, we felt that these adjustments made interviews considerably more accessible for disabled young people, ultimately giving voice to a population who may not always be amenable to conventional face-to-face interviewing methods, which can be experienced as more restricting and demanding.

Challenges during Interviewing

While some young people were very comfortable in engaging with the interview process and narrating their lived experiences, others were far more hesitant, requiring regular prompting and reassurance. The online medium made this slightly more challenging for the interviewer, with prompting and encouragement occasionally leading to cross-talking. It was also notably more difficult to interpret emotion and body language online, while the loss of internet connection at times affected the flow of the interview.

Another challenge was the difficulty in maintaining participants’ attention. We sometimes felt that the lack of physical presence meant that participants were far more easily distracted by being in their homes, e.g., checking their mobiles or playing with the family pet. However, we also recognise that this may be interpreted differently: indeed, it may be indicative of a greater share of power afforded to disabled young people in online settings. Overall, we did not feel that such distractions affected the quality of our data collection, and we think that the physical distance may have aided disclosure of personal experiences. A feedback survey confirmed that participants enjoyed the online medium, with the vast majority requesting online interviews for future waves of data collection.

A final note on accessibility

Our reflections may not speak to studies seeking to interview disabled young people with different accessibility needs such as speech or communication difficulties (e.g., stammering). These participants may find online communication more difficult due to possible misunderstanding and difficulties in lip reading and interpretation. Similarly, it is worth noting that interviews can be experienced as particularly exhausting by some disabled young people, whether face to face or online, preventing them from taking part. Researchers may consider offering alternatives such as email interviews alongside conventional online interviews.

Looking ahead

Our overall experience with online interviews was very positive. We were privileged to be able to access disabled young people’s lived experiences during an unprecedented period of global disruption. Notwithstanding the challenges mentioned above, we feel that online interviewing is a valuable tool that should not be viewed as second best to face-to-face conversational methods. We therefore encourage researchers to explore the use of online methods, especially with regards to young and disabled populations.

Read the full article here: Giving a Socially Distanced Voice to Disabled Young People: Insights from the Educational Pathways and Work Outcomes Longitudinal Study


An image of China in Africa through the lens of mixed-methods

by Jarosław Jura & Kaja Kałużyńska

The increasing number of digital and digitized content sources (online versions of traditional media, news portals, websites on myriad topics and, of course, social media) has started to influence empirical social research. Huge amounts of easily accessible, almost ready-to-analyze data seem to be a dream come true for social researchers, especially those who prefer to work with unobtrusively collected data.

Such large datasets call for mixed-methods analysis, to avoid wasting their potential by either sampling or focusing only on quantitatively obtained information. Here, another kind of tool makes the life of a contemporary researcher much more comfortable: software. Of course, in an ideal situation one could just ‘feed’ all the data to AI and wait for the results, but there are many limitations to such an approach, such as its usability in specific cases, its accessibility and, of course, the researcher’s nightmare: a limited project budget. Moreover, in the case of smaller datasets consisting of heterogeneous data, the results of such analysis might prove unsatisfactory.

Our research project, an exploratory study of the image of China and the Chinese in Zambia and Angola, also included an analysis of textual media content, namely news articles published in these countries that mentioned China or the Chinese. We obtained a mid-sized dataset of 2,477 articles; the material was very heterogeneous because of the wide scope of topics covered by the texts and the fact that we analysed content from both English- and Portuguese-language media.

In the course of analysis, we realized that a new method would be needed to obtain the best possible results on the basis of the collected data. After a series of trial-and-error approaches, we managed to develop MIHA – Mixed Integrative Heuristic Approach. The application of this method allowed us to create an exhaustive, contextual and precise keyword dictionary for automated classification of text units as well as a set of sentiment indexes.

We have to admit that even though we did our best to utilize all the possibilities of the software (Provalis QDA Miner and WordStat), the dictionary creation process was a time-consuming task, since it included reviewing each word with a frequency of 10 or higher in the whole database.

Our classification, similar to the initial conceptualization of theoretical categories within the grounded theory approach, aimed to explore the most frequent contexts in which China was depicted in African e-media. Each examined word was either added to an exclusion list (words irrelevant from the point of view of the research) or assigned to a chosen – sometimes newly created – category, together with other words sharing the same root and all their synonyms.
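As a rough illustration of the workflow described above – word frequencies computed over the whole corpus, manual review of every word occurring at least 10 times, and assignment of retained words to categories – the following Python sketch may help. It is only a hypothetical reconstruction: the study itself used Provalis QDA Miner and WordStat rather than custom code, and the corpus, exclusion list, categories and keywords shown here are invented.

```python
from collections import Counter
import re

# Hypothetical corpus; in the study the 2,477 articles were handled in
# QDA Miner/WordStat, not in Python.
articles = [
    "China to invest in a new railway linking the copper mines to the coast.",
    "Chinese investment in local trade praised by officials.",
]

# Step 1: word frequencies over the whole corpus.
tokens = [w for text in articles for w in re.findall(r"[a-z]+", text.lower())]
frequencies = Counter(tokens)

# Step 2: every word with frequency >= 10 goes to manual review, where it is
# either excluded or assigned (with same-root words and synonyms) to a category.
candidates = sorted(w for w, n in frequencies.items() if n >= 10)

# Hypothetical outcome of that manual review.
exclusion_list = {"said", "also", "year"}
dictionary = {
    "ECONOMY": {"invest", "investment", "trade", "loan"},
    "INFRASTRUCTURE": {"railway", "road", "mine", "mines", "construction"},
}

def classify(text):
    """Return the categories whose keywords occur in a text unit."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {cat for cat, keywords in dictionary.items() if words & keywords}

print(classify(articles[0]))  # -> {'ECONOMY', 'INFRASTRUCTURE'} (order may vary)
```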

In the next step, we examined the already categorized keywords in their context to refine the categorization results, mainly by removing keywords that appeared in the texts in unexpected contexts. Most of the categories were re-coded, and some of the keywords were re-assigned in later steps. This heuristic approach resulted in a set of categories, including ‘emotional’ ones, positive and negative, that were later used to design sentiment indexes. Our indexes are based on a comparison of the results of quantitative and qualitative analysis and coding. They can be used as a tool for improving dictionary-based sentiment analysis, by comparing the results of sentiment analysis based on automated coding with manually coded samples.
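The exact construction of the sentiment indexes is described in the paper; purely as a hedged sketch of the underlying idea – scoring text units from positive and negative category hits and then checking automated scores against a manually coded sample – something like the following could be used. The formula and the numbers are illustrative assumptions, not the authors' actual index.

```python
def sentiment_index(positive_hits, negative_hits):
    """Illustrative index in [-1, 1] from dictionary category hits
    (not the exact formula used in the paper)."""
    total = positive_hits + negative_hits
    return 0.0 if total == 0 else (positive_hits - negative_hits) / total

# Validation idea: compare automated scores with a manually coded sample.
automated = [0.6, -0.2, 0.1, 0.8]   # dictionary-based scores for four articles
manual = [0.5, 0.0, 0.2, 0.7]       # human-coded scores for the same articles
mean_abs_gap = sum(abs(a - m) for a, m in zip(automated, manual)) / len(manual)
print(f"mean absolute gap: {mean_abs_gap:.2f}")  # large gap -> refine the dictionary
```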

We believe that MIHA constitutes a conceptual approach applicable by researchers from various backgrounds in projects focused on investigating the general image presented in textual content, especially in the case of mid-sized, heterogeneous datasets. We do not overlook the fact that automated machine learning coding methods will soon constitute the main approach to text analysis. However, since such procedures are still imperfect and context-sensitive, we presume that MIHA, consisting of a contextualized dictionary, manual coding of chosen parts of the database and index measurements, could be useful for analysing datasets related to less common study areas (social groups, languages, geographical areas, subcultures, etc.), in which machine learning-based research would suffer from low construct validity.

Both the dictionary-creation process and the indexes are described in detail in our paper.

Read the full article in the IJSRM here.


Using objects to help facilitate qualitative interviews

by Signe Ravn

Doing empirical research on imagined futures is a methodological challenge. As scholars have argued, generating rich insights into how such futures might look can be difficult, as participants may produce somewhat generic or stereotypical accounts of what the future might hold, or even refuse to engage in such tasks (which of course provides other insights). Over the past decade, these challenges have led many qualitative researchers to explore different forms of creative, arts-based and/or participatory methods to approach the topic in new ways. In some cases, these approaches have been productive; in other cases they have led to new questions about how to then interpret the findings. And sometimes they don’t really generate more concrete insights after all.

In my longitudinal research on the everyday lives and imagined futures of young women with interrupted formal schooling, I also used various creative methods to break away from the traditional interview format and to seek to approach the ways in which participants imagined their futures from multiple different perspectives. This approach was inspired by Jennifer Mason’s work on facet methodology. In my recent paper for the International Journal of Social Research Methodology I explore one creative method that proved particularly fruitful, that is, an object-based method. In brief, this method was deployed in the third interview with my participants (after one year) and involved asking participants to bring ‘one thing (like a gift, some clothing, a thing you once bought, or something else) that reminds you of your past and a thing that you relate to your future’. Only one participant asked for a clarification of what these items could be, while the remainder were happy to do this task, and some even said right away that they knew exactly what to bring. On the day of the interview, some participants did say that deciding on a ‘future’ thing had been difficult, but nevertheless they all had chosen something. Towards the end of the interview I asked about their ‘things’ and we spoke about each object in turn, exploring why they had brought a particular object, how it related to their past/future, and whether and how this was something they used in their day-to-day lives.

Reflecting on the interviews, I wondered what made this particular exercise helpful for exploring and speaking about ‘futures’. Other scholars have successfully drawn on objects to study memories, but none have turned their attention to the potential of objects for studying futures. In the paper I argue that what makes the object method productive is to do with materiality. More specifically, I argue that what makes this method unique is the combination of ‘materiality as method’ and the ‘materiality of the method’, and that this double materiality at play is what produces elaborate future narratives. In other words, via the materiality of the objects, specific imagined futures become ‘within reach’ for participants, with the object serving as an anchor for these future narratives. The method suggests a temporal complexity as well: the future objects come to represent futures that the participants have already taken steps towards; they are ‘futures-already-in-the-making’. Drawing on José Esteban Muñoz, we can consider them ‘futures in the present’, that is, futures that already exist, perhaps just in glimpses, in the present.

To make this argument I draw on narrative research, material culture studies and qualitative research methodology. One key source of inspiration was Liz Moor and Emma Uprichard’s work on material approaches to empirical research, in which the authors argue for paying greater attention to the ‘latent messages’ of methods and data, for instance in the form of sensory and emotional responses but also, as I point out in the paper, the messages conveyed by a dirty and bent P plate and a carefully crafted name tag.

Due to limitations of space, the published paper focuses on the ‘future’ objects and the future narratives generated through these, and only briefly mentions the ‘past’ object that participants also brought to the interview. This reflects the paper’s ambition to highlight the potential of using object methods, and a focus on materiality more generally, in research on futures. However, for a full analysis of the insights gained through this method, both in terms of the settled and unsettled future narratives and the normative dimensions shaping which objects became ‘proper’ objects for the interview situation, both ‘past’ and ‘future’ objects should be analysed together.

Read the full article in the IJSRM here.


Measuring Measures during a Pandemic

by Paul Romanowich & Qian Chen

The spring 2020 semester started like many others before it – frantically preparing class materials, finalizing research proposals, and trying to squeeze in one last getaway trip. However, by mid-March 2020 that normalcy had fallen by the wayside. Like it or not, classes were now all remote, disrupting both data collection and plans for any meaningful travel during the summer. But what about the data that had already been collected? Was it any good, considering what our participants were experiencing? Not surprisingly, little research has focused on the impact major environmental disruptions have on data reliability, given how rare and unpredictable those disruptions are (have you ever experienced a pandemic before 2020?!?). However, we were fortunate to be collecting repeated-measures impulsivity data throughout the spring 2020 semester. This research note therefore focuses on whether data obtained in the immediate aftermath of the beginning of the COVID-19 pandemic are reliable, from a test-retest perspective.

Our original research question centered on whether decreasing one aspect of impulsivity, delay discounting, would have a positive effect on test scores for Electrical and Computer Engineering students. Like many personality traits, delay discounting rates have been shown to be relatively stable in test-retest data (i.e., trait-like). However, there is also a growing literature showing that episodic future thinking (EFT) can decrease delay discounting rates and, as a result, decrease important impulse-related health behaviors (e.g., smoking, alcohol consumption, obesity). Thus, delay discounting also shows state-like properties. We hypothesized that decreasing delay discounting rates via EFT would also decrease impulse-related academic behaviors (e.g., procrastination), resulting in better quiz and test scores. To accurately measure temporal aspects of delay discounting, EFT, and class performance, students completed up to 8 short (27-item) delay discounting tasks between January and May 2020. Multiple EFT trainings significantly decreased delay discounting rates relative to a control group (standardized episodic thinking – SET). However, the impact of EFT on academic performance was more modest.
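For readers unfamiliar with how a discounting rate is derived from such a task: each item asks the participant to choose between a smaller immediate amount and a larger delayed amount, and the pattern of choices is commonly summarised by a hyperbolic discounting rate k, where the subjective value of a delayed reward is V = A / (1 + kD). The sketch below shows one simple way to recover k from binary choices by grid search; it is an illustration under those assumptions, not the scoring procedure the authors used, and the item values are made up.

```python
import numpy as np

def estimate_k(choices, items, candidate_ks=np.logspace(-4, 0, 200)):
    """Grid-search the hyperbolic rate k (V = A / (1 + k * D)) that best
    reproduces a participant's choices.
    choices: 1 = chose delayed reward, 0 = chose immediate reward
    items:   (immediate_amount, delayed_amount, delay_in_days) tuples"""
    def agreement(k):
        predicted = [1 if delayed / (1 + k * d) > immediate else 0
                     for immediate, delayed, d in items]
        return np.mean([p == c for p, c in zip(predicted, choices)])
    return max(candidate_ks, key=agreement)

# Made-up items and one participant's answers, for illustration only.
items = [(54, 55, 117), (31, 85, 7), (27, 50, 21)]
choices = [0, 1, 1]
print(estimate_k(choices, items))  # larger k = steeper discounting = more impulsive
```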

Although the data did not support our original hypothesis, we still had repeated-measures delay discounting data throughout the semester, including data from March 2020, when classes were switched from in-person to fully remote. This repeated-measures design set up a series of Pearson correlations between delay discounting rates at two points in time (e.g., rates at the beginning of the semester in January 2020 and at the end of the semester in May 2020). Importantly, students in the EFT group completed a delay discounting task on March 22, 2020 – 11 days after the official announcement that all classes would be fully remote for the remainder of the semester. In terms of test-retest reliability, the data collected on March 22, 2020 stood out as not like the others. Whereas delay discounting task test-retest reliability was high throughout the semester (supporting previous studies), most correlations using the March 22, 2020 data were nonsignificant, suggesting poor test-retest reliability. Thus, it appeared that the COVID-19 pandemic had significantly, but only temporarily, decreased test-retest reliability for delay discounting rates.
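As a concrete, purely hypothetical illustration of the test-retest computation described above (the actual study data are in the paper), the correlation between two measurement occasions can be computed as follows; discounting rates are commonly log-transformed before correlating, which this sketch assumes.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical log10(k) values for the same eight students at two time points;
# the real values come from the study's repeated-measures data.
k_january = np.array([-2.1, -1.4, -2.8, -1.9, -2.3, -1.1, -2.6, -1.7])
k_may = np.array([-2.0, -1.5, -2.7, -2.1, -2.2, -1.3, -2.5, -1.6])

r, p = pearsonr(k_january, k_may)
print(f"test-retest r = {r:.2f}, p = {p:.3f}")  # high r -> stable, trait-like rates
```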

The EFT data also afforded us a way to look more qualitatively at changes in behavior before and after March 22, 2020. As part of the EFT trainings, students came up with three plausible positive events that could happen in the next month, 6 months, and one year. We coded these events as either having COVID-19 content or not for all students. Predictably, events containing COVID-19 content did not appear until March 22, 2020. However, this event content changed as the semester progressed. On March 22, 2020, most of the content (6 of 7 events) was for the 1-month event. By May 7, 2020, only two students included COVID-19 content, and this was for the 6-month event. Thus, students were more concerned with COVID-19, and saw it as a temporally closer disturbance, in March 2020 than in May 2020. Perhaps this focus on COVID-19 in the near future disrupted delay discounting rates. We can’t be sure from these data, but the idea is intriguing.

Although this research note was not a rigorously controlled experiment designed to explicitly examine test-retest reliability for delay discounting, there are still some important points to take from the obtained data. First, it does appear that large environmental disruptions in participants’ lives can significantly change test-retest reliability on standardized measures. Social and behavioral science researchers should be aware of this when interpreting their data. It may also be worthwhile to include a brief measure of significant life events that may be occurring concurrently with participation in the task. Second, the change in test-retest reliability we observed was only temporary. This is actually good news for researchers, in that even significant environmental disruptions seem to have a minimal impact on test-retest reliability one month later. Perhaps we are more resilient as a species than we typically give ourselves credit for. Lastly, we have no doubt that other social and behavioral science researchers collected similar repeated-measures data throughout the spring 2020 semester. One way to be more confident that our results are not an outlier is through replication. Although we can’t (and don’t want to!) replay the beginning of the COVID-19 pandemic, researchers around the world could profitably begin to combine their data for specific well-validated measures to examine how this large environmental disruption may have systematically affected their results. The same could be done for other large environmental events, such as earthquakes or wars. The end result would be a better understanding of how these environmental disruptions impact the measurement tools on which we base many of our theories and treatments.

Read the full article in IJSRM here.


Bringing “context” to the methodological forefront

By Ana Manzano & Joanne Greenhalgh

In his latest methodological writings, Prof Ray Pawson (2020) noted that the Covid-19 pandemic:

 “covers everything from micro-biology to macro-economics and all individual and institutional layers in between”.

The current global pandemic could be considered the mother of all contexts. Many will conclude that we cannot reduce the impact of Covid-19 on our lives to a limited number of contextual factors such as disease, bereavement, home working, school closures, travel bans, etc. Covid-19 was, and continues to be, a force that impacts everything through a complex combination of omnipresent uncertainty, fears, risk management and materiality (masks, PCR tests, and hydroalcoholic gel). Our paper, Understanding ‘context’ in realist evaluation and synthesis (Greenhalgh & Manzano, 2021), just published in the International Journal of Social Research Methodology, reflects precisely on how methodologically complex context is, and reviews how context is conceptualized and utilized in current realist evaluation and synthesis investigations.

Perhaps the most useful of all the quotes mentioned in our paper is one from the French sociologist Raymond Boudon (2014, p. 43), who reminds researchers that in the social sciences it is impossible to talk about context in general terms, since context is always defined specifically:

The question as to “What is context?” has actually no general answer, but answers specifically adapted to the challenging macroscopic puzzles the sociologist wants to disentangle.

Context is somehow everything and, in some ways, has become “nothing”, with many methodological writings on causality focusing on the more attractive concept of “mechanisms”. Our paper projects context from its eternal background position in peer-reviewed papers, trials and research results to the foreground. Although context is a key concept in developing realist causal explanations, its conceptualisation has received comparatively little attention (with notable exceptions, e.g. Coldwell (2019)). We conducted a review to explore how context is conceptualised within realist reviews and evaluations published during 2018. We purposively selected 40 studies to examine: How is context defined? And how is context operationalised in the findings? We identified two key ‘narratives’ in the way context was conceptualised and mobilised to produce causal explanations: 1) context as observable features (space, place, people, things) that triggered or blocked the intervention, assuming that context operates at one moment in time and sets in motion a chain reaction of events; 2) context as the relational and dynamic features that shaped the mechanisms through which the intervention works, assuming that context operates in a dynamic, emergent way over time at multiple different levels of the social system.

We acknowledge that the use of context in realist research is unlikely to be reduced to these two forms of usage only. However, we argue that these two narratives capture important distinctions that have different implications for the design, goals and impact of realist reviews and evaluations. Seeing context as a ‘thing’, that is, as a ‘feature that triggers’, suggests that one can identify and then reproduce these contextual features in order to optimise the implementation of the intervention as intended. This reinforces a view that it is possible to isolate ‘ideal’ contexts that determine the success of an intervention.

By contrast, seeing context as a dynamic interaction between contexts and mechanisms implies that contexts are infinite, embedded and uncontrollable. Knowledge gained about how contexts and mechanisms interact can be used to understand how interventions might be targeted at broadly similar contextual conditions or adapted to fit different contextual conditions. This latter approach eschews the idea that there are ‘optimal’ contextual conditions and argues instead that successful implementation requires a process of matching and adapting interventions to different, evolving circumstances.

Our paper will disappoint those who seek a practical definition to help with the ever-impossible task of distinguishing mechanisms from contexts in causal explanations. We have some sympathy with Dixon-Woods’ (2014, p. 98) claim about distinguishing mechanisms from contexts in realist studies:

 I am inclined towards the view that discussions of what constitutes a mechanism rapidly become unproductive (and tedious), and that it is often impossible, close up, to distinguish mechanism from context.

Since much methodological thinking focuses on mechanisms, and funders are (typically, though not exclusively) interested in outcomes, contexts are, if anything, rather “annoying”. Context, with its symbiotic relationship with mechanisms, confuses and distracts researchers in their all-important search for the ‘holy grail’ of mechanisms. Our paper demonstrates that the answer to that holy grail pursuit lies precisely in that symbiotic relationship, in which contexts are relational and dynamic features that shape the mechanisms through which interventions work. Context operates in a dynamic, emergent way over time at multiple different levels of social systems.

Finally, we are mindful that, in Pawson’s own words when we discussed this paper with him, ‘context’ can mean ‘absolutelybloodyeverything’ and so it is very difficult to perceive that its usage in realist research is reduced to the two forms identified in our review.

Read the full IJSRM article here.

References

Boudon, R. (2014). What is context? KZfSS Kölner Zeitschrift für Soziologie und Sozialpsychologie, 66(1), 17-45.

Coldwell, M. (2019). Reconsidering context: Six underlying features of context to improve learning from evaluation. Evaluation, 25(1), 99-117.

Dixon-Woods, M. (2014). The problem of context in quality improvement. In Perspectives on context (pp. 87-101). London: Health Foundation.

Greenhalgh, J., & Manzano, A. (2021). Understanding ‘context’ in realist evaluation and synthesis. International Journal of Social Research Methodology. https://doi.org/10.1080/13645579.2021.1918484

Pawson, R. (2020). The Coronavirus response: A realistic agenda for evaluation. RealismLeeds Webinar, July 2020. https://realism.leeds.ac.uk/realismleeds-webinar-series/