By Lynn Jamieson, University of Edinburgh
It seems appropriate to review the possibilities of secondary analysis of data already gathered by face-to-face techniques, as the current pandemic closes down many such forms of research. Substituting virtual means of data collection for face-to-face ones, such as interviewing via internet telephony, is not the only possible response to barriers against tried-and-tested methods; researchers who are at, or able to return to, the design stage might consider the creative possibilities of drawing together existing archived qualitative data for new research.
Secondary analysis of qualitative data remains a relatively underused research strategy, despite the accumulation of anonymised, quality-assured and well-documented data, generated by peer-reviewed, funded and published studies and carefully curated in official archives. Researchers seem less able to see secondary analysis as ground-breaking and, in the case of qualitative research, heightened sensitivity to the creative connection between researcher and researched raises concerns about ethics and intellectual property. However, in our published work (Davidson, Edwards, Jamieson and Weller, 2019) we counter these claims and point to the ground-breaking opportunities of merging data from several studies into a new data assemblage using a set of steps that iteratively combine breadth and depth. The way of proceeding that we advocate helps the analyst to ask new questions, to make theoretical use of comparison and, in the process, to extend the generalisability of qualitative research.
Our method is the outcome of a project under the umbrella of the National Centre for Research Methods http://bigqlr.ncrm.ac.uk/. We set out to develop materials that would assist other researchers to remain true to the principles of qualitative research while working with what could be called ‘big qualitative data’ or ‘big qual’ for short – a data assemblage that is much larger than the typical volume of a single project and too large to readily tackle solely by conventional qualitative analysis techniques. We have called our method of ‘big qual’ secondary analysis the ‘breadth-and-depth method’.
The four steps in the method are described using an analogy with different stages in an imagined archaeological project. At each step, it may be necessary to return to the starting point or a previous step.
- The researcher’s research questions shape the direction of an enquiry-led overview of archived qualitative research using meta data about the archived data sets. This is equivalent to an archaeologist using photographs taken in an aerial survey to select ground for further scrutiny.
- Computer-aided scrutiny of the breadth of the selected data collections, using text-searching techniques sometimes described as ‘data mining’, although what they do here is more like surface mapping. This is like the archaeologists’ ground-based geophysical survey of the surface of an area to assess what merits closer investigation by digging.
- Analysis of multiple small samples of likely data, equivalent to digging shallow ‘test pits’ to find an area worthy of deeper excavation.
- In-depth analysis of the selected sample, using techniques and processes drawn from the repertoire familiar to qualitative researchers. This is the equivalent of archaeological deep excavation.
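To give a concrete sense of the ‘surface mapping’ in the second step, the sketch below counts keyword occurrences across a small collection of transcripts and ranks the documents, flagging which might merit ‘test pit’ sampling. It is a minimal illustration only: the mini-corpus, the document names and the keyword list are invented here, and real ‘big qual’ work would use far larger assemblages and more sophisticated text-mining tools.

```python
import re
from collections import Counter

def surface_map(documents, keywords):
    """Crude 'surface mapping': count keyword hits per document and
    rank documents so the richest-looking material surfaces first."""
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, keywords)) + r")\b",
        re.IGNORECASE,
    )
    scores = {}
    for doc_id, text in documents.items():
        hits = Counter(m.group(0).lower() for m in pattern.finditer(text))
        scores[doc_id] = sum(hits.values())
    # Highest keyword density first: candidates for closer scrutiny
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Invented mini-corpus for illustration only (not archive data)
corpus = {
    "interview_01": "He talked about care for his mother and feelings of closeness.",
    "interview_02": "The weather was discussed at length.",
    "interview_03": "Care and intimacy came up repeatedly; she described caring daily.",
}
ranking = surface_map(corpus, ["care", "caring", "intimacy", "closeness"])
# interview_03 ranks first (3 hits), then interview_01 (2), then interview_02 (0)
```

The point of such a pass is not analysis in itself but triage across breadth: it narrows a large assemblage to the subsets worth the shallow ‘test pit’ sampling of the third step.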
Our own demonstration project worked with the Timescapes archive (https://timescapes-archive.leeds.ac.uk/). Because we were interested in possible convergence by gender in the language and practice of care and intimacy over time, we re-assembled data from across four projects into age cohorts of men and women. This new data set is now available for research and teaching purposes.