Can you tell me about your work with under-threes to support early reading development?
Can you tell me a little bit more about ….?
How would you define early reading?
Could you describe the environment you provide for babies, in particular to support their early reading development, please?
How often do the children under three see you engaged in reading activities?
After the interviews I conducted two focus group workshops with eleven volunteer participants. Focus groups are defined by Bryman (2012) as a type of group interview “used to develop an understanding about why people think the way they do” about a particular subject, which also provides the opportunity for participants to “bring forward ideas and opinions not foreseen by the interviewer” (p. 501). I decided to conduct focus group workshops as a follow-on from the survey and interviews to respond to the research questions. I also sought to continue with the approach of keeping the voice and viewpoint of the participant at the heart of the data, given that Bertram et al. (2016) suggest that it is imperative to “aim to distribute power between all participants as far as possible and in a way that allows all involved to actively have a voice in the research process and contribute equitably and appropriately to the research process” (p. v).
I offered the resources of flip chart paper, pens, paper, tablets and a laptop during the workshops. My intention in each focus group workshop was to set the scene, guided by the research questions, to agree the time constraints, and then not to be involved at all until the end of the workshop. I felt that this would give greater scope for the participants to interact with each other, yielding a collective rather than an individual view, as discussed by Hydén and Bülow (2003), where the “participants’ rather than the researcher’s agenda can predominate” (Cohen et al., 2011, p. 288), and so gaining insights that may not otherwise have arisen from either the survey or the interviews. The workshop style of recording also meant that the data was presented in the participants’ own way: their viewpoints and opinions were recorded with no requirement for any transcription by the researcher.
This also became an advantage: once eleven interested participants who had not previously been involved in the interviews had confirmed their availability for the suggested workshop dates and times, I scheduled two focus group workshops. I also wished to take the opportunity, where possible, to return to the survey data with these participants, to ensure that the research questions were answered in as much detail as possible.
The next section of this chapter focuses upon the sequence and tools of the data analysis process.
3.8. Data Analysis
Data analysis is the central step in qualitative research. Whatever the data are, it is their analysis that, in a decisive way, forms the outcomes of the research.
(Flick, 2014, p. 3)
Given this centrality of the analysis, and the complexity of engaging in both quantitative and qualitative data analysis, the process of analysing the breadth of collected data (Corbin and Strauss, 2008) was both continuous and reflexive. May and Perry (2014) advise that “reflexivity is not a method, but a way of thinking, or a critical ethos” (p. 111) that supports the interpretation and representation of data as a continuous characteristic of good research practice. To illustrate, the practitioners in this study reflected upon their practice and adapted their provision almost immediately, which was supplementary to the research agenda: they experienced “reflexive spaces” as “decision-makers to consider the challenges and to rethink current practice and preconceptions”, developing “transformative outcomes”, as discussed by May and Perry (2014, p. 120) and Beck and Beck-Gernsheim (2002). Similarly, the concept of research as “intellectual and moral exchange” argued by Lassiter (2005) developed as practitioners shared their accounts.
Consequently, Horton and Mertz (2002) argue that researchers should go “beyond the limits imposed [by others] in order to develop a more in-depth way of understanding and reporting the experience” (p. 150), which I believe the overall research design and variety of methods chosen sustained. Subsequently, in agreement with Newsome (2016), the overall process of analysis was essentially an examination of the data in order to better understand it. According to Willig (2014) it is “interpretation that is the challenge for researchers, as without interpretation it is difficult to make sense of the data” (p. 136). As the intention was to find out about the practitioners’ experiences, their views and beliefs about early reading, and their practice and provision for under-threes, it was necessary to make the data meaningful: to make connections and to ask questions. During the analysis stage of the research journey, I displayed some key questions that were visible to me at all times, to support the analysis and interpretation of the data:
What are the practitioners doing? What does this mean?
What are they trying to accomplish? How do they do this?
What assumptions are they making? What assumptions am I making?
What is this saying to me? What do I see going on here?
What is this an example of? What is happening here?
What is trying to be conveyed? What is missing?
What did I learn from these notes?
Why did I include them?
What strikes me?
(Creswell, 2007, p. 153)
These questions were based on Creswell’s (2007) suggestions for data interpretation. Early analysis after the initial survey presented a challenge, with some key unexplained gaps in the survey data, which were later addressed with further data from the interviews, focus group workshops and Zines. At various points in this research project I felt overwhelmed by the volume of data generated, and drawing out key themes and issues and constructing the analysis was an arduous task.
I grouped and examined each data set as soon as I received it. I utilised “computer assisted qualitative data analysis (CAQDAS)” (Welsh, 2002, p. 2), as Welsh (2002) proposed that “this serves to facilitate an accurate and transparent data analysis process, whilst also providing a quick and simple way of counting who said what and when, which in turn may provide a reliable, general picture of the data” (p. 3). The ‘search’ tool in NVivo offered the possibility to cross-examine the data, which I felt “improved the rigour of the analysis process by validating (or not) some of my own impressions of the data” (Welsh, 2002, p. 3). In contrast, Seidel (1991) points out that using software packages may guide researchers in one particular direction. Alternatively, Hilal and Alabri (2013) suggest that “NVivo, as a qualitative data analysis computer software package, has many advantages and may significantly improve the overall quality of research” (p. 182). I decided to embrace the “value of both manual and electronic tools in the qualitative data analysis” (Welsh, 2002, p. 3) process and aimed not to rely on one method in particular, making use of the advantages of each. The analysis used for each of the methods will be discussed in the next few sections of this chapter.
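As a loose illustration of the kind of “counting who said what and when” that Welsh (2002) attributes to CAQDAS tools, a few lines of Python can tally how often each code occurs overall and per participant. The participant labels and codes below are invented for illustration only; they are not drawn from the study data.

```python
from collections import Counter

# Hypothetical coded extracts as (participant, code) pairs, invented for
# illustration; in practice these would come from NVivo or manual coding.
coded_extracts = [
    ("P1", "shared reading"), ("P1", "environment"),
    ("P2", "shared reading"), ("P2", "shared reading"),
    ("P3", "environment"), ("P3", "song and rhyme"),
]

# Tally each code across all participants, and each (participant, code) pair.
code_counts = Counter(code for _, code in coded_extracts)
per_participant = Counter(coded_extracts)

print(code_counts["shared reading"])              # total mentions of the code
print(per_participant[("P2", "shared reading")])  # mentions by participant P2
```

A spreadsheet or NVivo matrix query does the same job; the sketch simply makes explicit what such a count consists of.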
3.8.1. Survey questionnaire (Appendix O)
The survey data, gathered from a cohort of EYTS trainees in September 2015, was entered into a Microsoft Excel spreadsheet to create tables and graphs as required. The initial details of the survey respondents (ages, employment setting, degree status, etc.) were analysed using the SPSS Statistics software package; I was already familiar with this package, and that familiarity, at this point in the analysis, was a motivating strength. I did not, however, use statistical analysis for this study.
The qualitative data from the survey was then entered into the NVivo Pro 11 application to develop and code the emerging themes. I also manually coded and themed the data, as a thematic analysis, using post-it notes and images (Appendix E) to gather the experiences noted in the survey, and matched these to the NVivo codes before interpreting the final data.
Clarke and Braun (2013) describe the process of thematic analysis as a technique to identify and analyse patterns within qualitative data. Similarly, Schreier (2012) defines qualitative content analysis (QCA) as a method for “systematically describing the meaning of qualitative material” by classifying it as “instances of the categories of a coding frame” (p. 5). Given that Schreier (2012) described QCA as systematic, flexible and a way of reducing the data, this was a strong rationale for this approach.
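The idea of a coding frame can be sketched in a few lines of Python: each category is defined by a set of indicator keywords, and a free-text response is classified as an instance of every category it matches. This is only a minimal, mechanical analogue of QCA; the categories, keywords and example response here are hypothetical, not taken from the study’s actual coding frame (Figure 3.5).

```python
# Hypothetical QCA-style coding frame: category -> indicator keywords.
coding_frame = {
    "shared reading": ["story", "book", "read"],
    "environment": ["cosy", "corner", "area", "display"],
    "song and rhyme": ["song", "rhyme", "sing"],
}

def classify(response: str) -> list[str]:
    """Return every coding-frame category whose keywords appear in the response."""
    text = response.lower()
    return [category for category, keywords in coding_frame.items()
            if any(word in text for word in keywords)]

print(classify("We share a book in the cosy corner every morning."))
# -> ['shared reading', 'environment']
```

In the study itself this classification was, of course, interpretive rather than keyword-driven; the sketch shows only the structure of a coding frame, namely categories applied systematically to units of material.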
Figure 3.5 presents an extract from the first QCA coding frame. These initial thoughts, the scaffold questions from Creswell (2007) and Schreier’s (2012) QCA framework led to the themes that remained consistent throughout the data analysis.
A further coding frame sample is presented in Appendix N.
Figure 3.5: Initial coding frame ‘Survey’
Figure 3.6 illustrates one aspect of the systematic process of ‘familiarisation’ and ‘reviewing the themes’ undertaken after analysing all the data, as part of the thematic analysis process: