Construct | Measurement scale | Items
Partnership (TUIP) | Partnership measurement scale made up of 8 items |
Digital Technology Acceptance (DTA) | Digital technology acceptance measurement scale encompassing 6 items | Ease (accomplishing tasks easily); Performance improvement (for students); Productivity increase; Effectiveness enhancement (in online teaching); Engineering training improvement; Usefulness (of CAD/CAM)
Perceived ease of use (PEU) | Perceived ease of use measurement scale made up of 7 items | Ease (of training with AI, robotics, VR, etc.); Accessibility (to AI, robotics, VR, etc.); Clear interaction (with digital technology); Clarity (in using digital tools for teaching and learning); Flexibility (of digital tools for training); Ease (of becoming skilled at using CAD/CAM); Ease of use (of CAD/CAM technology); Training requirement (for AI, CNC, robotics, VR, etc.)
User acceptance for Training (UAT) | User acceptance for Training measurement scale comprising items | Ease (of using digital technologies); Frequency (of using digital technologies); Versatility (of using digital technologies); Variety (of training purposes)
Quality Assurance Practices (QAP) | Quality assurance practices measurement scale made up of 7 items | Well-documented (quality assurance processes); Well-implemented (quality assurance and enhancement processes); Satisfaction (with work at the training institution); Feedback (on performance); Motivation; Encouragement; Communication (of procedures and processes to staff); Transparency (in decision-making process)
Source: Author’s compilation (2023)
Data analysis
Survey Analytical Method
The data collected from the lecturers' and students' surveys will be analysed using the IBM SPSS Statistics software package, which supports advanced statistical analysis and a range of tasks well suited to survey research. Although a variety of statistical methods could be used to address the research questions and objectives, the rationale for choosing partial least squares-based structural equation modelling (PLS-SEM) for this study is as follows: PLS-SEM can handle multifaceted relationships among latent variables, where some constructs may be hypothetical or unobserved (Mehmetoglu & Venturini, 2021). In addition, PLS-SEM can estimate all the coefficients in the model simultaneously and can account for multicollinearity (Ramli et al., 2018), making it possible to obtain valid coefficients (Cogliano et al., 2022).
The model outcome will be assessed, including construct reliability and convergent and discriminant validity. The reliability and validity of all the constructs will be demonstrated, and the latent variables in the model will be evaluated for their reliability and validity. The statistical analysis therefore includes item analyses to ensure that the items appropriately represent each latent variable and to eliminate items that do not reflect the constructs. Descriptive analysis techniques in the form of counts, frequencies, and percentages will be used to determine the important skills.
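As a minimal sketch of this descriptive step (the variable name and response labels below are hypothetical, not taken from the actual questionnaire), counts, frequencies, and percentages for a skill-importance item could be produced as follows:

```python
import pandas as pd

# Hypothetical survey responses; the column name "cad_cam_importance" and the
# response labels are placeholders, not the actual instrument wording.
responses = pd.DataFrame({
    "cad_cam_importance": [
        "Very important", "Important", "Very important",
        "Slightly important", "Important", "Very important",
    ]
})

counts = responses["cad_cam_importance"].value_counts()          # frequencies
percentages = (responses["cad_cam_importance"]
               .value_counts(normalize=True)
               .mul(100)
               .round(1))                                         # percentages

summary = pd.DataFrame({"Count": counts, "Percent": percentages})
print(summary)
```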
Confirmatory factor analysis (CFA) will be conducted to examine the measurement model fit, that is, the operationalization of the latent variables. Only when the measurement model fits the data will the structural model be tested (Bhat et al., 2022). For each model and each construct, the minimum value of the discrepancy function chi-square, the degrees of freedom (DF), the comparative fit index (CFI), the normed fit index (NFI), and the root mean squared error of approximation (RMSEA) will be reported (Wahab et al., 2022). The convergent validity of all the factors will be checked through the standardised regression weights of the observed variables, the composite reliability (CR) values, and the average variance extracted (AVE) of the factors (Shrestha, 2021). Furthermore, squared multiple correlations will be calculated to provide an estimate of the variance explained.
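The study will carry out these checks in its chosen software; the following is only an illustrative sketch, using the open-source semopy package with hypothetical indicator names (dta1-dta6), an assumed data file, and invented loading values, of how a one-factor CFA, the reported fit indices, and the CR and AVE formulas fit together:

```python
import numpy as np
import pandas as pd
import semopy

# Hypothetical one-factor CFA for Digital Technology Acceptance (DTA);
# the indicator names dta1..dta6 and the data file are placeholders.
desc = "DTA =~ dta1 + dta2 + dta3 + dta4 + dta5 + dta6"
data = pd.read_csv("survey_items.csv")        # assumed item-level responses

model = semopy.Model(desc)
model.fit(data)
print(semopy.calc_stats(model).T)             # chi-square, DF, CFI, NFI, RMSEA, etc.

# Composite reliability (CR) and average variance extracted (AVE) computed from
# standardised loadings; the loading values below are illustrative only.
loadings = np.array([0.78, 0.81, 0.74, 0.69, 0.83, 0.76])
error_variances = 1 - loadings ** 2
cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + error_variances.sum())
ave = (loadings ** 2).mean()
print(f"CR  = {cr:.3f}  (commonly judged acceptable at 0.70 or above)")
print(f"AVE = {ave:.3f}  (commonly judged acceptable at 0.50 or above)")
```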
Interview data analysis procedures
Qualitative coding is the process of identifying which parts of a data set belong to, or are examples of, a larger idea, instance, topic, or category (Locke et al., 2022; Cresswell et al., 2020). Coding examines what the data show, which may lead to the conclusion that more data need to be collected (Linneberg & Korsgaard, 2019). The data analysis methods revealed insightful findings from mechanical engineering industry experts, CTEVT, and teachers on the digital skills gap and teachers' acceptance and usage of digital technology for training. Open coding techniques were suggested and used (Harrison et al., 2020) because the researcher could only anticipate what the data would show; a priori coding (K. Cresswell et al., 2020) was neither suggested nor used, because it starts with pre-set codes that could be too restrictive.
The researcher preferred to let the interviews and the numbers reveal the themes. The design's framework was established by analysing the content and making comparisons over time (Harrison et al., 2020). A strategy called "constant comparison" was used to examine the data repeatedly (Azodo et al., 2020). Throughout the study, the researcher also made extensive use of memos (Rahiem, 2020). Memo writing can be described as "the methodological link, the process of distilling evidence into theory by the researcher" (Suri, 2020). Researchers are more productive when they keep their "minds on" by writing memos (Robinson-Hill, 2021).
A memo includes, but is not limited to, anecdotal information from interviews, thoughts that occur to the researcher, questions that arise, timeframes based on what was said in the interviews, accounts, conjectures, likely conclusions, and patterns beginning to emerge. Researchers have suggested that inductive content analysis be used to examine interview data, and the study's promoters agreed. Content analysis is applied to qualitative materials to "try to find basic themes and meanings" (Mayring, 2019). The interview data were continually compared and contrasted with the empirical literature. This continued until the researcher, supervised by the chair, judged that there was enough information about the concepts to draw conclusions.
The researcher used HyperRESEARCH coding software as a text analysis tool to interpret and evaluate what the interviewees said. Both initial and line-by-line coding were used, and an electronic codebook was created in HyperRESEARCH. The field notes and data transcripts were stored separately from the codebook and its backup (Belgrave & Seide, 2019). First, the researcher produced an initial verbatim transcription of the recordings. After this first transcription, the researcher reread the interviews and used inductive reasoning to build a codebook of key terms, topics, and patterns that emerged during the interviews (Radez et al., 2022; J. W. Creswell et al., 2011). The interviewees' accounts gave the researcher a wealth of information (Coyle et al., 2022; Candela, 2019; J. W. Creswell, 2014). The researcher engaged in reflection-on-action by thinking about the interview process, the participants, and the themes that emerged from the data (Moghaddam et al., 2020). Each recorded interview was played back and listened to at least six times by the researcher, and each transcript was read by at least two people before it was sent to the interviewee.
The transcripts were read over again, and any mistakes were corrected. The researcher examined each interview transcript at least six times to identify codes and themes that explained what was going on (Margot & Kettler, 2019). Moving back and forth between analysing field notes, reflective summaries, and interview themes, a storyline began to take shape (Margot & Kettler, 2019). The researcher added two new codes to the growing list of HyperRESEARCH codes to help ensure the facts were accurate and the story was well told. With the new "Quotes" category, the researcher could locate meaningful quotations from the interviewees. After timestamps were added, the original file was checked to confirm that it was still intact. Because the researcher who did the first transcription and proofreading may have missed some typos, a code called "Mistakes/Typos" was added, and the researcher went through the transcripts and corrected the mistakes. The dissertation committee suggested that Lincoln and Guba's (1989) criteria be used to judge the thoroughness of the qualitative data and make it more trustworthy. These criteria are credibility, dependability, confirmability, and transferability.
Validity in the qualitative investigation was established using both inter-rater and intra-rater reliability checks (Lincoln & Guba, 1989). Inter-rater reliability refers to the consistency of data evaluations between researchers (Kekeya, 2021; J. W. Creswell, 2014). To achieve inter-rater reliability, the researcher asked two experienced researchers to review the codebook and code the transcript of the participant with the pseudonym Q6 from November 29, 2022 to December 10, 2022. The resulting coded transcripts were compared and discussed. The researcher also asked one peer examiner to review the coded transcripts and the resulting codebook in HyperRESEARCH and provide feedback. One feedback discussion was conducted via Skype, with short recorded videos from the peer examiner, and online through Google Docs; two further feedback discussions were conducted through email and Google Docs. After these discussions with the experienced researchers and the peer examiner, it was agreed that the data had been coded correctly, with only a few small changes. Through this process of peer review, discussion, and refinement, inter-rater reliability was established (M. D. Becker, 2021). Using rigorous validation methods and procedures helped ensure the study was valid and added quality and weight to the data analysis process (Resnik & Elmore, 2016). "Member checking" means that participants can review and give feedback on a draft of the final report of themes to confirm that it is accurate (Creswell, 2014).
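Inter-rater reliability was established here through comparison and discussion rather than a reported statistic; purely as an illustration of how agreement between two coders could be quantified if desired (the code labels below are invented), percent agreement and Cohen's kappa can be computed as follows:

```python
import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Hypothetical codes assigned to the same transcript segments by two coders.
coder_a = ["digital skills gap", "training needs", "acceptance", "acceptance", "barriers"]
coder_b = ["digital skills gap", "training needs", "acceptance", "barriers", "barriers"]

percent_agreement = (pd.Series(coder_a) == pd.Series(coder_b)).mean() * 100
kappa = cohen_kappa_score(coder_a, coder_b)   # chance-corrected agreement

print(f"Percent agreement: {percent_agreement:.1f}%")
print(f"Cohen's kappa:     {kappa:.2f}")
```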
The members' participation was checked through email conversations and shared transcripts. The transcripts of interview questions 4 and 6 were sent back for review, and the researcher corrected mistakes in the transcripts of four participants who had returned them with questions. Inter 1 and Inter 3 were given a full transcript and a summary of their coded interviews to confirm that the researcher's codes and themes were accurate. Participants were asked to confirm that the codes developed from their interviews were a true reflection of their experiences and views (Radez et al., 2022; Cresswell et al., 2019). The researcher then spoke with the reviewers to learn what they thought and what feedback they had. Other researchers double-checked the extracted themes to make sure they were true to the interviews. During peer debriefing, the participants checked independently whether there was any potential for bias in the data analysis phase. Two senior colleagues were briefed on the data and the analysis completed so far, and the researcher read and reflected on each comment.
The researcher's field notes and post-interview summaries helped ensure that the information was reliable and consistent. Member checking, or respondent validation, also took place with the mechanical engineering managers in the industry and the CTVET officers through email on December 14, 2022; researchers use this technique to help improve the accuracy, credibility, validity, and transferability of a study. Participants were identified only by their pseudonyms in the study. Both opportunities to check in with members were well received, and additional conversations will be scheduled. The managers of both industries and the CTVET requested that the researcher send them copies of the dissertation. Peer reviewers as well as study participants will have access to the dissertation.
One way to "balance out any of the potential weaknesses in each data collection method" is through data triangulation (Hammerton & Munafò, 2021). Information gathered from multiple sources, such as surveys and interviews, is "triangulated" to yield more accurate results (Kankam, 2020; Hu et al., 2019; J. W. Creswell et al., 2011). This study used a survey, video interviews, and a focus group to gather information about teachers' attitudes toward technology integration in the classroom. With such a wide variety of data sources, the researcher was able to triangulate with high precision (Katz-Buonincontro et al., 2020; Ishtiaq, 2019; J. W. Creswell, 2014).
Any differences, inconsistencies, or contradictions identified during the pilot study, peer interview review, member checking, or peer debriefing were discussed with the participant who raised them in the survey, and every effort was made to resolve the situation. No discrepancies or opposing opinions regarding the developing topics were found. The study's sponsors oversaw all aspects of the study, and the dissertation promoters were always kept up to date through emails and Zoom meetings. The researcher made a video for the dissertation chair summarising the data collection method. Results from the validation procedures, survey data, and semi-structured interviews were presented and discussed with the dissertation committee.