Table 2

Alignment of conceptual framework constructs with instrument measures

Conceptual framework construct | Instrument section and method | Description of measurement
Educator profile | Part 1: Demographics | Direct questions collected data on the primary predictor (training modality by cohort), along with age and years of ECE experience
Actual use of technology | Part 2: Technology Use (conditional logic) | A multiple-choice, multiple-response question measured the type of technology used (e.g. GenAI chatbots, AR/VR, IoT). Conditional follow-up questions then measured the frequency of use and the specific purpose for which it was used (e.g. lesson planning)
Perceptual factors | Part 2: Perceptions and Attitudes |
• Perceived usefulness (PU) | 13-item, 6-point Likert scale | Assessed perceived benefits across various pedagogical domains (e.g. “Enhances student classroom engagement”)
• Perceived ease of use (PEOU) | 11-point scale (0–10) and multiple choice | Measured the “perceived ease of integrating” technology; also captured via the “challenges” question (e.g. “Complexity and difficulty”)
• Self-perceived competence | 11-point scale (0–10) | Measured participants' “self-assessed proficiency” with emerging technologies
Facilitating conditions | Part 2: Support and Challenges | Direct questions asked whether participants had received training and felt supported by their institutions; a multiple-choice question identified key challenges (e.g. “Lack of time,” “Lack of resources”)
Qualitative insights | Part 3: Open-ended Question | A single open-ended question invited “any other comments or suggestions,” providing rich qualitative data to add context and depth to all constructs in the framework
Source(s): Authors’ own work
