20251007 1400-1600 MOE RESEARCH PRE-FORUM Planning and monitoring research impact
Here’s a concrete, robust way to plan and monitor research impact for your EJS + AI simulations work. I’ve grounded this in widely used impact frameworks and translated them into practical, instrumentable steps you can run inside SLS/WebEJS/xAPI.
1) Anchor on a clear definition of “impact”
Use the REF definition to keep everyone aligned: impact is the effect on, change or benefit to society, policy/services, education, the environment or quality of life, beyond academia. (2021.ref.ac.uk)
2) Build your Impact Logic (pathway) up front
Adopt the Co-Produced Pathway to Impact (logic-model style): plan not just research → dissemination, but uptake → implementation → impact with real non-academic partners (teachers, schools, CPDD/ETD). This makes impact measurable at the partner level (e.g., school practices, SLS modules), not just publications. (jces.ua.edu)
Your EJS/AI pathway (sketch):
- Inputs: WebEJS/EJS stack, AI generators, SLS integration, teacher co-design time.
- Activities: Co-design workshops; produce “scorable” HTML5 interactives; teacher PD; deployment to SLS/MOE Library.
- Outputs: Working interactives, teacher guides, SLS lesson packages, usage dashboards.
- Outcomes (short/med/long):
  - Short: Teacher intention to use; student engagement signals; fit to syllabus.
  - Medium: Changes in teacher practice (e.g., more formative checks); improved student concept mastery; school adoption beyond pilot classes.
  - Long: Policy/program uptake (e.g., tagged exemplars in MOE Library), sustained use at scale across schools; contribution to national priorities (EdTech Masterplan).
3) Use an Impact Literacy lens to make it doable
Bayley & Phipps’ Impact Literacy workbooks help teams specify who is involved, what benefit they get at each stage, and how you’ll mobilize knowledge (methods, timing). Use these canvases to turn the pathway into concrete plans your team can execute. (Emerald Publishing)
4) Monitor with RE-AIM (simple, implementation-friendly)
RE-AIM is excellent for education tech rollouts because it checks whether good ideas travel and stick. Track all five dimensions: Reach, Effectiveness, Adoption, Implementation, Maintenance (a computation sketch for two of them follows the examples below). (re-aim.org)
RE-AIM → EJS/AI examples & data sources
- Reach: % of targeted teachers/schools using each interactive; student counts per SLS assignment. (SLS analytics, server logs)
- Effectiveness: Learning gains (pre/post in-app checks), reduction in common misconceptions, time-on-task. (xAPI event streams, embedded quizzes)
- Adoption: # of schools/modules integrating the interactive; # of teachers who reuse/remix; inclusion in MOE Library. (SLS/MOE Library metadata)
- Implementation: Fidelity to the intended flow (did teachers use the formative checks?); tech reliability; support tickets. (xAPI sequences, helpdesk logs)
- Maintenance: Repeat use across terms; % modules still active 6–12 months later; updates sustained by non-developer staff. (longitudinal logs, module versioning)
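For instance, the Reach and Maintenance indicators could be computed straight from SLS/xAPI usage exports along these lines (a minimal sketch; the record shape, term labels and function names are assumptions, not an SLS API):

```typescript
// Sketch: Reach and Maintenance indicators from usage records.
// Each record = one teacher assigning one interactive in one term at one school.
interface UsageRecord { teacherId: string; schoolId: string; interactiveId: string; term: string }

// Reach: fraction of targeted teachers who assigned at least one interactive this term.
function reach(records: UsageRecord[], targetedTeachers: string[], term: string): number {
  const active = new Set(records.filter(r => r.term === term).map(r => r.teacherId));
  const reached = targetedTeachers.filter(t => active.has(t)).length;
  return targetedTeachers.length === 0 ? 0 : reached / targetedTeachers.length;
}

// Maintenance: fraction of schools active in the previous term that re-used any interactive this term.
function maintenance(records: UsageRecord[], prevTerm: string, thisTerm: string): number {
  const prev = new Set(records.filter(r => r.term === prevTerm).map(r => r.schoolId));
  const curr = new Set(records.filter(r => r.term === thisTerm).map(r => r.schoolId));
  const retained = [...prev].filter(s => curr.has(s)).length;
  return prev.size === 0 ? 0 : retained / prev.size;
}
```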
5) Instrument everything with xAPI (Experience API)
Your simulations already suit xAPI: emit fine-grained “actor–verb–object” statements (e.g., learner completed “diffraction-setup A” with score/time/mistake profile) to a Learning Record Store (LRS) and aggregate at class/school levels. xAPI works across HTML5, simulations, and LMS contexts; a minimal emitter sketch follows the event list below. (xapi.com)
Minimum xAPI event set for EJS/AI interactives
- Session: initialized, terminated
- Engagement: launched interactive, changed parameter, completed checkpoint, requested hint
- Assessment: answered, passed/failed, with misconception tags
- Pedagogical flow: viewed teacher note, skipped step, revisited concept
- Provenance: interactive version, model hash, prompt template ID (for AI-generated variants)
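A minimal emitter sketch, assuming a generic LRS reachable over the standard xAPI REST interface (POST to {endpoint}/statements with the X-Experience-API-Version header). The endpoint, credentials, account homePage and extension IRIs below are placeholders, and the verbs are drawn from the ADL verb registry:

```typescript
// Minimal xAPI emitter sketch (placeholder endpoint, credentials and IRIs).
const LRS_ENDPOINT = "https://lrs.example.org/xapi";           // placeholder LRS
const LRS_AUTH = "Basic " + btoa("client_key:client_secret");  // placeholder credentials

interface XapiStatement {
  actor: { objectType: "Agent"; account: { homePage: string; name: string } };
  verb: { id: string; display: { "en-US": string } };
  object: { id: string };
  result?: { score?: { scaled: number }; success?: boolean };
  context?: { extensions?: Record<string, unknown> };
  timestamp: string;
}

// Build one statement, e.g. "student S123 answered checkpoint 3 of the diffraction interactive".
// Assumes the verb maps to an ADL registry verb (initialized, terminated, answered, completed, ...).
function buildStatement(studentId: string, verb: string, activityId: string,
                        scaled?: number, success?: boolean): XapiStatement {
  return {
    actor: { objectType: "Agent",
             account: { homePage: "https://example.org/sls", name: studentId } }, // placeholder homePage
    verb: { id: `http://adlnet.gov/expapi/verbs/${verb}`, display: { "en-US": verb } },
    object: { id: activityId },
    result: scaled === undefined ? undefined : { score: { scaled }, success },
    context: { extensions: {
      // placeholder extension IRIs for provenance (interactive version, AI-variant ID)
      "https://example.org/xapi/ext/interactive-version": "1.4.2",
      "https://example.org/xapi/ext/ai-variant-id": "prompt-template-07"
    } },
    timestamp: new Date().toISOString()
  };
}

// Send it to the LRS over the standard statements endpoint.
async function emit(stmt: XapiStatement): Promise<void> {
  await fetch(`${LRS_ENDPOINT}/statements`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Experience-API-Version": "1.0.3",
      "Authorization": LRS_AUTH
    },
    body: JSON.stringify(stmt)
  });
}

// Example: a learner answers checkpoint 3 with 80% on the embedded check.
emit(buildStatement("S123", "answered",
  "https://example.org/interactives/diffraction-setup-a/checkpoint-3", 0.8, true));
```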
6) Evidence types you’ll need for REF-style narratives
When it’s time to prove impact (e.g., for awards, reports, MOE Library exemplars), assemble:
- Quant: RE-AIM KPIs, learning gains, adoption curves, survival of use across semesters.
- Qual: Teacher and student testimonies; classroom observations; case studies with before/after practice shifts.
- External signals: Embedding in policy/guidance, MOE Library status, cross-school scaling, references in CPDD documents. (REF case studies show how to narrate this well.) (RAND Corporation)
7) Designs & attribution (credible, not over-claiming)
- Rapid cycle A/B inside SLS classes (e.g., standard worksheet vs. EJS interactive) with equal teaching time.
- Stepped-wedge rollout across classes/schools to estimate effect while being fair; a scheduling sketch follows this list.
- Contribution analysis to show your plausible contribution (amid many factors) without claiming sole causality. (Standard in impact evaluation literature; aligns with RE-AIM’s iterative use.) (PMC)
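As a sketch of the stepped-wedge idea: every class starts in the control condition and crosses over to the EJS/AI interactive at a staggered step, so all classes eventually receive it. The class names, period counts and assignment rule below are illustrative, not a prescribed design:

```typescript
// Sketch: assign classes to crossover steps for a stepped-wedge rollout.
// Every class starts as control; each crosses over at its assigned period,
// and by the final period all classes have crossed over.
function shuffle<T>(items: T[]): T[] {
  const a = [...items];
  for (let i = a.length - 1; i > 0; i--) {          // Fisher-Yates shuffle
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

function steppedWedgeSchedule(classes: string[], periods: number): Map<string, number> {
  const order = shuffle(classes);
  const steps = periods - 1;                         // period 0 is the all-control baseline
  const schedule = new Map<string, number>();
  order.forEach((cls, i) =>
    schedule.set(cls, 1 + Math.floor((i * steps) / order.length)));  // spread evenly over steps 1..steps
  return schedule;
}

// Example: six classes, four periods.
steppedWedgeSchedule(["3A", "3B", "3C", "3D", "3E", "3F"], 4)
  .forEach((step, cls) => console.log(`${cls} switches to the interactive in period ${step}`));
```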
8) A ready-to-run KPI set (map to your dashboards)
Dimension | KPI | Target (pilot → scale) | Source |
---|---|---|---|
Reach | % teachers in target depts assigning at least 1 EJS/AI interactive this term | 30% → 70% | SLS analytics |
Reach | Median students per interactive per school | 80 → 200 | SLS analytics |
Effectiveness | Avg. normalized gain on concept checks | ≥0.25 → ≥0.40 | xAPI quiz events |
Effectiveness | Misconception rate (e.g., “gamma is a particle”) after activity | −30% | xAPI tagged errors |
Adoption | # MOE Library modules embedding the interactive | 10 → 60 | Library metadata |
Implementation | % lessons following the intended flow (no skipped checkpoints) | ≥75% | xAPI sequences |
Implementation | Mean time to fix critical bugs | <5 working days | Issue tracker |
Maintenance | % schools re-using in the next term | ≥60% | Longitudinal logs |
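To make the Effectiveness rows concrete, here is a minimal sketch of the average normalized gain, g = (post − pre) / (max − pre), computed from pre/post concept-check scores; the record shape is an assumption about how you would export the xAPI quiz events:

```typescript
// Sketch: average normalized gain g = (post - pre) / (max - pre).
// Assumes each record is one student's pre/post score on the same concept check.
interface ConceptCheckResult {
  studentId: string;
  pre: number;   // pre-test raw score
  post: number;  // post-test raw score
  max: number;   // maximum possible score
}

function averageNormalizedGain(results: ConceptCheckResult[]): number {
  const gains = results
    .filter(r => r.max > r.pre)                      // skip students already at ceiling
    .map(r => (r.post - r.pre) / (r.max - r.pre));
  return gains.reduce((sum, g) => sum + g, 0) / gains.length;
}

// Example: compare against the ≥0.25 pilot target in the KPI table.
const demo: ConceptCheckResult[] = [
  { studentId: "S1", pre: 3, post: 7, max: 10 },
  { studentId: "S2", pre: 5, post: 6, max: 10 },
  { studentId: "S3", pre: 8, post: 9, max: 10 },
];
console.log(averageNormalizedGain(demo).toFixed(2)); // ~0.42 for this toy data
```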
9) Practical workflow (90-day pilot → 12-month scale)
Weeks 0–2: Plan
- Fill the Impact Literacy canvas: stakeholders, benefits, engagement plan, risks. (Emerald Publishing)
- Freeze a theory of change and RE-AIM KPIs; finalize your xAPI schema & LRS.
Weeks 3–8: Build & instrument
- Implement the xAPI minimum event set and a lightweight teacher fidelity checklist (auto-detectable where possible; see the detection sketch after this list).
- Prepare two lesson flows (control vs EJS/AI) and pre/post items aligned to the syllabus.
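One way the fidelity checklist could be auto-detected, as flagged above: compare each lesson’s xAPI verb sequence against the intended checkpoint flow. A minimal sketch, with verb strings and field names assumed to mirror the minimum event set in section 5:

```typescript
// Sketch: auto-detect lesson fidelity from the xAPI event stream.
// A lesson counts as "followed the intended flow" if every expected checkpoint
// appears in its events and no "skipped step" was recorded.
interface LessonEvent { lessonId: string; verb: string; objectId: string }

function lessonFollowedFlow(events: LessonEvent[], expectedCheckpoints: string[]): boolean {
  const completed = new Set(
    events.filter(e => e.verb === "completed checkpoint").map(e => e.objectId));
  const skipped = events.some(e => e.verb === "skipped step");
  return !skipped && expectedCheckpoints.every(cp => completed.has(cp));
}

// % of lessons following the intended flow (the ≥75% Implementation KPI).
function fidelityRate(byLesson: Map<string, LessonEvent[]>, expected: string[]): number {
  let ok = 0;
  byLesson.forEach(events => { if (lessonFollowedFlow(events, expected)) ok++; });
  return byLesson.size === 0 ? 0 : ok / byLesson.size;
}
```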
Weeks 9–13: Run the pilot
- Run stepped or parallel classes; collect xAPI and SLS analytics; hold weekly iterative RE-AIM reviews to decide fixes (e.g., improve hints, reorder steps). (PMC)
Weeks 14–18: Analyse & package
- Compute learning gains and adoption; summarize teacher change stories; package a short REF-style case vignette. (2021.ref.ac.uk)
Months 6–12: Scale & sustain
- Move successful interactives to MOE Library; update PD materials; track Maintenance (repeat use, updates by others).
10) Lightweight templates you can lift
A. Impact Literacy prompt (one-pager per interactive)
- Beneficiaries (teachers, students, CPDD reviewers): what concrete benefit?
- Engagement timing (co-design, pilot, feedback, scale): who/when/how?
- Risks & mitigations (tech failure, teacher workload): what’s your plan?
- Evidence you’ll collect (RE-AIM KPIs + quotes + artifacts). (Emerald Publishing)
B. xAPI statement sketch (JSON)
- actor (student/teacher), verb (“answered”, “completed”), object (interactive step), result (score, success, response), context (class, school, module ID), timestamp, extensions (misconception tag, AI-variant ID); a filled-in example follows. (xapi.com)
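Filled in, a single “answered” statement with a misconception tag and AI-variant ID could look like this (all IDs, IRIs and extension keys are illustrative placeholders):

```typescript
// Illustrative xAPI statement matching the field list above (all IDs/IRIs are placeholders).
const statement = {
  actor: { objectType: "Agent",
           account: { homePage: "https://example.org/sls", name: "student-S123" } },
  verb: { id: "http://adlnet.gov/expapi/verbs/answered", display: { "en-US": "answered" } },
  object: { id: "https://example.org/interactives/wave-superposition/q3",
            definition: { name: { "en-US": "Q3: wave superposition" } } },
  result: { score: { raw: 2, max: 3, scaled: 0.67 }, success: false,
            response: "chose 'amplitudes always add'" },
  context: {
    contextActivities: { grouping: [{ id: "https://example.org/modules/sls-module-123" }] },
    extensions: {
      "https://example.org/xapi/ext/misconception": "superposition-always-constructive",
      "https://example.org/xapi/ext/ai-variant-id": "prompt-template-07",
      "https://example.org/xapi/ext/class-group": "4E2"
    }
  },
  timestamp: "2025-10-07T14:35:00+08:00"
};
```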
C. REF-style vignette (≤600 words)
- Need → Intervention → Evidence of change → Significance/Reach (use your KPIs + stories). (2021.ref.ac.uk)
TL;DR (your playbook)
1) Define impact (REF) → 2) Plan with a co-produced pathway and Impact Literacy canvas → 3) Measure with RE-AIM → 4) Instrument with xAPI → 5) Evaluate & narrate with credible designs and REF-style vignettes. (2021.ref.ac.uk)
1. Impact Literacy Planning Template
(from Bayley & Phipps – Impact Literacy workbook)
Structure (canvas style):
- Who benefits? (Teachers, students, MOE HQ, policy, etc.)
- What type of benefit? (Knowledge, practice change, curriculum alignment, efficiencies)
- How will we create that benefit? (Co-design, pilot in schools, publish in MOE Library, PD workshops)
- When in the project lifecycle? (Prototype → Pilot → Scale → Maintenance)
- Evidence you’ll collect (xAPI logs, testimonials, SLS adoption data)
👉 Looks like a 5-column fillable canvas.
2. Pathway to Impact Logic Model
(used in REF case studies)
Boxes to fill:
- Inputs → EJS/AI tools, team time, teacher partners
- Activities → Workshops, simulation design, SLS integration
- Outputs → Interactives, guides, SLS packages
- Outcomes (short/medium/long) →
  - Short: student engagement, teacher uptake
  - Medium: changes in practice, cross-school use
  - Long: system-level adoption, policy alignment
- Impact → measurable societal/educational change
👉 Usually visualised as a flow diagram from left → right.
3. RE-AIM Monitoring Dashboard
(education-friendly version)
Five panels you can fill in with indicators:
- Reach → Who used it? (students, schools, % teachers)
- Effectiveness → Did learning improve? (concept checks, misconception reduction)
- Adoption → Which schools/departments took it up?
- Implementation → Was it used as intended? (lesson fidelity, flow)
- Maintenance → Is it sustained? (re-use over semesters, updates)
👉 Often drawn as a pentagon / star diagram with each RE-AIM dimension.
4. xAPI Data Capture Template
(Table to pre-plan which events to log) https://weelookang.blogspot.com/2025/10/20251007-1400-1600-moe-research-pre.html?m=0
Event Type | Example | Evidence of Impact |
---|---|---|
Engagement | “launched interactive” | Reach |
Learning | “answered Q3 (wave superposition)” | Effectiveness |
Practice | “completed checkpoint with hint” | Implementation |
System | “module reused in MOE Library” | Adoption / Maintenance |