
Awesome—here’s a concrete, robust way to plan and monitor research impact for your EJS + AI simulations work. I’ve grounded this in widely used impact frameworks and translated them into practical, instrumentable steps you can run inside SLS/WebEJS/xAPI.

1) Anchor on a clear definition of “impact”

Use the REF definition to keep everyone aligned: impact is the effect on, change or benefit to society, policy/services, education, the environment or quality of life, beyond academia. (2021.ref.ac.uk)

2) Build your Impact Logic (pathway) up front

Adopt the Co-Produced Pathway to Impact (logic-model style): plan not just research → dissemination, but uptake → implementation → impact with real non-academic partners (teachers, schools, CPDD/ETD). This makes impact measurable at the partner level (e.g., school practices, SLS modules), not just publications. (jces.ua.edu)

Your EJS/AI pathway (sketch): WebEJS/AI interactives built with partners (research) → SLS and MOE Library listings (dissemination) → teachers assign the interactives in lessons (uptake) → classroom use logged via xAPI (implementation) → learning gains and sustained changes in school practice (impact).

3) Use an Impact Literacy lens to make it doable

Bayley & Phipps’ Impact Literacy workbooks help teams specify who is involved, what benefit they get at each stage, and how you’ll mobilize knowledge (methods, timing). Use these canvases to turn the pathway into concrete plans your team can execute. (Emerald Publishing)

4) Monitor with RE-AIM (simple, implementation-friendly)

RE-AIM is excellent for education tech rollouts because it checks whether good ideas travel and stick. Track all five dimensions: Reach, Effectiveness, Adoption, Implementation, Maintenance. (re-aim.org)

RE-AIM → EJS/AI examples & data sources: see the KPI table in section 8, which maps each RE-AIM dimension to concrete indicators and data sources (SLS analytics, xAPI events, MOE Library metadata, the issue tracker, longitudinal logs).

5) Instrument everything with xAPI (Experience API)

Your simulations already suit xAPI: emit fine-grained “actor–verb–object” statements (e.g., learner completed “diffraction-setup A” with a score/time/mistake profile) to a Learning Record Store (LRS) and aggregate at class/school level. xAPI works across HTML5, simulations, and LMS contexts. (xapi.com)

Minimum xAPI event set for EJS/AI interactives (an emission sketch follows the list):

- launched: learner opens the interactive (Reach)
- answered: item response with correctness and a misconception tag where relevant (Effectiveness)
- completed: checkpoint/activity finished, with score, time, and hint usage (Implementation)
- reused: module embedded or re-assigned, e.g., in an MOE Library module (Adoption/Maintenance)
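A minimal browser-side sketch of emitting one such statement, assuming a hypothetical LRS endpoint with Basic auth: the endpoint URL, credentials, actor account, and activity IRIs are placeholders, while the POST to /statements, the ADL verb IRIs, and the X-Experience-API-Version header follow the xAPI spec. For a custom event such as “reused” you would mint your own verb IRI.

```javascript
// Minimal xAPI emitter for an EJS/WebEJS interactive (browser-side sketch).
// LRS_ENDPOINT, LRS_AUTH, the actor account, and the activity IRIs below are
// placeholders; substitute your LRS's endpoint and credentials.
const LRS_ENDPOINT = "https://lrs.example.org/xapi/statements";
const LRS_AUTH = "Basic " + btoa("key:secret"); // Basic auth, per common LRS setups

function sendStatement(verb, activityId, activityName, result) {
  const statement = {
    actor: {
      objectType: "Agent",
      account: { homePage: "https://vle.learning.moe.edu.sg", name: "student-1234" } // placeholder learner ID
    },
    // ADL registry verbs (launched, answered, completed) resolve under this IRI base.
    verb: { id: "http://adlnet.gov/expapi/verbs/" + verb, display: { "en-US": verb } },
    object: {
      objectType: "Activity",
      id: activityId,
      definition: { name: { "en-US": activityName } }
    },
    result: result, // e.g., { success: true, score: { scaled: 0.8 }, duration: "PT4M10S" }
    timestamp: new Date().toISOString()
  };
  return fetch(LRS_ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": LRS_AUTH,
      "X-Experience-API-Version": "1.0.3" // header required by the xAPI spec
    },
    body: JSON.stringify(statement)
  });
}

// Example: learner clears checkpoint 3 of a diffraction interactive.
sendStatement(
  "completed",
  "https://iwant2study.org/ejss/diffraction#checkpoint-3", // placeholder activity IRI
  "Diffraction setup A, checkpoint 3",
  { success: true, score: { scaled: 0.8 }, duration: "PT4M10S" }
);
```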

6) Evidence types you’ll need for REF-style narratives

When it’s time to prove impact (e.g., for awards, reports, MOE Library exemplars), assemble reach and learning-gain data (SLS analytics, xAPI), adoption and re-use records (Library metadata, longitudinal logs), and corroborating statements from non-academic partners (teachers, schools, CPDD/ETD).

7) Designs & attribution (credible, not over-claiming)

Where feasible, pair pre/post concept checks (the normalized-gain KPI in section 8) with comparison classes, and frame findings as contribution alongside partners rather than sole attribution.

8) A ready-to-run KPI set (map to your dashboards)

| Dimension | KPI | Target (pilot → scale) | Source |
|---|---|---|---|
| Reach | % teachers in target depts assigning at least 1 EJS/AI interactive this term | 30% → 70% | SLS analytics |
| Reach | Median students per interactive per school | 80 → 200 | SLS analytics |
| Effectiveness | Avg. normalized gain on concept checks (sketched below) | ≥0.25 → ≥0.40 | xAPI quiz events |
| Effectiveness | Misconception rate (e.g., “gamma is a particle”) after activity | −30% | xAPI tagged errors |
| Adoption | # MOE Library modules embedding the interactive | 10 → 60 | Library metadata |
| Implementation | % lessons following the intended flow (no skipped checkpoints; sketched below) | ≥75% | xAPI sequences |
| Implementation | Mean time to fix critical bugs | <5 working days | Issue tracker |
| Maintenance | % schools re-using in the next term | ≥60% | Longitudinal logs |
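Two of the KPIs above imply small computations worth pinning down. A minimal JavaScript sketch, assuming percent scores out of 100 and an ordered list of checkpoint IDs per lesson; the function names and data shapes are illustrative, not from any SLS/xAPI API:

```javascript
// Hake-style normalized gain: g = (post - pre) / (100 - pre), scores in percent.
function normalizedGain(prePct, postPct) {
  if (prePct >= 100) return 0; // guard: no headroom left to gain
  return (postPct - prePct) / (100 - prePct);
}

// Average over a non-empty cohort: pairs like [{ pre: 40, post: 70 }, ...]
function avgNormalizedGain(pairs) {
  const sum = pairs.reduce((acc, p) => acc + normalizedGain(p.pre, p.post), 0);
  return sum / pairs.length;
}

// Intended-flow check: true iff the lesson's logged checkpoint IDs contain
// every expected checkpoint, in order (i.e., no checkpoint was skipped).
function followsIntendedFlow(loggedIds, expectedIds) {
  let next = 0;
  for (const id of loggedIds) {
    if (id === expectedIds[next]) next++; // advance on each expected checkpoint
  }
  return next === expectedIds.length;
}

// avgNormalizedGain([{ pre: 40, post: 70 }, { pre: 50, post: 60 }]) === 0.35
// followsIntendedFlow(["c1", "c2", "c3"], ["c1", "c2", "c3"]) === true
```

The "% lessons following the intended flow" KPI is then the share of lessons for which followsIntendedFlow returns true.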

9) Practical workflow (90-day pilot → 12-month scale)

Weeks 0–2: Plan

Weeks 3–8: Build & instrument

Weeks 9–13: Run the pilot

Weeks 14–18: Analyse & package

Months 6–12: Scale & sustain

10) Lightweight templates you can lift

A. Impact Literacy prompt (one-pager per interactive)

B. xAPI statement sketch (JSON)
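A minimal statement sketch, reusing examples from this playbook (Q3 on wave superposition; the “gamma is a particle” misconception tag); the actor account and activity IRI are placeholders:

```json
{
  "actor": {
    "objectType": "Agent",
    "account": { "homePage": "https://vle.learning.moe.edu.sg", "name": "student-1234" }
  },
  "verb": {
    "id": "http://adlnet.gov/expapi/verbs/answered",
    "display": { "en-US": "answered" }
  },
  "object": {
    "objectType": "Activity",
    "id": "https://iwant2study.org/ejss/superposition#q3",
    "definition": { "name": { "en-US": "Q3 (wave superposition)" } }
  },
  "result": {
    "success": false,
    "response": "gamma-is-a-particle",
    "score": { "scaled": 0.0 }
  },
  "timestamp": "2025-10-07T14:00:00+08:00"
}
```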

C. REF-style vignette (≤600 words)


TL;DR (your playbook)

1) Define impact (REF) → 2) Plan with a co-produced pathway and Impact Literacy canvas → 3) Measure with RE-AIM → 4) Instrument with xAPI → 5) Evaluate & narrate with credible designs and REF-style vignettes. (2021.ref.ac.uk)

1. Impact Literacy Planning Template

(from Bayley & Phipps – Impact Literacy workbook)

Structure (canvas style): columns specifying who is involved, what benefit they get at each stage, and how knowledge will be mobilized (methods, timing), per the workbook prompts in section 3.

👉 Looks like a 5-column fillable canvas.


2. Pathway to Impact Logic Model

(used in REF case studies)

Boxes to fill: Research → Dissemination → Uptake → Implementation → Impact (the co-produced stages from section 2).

👉 Usually visualised as a flow diagram from left → right.


3. RE-AIM Monitoring Dashboard

(education-friendly version)

Five panels you can fill in with indicators: Reach, Effectiveness, Adoption, Implementation, Maintenance (see the KPI set in section 8 for ready-made indicators).

👉 Often drawn as a pentagon / star diagram with each RE-AIM dimension.
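If you want the panels fed straight from the event log, here is a minimal aggregation sketch; the verb-to-dimension mapping is an assumption to adapt to the event set in section 5:

```javascript
// Roll a batch of xAPI statements up into the five RE-AIM panels.
// The verb-to-dimension mapping is illustrative; adapt it to your own event set.
const VERB_TO_DIMENSION = {
  launched: "Reach",
  answered: "Effectiveness",
  completed: "Implementation",
  reused: "Adoption" // re-use in a later term can also count toward Maintenance
};

function reaimCounts(statements) {
  const panels = { Reach: 0, Effectiveness: 0, Adoption: 0, Implementation: 0, Maintenance: 0 };
  for (const s of statements) {
    const verb = s.verb.id.split("/").pop(); // ".../verbs/launched" -> "launched"
    const dim = VERB_TO_DIMENSION[verb];
    if (dim) panels[dim] += 1;
  }
  return panels;
}
```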


4. xAPI Data Capture Template

(Table to pre-plan which events to log; source: https://weelookang.blogspot.com/2025/10/20251007-1400-1600-moe-research-pre.html?m=0)

| Event Type | Example | Evidence of Impact |
|---|---|---|
| Engagement | “launched interactive” | Reach |
| Learning | “answered Q3 (wave superposition)” | Effectiveness |
| Practice | “completed checkpoint with hint” | Implementation |
| System | “module reused in MOE Library” | Adoption / Maintenance |