Building a Better Science Class: From Prompt to Pedagogy with AI
In today’s EdTech landscape, one persistent challenge is finding interactive tools that do more than demonstrate concepts: tools that actively teach thinking processes.
One such concept is Fair Testing: changing only one variable at a time to ensure valid conclusions. Rather than adapting an existing tool, we designed a custom interactive using generative AI—guided not by code-first thinking, but by pedagogy-first prompting.
The result was a Mould Growth Investigation simulation—a web-based interactive that does not just simulate science, but enforces scientific thinking.
🔹 Phase 1: From Idea to Simulation
We began with a structured prompt to define the core system:
Activity: https://vle.learning.moe.edu.sg/community-gallery/module/edit/fe0f1a1b-dc09-4c57-97dc-96031d1c18f7/section/102355123/activity/102355127
- Two bread samples (A & B)
- Variables: Temperature, Moisture, Bread Type
- 7-day mould growth simulation
The AI generated a working sandbox. At this stage, however, it was interactive but not instructional: students could click randomly and still “complete” the activity without understanding anything meaningful.
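The core system from that first prompt can be sketched as a small state model. This is a minimal illustration, not the actual generated code; the condition values and growth rates are assumptions.

```typescript
// Hypothetical state model for the two-sample sandbox; all names and
// growth rates are illustrative assumptions, not the generated code.
interface SampleConditions {
  temperature: "cold" | "room" | "warm";
  moisture: "dry" | "damp";
  breadType: "white" | "wholemeal";
}

// Toy growth model: each mould-friendly condition adds to the daily rate.
function simulateGrowth(c: SampleConditions, days: number): number {
  let rate = 0;
  if (c.temperature === "warm") rate += 2;
  else if (c.temperature === "room") rate += 1;
  if (c.moisture === "damp") rate += 2;
  if (c.breadType === "white") rate += 1; // assumed: white bread moulds faster
  return rate * days; // arbitrary "mould units" after `days` days
}
```

Running both samples through a function like this for 7 days produces the side-by-side comparison the student sees, but nothing yet checks whether the comparison is scientifically valid.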
🔹 Phase 2: Embedding Scientific Thinking (Fair Test Logic)
We shifted the prompt from features to learning outcomes:
- Add a Run Simulation gate
- Detect the number of variables changed
- Provide immediate feedback on validity
Now, the system actively evaluates thinking:
- ❌ Multiple variables changed → “Not a fair test”
- ✅ One variable changed → the system identifies:
  - Independent variable
  - Controlled variables
  - Dependent variable
👉 This transformed the simulation into a formative assessment engine.
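Under the hood, the fair-test gate amounts to counting how many conditions differ between the two samples. A minimal sketch of that logic, where the variable names and feedback messages are assumptions:

```typescript
// Hypothetical fair-test check; names and feedback text are illustrative.
type VariableName = "temperature" | "moisture" | "breadType";

interface Conditions {
  temperature: string;
  moisture: string;
  breadType: string;
}

interface FairTestResult {
  fair: boolean;
  message: string;
  independent?: VariableName;  // the one variable that was changed
  controlled?: VariableName[]; // the variables kept the same
  dependent?: string;          // what is measured
}

function checkFairTest(a: Conditions, b: Conditions): FairTestResult {
  const all: VariableName[] = ["temperature", "moisture", "breadType"];
  const changed = all.filter((v) => a[v] !== b[v]);

  if (changed.length === 0) {
    return { fair: false, message: "No variable changed - nothing to compare." };
  }
  if (changed.length > 1) {
    return { fair: false, message: `Not a fair test: ${changed.length} variables changed.` };
  }
  return {
    fair: true,
    independent: changed[0],
    controlled: all.filter((v) => v !== changed[0]),
    dependent: "mould growth",
    message: `Fair test: only ${changed[0]} differs between the samples.`,
  };
}
```

Because the result names the independent, controlled, and dependent variables explicitly, the same check that blocks an invalid run also generates the formative feedback.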
🔹 Phase 3: Scaffolding Inquiry (Hypothesis + Journal)
Next, we embedded scientific habits:
- Require a hypothesis before action
- Add a Science Journal to track:
  - Completed fair tests
  - Variables investigated

This introduced:
- Metacognition (“What do I predict?”)
- Progression (“What have I already tested?”)
👉 The simulation now supports inquiry-based learning, not just experimentation.
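The hypothesis gate and the journal fit naturally into one small structure. A sketch under assumed names (the `ScienceJournal` class and its methods are hypothetical, not taken from the actual build):

```typescript
// Hypothetical Science Journal; all names here are illustrative.
interface JournalEntry {
  hypothesis: string;  // the prediction stated before running
  independent: string; // the variable the student changed
  fair: boolean;       // whether the run was a valid fair test
}

class ScienceJournal {
  private entries: JournalEntry[] = [];

  // The gate: refuse to record a run with no hypothesis stated first.
  record(hypothesis: string, independent: string, fair: boolean): boolean {
    if (hypothesis.trim() === "") return false;
    this.entries.push({ hypothesis, independent, fair });
    return true;
  }

  // Progression: which variables has the student validly investigated?
  variablesInvestigated(): string[] {
    const fairVars = this.entries.filter((e) => e.fair).map((e) => e.independent);
    return Array.from(new Set(fairVars));
  }
}
```

Counting only fair tests toward "variables investigated" is a deliberate choice: invalid runs are still logged for feedback, but they do not count as progress.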
🔹 Phase 4: Teacher Visibility & Evidence of Learning
Finally, we added teacher-facing affordances:
- Real-time analytics log
  - Tracks student actions
  - Reveals misconceptions (e.g., repeated invalid tests)
- Screen recording
  - Students submit evidence of their thinking process
  - Supports assessment beyond the final answer

👉 This bridges learning → evidence → assessment.
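On the teacher side, the misconception signal above (repeated invalid tests) can be surfaced from a simple event log. A sketch, assuming each logged action records the student and whether the run was fair; the event shape and threshold are assumptions:

```typescript
// Hypothetical analytics log; the event shape and threshold are assumptions.
interface ActionEvent {
  student: string;
  fair: boolean; // was this run a valid fair test?
}

// Flag students who repeat invalid tests `threshold` or more times.
function flagMisconceptions(log: ActionEvent[], threshold = 3): string[] {
  const invalidCounts = new Map<string, number>();
  for (const e of log) {
    if (!e.fair) {
      invalidCounts.set(e.student, (invalidCounts.get(e.student) ?? 0) + 1);
    }
  }
  return Array.from(invalidCounts.entries())
    .filter(([, count]) => count >= threshold)
    .map(([student]) => student);
}
```

A single invalid test is normal exploration; it is the repetition that suggests a misconception, which is why the flag uses a threshold rather than firing on every error.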
🎯 The Final Product
The simulation now:
- Forces hypothesis formation
- Enforces fair-testing principles
- Provides immediate feedback on scientific validity
- Tracks conceptual progress via a journal
- Captures learning data and student evidence
This is no longer just a simulation; it is a guided scientific thinking environment.
💡 Key Insight for Educators
The breakthrough was not the AI.
The breakthrough was how the AI was prompted.
Instead of asking: “Build me a simulation”
We asked: “Build me a system that teaches thinking.”
👉 AI becomes powerful only when the teacher acts as:
- Pedagogical designer
- Assessment architect
- Learning scientist
https://antigravity.google/