Publication Info

| Field | Value |
| --- | --- |
| Type | Preprint |
| Status | Under review |
| Year | 2026 |
| Venue | arXiv (cs.CY) |
PEDAL: Open Infrastructure for Citable AI Prompts in STEM Education and Research
2026 — arXiv (cs.CY).
Citation (APA)
Kahveci, M. (2026). *PEDAL: Open infrastructure for citable AI prompts in STEM education and research* [Preprint]. arXiv (cs.CY).
Abstract
The rapid integration of Large Language Models (LLMs) into educational practice has created an urgent need for infrastructure that treats AI prompts not as disposable instructions but as reproducible scholarly artifacts. This paper presents PEDAL (Pedagogical Evaluation, Design, & Analysis Lab), an open-research platform that implements a three-tier *Laboratory-to-Archive* pipeline for prompt engineering: (1) an **Orchestration Layer** with AI-assisted prompt generation and a Scholarly Inject toolkit for automated metadata extraction; (2) a **Laboratory Layer** supporting Git-style version control, LLM-as-a-Judge evaluation, and Mann–Whitney *U* statistical testing for champion variant identification; and (3) a **Public Archive Layer** providing scholarly publication with per-version DOI minting via Zenodo, multi-format data exports (JSON, CSV, LaTeX), and SEO-optimized discoverability through JSON-LD and HighWire Press metadata. PEDAL's Scholarly Sync 2 (SS2) framework attaches a 24+ field metadata envelope to every artifact, encoding Bloom's Revised Taxonomy, Webb's Depth of Knowledge, SAMR integration levels, 5E instructional phases, and NGSS standards alignment. A chemistry education exemplar demonstrates the full pipeline from Socratic inquiry scaffolding through statistical evaluation to public DOI-minted archival. We further present the NExAIE (Nexus AI & Education) extension, which applies PEDAL's infrastructure to AI-augmented peer review through a 42-prompt evaluation matrix spanning six quality dimensions across seven publication types, introducing *Radical Transparency* by publicly archiving all review rubrics with DOIs. Initial deployment data (4,684 artifact views from 1,810 unique researchers within one month of launch) indicates strong community demand for citable, quality-assured AI scaffolding in STEM education and research. The complete platform is released under CC-BY-4.0 (DOI: 10.5281/zenodo.19474709).
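The Laboratory Layer's champion-variant identification via Mann–Whitney *U* testing can be sketched in pure Python. The function name, the judge scores, and the promotion rule below are illustrative assumptions, not PEDAL's actual API:

```python
def mann_whitney_u(a, b):
    """Return (U_a, U_b) for two independent samples.

    U_a counts, over all pairs, how often a value from `a` beats a
    value from `b` (ties count 0.5). U_a + U_b == len(a) * len(b).
    """
    u_a = sum(1.0 if x > y else 0.5 if x == y else 0.0
              for x in a for y in b)
    return u_a, len(a) * len(b) - u_a


# Hypothetical LLM-as-a-Judge scores (1-10) for two prompt variants.
variant_a = [7, 8, 6, 9, 8]
variant_b = [5, 6, 5, 7, 6]

u_a, u_b = mann_whitney_u(variant_a, variant_b)
# The variant with the larger U is the provisional champion; a real
# pipeline would also compute a p-value before promoting it.
champion = "A" if u_a > u_b else "B"
print(u_a, u_b, champion)  # → 22.5 2.5 A
```

In practice one would use `scipy.stats.mannwhitneyu`, which also returns the p-value needed before declaring a champion; the pure-Python version above just makes the rank-comparison logic explicit.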
BibTeX
@misc{kahveci2026pedal,
  title        = {PEDAL: Open Infrastructure for Citable AI Prompts in STEM Education and Research},
  author       = {Kahveci, M.},
  year         = {2026},
  howpublished = {arXiv (cs.CY)},
  doi          = {10.5281/zenodo.19474709},
  abstract     = {The rapid integration of Large Language Models (LLMs) into educational practice has created an urgent need for infrastructure that treats AI prompts not as disposable instructions but as reproducible scholarly artifacts. This paper presents PEDAL (Pedagogical Evaluation, Design, \& Analysis Lab), an open-research platform that implements a three-tier \textit{Laboratory-to-Archive} pipeline for prompt engineering: (1)~an \textbf{Orchestration Layer} with AI-assisted prompt generation and a Scholarly Inject toolkit for automated metadata extraction; (2)~a \textbf{Laboratory Layer} supporting Git-style version control, LLM-as-a-Judge evaluation, and Mann--Whitney~$U$ statistical testing for champion variant identification; and (3)~a \textbf{Public Archive Layer} providing scholarly publication with per-version DOI minting via Zenodo, multi-format data exports (JSON, CSV, \LaTeX), and SEO-optimized discoverability through JSON-LD and HighWire Press metadata. PEDAL's Scholarly Sync~2 (SS2) framework attaches a 24+ field metadata envelope to every artifact, encoding Bloom's Revised Taxonomy, Webb's Depth of Knowledge, SAMR integration levels, 5E instructional phases, and NGSS standards alignment. A chemistry education exemplar demonstrates the full pipeline from Socratic inquiry scaffolding through statistical evaluation to public DOI-minted archival. We further present the NExAIE (Nexus AI \& Education) extension, which applies PEDAL's infrastructure to AI-augmented peer review through a 42-prompt evaluation matrix spanning six quality dimensions across seven publication types, introducing \textit{Radical Transparency} by publicly archiving all review rubrics with DOIs. Initial deployment data---4,684 artifact views from 1,810 unique researchers within one month of launch---indicates strong community demand for citable, quality-assured AI scaffolding in STEM education and research. The complete platform is released under CC-BY-4.0 (DOI: 10.5281/zenodo.19474709).}
}
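The abstract's SEO-optimized discoverability rests on JSON-LD metadata embedded in each artifact page. A minimal sketch of such an envelope using schema.org vocabulary follows; the artifact name, version, and field selection are illustrative assumptions, not copied from PEDAL's implementation (only the DOI and license come from this page):

```python
import json

# Minimal schema.org JSON-LD envelope for a DOI-minted prompt artifact.
# Field values below are illustrative placeholders, not PEDAL's actual data.
artifact = {
    "@context": "https://schema.org",
    "@type": "CreativeWork",
    "name": "Socratic inquiry scaffold (chemistry exemplar)",
    "version": "1.2.0",
    "identifier": "https://doi.org/10.5281/zenodo.19474709",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "learningResourceType": "AI prompt",
    "author": {"@type": "Person", "name": "M. Kahveci"},
}

jsonld = json.dumps(artifact, indent=2)
print(jsonld)  # ready to embed in a <script type="application/ld+json"> tag
```

Search engines read this block directly from the page's HTML, which is what makes each versioned prompt independently discoverable and citable.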