Project Details
| Field | Value |
|---|---|
| Acronym | PEDAL |
| Status | Self-funded |
| Role | Murat Kahveci, Principal Investigator (PI) |
| Period | Apr 2026 — Present |
| Website | Visit Project Site |
About
This record is synchronized with our internal dossier. For official inquiries, please contact the Principal Investigator.
Pedagogical Evaluation, Design, & Analysis Lab (PEDAL) Archive v1.0.0
Project Summary
The rapid integration of generative AI into educational practice has created a critical gap in pedagogical research methodology: while educational researchers routinely design experiments, interventions, and assessments, the AI-generated instructional artifacts that power classroom learning remain largely undocumented, uncited, and methodologically opaque. This project introduces PEDAL (Pedagogical Evaluation, Design, & Analysis Lab)—a comprehensive digital repository and research platform that treats large language model prompts as first-class citable, reproducible scholarly artifacts.
PEDAL v1.0 operationalizes the Scholarly Sync 2 (SS2) metadata standard, a hierarchical taxonomy of 30+ research variables spanning cognitive frameworks (Bloom's Taxonomy, Webb's Depth of Knowledge), pedagogical strategies (SAMR, 5E Instructional Model, guided inquiry), educational standards (NGSS alignment), and accessibility considerations. The platform implements a "copy & log" workflow that enables educators and researchers to architect pedagogical prompts within a Git-like versioning environment, execute them across external state-of-the-art LLMs (Claude, GPT-4, O1), and log execution results with performance metrics and quality assessment.
By combining intent-aware prompt engineering, automated metadata mapping, research-grade analytics, and Zenodo-integrated DOI acquisition, PEDAL transforms ad-hoc AI-generated instruction into verifiable, archival evidence suitable for peer-reviewed publication and longitudinal impact study. The platform bridges AI capability with scholarly rigor, enabling the academic community to collectively build a searchable nexus of pedagogically sound, standards-aligned, and ethically guardrailed AI architectures for K–12 and higher education.
Keywords: AI & Education, Pedagogical Engineering, Open Research Infrastructure, Prompt Curation, Scholarly Metadata, Educational AI Ethics, Reproducibility in EdTech

- Project Lead: Murat Kahveci, Ph.D.
- Affiliation: Kahveci Nexus
- License: Open Research Infrastructure (Citation Required)
- Version: 1.0.0 (Production Release)
- Launch Date: April 2026
SUMMARY
PEDAL v1.0 is a production-ready, research-grade digital laboratory for the systematic design, curation, analysis, and publication of AI-generated educational artifacts. The platform addresses three interconnected challenges in AI-driven education:
- Methodological Transparency: Educational researchers and practitioners currently lack a standardized framework for documenting and archiving AI-generated instructional content, making it impossible to verify, reproduce, or meaningfully analyze educational outcomes driven by generative AI.
- Scholarly Attribution & Citability: AI-generated pedagogical tools exist in a citation gray zone—they are neither traditional instructional materials nor peer-reviewed publications, yet they carry significant epistemological weight in K–12 classrooms. PEDAL enables permanent DOI-based attribution and formal academic citation.
- Pedagogical Validation at Scale: Without structured metadata, it is difficult for researchers to systematically analyze the pedagogical alignment of AI outputs with established educational frameworks (e.g., Does this prompt actually operate at the "Analyze" level of Bloom's Taxonomy, or does it merely claim to?).
The platform's design is guided by four principles:
- Artifact-Centric Research: Prompts are treated as versioned research instruments with complete metadata, not transient conversations.
- Pedagogical Rigor: Every prompt is classified using six complementary taxonomic frameworks (Bloom's, Webb's DOK, SAMR, 5E Phases, misconception targeting, accessibility presets) before deployment.
- Ethical Guardrails: Mandatory declarations of factuality standards, citation requirements, data provenance, and anti-bias commitments embedded in platform architecture.
- Open Science Readiness: Built-in Zenodo integration, LaTeX export for journal manuscripts, JSON export for data repositories, and public analytics dashboards support immediate scholarly publication workflows.
The platform consolidates three previously fragmented workflows into a single, coherent environment:
- Orchestrator: Intent-aware generative prompt architecture with auto-detection of pedagogical metadata.
- Laboratory Editor: High-fidelity prompt refinement with Git-style versioning, misconception targeting, and NGSS alignment mapping.
- Research Analytics Dashboard: Visitor tracking, artifact citation metrics, referral source mapping, and structured data export (LaTeX, JSON, CSV) for inclusion in peer-reviewed publications.
Outcomes & Products
Technical & Architectural Achievements
1. Complete Metadata Standardization (SS2 v1.0)
Operationalized a comprehensive research metadata standard encompassing:
- Cognitive Frameworks: Bloom's Taxonomy (6 levels), Webb's Depth of Knowledge (4 levels), Fink's Taxonomy of Significant Learning
- Pedagogical Alignment: SAMR integration (Substitution, Augmentation, Modification, Redefinition), 5E Instructional Model (Engage, Explore, Explain, Elaborate, Evaluate)
- Content & Standards: Subject-matter classification (8 science disciplines, including physics and biology), NGSS Performance Expectation mapping (Science & Engineering Practices, Crosscutting Concepts, Disciplinary Core Ideas)
- Educational Intents: 12 categorical pedagogical purposes (lesson planning, assessment design, inquiry facilitation, misconception diagnosis, rubric generation, etc.)
- Accessibility & Inclusion: 6 accessibility presets (standard, ELL scaffold, IEP-modified, visual support, simplified reading, multilingual)
- Prompting Strategies: 12 generative engineering approaches (zero-shot, few-shot, chain-of-thought, Socratic, RAG-augmented, tree-of-thought, iterative, etc.)
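To make the SS2 variable set concrete, a single artifact record might look like the following sketch. The field names and the minimal validity check are illustrative assumptions, not the actual SS2 schema.

```javascript
// Hypothetical SS2 metadata record; field names are illustrative,
// not the platform's actual column names.
const ss2Record = {
  title: "Stoichiometry Misconception Probe",
  bloomLevel: "Analyze",              // one of Bloom's 6 levels
  webbDok: 3,                         // Webb's DOK, 1-4
  samr: "Modification",
  fiveEPhase: "Explore",
  subject: "Chemistry",
  ngss: ["HS-PS1-7"],
  intent: "misconception diagnosis",  // one of 12 educational intents
  accessibilityPreset: "ELL scaffold",
  promptingStrategy: "Socratic",
};

// Minimal structural check before a record enters the archive.
function isValidSs2(record) {
  const required = ["title", "bloomLevel", "webbDok", "intent"];
  return required.every((k) => record[k] !== undefined) &&
         record.webbDok >= 1 && record.webbDok <= 4;
}
```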
2. Intent-Aware Generative Architecture
Implemented dual-persona AI orchestration:
- Student-Facing Prompts: Automatically trigger Socratic Mentor persona with scaffolding, guided questioning, and misconception-awareness logic.
- Teacher-Facing Prompts: Automatically trigger Instructional Designer persona with curriculum alignment, assessment frameworks, and professional pedagogical language.
The platform auto-detects pedagogical intent during the Scholarly Inject workflow and maps it to the appropriate persona without manual user intervention.
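The intent-to-persona routing described above can be sketched as a simple lookup. The intent groupings below are assumptions drawn from the 12 intent categories; the actual orchestration logic is internal to the platform.

```javascript
// Sketch of intent-to-persona routing; groupings are assumptions.
const STUDENT_INTENTS = new Set([
  "inquiry facilitation",
  "misconception diagnosis",
]);
const TEACHER_INTENTS = new Set([
  "lesson planning",
  "assessment design",
  "rubric generation",
]);

function selectPersona(intent) {
  if (STUDENT_INTENTS.has(intent)) return "Socratic Mentor";
  if (TEACHER_INTENTS.has(intent)) return "Instructional Designer";
  return "Instructional Designer"; // conservative default for unknown intents
}
```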
Result: 85+ distinct prompt architectures tested and classified; zero manual configuration required for metadata population on new artifacts.
3. Scholarly Sync 2 (SS2) Database Schema
Built immutable, audit-trail-ready infrastructure with:
- 30+ Classification Fields: Automatically populated via metadata injection engine
- Versioning & Branching: Git-like commit history with branch management (MAIN vs. Sandbox), full change attribution and timestamps
- Execution Logs: Comprehensive tracking of every prompt run (token usage, latency, cost, AI model used, user agent, referrer URL)
- Quality Assessment: 5-star rating system with granular feedback categories and aggregate quality indices
- Provenance Tracking: Full intellectual pedigree from initial conception through multiple iterations, including DOI-minted versions
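A minimal sketch of the append-only, Git-like commit model described above, assuming a simple in-memory history; the entry shape is hypothetical, not the SS2 table layout.

```javascript
// Append-only commit history sketch (assumed structure).
function commit(history, branch, content, author, rationale) {
  const entry = {
    id: history.length + 1,
    branch,                          // "MAIN" or "Sandbox"
    content,
    author,
    rationale,                       // change rationale for audit trail
    timestamp: new Date().toISOString(),
  };
  return [...history, entry];        // history itself is never mutated
}
```

Returning a new array rather than mutating the old one mirrors the immutability and full change attribution the schema requires.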
4. Scholarly Inject Toolkit & Auto-Extraction Engine
Developed universal metadata importer that:
- Accepts raw, unstructured AI responses (with conversation noise)
- Auto-detects academic title, prompt logic, and pedagogical intent
- Surgically extracts 30+ metadata variables without manual classification
- Maps variables to database schema in single operation
- Generates standardized output artifact with citation-ready metadata JSON
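The extraction pipeline above can be illustrated with a toy version of the first two steps: pulling an academic title and stripping conversational noise from a raw response. The heuristics here are simplified assumptions; the real engine extracts 30+ variables.

```javascript
// Toy sketch of title/prompt extraction from a noisy AI response.
function extractArtifact(rawResponse) {
  const lines = rawResponse.split("\n").map((l) => l.trim()).filter(Boolean);
  // Assume the first markdown heading (or the first line) is the title.
  const heading = lines.find((l) => l.startsWith("#"));
  const title = (heading ?? lines[0]).replace(/^#+\s*/, "");
  // Drop common conversational filler; keep the prompt logic.
  const body = lines.filter((l) => !/^(Sure|Of course|Here is)/i.test(l));
  return { title, prompt: body.join("\n") };
}
```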
5. Research Analytics & Impact Tracking
Deployed research-grade analytics infrastructure capturing:
- Artifact-Level Metrics: View counts, download frequency, quality ratings per prompt version, user feedback distribution
- Referral Source Mapping: Identifies discovery pathways (ChatGPT integration, university domain traffic, social media, institutional repositories)
- User Agent Classification: Distinguishes mobile/desktop/bot traffic to infer researcher vs. student vs. automated access patterns
- Scholarly Export Engine: Automated generation of LaTeX tables, CSV datasets, and JSON archives ready for academic publication
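The user-agent triage above might be implemented along these lines; the patterns are deliberately simplified assumptions, not the platform's actual classifier.

```javascript
// Simplified user-agent classifier (illustrative patterns only).
function classifyUserAgent(ua) {
  if (/bot|crawler|spider/i.test(ua)) return "bot";
  if (/Mobile|Android|iPhone/i.test(ua)) return "mobile";
  return "desktop";
}
```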
Scholarly Infrastructure & Publication-Ready Status
6. DOI Integration & Zenodo Readiness
Established complete pipeline for permanent scholarly archival:
- Integrated Zenodo API endpoints for direct DOI minting
- Pre-formatted APA 7th and BibTeX citation blocks embedded in UI
- "How to Cite" documentation blocks visible on all public artifact pages
- JSON export schema supports Zenodo dataset deposition workflow
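Once a DOI is minted, the pre-formatted citation block can be generated from the artifact's metadata. The sketch below assumes a `@misc` BibTeX entry, a shape commonly used for Zenodo deposits; the field layout is an assumption, not PEDAL's exact template.

```javascript
// Generate a BibTeX block from artifact metadata (assumed fields).
function toBibtex({ key, author, title, year, doi, version }) {
  return [
    `@misc{${key},`,
    `  author = {${author}},`,
    `  title  = {${title}},`,
    `  year   = {${year}},`,
    `  doi    = {${doi}},`,
    `  note   = {Version ${version}},`,
    `}`,
  ].join("\n");
}
```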
7. NGSS Standards Alignment & Datalist Integration
Implemented high-fidelity standards mapping:
- Complete datalist of NGSS Performance Expectations (K–12 science standards)
- Per-artifact mapping to specific Science & Engineering Practices (SEPs), Crosscutting Concepts (CCCs), and Disciplinary Core Ideas (DCIs)
- Enables researchers to query prompts by standards alignment and identify pedagogical gaps in the archive
8. Ethical Guardrails & Governance Framework
Embedded mandatory ethical declarations:
- Factuality-Only Requirement: Every prompt must declare data integrity and fact-checking procedures
- Citation Necessity: Strict protocols for attribution of source material, preventing plagiarism or misinformation
- No Invented Data: Prompts generating synthetic case studies must explicitly flag simulated vs. real data
- Bias Awareness: Metacognitive reflection on potential cultural, gender, or socioeconomic bias in prompt design
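Because the declarations are mandatory, the publish workflow presumably rejects artifacts that lack any of them. A minimal sketch of such a gate, with assumed flag names:

```javascript
// Publish gate sketch; declaration flag names are assumptions.
const REQUIRED_DECLARATIONS = [
  "factualityOnly",
  "citationsProvided",
  "noInventedData",
  "biasReviewed",
];

function canPublish(artifact) {
  return REQUIRED_DECLARATIONS.every(
    (d) => artifact.declarations?.[d] === true
  );
}
```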
Operational & Deployment Metrics
9. Technical Stack & Production Deployment
- Backend: PHP 8.x + MySQL 8.0 (XAMPP dev; Hostinger prod)
- Frontend: Vanilla JavaScript + Bootstrap 5, integrated with Dossier ecosystem at kahveci.pw
- Design System: Industrial "True Dark" aesthetic (#0D0D0D primary, forest-green/gold accents)
- Database Synchronization: Self-healing migration protocol ensuring localhost and production schema consistency
- API Agnosticism: Prompt execution decoupled from specific LLM provider; supports Claude, GPT-4, O1, and future models
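The self-healing migration idea can be sketched as a diff between the expected schema and the columns actually present in the live database, emitting the statements needed to reconcile them. Table and column names here are illustrative assumptions.

```javascript
// Schema reconciliation sketch: expected is { columnName: sqlType },
// liveColumns is the column list reported by the live database.
function missingColumnStatements(expected, liveColumns, table) {
  return Object.entries(expected)
    .filter(([name]) => !liveColumns.includes(name))
    .map(([name, type]) => `ALTER TABLE ${table} ADD COLUMN ${name} ${type};`);
}
```

Running this at deploy time on both localhost and production keeps the two schemas convergent without manual migration scripts.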
10. Workflow Integrity & User Experience
Validated complete research-to-publication pipeline:
- Orchestrator Injection: 3-step interface guides users from raw AI response to fully classified scholarly artifact
- Editor Refinement: Git-like interface supports iterative prompt improvement with full version control
- Lab Execution: Single-click deployment to external LLMs with automatic logging of responses
- Scholarly Export: One-click LaTeX/CSV/JSON generation suitable for journal submission or data repository archival
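The one-click LaTeX export in the pipeline above might render artifact metrics as a `tabular` environment ready to paste into a manuscript; the row fields are hypothetical.

```javascript
// Render artifact metrics as a LaTeX tabular (illustrative fields).
function toLatexTable(rows) {
  const body = rows
    .map((r) => `${r.artifact} & ${r.views} & ${r.rating} \\\\`)
    .join("\n");
  return [
    "\\begin{tabular}{lrr}",
    "Artifact & Views & Rating \\\\ \\hline",
    body,
    "\\end{tabular}",
  ].join("\n");
}
```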
Research Impact & Sustainability
11. Evidence of Scholarly Demand
- Interdisciplinary Alignment: PEDAL addresses research needs spanning Chemistry Education, Science Education, Educational Technology, and AI Ethics
- Open Science Integration: Designed from inception to support open research principles (versioning, archival, public analytics, reproducibility)
- Zenodo-Ready Archival: First snapshot scheduled for April 2026; supports longitudinal tracking of pedagogical AI adoption across research community
12. Reproducibility & Verification Standards
Every published prompt includes:
- Immutable Version Record: Complete edit history with author, timestamp, and change rationale
- SS2 Metadata JSON: Machine-readable classification enabling meta-analysis and systematic review
- Execution Logs: Complete trace of AI model, token usage, and performance metrics for every run
- Quality Documentation: Peer review ratings, user feedback, and refinement notes supporting evidence-based prompt improvement
Strategic Positioning for Academic Publication
13. Path to Peer-Reviewed Publication
The v1.0 platform enables a two-paper publication strategy:
- Platform/Methods Paper: Focused on SS2 metadata standard, pedagogical architecture, and research infrastructure (target: Journal of Educational Software & Technology or Computers & Education)
- Empirical Impact Paper: Focused on longitudinal analysis of pedagogical effectiveness and user adoption patterns (target: subject-matter journal like Journal of Chemical Education or Science Education)
Current Position: Platform architecture and workflow integrity fully validated; ready to collect preliminary usage data and empirical evidence in the second semester of 2026.
🚀 V1.0 PRODUCTION READINESS CERTIFICATION
| Dimension | Status | Evidence |
|---|---|---|
| Metadata Standardization | ✅ Complete | SS2 schema with 30+ classified variables operational |
| Technical Deployment | ✅ Complete | Hostinger production + self-healing migration validated |
| Ethical Guardrails | ✅ Complete | Mandatory declarations embedded in publish workflow |
| Scholarly Attribution | ✅ Complete | Zenodo integration + DOI pipeline ready |
| Analytics Infrastructure | ✅ Complete | Research-grade tracking with LaTeX/JSON/CSV export |
| User Workflows | ✅ Validated | All core pipelines tested; <2 min classification time |
| NGSS Alignment | ✅ Complete | K–12 standards mapping operational |
| Documentation | ✅ Complete | API, user guide, researcher documentation finalized |
💡 NEXT PHASE: V1.1 & BEYOND
- Immediate (Q2 2026): Public launch; begin collecting usage analytics and quality assessments from researcher community
- Short-term (Q3–Q4 2026): Empirical analysis of pedagogical effectiveness; preliminary user adoption metrics
- Medium-term (2027): Platform paper submission; first Zenodo dataset DOI publication; integration with university LMS systems
- Long-term (2028+): Discipline-specific spin-offs (PEDAL-Chemistry, PEDAL-Biology); federated network of institutional PEDAL instances