Research
MDU research is built for auditability: clear questions, reproducible protocols, data governance, and peer review. Outputs are not limited to papers—standards, tooling, datasets, and public templates count.
- Accessibility
- Causal Inference
- Civic Technology
- Climate Adaptation
- Clinical Data Ethics
- Data Governance
- Design Systems
- Evaluation
- Governance
- Learning Science
- Measurement
- Mentorship
- Mentorship Systems
- Mobility
- Monitoring
- Pedagogy
- Public Services
- Reproducibility
- Research Operations
- Responsible AI
- Risk Modeling
- Sustainability
- Sustainable Urbanism
- Tooling
- Typography
- Urban Futures
Themes overlap: supervised labs require governance; semantics work needs reproducible evaluation. Common clusters include:
- Civic Technology / Public Services / Evaluation
- Reproducibility / Data Governance / Research Operations
- Responsible AI / Evaluation / Governance
From question to output
Labs publish work that can be checked: evaluation protocols, decision records, risk tiers, and clear claims. Students often enter through course projects and grow into co-authored outputs.
- Scope: define constraints, stakeholders, and safety boundaries.
- Protocol: specify data handling, baselines, and evaluation plan.
- Build: implement with logging, versioning, and traceable decisions.
- Review: internal review + external partner feedback when applicable.
- Release: publish artifact, template, or paper with documentation.
Partnerships start small: capstone reviews, evaluation sprints, data governance audits, or co-designed templates.
| Year | Type | Title | Themes |
|---|---|---|---|
| 2026 | Publication | Open Benchmarks for Civic LLMs | Evaluation / Responsible AI / Public Services |
| 2025 | Publication | Procurement-Ready Evaluation: A Monitoring Playbook for Public Sector AI | Responsible AI / Public Services |
| 2025 | Publication | Safety Cases for ML in Health Systems | Clinical Data Ethics / Governance / Monitoring |
| 2024 | Project | Open Syllabus Tooling for Reproducible Curriculum | Learning Science / Reproducibility / Tooling |
| 2024 | Publication | Gentle Rigor: Feedback Loops for Adult Learners | Pedagogy / Mentorship |
| 2023 | Publication | Rhythm as Interface: Editorial Principles for Digital Products | Design Systems / Typography |
We welcome real problems under clear ethics and audit constraints; early engagements such as capstone reviews often grow into joint research tracks.
RA/TA roles are posted 6–8 weeks before each term starts. We value work samples and working habits: logging, revision, and clear boundaries.