MDU

Research
MDU research is built for auditability: clear questions, reproducible protocols, data governance, and peer review. Outputs are not limited to papers; standards, tooling, datasets, and public templates all count.

Research areas
Themes
  • Accessibility
  • Causal Inference
  • Civic Technology
  • Climate Adaptation
  • Clinical Data Ethics
  • Data Governance
  • Design Systems
  • Evaluation
  • Governance
  • Learning Science
  • Measurement
  • Mentorship
  • Mentorship Systems
  • Mobility
  • Monitoring
  • Pedagogy
  • Public Services
  • Reproducibility
  • Research Operations
  • Responsible AI
  • Risk Modeling
  • Sustainability
  • Sustainable Urbanism
  • Tooling
  • Typography
  • Urban Futures

Themes overlap in practice: supervised lab work requires governance, and systems work needs reproducible evaluation.

Featured units
Lab
Computational Civic Lab

Civic Technology / Public Services / Evaluation

Center
Open Methods & Reproducibility Center

Reproducibility / Data Governance / Research Operations

Lab
Responsible Systems Lab

Responsible AI / Evaluation / Governance

How research runs

From question to output

Labs publish work that can be checked: evaluation protocols, decision records, risk tiers, and clear claims. Students often enter through course projects and grow into co-authored outputs.

  1. Scope: define constraints, stakeholders, and safety boundaries.
  2. Protocol: specify data handling, baselines, and evaluation plan.
  3. Build: implement with logging, versioning, and traceable decisions.
  4. Review: internal review + external partner feedback when applicable.
  5. Release: publish artifact, template, or paper with documentation.
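The five stages above are sequential: each one gates the next. As a minimal sketch, that ordering can be expressed as a simple checklist object. Everything here (`STAGES`, `ProjectRecord`, the example title) is hypothetical illustration, not MDU tooling.

```python
from dataclasses import dataclass, field

# Hypothetical stage names mirroring the numbered list above.
STAGES = ["scope", "protocol", "build", "review", "release"]

@dataclass
class ProjectRecord:
    """Tracks which pipeline stages a project has completed, in order."""
    title: str
    completed: list = field(default_factory=list)

    def advance(self, stage: str) -> None:
        # Stages must be completed strictly in order; skipping is an error.
        expected = STAGES[len(self.completed)]
        if stage != expected:
            raise ValueError(f"expected stage {expected!r}, got {stage!r}")
        self.completed.append(stage)

    @property
    def releasable(self) -> bool:
        # A project can ship only after every stage is done.
        return self.completed == STAGES

record = ProjectRecord("Civic LLM benchmark audit")
for stage in STAGES:
    record.advance(stage)
print(record.releasable)  # True
```

The point of the gate is that "Release" cannot happen before "Review": calling `advance("release")` out of order raises immediately, which is the same auditability property the prose describes.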

Ways to collaborate

Partnerships start small: capstone reviews, evaluation sprints, data governance audits, or co-designed templates.

Latest outputs
| Year | Type | Title | Themes |
| --- | --- | --- | --- |
| 2026 | Publication | Open Benchmarks for Civic LLMs | Evaluation / Responsible AI / Public Services |
| 2025 | Publication | Procurement-Ready Evaluation: A Monitoring Playbook for Public Sector AI | Responsible AI / Public Services |
| 2025 | Publication | Safety Cases for ML in Health Systems | Clinical Data Ethics / Governance / Monitoring |
| 2024 | Project | Open Syllabus Tooling for Reproducible Curriculum | Learning Science / Reproducibility / Tooling |
| 2024 | Publication | Gentle Rigor: Feedback Loops for Adult Learners | Pedagogy / Mentorship |
| 2023 | Publication | Rhythm as Interface: Editorial Principles for Digital Products | Design Systems / Typography |
Partnerships & joining
Industry & civic partnerships

We welcome real problems under clear ethics and audit constraints. Partnerships often start with capstone reviews and grow into joint research tracks.

Research roles

RA/TA roles are posted 6–8 weeks before the term starts. We value work samples and working habits: logging, revision, and clear boundaries.