Current Research Projects
We have envisioned a wide range of potential projects for the 2026 program that align with three high-level research themes. All of these research areas will be supported by our program mentors and reinforced through professional development activities and technical support.
These projects are currently in progress, and additional projects aligned with the program themes listed below may also be available.
Re-Imagining the Lecture: Co-Designing AI-Augmented Learning Environments for Inclusive Pedagogy
Kyle Montague, Reem Talhouk, Colin Gray, Austin Toombs
This project explores how AI-augmented presentation systems can surface and address the invisible exclusions embedded in traditional lecture-based education—particularly the barriers facing multilingual learners, neurodivergent students, and those who struggle with real-time verbal processing. Current educational AI tools focus on optimizing content delivery or providing individualized tutoring after class, but they fail to address the live classroom moment where much learning happens socially, collaboratively, and spontaneously. Drawing on participatory design, learning sciences, and critical AI studies, this project investigates how real-time AI capabilities—transcription, translation, content generation, and audience interaction—can be orchestrated not to replace pedagogical expertise but to create more democratic, accessible, and responsive learning environments. We ask: How might AI serve as a collaborative partner that helps educators capture spontaneous insights, helps diverse learners engage on their own terms, and transforms classrooms from sites of knowledge transmission into spaces of collective knowledge construction?
Through co-design workshops with educators, students (including international and neurodivergent learners), and community learning organizations, we will develop design frameworks for participatory AI in educational settings. Rather than assuming AI should optimize for lecturer efficiency or content standardization, the project treats AI-augmented pedagogy as a site for investigating power, agency, and inclusion. Using design probes, low-fidelity prototypes, and evaluative studies in authentic educational contexts (university lectures, community workshops, adult education programs), we will examine how features like anonymous confusion-flagging, democratic question upvoting, per-learner caption translation, and AI-suggested content additions reshape classroom dynamics. The project aims to produce a Participatory AI Pedagogy Framework that foregrounds learner agency, educational equity, and the situated expertise of educators—challenging techno-solutionist narratives while exploring genuine opportunities for AI to support more inclusive, engaging, and human-centered learning futures.
Designing for Contentious Conversations: Building Resilience Across Generations
Lauren Scott, James Nicholson, Kyle Montague, Colin Gray
Description TBD
Civic Care 2.0: Designing Playful AI Systems that Notice and Nurture
Anna Carter, Nic Whitton, Austin Toombs, Colin Gray
In this project, we will explore how playful, speculative technologies can support community care in contexts of environmental and social precarity. Building on prior fieldwork in Stanley, a post-industrial town in County Durham, this project examines how everyday acts of care—such as sharing flood alerts, tending communal gardens, or checking on neighbors—can inform more humane, situated approaches to AI design. Rather than automating or formalizing care, the project uses playfulness as a mode of inquiry: how might games, provocations, and speculative prototypes reveal, coordinate, or sustain civic care without surveillance or extraction?
Working with local partners like the Wear Rivers Trust, participants will use a six-dimension framework—Embeddedness, Visibility, Reciprocity, Autonomy with Support, Coordination without Formalization, and Frustration as Data—as both an analytical lens and creative prompt. Through co-design, prototyping, and field-based workshops, students will develop speculative interventions that nurture empathy, solidarity, and mutual support across digital and physical spaces.
Recognizing Harm: Co-Designing Accountability in Facial Recognition Technologies
Kyle Montague, Reem Talhouk, Jamie Mahoney, Hugo Nicolau, Colin Gray
This project explores how community-led design can surface and document the harms experienced by people whose faces are not recognized—or misrecognized—by facial recognition technologies (FRTs). These harms are often invisible in mainstream AI discourse because they are systemic, normalized, and unequally distributed. Drawing on participatory design, accessibility research, and responsible AI, the project investigates how minoritized communities can shape the language, tools, and infrastructures used to hold FRT systems accountable.
Through co-design workshops, diary studies, and iterative prototyping, we are developing an open-source reporting platform and harm-classification framework known as the Harm Accountability and Reporting Matrix (HARM). Rather than focusing on algorithmic accuracy alone, this project treats FRT as a civic and ethical concern, foregrounding lived experience, social context, and the labor required to "work around" exclusionary systems. By collaborating with advocacy partners and international charities, we aim to build a participatory infrastructure for recognizing harm, amplifying underrepresented voices, and supporting community-driven pathways toward inclusive technology governance.
Research Themes
We will focus on projects across three broad themes in the 2026 program year.

THEME 1
Communities, Democracy, and Society
Potential projects will pioneer innovative AI methodologies to enhance civic engagement and public participation in local governance. Projects may include:
- Designing AI-driven programs to match citizens with services from local councils, enhancing the efficiency and reach of public service delivery.
- Developing AI-enabled processes to increase public engagement in local government decision-making, such as targeting outreach to the citizens most affected by a policy change or using AI to make complex issues easier for the broader public to understand.
- Utilizing Digital Civics approaches to design AI-powered dashboards for disseminating local public health data, balancing granularity with privacy concerns.

THEME 2
AI, Design, and Co-Creation
Students working within this theme will focus on the role of AI as a collaborative actor in design processes. Projects may include:
- Developing frameworks for how AI tools could be used by citizens to co-create new policy proposals for local governance that align with their values.
- Analyzing how AI tools used as design partners will reshape the future workplaces of design agencies that rely on them.
- Proposing speculative futures and critical design alternatives for how communities adjust to the ubiquity of AI tools and AI-embedded processes.

THEME 3
Identity, Privacy, Security, and Information
In this theme, students will tackle the complex role of AI in online environments. Projects may include:
- Documenting design strategies for AI personalization that respect user privacy, navigating the delicate balance between customization and confidentiality.
- Developing detection and response frameworks for the AI-enabled spread of misleading or false information, with a focus on community impact and remediation.
- Proposing solutions to counteract biases in AI systems, emphasizing the need for varied datasets to represent diverse identities.
Northumbria University Mentors
You will work with a core group of mentors at Northumbria, including those listed below. Depending on the final slate of 2026 projects, we will add further mentors representing multiple disciplinary perspectives.

Kyle Montague
Professor, Human-Computer Interaction
Expertise: accessibility, participatory platforms, AI democratization, inclusive design

Pam Briggs
Professor, Psychology
Expertise: usable privacy and security, digital health, aging, cyber-psychology

Reem Talhouk
Associate Professor, Design
Expertise: participatory design, marginalized communities, AI literacy

Anna R. L. Carter
Research Fellow
Expertise: community care, care ethics, neurodiversity, feminist HCI

Nic Whitton
Professor, Computer Science
Expertise: playful learning, play studies, education, psychology

Jamie Mahoney
Lecturer, Computer Science
Expertise: social media and society, ethics

Hugo Nicolau
Assistant Professor, University of Lisbon
Expertise: human-computer interaction, inclusion, education, health

Dan Jackson
Senior Research Software Engineer
Expertise: ubiquitous/pervasive computing, health, wellbeing

Matt Wood
Assistant Professor, Human-Computer Interaction
Expertise: play, sexuality, qualitative methods, puppets

Program Resources
As part of the program, you will have access to extensive research training and technical expertise to help you engage with the socio-technical complexity of each project.
Mentorship and Training
- Professional Development
  - A series of professional development workshops will cover topics like AI tools, human-centered design methods, design-focused research approaches, and academic writing.
  - You will work closely with research mentors from Northumbria University, who will guide you throughout the program and provide valuable research and career insights.
- Collaborative and Cultural Learning
  - The program includes at least 18 hours of cultural immersion, with visits to historic and cultural landmarks such as Durham Cathedral, Alnwick Castle, and Hadrian's Wall.
  - You will collaborate with students from the U.S., U.K., Germany, and Portugal, including participation in the annual Digital Civics Exchange (DCX) event.
- Continuous Support
  - A dedicated Slack workspace will be available for all participants, mentors, and faculty, allowing you to stay connected, collaborate on projects, and build a professional network.
Technical Resources
- You will have access to state-of-the-art research facilities, including cyber labs, games labs, VR spaces, and 3D printing labs.
- Technical equipment, such as laptops, smartphones, wearable devices, and more, will be made available to support your research projects.
- Dedicated software engineers will support the technical aspects of your project, from prototype development to deployment.
FAQs
Do I need previous research experience to apply?
No! We welcome students from all backgrounds. You don’t need to have prior research experience — curiosity, commitment, and a willingness to learn are what matter most. Some students come from design, computing, or social science backgrounds; others have never done formal research before. The program is structured to teach you the necessary skills and guide you through a full research process with close mentorship.
My previous research experience is very technical (e.g., machine learning, data science). Can I still apply?
Yes — absolutely. However, the focus of IRES is on human-centered and participatory research, not purely technical AI development. You’ll explore how AI and data intersect with people, communities, and society. If a project requires technical components, program staff will provide technical support, but your main role will involve design, engagement, analysis, and reflection, rather than coding or model development.
Can I take summer courses back at my home institution while participating in IRES?
Unfortunately, no. Because IRES is a full-time, immersive program, students are not permitted to enroll in summer courses (or other major commitments) during the program period. This ensures you can focus fully on your research project, community collaboration, and cohort activities.
What disciplines or majors are best suited for IRES?
We welcome students from all disciplines, including (but not limited to): design, communication, computer science, data science, social sciences, humanities, policy, and the arts. Projects often require mixed skills — from interviewing and workshop facilitation to visualization, writing, and prototyping — so diversity of perspective is highly valued.
Will I work alone or in a team?
You’ll work as part of a small interdisciplinary team supervised by two or more faculty mentors. Many projects also involve collaboration with community partners, so you’ll engage directly with real organizations and the social issues they address.
Do I need to know programming or data science?
No — programming or data science skills are not required. If your project uses AI, data, or technical tools, mentors will guide you and provide resources. What matters most is your ability to think critically and creatively about technology’s role in society.
What does a typical week in IRES look like?
You’ll spend time conducting research, meeting with your mentor and project team, attending workshops, and participating in cohort discussions. Some weeks focus more on design and data collection; others on analysis and presentation. It’s a blend of structured learning and self-directed exploration.
