Participation by design & the civic responsibility of AI

Terry Williams-Willcock

Chief Customer Officer

August 11, 2025

6 mins

Artificial intelligence is doing more than powering the tools we use each day; it's shaping the systems we live within. From how we access public services to how we engage with politics, AI is increasingly making decisions that affect people's lives. But if those systems are designed without care, without context, and without input from the communities they're meant to serve, they deepen the divide.

Designers today are not just crafting interfaces; we are shaping the very conditions of participation. And in a world where AI determines what's true, what's read, what's seen, and who gets heard, inclusive, responsible design becomes a democratic imperative.

That means we need to reframe how we think about design. It's deeper than what a product looks like or how it feels to use. We need to think about the invisible rules beneath the surface: the assumptions that get embedded into the data, the incentives driving system behaviour, the defaults that shape user choice. These decisions determine whether technology builds trust or erodes it, and whether it includes or excludes important members of our society.

When the compass for AI design points narrowly towards shareholder return or long-term engagement, for example, it risks creating systems that overlook those who need them most, or worse, embed inequities at scale. Blind spots emerge not from intent, but from the limits of what a system is optimised to achieve.

Take digital healthcare, which can scale in ways that expand access rather than restrict it. AI can make booking medical appointments easier, reduce costs, and meet the distinct needs of different communities. We should be incentivising design choices that surface concern, triage appropriately, and connect people with meaningful help. Without that, the drive to "stay ahead" becomes a shortcut to exclusion, leaving the most vulnerable further behind. When people's needs come first, shareholder value follows as a natural outcome, not the other way around. And by setting the goal of meeting real human needs at the outset, such as making healthcare more accessible for every community, we create the conditions for technology to expand opportunity by design, not by accident.

At RUSH, we believe that inclusion cannot be something tacked on at the end. It must be embedded from the start. That belief has informed our partnership with ThoughtFull and the development of Chime - a voice-driven prototype designed to reconnect with disengaged voters by helping them access political information across language, literacy and trust barriers. Chime aims to restore confidence in the act of participation, acknowledging that civic systems must work for the most underserved residents if they're to work for anyone.

These are the design questions we face now. Not 'How do we make it faster?' but 'Who is this for?' and 'What happens when it goes wrong?' The impact of generative AI is unfolding in real time, from the spread of misinformation to algorithm-driven sensationalism on platforms like Facebook and TikTok. We've already seen what happens when design prioritises speed and novelty over care and context. It's time to ask harder questions about the structures we're building.

Designing at a systems level also requires us to go beyond screens. Multimodal AI (voice, gesture, ambient interactions) demands new thinking. Not just about accessibility, but also power dynamics. Whose language is understood and whose accent is recognised? What assumptions are built into these models, and what worldviews are left out?

The stakes are high, and growing, but so is our ability to act. Generative AI gives us the power to prototype faster, test smarter, and learn with users in real time. It can reduce the time to discovery without compromising values, if we choose to wield it that way.

Inclusion in this context is not a checklist, but an important principle of architecture. This means rethinking how we build, with greater accountability, transparency, and co-design principles.

New Zealand is proving that small, nimble nations can lead global conversations about responsible AI implementation. And more importantly, we can offer something unique: a model for inclusive, democratic engagement with emerging technology that respects both innovation and human dignity.

As a country, we have the chance to prove that inclusive design, collaborative governance, and human-centred technology aren't just idealistic aspirations but practical pathways to a more equitable future.

And we're committed to that challenge. Not because it's simple, but because it's necessary. When technology touches every part of our civic lives, from voting to healthcare to justice, it cannot be left to chance or convenience.

Designing for democracy means designing for accountability. It means creating systems that respect complexity, that speak with clarity, and that don’t sacrifice trust for speed. It means understanding that exclusion is rarely intentional, but always structural, and choosing to dismantle those structures before they calcify.

Ultimately, the way we design AI is a social contract. One that says everyone deserves to be seen, to be informed, and to belong in the future we’re building together.