Pointing AI at the problem, not the symptom


Alaina Luxmoore

Director of Marketing

March 7, 2026

10 mins

Three people standing in front of a screen which says AI for Equity

There's a version of the AI conversation that's getting a little tired.

Productivity gains. Efficiency at scale. Automation of the mundane. These are real, and they matter, but they're also the safe answers. The ones that get nodded through in boardrooms without anyone having to get uncomfortable.

But yesterday, I got to watch (and participate in) a room full of people getting uncomfortable in the best possible way.

At RUSH, we ran an internal hackathon with a single-minded, deliberately provocative brief: AI for Equity. Not AI for growth. Not AI for competitive advantage. AI as a force for levelling a playing field that, by most measures, is still pretty tilted.

Building for Equity Across All Identities

Over the past few years for International Women's Day, we’ve hosted internal and external panels, shared resources and facilitated conversation to shine a light on inequity, gender issues, cultural tolerance, neurodivergence, te Tiriti…

But something about 2026 is demanding ACTION.

So for this year’s International Women’s Day, we decided to do something a bit different. 

Back in February we put out feelers for an internal Micro-Hackathon to focus as a team on how we might "Balance the Scales" through ethical AI. We surfaced three challenge tracks, and invited the team to submit a project idea or problem statement where AI currently leaves people behind, or could be used to address an imbalance.

Gender Equity: Addressing bias or unmet needs for women in areas like health, finance, and safety.

Intersectionality: Exploring how AI fails when gender meets other identities (e.g., gender + disability or culture).

Diversity: Focusing on equity for all including accessibility, pride/LGBTQ+, culture, ethnicity, and age.

The Walls We Stop Noticing

But first, a note on systemic inequity. It doesn't announce itself. It accumulates quietly in the gaps between what we intend and what we do.

It's the woman on parental leave whose skills quietly atrophy while the organisation moves on without her. It's the person in the meeting who raises their hand and never gets called on. It's the colleague whose first language isn't English, whose ideas get half the airtime because fluency gets mistaken for intelligence. It's the developer who ships a beautiful product that people with particular disabilities simply cannot use.

None of these things happens because anyone decided to be unkind. Ironically, they often occur in spaces where people identify as allies and then fall into the trap of not noticing new inequities. They happen because the default settings of most workplaces - and most technology - were built by a particular kind of person, for a particular kind of person.

AI didn't create that problem, but it might be one of the more interesting tools we have to address it. And our team was up for the challenge of facing these uncomfortable truths, and using our skills to ideate change.

The Hackathon & The Ideas

From a longlist of submitted ideas, we started our Micro-Hackathon with short pitches to the company to articulate why our idea was worth developing, and to entice people onto our teams (AI-enabled designers & engineers being most highly sought after!).

Ultimately, seven distinct ideas went rapidly from pitch to MVP, and the concepts our team produced ranged from the immediately deployable to the genuinely visionary.

Without giving too much away, here are just a few of the products we demo’d at the end of the three-hour fast-build:

  • A career re-entry platform for people returning from extended leave which included a way to actively document and highlight the transferable skills built during time away (such as parental leave, carer leave or medical leave)
  • A conversational AI designed to surface invisible advantages, built on cultural humility research and anti-oppressive practice frameworks to guide users through their own privilege blind spots without shame or blame and into an action plan
  • A real-time interruption detection tool for meetings that tracks when someone is getting cut off or spoken over
  • A connection platform designed to connect people with mentors who truly understand their experience; not just professionally, but across health, disability, culture, and identity.

With team members combining tools and systems like Claude Artifacts, Cursor, Perplexity Research, Figma Make, Supabase and Docker, we were able to point AI and technology at lived experiences in the hope of addressing moments where systemic inequity showed up.

Not only were the tech solutions as inspiring as they were feasible; the conversations around them were just as valuable.

“Did you know that men interrupt women 33% more often than they interrupt other men, and women interrupt men half as much as this.”

“Over a billion people with disabilities often find websites unusable.”

“We found that most career platforms are biased towards a male style of career development.”

“A hidden privilege that others don't carry is the mental load of learning to constantly assess whether spaces are safe based on your race and gender.”

(And yes, these are actual quotes from the transcript of our hackathon.)

It is an incredible thing to be able to raise these issues openly in a workplace and have colleagues not shy away from them, but want to lean into making things better.

The tension worth sitting with

I want to be honest about something, because I think the uplifting version of this story does a disservice to the harder question underneath it.

AI is not a neutral tool. It reflects the data it's trained on, the priorities of the people who built it, and the assumptions baked into every design decision along the way. There are well-documented cases of AI amplifying bias rather than reducing it; in hiring, in healthcare, in criminal justice. And it's not environmentally neutral either; the energy cost of training and running these models is significant, and largely invisible to the people deploying them. Equity has to include the planet.

So there's a real and important distinction between using AI to make the current system faster and using AI to interrogate the system itself.

What struck me about the work our team produced was that the better concepts were doing the second thing. They weren't automating existing processes. They were surfacing things we'd trained ourselves not to see.

That feels like the more interesting, and ultimately more responsible, direction of travel.

What good innovation actually looks like

One of our Hackathon judges made an observation during the debrief that I keep turning over. The winning tool wasn’t just technically feasible; it addressed bias in the design and development process itself. It considered physical bias, geographical bias, language bias. It went upstream.

That's the move. Not patching inequity at the output - but catching it at the input.

It's also, if I'm honest, a reasonable metaphor for what AI-for-equity needs to be more broadly. The question isn't just "can we build a tool to help underrepresented groups?" It's "how do we build tools and systems differently in the first place?"

Feeling hopeful yet?

A room of people given a half day and a brief that asked them to care about feminism, equity and intersectional depth produced multiple ideas worth taking seriously. And yes, we’re putting a couple of them into production.

So, dear reader. What would you build if you pointed AI at the problem instead of the symptom?

I hope this inspires you and your team to run an event like this too - and if you do, I’d really, really love to hear about it.
