Panel of five women seated in red patterned chairs during a discussion, with a large screen displaying speaker bios in the background.

Bridges to Tomorrow: Women Engineering AI Solutions That Matter

A Night at the Ritchie School

I have been meaning to share my thoughts on this event for quite some time, but Data Love has been in the middle of two pilots and a critical product sprint as we head toward a planned Q4 Product Launch (IYKYK). On Tuesday, March 31, I had the honor of joining a panel for Bridges to Tomorrow, the seventh annual Women in STEM gathering hosted by DU Alumni Women in STEM, Zayo Group, #wie5280, and Rocky Mountain SWE. The room was full of students, engineers, and technologists — all of us circling the same question from different angles:

What does it actually mean to engineer AI solutions that matter?

This year’s theme — From Capability to Responsibility — captured something I think too few AI conversations name out loud: that AI amplifies what we can do, but the real engineering challenge is deciding what we should do, and building the judgment, systems, and communities to back that up.

The evening opened with something I wish more AI conversations started with: a live, interactive demo. Ritchie School students pulled prompts from the audience and ran them side-by-side through different large language models, surfacing the differences in reasoning, accuracy, creativity — and bias — in real time.
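If you want to recreate the demo's format yourself, the shape is simple: one prompt, several models, outputs side by side. Here is a rough Python sketch, where `query_model` and the model names are placeholders rather than any real API:

```python
# Rough sketch of the side-by-side demo format. query_model() is a stand-in
# for whichever client library each provider requires; the model names below
# are placeholders, not real products.
PROMPT = "Explain why the sky is blue in two sentences."

MODELS = ["model-a", "model-b", "model-c"]

def query_model(model: str, prompt: str) -> str:
    """Stub: swap in a real API call for each provider."""
    return f"[{model}] would answer: {prompt!r}"

# Same prompt, every model, printed side by side so the audience can compare
# reasoning, accuracy, creativity, and bias directly.
for model in MODELS:
    print(f"--- {model} ---")
    print(query_model(model, PROMPT))
```

Run the same prompt through each model and the differences in reasoning, tone, and confidence show up immediately, which is exactly what made the demo land.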

It was the perfect setup for the conversation that followed, because it grounded everyone in the same truth I bring to every Data Love engagement:

The output looks confident. That doesn’t mean it’s correct.

The Panel

Our conversation was guided by Susan Adams, Founder of Women in AI Colorado and Adjunct Professor at the University of Denver, whose work centers on responsible AI and inclusive tech ecosystems. I shared the panel with three technologists I admire deeply:

  • Crystal “Crys” Black — a marketing and operations executive working at the intersection of generative AI, revenue operations, and information architecture. Crys brings a computer science background (with roots that go all the way back to Apple IIe programming) to questions about how organizations adopt AI responsibly.
  • Danika Hannon — a cybersecurity professional and Head of Deep Research at the Quantum Strategy Institute, who has spoken on AI, quantum, and security at SXSW, SnowFROC, RMISC, and the Department of Energy’s Emerging Tech Studio Venture Summit.
  • Katherine Kirchner — SVP of Service Assurance at Zayo Group, where she leads operational excellence and AI-driven automation across one of the largest fiber networks in North America. Katherine brings decades of cross-functional telecom and technology leadership and an engineering-first lens that grounds every AI conversation in real-world infrastructure.
Jasmine Motupalli, founder of The Data Love Co., speaks into a microphone during the "Women Engineering AI Solutions That Matter" panel at the 2026 Bridges to Tomorrow event, seated alongside fellow panelists with their bios displayed on screen behind them.

Susan moved us through three parts: The On-Ramp (how each of us got here and what shifted along the way), The Build (what we’re actually engineering and the tradeoffs we’re making), and What’s at Stake and What Do We Do About It? (ethics, responsibility, and what we owe the future). Here’s where I landed.

From Capability to Responsibility — Bridges to Tomorrow 2026

  1. The On-Ramp: From Army Intel to AI Governance. Commander's intent is the spine of good AI governance; users must understand the why behind every metric.
  2. The Build: Delegate Authority, Not Responsibility. A seven-stage pipeline with human-in-the-loop checkpoints and a strict no-hallucination policy.
  3. What's at Stake: The Missing Guardrails. Closing the blame gap takes anticipatory ethics, public data literacy, and architectural transparency.

Part 1 — The On-Ramp: From Army Intelligence to AI Governance

People sometimes raise an eyebrow when they hear my path went from military intelligence to founding an AI analytics company. To me, it’s the most direct line possible.

In Army intelligence, we lived by the principle of commander’s intent — every order carries a clear purpose so that everyone in the chain understands the why, not just the what. That principle is the spine of how we build at Data Love. If the people using our outputs don’t understand the intent behind the metrics, the system has already failed — no matter how clean the dashboard looks.

The military also taught me something I think about constantly when I see polished AI-generated reports:

Bad intelligence presented confidently is more dangerous than no intelligence at all.

That’s the “polished but wrong” problem in a sentence. AI is exceptionally good at producing clean-looking outputs grounded in the wrong data — and the cleaner it looks, the less likely anyone is to question it.

The third lesson is structural: chain of command means accountability is not optional. You don’t get to say the system made the decision. Someone always signs off. That principle is the foundation of human-in-the-loop governance, and it’s why our core framing at Data Love is that we should be data-informed, not data-driven. Data supports human judgment. It doesn’t replace it.

Part 2 — The Build: Delegate Authority, Not Responsibility

The easy path in AI development right now is what I'd call brute force: dump your full schema into a model, let it generate whatever it generates, and ship a demo. It builds fast. It demos beautifully. It also breaks the moment the math has to be right or the stakes are real.

We chose a harder path. Our platform runs on a seven-stage pipeline where AI handles the parts it's genuinely good at — planning, narrative, summarization — while deterministic logic and human reviewers handle correctness. A few of the design decisions I shared with the audience, with a rough code sketch after the list:

  • A no-hallucination policy. When the system doesn’t have the data, it says so. It does not guess. That single design choice preserves the user’s ability to make a real judgment call instead of inheriting the model’s overconfidence.
  • Human-in-the-loop checkpoints. Recently expanded across the pipeline, because the right place for human judgment isn’t at the end — it’s at the decision points.
  • Execution traces. Every stage records its outputs so that when something goes wrong, we can identify exactly where: table selection, join logic, aggregation, or interpretation.
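
To make those three decisions concrete, here is a minimal, hypothetical Python sketch. It is not our actual pipeline, and every name in it is invented; it only shows how a traced stage, a human-review flag, and an explicit insufficient-data response can fit together:

```python
# Hypothetical sketch -- illustrative names only, not Data Love's real pipeline.
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class StageTrace:
    """Record of one pipeline stage: what went in, what came out."""
    stage: str
    inputs: dict
    output: Any
    needs_human_review: bool = False

@dataclass
class PipelineRun:
    """Collects a trace for every stage so failures can be localized."""
    traces: list = field(default_factory=list)

    def run_stage(self, name: str, fn: Callable[[dict], Any],
                  inputs: dict, needs_human_review: bool = False) -> Any:
        output = fn(inputs)
        self.traces.append(StageTrace(name, inputs, output, needs_human_review))
        return output

def aggregate(inputs: dict) -> str:
    """Deterministic stage. No-hallucination policy: if the data isn't
    there, say so explicitly instead of letting a model fill the gap."""
    rows = inputs.get("rows", [])
    if not rows:
        return "INSUFFICIENT DATA: no rows matched the query."
    total = sum(r["meals_served"] for r in rows)
    return f"{total} meals served across {len(rows)} sites."

run = PipelineRun()
# Human-in-the-loop checkpoint sits at the decision point, not at the end.
answer = run.run_stage("aggregation", aggregate, {"rows": []},
                       needs_human_review=True)
print(answer)
for t in run.traces:  # execution trace: see exactly where a run went wrong
    print(f"stage={t.stage!r} review={t.needs_human_review}")
```

The trace list is the point of the last bullet above: when a run goes wrong, you can see which stage produced the bad output instead of debugging the whole pipeline at once.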

The phrase I keep coming back to is one I borrowed from leadership doctrine: you can delegate authority, but you cannot delegate responsibility. AI can do an enormous amount of work on your behalf. The accountability for what it does still belongs to a human being.

Part 3 — What’s at Stake: Trust, Young People, and the Missing Guardrails

Susan’s final thread of questions pushed us toward the future, and specifically toward the generation growing up with these tools as defaults rather than novelties. I think about this through two lenses.

Lens One: Organizational and Systemic

This is the territory of the AI blame gap — the phenomenon I’ve researched where people assign less moral responsibility to AI systems than they would to humans for equivalent harms. The blame gap matters because it creates a structural escape hatch for accountability. When something goes wrong, the harm is real but the responsibility evaporates into the system.

A few things have to be true to close that gap, and they map closely to the principles I’ve written about before in Ethical AI in the Social Sector:

  • Anticipatory ethics. Ethical review can’t be the post-mortem. It has to be part of the design phase, before harm is possible.
  • Data literacy in the public. Communities can’t scrutinize systems they don’t understand. Building literacy isn’t a nice-to-have — it’s a prerequisite for democratic accountability.
  • Architectural transparency. If a system can’t be explained, it can’t be challenged, and if it can’t be challenged, it can’t be trusted.

The cautionary tales here are not hypothetical. The Dutch childcare benefits scandal wrongly accused tens of thousands of families of fraud based on an algorithmic risk model and pushed many into financial ruin. Australia’s Robodebt scheme issued automated debt notices that were later ruled unlawful, with the Royal Commission ultimately describing the scheme as a “crude and cruel mechanism” that caused devastating human consequences. In both cases, removing human judgment from the loop didn’t make the systems more efficient — it made the harm scalable.

Lens Two: Individual

The systemic guardrails matter, but they’re not the only thing that matters. Each of us also needs personal guardrails for how we engage with these tools.

The single most useful question I’ve trained myself to ask is: what did the AI not consider? That question reintroduces the messiness the model has compressed away. It puts you back into the role of the thinker, not just the consumer of outputs.

For parents, mentors, and managers: the most powerful thing you can do for the younger generation isn’t restricting access. It’s narrating your own critical thinking out loud. Show them how you interrogate an answer. Show them when you decide not to trust one.

And finally: protect your own decision-making autonomy. There is a real difference between using AI to expand your options and using AI to make your choices. The first makes you more capable. The second makes you less of a participant in your own life.

The Data Love Perspective

What I came home thinking about, after the panel ended and the room emptied, is that the conversation about AI keeps getting framed as a question of capability — what can these systems do? But the more useful question, and the one Bridges to Tomorrow kept circling back to, is a question of relationship:

What are we willing to be accountable for?

That’s the question that determines whether the AI we build is the kind that earns trust or the kind that quietly erodes it. It’s the question that separates systems that genuinely serve communities from systems that just process them efficiently. And it’s the question I’m grateful to be asking alongside engineers, students, and leaders like the ones who filled Room 510 that night.

Thank you to DU Alumni Women in STEM, Zayo Group, #wie5280, Rocky Mountain SWE, and to Susan, Crys, Danika, and Katherine for a conversation that mattered.

The work continues.


At Data Love, we build AI-powered analytics for nonprofits and government agencies focused on food security, housing, and healthcare access. We believe in leading with heart and building with data — and in being data-informed, not data-driven. Subscribe to our newsletter for more updates.
