What a Panel Conversation Taught Me About AI, People, and Getting It Right

Last week, I moderated a panel with Nikitha Kendyala, Johanna Brown, and Janine Murphy at the 2026 Newfoundland and Labrador Organization of Women Entrepreneurs (NLOWE) Conference. We talked about AI, covering how organizations are adopting it, where it's going sideways, and what leaders need to think about before they roll it out. 

I came away from the panel with three ideas worth sharing.

1) Who Uses AI Matters as Much as How You Use It

When you interact with an AI tool like ChatGPT or Claude, you're not just a user. You're a trainer. The prompts you write, the outputs you accept or reject, and the corrections you make all feed back into how these systems learn and develop over time.

So ask yourself: who in your organization is actually using these tools? If the answer skews heavily toward one group (by gender, age, cultural background, or role) then the AI your organization is shaping will reflect that skew. This isn't theoretical. It shows up in outputs, in blind spots, in whose needs the technology serves well and whose it doesn't.

Diversity in AI adoption isn't a values statement. It's a quality control issue. The panelists stressed this point — if you want AI that works for everyone, you need everyone involved in using it.

2) Resistance Isn't the Problem You Think It Is

When organizations launch AI tools, some employees will push back. The instinct in most workplaces is to treat that resistance as an obstacle: get people trained, manage the change, move forward.

But our panel offered a more useful way to think about pushback: resistance is information.

People push back on AI for reasons — sometimes they're worried about their jobs, sometimes they've watched previous technology rollouts promise efficiency and deliver chaos, and sometimes they simply weren't consulted and are reacting to that. Those are legitimate concerns, not communication failures to be smoothed over.

The organizations getting this right are building psychological safety into their AI launches from the start. That means giving people room to ask questions without penalty, being honest about what is and isn't known, and not overselling the technology to get buy-in. Leaders set the tone here more than they realize. If senior people fake an enthusiasm for AI they don't actually feel, everyone below them notices.

3) The Math Is Genuinely Interesting for Smaller Organizations

The productivity numbers associated with AI are easy to dismiss as hype. But the panelists argued that AI can realistically multiply the effective output of one employee by a factor of four. Not by replacing judgment, but by handling the time-consuming work that surrounds it. Research, drafting, synthesis, formatting, follow-up: all the things that fill a knowledge worker's week.

For a large organization, that's a meaningful efficiency gain. For a small business, a startup, or a nonprofit running lean, it's something closer to a structural change. Teams that couldn't previously compete on volume or speed now can. That's a real shift in what's possible, and it's arriving faster than most small organizations have planned for.

The Common Thread

These three ideas seem separate, but they're not. They all point to the same thing — how AI performs in your organization will depend far more on your people decisions than your technology decisions. Who gets access, who feels safe enough to actually use it, and who shares in the benefits — those questions don't have technical answers. 

They have leadership answers.
