Productside Stories
AI, Agents, and Accountability
Featured Guest: Roger Snyder
Summary
In this episode of Productside Stories, host Rina Alexin sits down with Roger Snyder, Principal Consultant and Trainer at Productside, to unpack what AI really means for product managers today — beyond the hype, fear, and buzzwords.
Roger shares his journey from engineer to product leader to educator, and explains how AI is transforming discovery, experimentation, and delivery while increasing the need for critical thinking, not replacing it.
They dive into practical uses of AI for PMs (like research, empathy at scale, competitive analysis, and vibe coding), why many companies are struggling to get ROI from AI, and how to avoid the trap of just “sprinkling AI pixie dust” on products.
The conversation then shifts to agents and agentic AI — what real agents are, how they differ from tools like Copilot, and why autonomy must always be balanced with accountability and guardrails. Roger closes with a challenge to product managers: make 2026 the year you experiment more, use AI every day as a partner, and stay relentlessly focused on customer problems.
Key Takeaways
- AI is a partner, not a replacement. AI should be treated as an assistant and thought partner — product managers still own the outcomes, the decisions, and the accountability for what gets shipped.
- Critical thinking matters more than ever. AI can generate ideas and insights at scale, but without strong critical thinking, PMs will just get “bad outputs, faster.” PMs must evaluate, question, and refine what AI produces.
- Start with the problem, not the technology. Failed AI initiatives often “sprinkle AI” on existing products without a clear customer problem, leading to high costs and low ROI. The fundamentals of good product management still apply.
- Use AI to accelerate discovery and communication. Tools like chatbots, vibe coding, and custom agents can help PMs do better market research, create richer prototypes, and communicate ideas more clearly to both customers and engineers.
- Real agents = autonomy + guardrails. True agentic AI acts with some autonomy toward a goal, using context and decision-making — but it must operate within clear constraints like budgets, regulations, and safety rules.
- The future belongs to AI-literate PMs. Product managers don’t need to be AI engineers, but they do need AI literacy: comfort with tools, understanding of risks, and the ability to design experiments and evaluate AI-powered solutions.
Chapters
- 00:03 – Welcome to Productside Stories and Introducing Roger Snyder
- 01:12 – Roger’s Journey into Product Management
- 02:23 – From Practitioner to Trainer: A Vantage Point on Product Teams
- 03:26 – How Product Teams Are Really Using AI Today
- 06:33 – AI as Partner, Not Replacement: Critical Thinking and Accountability
- 08:55 – Cutting Through the AI Noise and Where PMs Should Focus
- 12:13 – Evals, Safety, and Working with Development Teams
- 13:50 – Role Compression, Product vs. Engineering, and Staying in the Problem Space
- 15:38 – Why AI ROI Is Hard (and the “AI Pixie Dust” Trap)
- 18:54 – Learning from Past Tech Hype Cycles (“Let’s Build an App for That”)
- 20:56 – AI-First Organizations, Strong Data, and Real-World Examples
- 24:42 – Turning “AI-First” into “Customer-First” with Experiments and Discovery
- 26:32 – How AI Can Improve Agile and Continuous Delivery
- 27:52 – Predictions and Hopes for Product Teams in 2026
- 29:50 – Agents vs. “Agents”: Definitions, Copilot, and True Agentic AI
- 33:45 – Autonomy vs. Accountability: Risk, Cost, and Guardrails for Agents
- 38:55 – How to Start with AI as a PM (Baby Steps and Daily Habits)
- 42:27 – Inside the Classroom: First-Time AI Users and Experimentation
- 44:08 – Roger’s 2026 Challenge to Product Managers and Closing Remarks
Keywords
AI product management, AI agents, agentic AI, AI for product managers, accountability in AI, vibe coding, evals, AI-first organizations, agile and AI, product discovery with AI, empathy at scale, experimentation, product leadership, Productside Stories, Rina Alexin, Roger Snyder
Welcome to Productside Stories and Introducing Roger Snyder
Rina Alexin | 00:03-01:01
Hi everyone, and welcome to Productside Stories, the podcast where we reveal the very real and raw lessons learned from product leaders and thinkers all over the world. I’m your host, Rina Alexin, CEO of Productside. And today I have the pleasure of speaking with Roger Snyder. Roger is a Principal Consultant and Trainer with us here at Productside, and prior to joining Productside he had over 15 years of experience leading product management teams
as a Senior Director and VP at companies like OpenWave, Danger, and Microsoft. He’s also a truly great person, giving back to his community as president of his local school board. And at Productside, he has led many teams, worn many hats, and coached hundreds of organizations in product strategy, strengthening product discovery practices, and most recently, helping teams figure out how to incorporate AI into their product management practices.
Welcome to the podcast, Roger.
Roger Snyder | 01:01-01:04
Thank you very much, Rina. It’s my pleasure to be here.
Roger’s Journey into Product Management
Rina Alexin | 01:04-01:12
So I know that I know you, but help our listeners get to know you a little bit better. How did you end up in product management?
Roger Snyder | 01:12-02:23
Good question. Well, I started out with a degree in electrical engineering and computer science at Berkeley. I went into a technical career, but very quickly was asked, hey, you actually can interact well with people, so we would like to get you out of coding and into a more technical-relationships kind of job. I did that for a few years, and project management for a few years, and then kind of stumbled into product management. And that’s when I was like, whoa, okay, this is super cool.
These are the kinds of problems I like to solve. Coders solve technical challenges, train electrons, right? But product managers are looking at the big picture of all the problems surrounding a product, as well as the problems that the product should solve for customers. And that’s where I just became passionate and said, OK, this is what I want to do for the rest of my career. So yeah, I’ve been a product manager ever since.
And as you said, in a number of different jobs, from being employee number seven at a startup to being part of Microsoft for two years. So I’ve seen product management in various contexts, various ways to do it, and various scopes and sizes of companies. It’s been fun to practice that craft in a lot of different ways over my career.
From Practitioner to Trainer: A Vantage Point on Product Teams
Rina Alexin | 02:23-02:42
Yeah, and I personally really like the ripple effects that you have with our clients, where now you get to lend your expertise to so many, including myself. Roger was actually my instructor when I went through the Optimal Product Management course for the first time. And he is wonderful. And I’m not just blowing smoke. You really are truly, truly wonderful.
Roger Snyder | 02:42-02:59
Well, thank you. I appreciate that. Yeah, it’s been
a lot of fun. You know, I also live in California, right? So I’m in the Santa Cruz mountains. You can see the redwoods outside my window there. And I’m a proud father of four now-adult daughters, and I’m still working on trying to make public schools better for all of our students.
How Product Teams Are Really Using AI Today
Rina Alexin | 02:59-03:26
Yes, and thank you for doing that. It is one of those thankless but very, very important jobs, to educate not just product managers but also our people. So as a trainer, you now have a vantage point that I think a lot of people do not: you see how different companies and different teams struggle with, or do really well with, AI specifically. AI has now been out for
several years, actually. You know, Dean, one of our colleagues, would say it’s been out since the eighties, but it has really gotten into ubiquitous practice in the last two years. So what are you actually seeing out in the market? How are people actually using this technology? And maybe talk about what separates those who are doing it really well from those who are seeing some challenges.
Roger Snyder | 03:26-06:33
Mm-hmm.
Fair enough. Like you said, there are a couple of different tiers of that, right? And I’ve seen it in training, in any of our courses actually, but in the AI course as well. There are some that are using it for discovery. There are some that are using it to, as we talked about in one of our previous podcasts, start being able to realize empathy at scale, right? Being able to get broader reach faster and get more customer feedback. It can be a fantastic brainstorming partner
as well. And so I’ve seen it being used that way. Unfortunately, you also see folks where they are afraid of it and simply don’t want to even try. I’ve had multiple students in the AI class where it’s the first time they’re using AI. So you have to get over some concerns and fears that way. And then we have to teach folks that it is not a substitute for critical thinking. In fact, one of the things that I see is that product managers have got to get better at that skill.
of critical thinking. We have to do it more effectively than ever before, because otherwise AI will just create garbage for you much faster. And you need to really be thoughtful in how you engage with AI. On the product side, I’ve seen companies integrating AI in really thoughtful, meaningful ways, creating better products. You’ve got Jasper, which just does a much better job of helping you create content.
I use Duolingo, right? And the AI features of Duolingo are actually really powerful in helping you become more conversant in a language. They’re starting to be able to use it in PCs and smartphones. We see Microsoft and the other folks using it now to create more effective interactions, right? But there are times, too, where you make mistakes with AI, and it can have significant consequences. There are plenty of news stories
where AI has gone sideways and led to some poor outcomes. At the very most innocuous, it just does silly things. But in the worst cases, it can actually be harmful. So as product managers, we also have to become very thoughtful and really think through the consequences of how we’re going to implement AI. And we’ve seen the whole spectrum.
of products using it well and others not using it well, and some getting into dark places, right? So we’ve got to be careful. But honestly, I’m an AI optimist. There’s this whole spectrum of folks, and I’m an AI optimist. I definitely feel like it adds significant value when used right. I’m using it every day in my practices, and I’m seeing it come to life in products in exciting ways.
AI as Partner, Not Replacement: Critical Thinking and Accountability
Rina Alexin | 06:33-06:48
So I heard you say a couple of things about what separates somebody who’s using it right from somebody who might get themselves into trouble. One thing I heard you say was that you’re not, can I call it, delegating your critical thinking to AI, but using it as a thought partner. And then the second one is that the accountability still has to land with you as the person running it, right?
Roger Snyder | 06:48-08:12
Absolutely. Yes. And I say it as well. I encourage AI use at Productside, but I tell people that whoever is using it, that output is now your output. You have to own it and know it as well as if you wrote it yourself completely from scratch.
Mm-hmm.
Absolutely, right. It is not a magic eight ball that’s going to make decisions for you. It is not a copy editor that you can just turn loose and then submit the result as your own. And it is not a strategic decision maker. It is a great thought provoker. It can give you ideas you perhaps hadn’t thought of. It can help you formulate
a far more valuable strategic position or make a better decision. But at the end of the day, like you said, you as the product manager own that 100%. And never is going to be the day, I use a lot of school analogies, never is going to be the day where you can just tell your boss, oh, well, I didn’t actually do that, this person over here did that, that was my AI assistant. No, that’s never going to fly, right? So you’ve got to be that critical thinker
and use AI as an assistant every day.
Cutting Through the AI Noise and Where PMs Should Focus
Rina Alexin | 08:12-08:55
Yeah, well, you also mentioned something that I think is true, and that is that there is quite a bit of fear out in the market as to what is going to happen with knowledge work specifically, given that this is the first time we’re actually generating what looks like knowledge work but isn’t really quite there. But I see it left and right: companies writing on X about how it’s now part of their interview process to have a PM vibe code something.
Or I hear it even from some thought leaders in our industry, who are saying that it’s going to be the norm where there’s an idea, and then you vibe code it, and then you put this almost complete solution that’s supposed to be an MVP in front of people. How do you cut the signal from the noise? What do you tell someone? If I’m a product manager coming into your class and I hear all of this and I am scared and I’m not really using it today, what should I focus on?
Roger Snyder | 08:55-11:35
Mm-hmm.
I think you start with a perspective, and I’m going to use this term multiple times: you bring in AI as your partner and your assistant. And so automatically, that should set people a little bit at ease, because you’re in charge. You are making it a partner. You’re making it an assistant to you.
So you’re ultimately going to decide, and that creates both the requirement that you’re the one in charge, you ultimately are the one who’s going to be held accountable, and the authority to be in charge of that AI. You’re going to help it. You’re going to shape it. So you bring it in as an assistant. You make it a part of your team, too. We can talk more about that later as well. But it then becomes this experimentation friend, assistant, whatever.
And I think that helps set people at ease. In terms of the signal to noise, you’re talking about vibe coding. I think vibe coding is very powerful. And as a product manager, I want you to use it to bring ideas to life, bring a solution hypothesis to life, get it in front of customers and get feedback, and better communicate to your development partners what you’re thinking. There’s nothing like being able to show someone instead of just telling someone,
both in terms of customer feedback and your interactions with your development team. However, I’ll take a stand and say I don’t think product managers should write production code with vibe coding. Leave that to your developers. They may well vibe code, but they’re going to vibe code through the lens of all the experience and knowledge they have as a developer. And they’re going to be thinking about how to use that tool, vibe coding, as a production-level development tool.
And there’s a big difference. So yeah, vibe coding in an interview: if you take the perspective of, OK, I’m going to use some vibe coding to express an idea, to get a concept out there, I think that’s great. Sure, you want to test your PM candidates to make sure they get AI and know how to use it. But you even, in fact, should say, hey, I vibe coded this from the perspective that it’s not production. It’s for an experiment. It’s for showing.
And that will also demonstrate not just that you did the assignment of vibe coding, but that you understand the context of how a product manager should be using vibe coding.
Evals, Safety, and Working with Development Teams
Rina Alexin | 11:35-12:02
So Roger, I completely agree with you. In fact, I also try to warn people about vibe coding. Yes, it’s such a useful tool. But if you’re doing it way too soon in your product discovery practices, you might actually get into trouble, in that people react a lot to what you’re putting in front of them. And if you’re putting a fully baked solution in front of them, you’re actually going to make a lot of assumptions instead of testing, is this really the right problem to solve? Right?
Roger Snyder | 12:02-12:05
You’re hitting on a very key point, right?
The issue of, let’s understand the problem first.
Rina Alexin | 12:05-12:13
Yes. And so how do you know it’s the right problem to solve if you’re already trying to solve it? So is there anything else that you see in terms of what’s out in the market and what to focus on?
Roger Snyder | 12:13-13:20
Right, right, exactly right.
Yeah, so you talked about noise and signals, right? There’s also been a lot of talk about Evals lately, and I think Evals are another powerful tool. You can use them to measure performance and reliability and even the safety of your AI model. But it’s something that, again, you want to put in the hands of the developers. You as a product manager
want to become proficient in understanding the value of Evals and make sure your development team is using Evals to make your AI safer, to make your AI more effective, to reduce hallucinations (I prefer the word fictions), to increase the amount of user testing before launch, and to increase your confidence that you’re actually ready to launch.
So Evals, again, are a very powerful tool. Product managers need to understand them and understand their uses. But again, leave it to the development team to develop the Evals so that you can create a more powerful and effective AI tool that operates well inside some good guardrails.
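To make the Evals idea concrete, here is a minimal sketch of what a simple eval harness might look like. It is illustrative only: the call_model stub stands in for whatever LLM endpoint a team actually uses, and the prompts and pass criteria are invented for the example, not anything Roger or Productside recommends verbatim.

```python
# Minimal eval sketch: score a model's answers against expected keywords
# and a simple "must not say" safety check.

from dataclasses import dataclass

@dataclass
class EvalCase:
    prompt: str
    must_contain: list[str]      # facts/keywords the answer should include
    must_not_contain: list[str]  # e.g., unsafe or off-policy phrases

def call_model(prompt: str) -> str:
    # Placeholder: replace with your team's real model call.
    return "Our return policy allows refunds within 30 days."

def run_evals(cases: list[EvalCase]) -> float:
    passed = 0
    for case in cases:
        answer = call_model(case.prompt).lower()
        ok = all(k.lower() in answer for k in case.must_contain) and \
             not any(k.lower() in answer for k in case.must_not_contain)
        passed += ok
        print(f"{'PASS' if ok else 'FAIL'}: {case.prompt}")
    return passed / len(cases)

if __name__ == "__main__":
    cases = [
        EvalCase("What is the refund window?", ["30 days"], ["guarantee", "legal advice"]),
    ]
    print(f"Pass rate: {run_evals(cases):.0%}")
```

A development team would layer real model calls, much larger test sets, and domain-specific safety checks on top of a skeleton like this; the PM's job, as Roger says, is to understand what is being measured and insist it happens before launch.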
Role Compression, Product vs. Engineering, and Staying in the Problem Space
Rina Alexin | 13:20-13:34
Yeah, I think you’re making a pretty strong
point here. And if I were to think about it, just reacting to what you’re saying, we heard this from Dean as well: the great compression. There are all of these new things, and honestly, they’re not even that new. Evals are just a different way of describing quality assurance testing, right? It’s just now called Evals. But...
Roger Snyder | 13:34-13:50
QA, yes, yes.
Mm-hmm.
Rina Alexin | 13:50-14:19
The question of who’s now responsible is the one we’re wrestling with, right? If product managers can now code, who’s responsible? And I think that’s what a lot of companies are actually struggling with right now, because it’s the democratization of knowledge, and the knowledge specifically here is around how to make something.
Roger Snyder | 14:19-15:38
That’s why I always think like, I’m very, you said you’re optimistic on AI. I’m very optimistic on product because if I think about like, who’s going to navigate this, this, this environment to make good business decisions that goes back to really strong product management capabilities.
Absolutely.
Absolutely. don’t this compression concept you’re talking about. I don’t think it should be thought of as an elimination of key roles and an elimination of key training and perspectives. Just because I used to be a developer and that was a long time ago, doesn’t mean that I would be able to develop well thinking through all of the issues of actually creating good production, great products. That’s a developer’s role. And yes, I want them to use Vibe Codes. I want them to use Evals. Right. But
I want to stay in the business space problem and make sure that we’re actually building a great product that achieves product market fit using AI tools all along the way to do that. Using Vibe Coding is a way to express ideas and conduct experiments. And yeah, that does allow me to get way more valuable information about the market before I then turn our developers loose. But I don’t feel like I’m taking over their responsibility when I use things like Vibe Coding.
I’m just getting better answers for them to then vibe code on the right things when it’s time to actually go to production.
Why AI ROI Is Hard (and the “AI Pixie Dust” Trap)
Rina Alexin | 15:38-16:00
So then Roger, why are people, or rather leaders and companies, reporting that they’re really having a tough time getting ROI out of their AI investments? If AI is supposed to speed you up, why did many people responding to our survey, when asked, well, is it speeding you up, answer that it’s actually slowing them down?
Roger Snyder | 16:00-18:54
Right. I think because they break one of the first fundamental rules of product management, which is start with a problem to solve. Instead, it’s, I’m just going to put little AI sprinkles on my product, and then I can start saying we use AI. And that has happened with other technology disruptions before. Mobile-first, cloud-based, SaaS, all of those things
add real value. And they have definitely revolutionized the way we interact with customers. But every time, you need to start with, how can that new disruption actually help better solve customer problems, put a smile on their face, delight them, right? Versus, like I said, I’m going to sprinkle some AI pixie dust on the product and then I can let the marketing team say we’re using AI, right? That doesn’t work.
That’s one half of it. Then the other part of it is companies don’t realize how expensive AI is. They’re not understanding that you can’t just crack open an LLM and off you go. It takes a lot of time, energy, and a whole bunch of data management skills to properly train an LLM to deal with the issues that are going to be pertinent to you and your customers, to conduct Evals to avoid dangerous hallucinations.
The costs aren’t just the upfront cost. It’s also the operational ongoing costs. The cost of not just feeding it the cycles of all those GPUs, but also the cost of making sure that the data remains clean, that you’re not letting someone hijack your LLM and take it in a dangerous way, which we’ve seen, unfortunately, several examples of.
There is a whole bunch more that comes with AI than just the initial part. It’s the ongoing operation and cost. It’s the ongoing responsibility to make sure that it evolves in an effective way versus going in other directions, right? And that you’re putting good guardrails, management practices, observation, all of that needs to go into place as well. And people just didn’t think that through. You’re right, there’s a whole slew of stories. There are even commercials now that talk about how AI is actually taking more time
and more energy than before. What problem are you trying to solve? What value could AI bring to that problem? And then what’s the actual business case? What’s the cost really going to be long term to actually implement this well? And sometimes you don’t need AI. There are other solutions that are cheaper and faster.
So you need to be thinking this all the way through, just like we said with other technology disruptions. Let’s really think about the problems to solve, and then how could this disruption, how could this AI thing, really help me make a better product?
Learning from Past Tech Hype Cycles (“Let’s Build an App for That”)
Rina Alexin | 18:54-19:22
You’re giving me so many flashbacks right now. I just have to let you know. remember the whole like, let’s build an app for that debacle. I once thought, okay, I was at a company not naming any names, but I was at a company where they literally built an app. All that app did was showed the customer support number when you download it, but it’s true. And
Roger Snyder | 18:57-19:22
Yeah!
Rina Alexin | 19:22-19:37
None of them, not even the leader who said, okay, we’re having trouble, the problem might be that people can’t find the customer support line. I mean, nobody thought, okay, well, what is it going to cost to support this app? How are we going to distribute the app? How are we going to maintain it across iOS and, what is it, Android?
Roger Snyder | 19:37-19:54
Mm-hmm.
Rina Alexin | 19:54-19:57
Yeah, it’s just these and it feels like history has this bizarre way, right, of repeating itself over and over again. So it’s a good idea to look, look at the past for what we could be doing, I guess, differently or better today.
Roger Snyder | 19:57-20:56
It absolutely does.
Yes,
absolutely. Absolutely agree, right. So yeah, there can be an ROI for AI, but you’ve got to start from the first principles of good product management and what are the problems we’re trying to solve, then how could AI possibly help us solve that problem better, right? In reading up on this too, there are many times where
people just want to bolt a chatbot onto whatever the current experience is. And that’s not the answer. You need to look at ways to completely redesign that potential user experience, that workflow, go beyond just screens and say, hey, could I now use a voice interface that doesn’t require keystrokes? Could I imagine a more adaptive, up-to-the-minute, ready-to-serve-me experience? So there are all kinds of new doors that AI opens, but it all has to start with, what problem can I solve?
And how can I solve it better? How could I put a smile on that customer’s face?
AI-First Organizations, Strong Data, and Real-World Examples
Rina Alexin | 20:56-21:53
I also think that there’s something to be said about truly AI-first organizations who really get it, because I think the ones that get slowed down by AI are just not using it correctly. They’re probably using it in a way that
exacerbates mistakes, right? Because AI can make mistakes, and maybe they don’t have the right operating process to make good use of it. Because I can tell you there are companies, and they might be small startups, who are building with something like 90% AI-generated code that has been checked by engineers, and they’re going to grow, right? That is potentially the future.
So to that point, there are people who are doing it right. Are there any examples, from what you’ve seen, of people who are actually able to make the change or the switch to becoming more AI-led?
Roger Snyder | 21:53-23:53
Again, I think it is: you start in the problem space first, right? The ones that are doing it well, like one of the clients I worked with over the summer, were talking about how to improve. And this is a company, by the way, that produces physical goods, right? They produce stuff to help in construction. So it’s not like their products are going to have AI in them, but for all the tools that help everyone use their products more effectively, they started from that perspective of, how could I
make the tools that people are trying to use today, to help buy and use our products, more effective? That’s where they started from. And when they did, they started working on completely reimagining those interfaces, those experiences, and creating opportunities for real places where AI could speed things up and where AI could also add qualitatively different experiences: 3D modeling, easier rotation,
more visualization capabilities than had ever really been thought about before, because AI can do that much more effectively than anything we’ve had in the past. So it was both an efficiency gain and a whole new experience improvement, as a result of really starting from the problems that vex their customers and then coming at it from, now where could AI actually have real value? And so it’s fun to watch that kind of thing.
And every week, we’re seeing examples of new tools out there that can create new visualizations. Marble just launched; they were doing it for some neat, fantastical 3D visualizations. But I immediately went to architects. Wow. Being able to actually do all of this in bits, in virtual visual experiences, is going to save so much time and money building a house
or building a new multi-purpose room in a school, right? Two things that you and I are both thinking about. So the future is bright.
Rina Alexin | 23:53-23:59
It’s already happening Roger. I’m gonna let you know my
architect shared multiple vision visuals already and they are definitely chat generated
Roger Snyder | 23:59-24:05
Nice. That’s fantastic, right? That’s what
we’re talking about.
Rina Alexin | 24:05-24:23
It allowed us, as people, to actually see it and then react to it, like we were talking about. And I agree with you. I see it also in my personal life. My husband runs an AI-first company that’s really changing things, and I don’t want to give too much of a shout-out, maybe I should bring him on the podcast, but he was just telling me the other day that they were already able to create a tool where you just walk around, you record a video, you talk to it, and then it creates the...
Roger Snyder | 24:23-24:34
Yeah.
Rina Alexin | 24:34-24:42
And it’s just the, but to your point, like their data game is so strong in order to enable this. If your data is not good, ⁓ it’s not going to work. Like it’s going to, it’s, it’s just, I mean, that’s just not possible to work. have to, you have to have very strong data for the ones who are doing well. So yeah, go ahead.
Turning “AI-First” into “Customer-First” with Experiments and Discovery
Roger Snyder | 24:42-26:32
That’s it. Bye.
Mm-hmm.
Absolutely. Right. Now you mentioned something else I want ⁓ to key in on
as an opportunity for product managers. AI first, right? AI first is creating this dynamic in companies where now it’s like, OK, we’ve got to invest in AI tools. We’ve got to invest in being able to either use AI better in our practices or incorporate AI into our products, what’s appropriate, right? So this AI first thing. Well, the AI first thing is an opportunity for product managers.
to shift their organizations to become more customer focused, more customer first, with more experimentation than ever before. It unlocks the door. Now people are opening their checkbooks. They’re opening up their budgets inside their companies. Great. Get them to write some checks that then lets you use AI conducting more discovery, conducting more experimentation, getting ideas, even with vibe coding, out in front of customers earlier so you can get that feedback.
You can show and tell, and be able to get the feedback and more rapidly iterate towards what customers really want. This unlocks an ability that, when I was doing the craft 10, 15 years ago, I could only dream of: being able to reach so many customers, with so much richer capabilities of sharing with them what we think their real problem is first,
and then later showing them mockups of solutions and getting that rapid feedback in small bits, incrementally. I think AI is going to also unlock the way we do Agile more effectively than ever before. But the checkbooks are open right now. Everybody wants to invest in this work.
How AI Can Improve Agile and Continuous Delivery
Rina Alexin | 26:32-26:42
Wait, no,
let’s go a little more specific. How, cause I’ve heard this as well of AI improving agile. Like can you tell a specific story, just something more actionable for our listeners.
Roger Snyder | 26:42-27:52
Yeah,
well, so one of the things that Agile should do is produce a production-grade increment at the end of every sprint. So in the actual development practices, AI should help you get there faster, both in terms of things like vibe coding and Evals, so you actually test everything well and you get a better production-grade increment. But then that increment can get out in front of customers and actually be tested
and get feedback and do that at scale. You can now deploy that little piece of new value and test it out. And you’ve got it properly instrumented. And now you’re using AI in the back end to pay attention to the actual usage, to get that empathy at scale and be able to pull it all back together really fast.
Get the results, analyze the data more quickly, understand where the product needs to improve further, where you made a great improvement and where you maybe made a big mistake, and get those things back into the product backlog and the subsequent sprints to move more rapidly. AI can help in every aspect of that.
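As a rough illustration of the feedback loop Roger describes, here is a small sketch of how raw usage feedback might be batched into themes and tallied into backlog candidates. The tag_feedback stub stands in for whatever LLM a team actually uses; the themes and sample comments are invented for the example.

```python
# Minimal "empathy at scale" sketch: tag each piece of feedback with a
# theme, then tally themes into candidates for the next sprint's backlog.

from collections import Counter

def tag_feedback(comment: str) -> str:
    """Placeholder for an LLM call that assigns a theme to one comment."""
    comment = comment.lower()
    if "slow" in comment or "load" in comment:
        return "performance"
    if "confus" in comment or "find" in comment:
        return "navigation"
    return "other"

def feedback_to_backlog(comments: list[str]) -> list[tuple[str, int]]:
    # Most-mentioned themes surface first as backlog candidates.
    themes = Counter(tag_feedback(c) for c in comments)
    return themes.most_common()

if __name__ == "__main__":
    sample = [
        "The dashboard is slow to load on Mondays",
        "I couldn't find the export button",
        "Loading the report takes forever",
    ]
    for theme, count in feedback_to_backlog(sample):
        print(f"{theme}: {count} mentions")
```

The point of the sketch is the loop, not the code: instrument the increment, let a model help cluster what comes back, and feed the ranked themes into the next sprint.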
Predictions and Hopes for Product Teams in 2026
Rina Alexin | 27:52-28:14
Maybe just thinking ahead: we’re at the end of 2025 right now, looking into 2026. I guess that’s my prediction question: what do you predict is going to change in 2026? Maybe just in the first half, given how fast things are changing, what’s going to change by March of next year?
Roger Snyder | 28:14-29:05
Wow, well, I’m hoping, like I said, product managers need to take this opportunity and spend wisely to get better tooling, work with their development partner teams to also make sure that AI is being incorporated into their development practices in ways that allow them to unleash this and go more rapidly. In the first half of 2026, I want to see a lot more experiments. I want to see a lot more learning.
And I think AI can bring us there. As individual product managers, we all need to continue to be learning machines ourselves, playing with new tools, and every week asking ourselves the question, can I use AI to help do this better? I think that’s where we are, on the cusp of this right now. And it’s an exciting time.
Agents vs. “Agents”: Definitions, Copilot, and True Agentic AI
Rina Alexin | 29:05-29:10
Well, I think I want to bring this conversation back a little bit. I do agree with you, but in terms of the signal and the noise, there’s also a lot of talk about agents and agentic AI. And...
Roger Snyder | 29:10-29:19
Hmm.
Mm-hmm.
Rina Alexin | 29:19-29:50
How do I put this? It’s already been out in the news for a while. There are large companies who have deployed something like 10,000 agents, for who knows what use cases. And I even heard this talking to a product leader: it’s not just product and development that’s doing it. The customer success team just spun up an agent, or spun up a tool. And there are definitely some challenges with governance of that.
That’s a whole other story. But I also feel people don’t really know how to define it. So, given that you are a great instructor, maybe you can help define it a little bit. What exactly are people talking about, for those who are just hearing it and not really understanding it?
Roger Snyder | 29:50-31:53
Mm-hmm.
Sure. Yeah, well, Dean Peters, as you mentioned him a couple of times already, he and I have been working on this very thing lately. And first off, let me just say this: not all agents, air quotes, “agents,” are really agents. Let’s just say Microsoft Copilot has a thing called agents. And they’re really valuable, but they’re not really agentic agents. Instead, they’re kind of custom mini LLMs. And you can listen to my recent webinar on how to put those into practice.
But true agentic agents have got these sort of five hallmarks. And the five hallmarks are autonomy, where they can execute without human commands. They’re goal-oriented: they choose actions to take to go after a specific objective that you’ve defined for them. They are context-aware, so they adapt to different circumstances using memory and data. And they do make decisions: they evaluate options, and they take action.
But there’s then this last element that’s really important of accountability. They do operate within specific guardrails, such as maybe a budget or the regulations that they need to follow in any particular case, particularly if you’re using them in a medical or a legal field. There’s going to be some clear guardrails that have to be followed for accountability. So what makes an agent really powerful
is this ability to do work with, and we’re now talking about the psychological term, agency, right? An agent has agency to achieve goals for you without your supervision, but within those guardrails, all right? So does that help in terms of what is an agent and what is not an agent?
Autonomy vs. Accountability: Risk, Cost, and Guardrails for Agents
Rina Alexin | 31:53-32:02
Well, what is the difference? Now that you’ve defined it, what is missing about Copilot, such that it’s not a true agent?
Roger Snyder | 32:02-33:01
Yeah, so a
Copilot agent is not an agent in the sense that it does not have autonomy. It doesn’t go do things on your behalf. Yes, it can have context. But it doesn’t make decisions, and it doesn’t do goal orientation. It becomes a mini LLM that you train, give it some great knowledge, and then it can actually do good things like respond to queries. So you can train your AI assistant using a Copilot agent
or a Gemini Gem, right? And then others can go and query it and ask questions: help me understand this, who’s the right customer to go after for this, and what benefits would be the most important ones for me to talk to this particular customer about? It can do all that, right? But it is an action that is taken only when it’s queried, just like your traditional chatbot, okay? Whereas a real agent, you’re going to spin it up, give it these five aspects, and then send it on its way to do its job.
Rina Alexin | 33:01-33:41
And actually, to that last point that you just made, just the other day we made a change in our HubSpot CRM and it had unintended consequences. Small ones, but still unintended. And that happens quite a lot. And you also mentioned at the start of our conversation that AI is also very expensive. So, I saw a report that came out that said
AI could take something like 15 basis points off of profit. And I’m not talking about 15 basis points of the profit; I mean off of profit. That’s a crazy amount. That’s a lot.
Roger Snyder | 33:41-33:45
Hmm. Wait.
Rina Alexin | 33:45-33:57
And this is standard; it could go as high as 40%. And I think investors look to AI to add value, not destroy it. So with agents, to me, maybe there’s some danger around, well, if it truly has autonomy, can it just go and spend, spend, spend?
Roger Snyder | 33:57-36:51
Mm-hmm. Whoa, what’s going on here?
All right, all right, exactly. So there’s this tension, then, between the first and last hallmarks that we just talked about: autonomy and accountability, right? You may want it to purchase groceries for you and have them delivered, and be able to adapt when the particular cereal that you wanted isn’t available or your favorite kinds of soup aren’t available. And it will now, in the moment, make decisions
and get you different soup and get you a different cereal. And it will do that. And you’re probably OK with that. As long as you set a budget and say, my grocery spend on a weekly basis should never be more than $125, whatever it might be. And it must abide by those rules. And you’ve taught it about your preferences in soups. So you’ve given it those guardrails. The autonomy and accountability then are hopefully in balance. But to your point, I honestly
Even though I’m an AI optimist, I’m not ready to give an AI the authority to plan my 35th anniversary vacation and just give it my credit card number. Even if I do give it a spend limit and all my preferences, I’m not there yet. So there are still risks with agents that have to be thought through to your point. And when you get to the corporate level, like the examples you’ve just cited, it becomes even more serious.
So I think with agents, we need to, again, experiment. We need to start giving them autonomy with good sets of accountability aspects and see how well they do. And over time, we will evolve our degree of comfort and also their ability to perform effectively. This is also going to be an iterative process, and so we will get there. But we also shouldn’t, again, just like with AI itself, think of agents as the be-all, do-all
solution for everything. Sometimes simple automation, which we’ve had for a very long time, is really all you need, and you just use something like a Zapier automation system of rules, right? And that’s going to be a lot simpler and a lot cheaper and a lot easier to maintain than building up a set of agents to do this task. When you actually need a learning system, something that can adapt based on new data, right, then, all right, let’s try this agent out. But let’s be really clear
about those accountability rules to make sure that agency stays in its lane, right? And stops at the stop sign and makes sure that you don’t overspend and you’re not affecting profit, right? All of that is something we’re still honestly learning and we need to do well.
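To ground the autonomy-versus-accountability balance, here is a minimal sketch of the grocery-agent guardrail Roger describes. The catalog, prices, budget figure, and choose_substitute stub are all illustrative assumptions, not a real agent framework or anything Productside ships.

```python
# Minimal sketch of an "accountable" agent: it has autonomy to pick
# substitutes, but a hard budget guardrail it cannot cross.

WEEKLY_BUDGET = 125.00  # the spending guardrail from Roger's example

CATALOG = {
    "favorite cereal": 6.50,
    "backup cereal": 5.00,
    "tomato soup": 3.00,
    "lentil soup": 3.25,
}
IN_STOCK = {"backup cereal", "lentil soup", "tomato soup"}

def choose_substitute(item: str) -> str | None:
    """Stand-in for the agent's decision step (an LLM call in practice)."""
    substitutes = {"favorite cereal": "backup cereal"}
    return substitutes.get(item)

def run_grocery_agent(shopping_list: list[str]) -> list[str]:
    cart, total = [], 0.0
    for item in shopping_list:
        pick = item if item in IN_STOCK else choose_substitute(item)
        if pick is None:
            print(f"Skipping {item}: no acceptable substitute")
            continue
        price = CATALOG[pick]
        # Accountability check: never exceed the budget, no matter what
        # the autonomous decision step proposed.
        if total + price > WEEKLY_BUDGET:
            print(f"Guardrail hit: adding {pick} would exceed ${WEEKLY_BUDGET}")
            break
        cart.append(pick)
        total += price
    print(f"Cart: {cart}, total: ${total:.2f}")
    return cart

if __name__ == "__main__":
    run_grocery_agent(["favorite cereal", "tomato soup", "lentil soup"])
```

The substitution logic is where the agent's autonomy lives; the budget check is the accountability rule that, as Roger puts it, keeps that agency in its lane.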
How to Start with AI as a PM (Baby Steps and Daily Habits)
Rina Alexin | 36:51-37:10
Yeah, I’m kind of internally just from you you sharing this reminded me that humans also make mistakes, right? ⁓ the other day, my Instacart order came with two packets, sorry, two, bunches of radishes instead of beets because they kind of look the same.
Roger Snyder | 36:57-37:21
Mm-hmm.
Really dope, right?
Rina Alexin | 37:10-37:37
So the
point I’m trying to make is we all make mistakes and the concept though of an agent is maybe it doesn’t even know it’s making a mistake. so it doesn’t know what it is. Right. And so how can, I guess what I’m trying to ask is like, how can a product manager evaluate that this is a strong use case for an agent versus not? And I think you kind of started answering that with.
Roger Snyder | 37:21-38:55
No, of course it doesn’t know anything, right? Let’s always be clear about that.
the need for flexibility.
Right, right. The learning-system aspects, right, are where an agent can really shine, in terms of, okay, can we give it more autonomy, but with those other aspects so that it can react in real time to the situation, to the context? As long as the goals are really clear, if the context is shifting, it can adapt. And that is pretty amazing. That is something that we don’t have without a true agent, and it can make good decisions.
But it has to make those good decisions with all of those guardrails well defined and set out. So like I said, I think agentic AI is where we were with ChatGPT three or four years ago: we’re just starting with it and we need to practice it. I think it can unleash some really amazing new capabilities, but we’ve got to be careful about it, and a product manager, to your point, needs to have thoughtful conversations, just like we always should,
about the ethics and the nature of what we’re doing and making sure that in fact, we will deliver really interesting new value, solve new problems using this technology, but we’re gonna do it in thoughtful ways that don’t cause harm, right?
Rina Alexin | 38:55-39:24
So Roger, we’re almost in 2026 right now. And so I hope I’m talking to a small subset of product management leaders here. But you and I both know that there’s just, are genuinely a lot of people who, to your original point, might still be afraid of working with this technology. So as maybe a takeaway for a product manager listening who
Roger Snyder | 39:14-39:28
Mm-hmm. Sure.
Rina Alexin | 39:24-39:28
needs to get started. What advice do you have for them?
Roger Snyder | 39:28-41:57
Sure. So you’ve really never used AI before. I think, you know, just start baby steps, right? Start by my first thing when I got started on this a few years ago was I stopped using search and I just started asking my favorite chat bot the same question that I would have put into the search bar before and start just gaining a degree of conversational comfort with your chat bot, right? You’re going to see some mistakes. You’re going to see it not get the right answers, but that’s okay.
You’re now going to actually engage in that conversational aspect of like, no, that’s not what I needed. This is what I needed. Whereas with searching before, it was so frustrating because it’s like, my gosh, it totally went off in the wrong direction. You have to reformulate your search completely. No, now you can just have a conversation and tell it, not this, more of that. You can even give it scores. You can say, hey, you know what? That answer was a C minus. And here’s what I’d rather you do. And it actually pays attention to that. And get into interactions with it. So just start getting comfortable.
So the first step is: stop searching, start using a chatbot. Then start using that chatbot as an assistant in conversations. Whenever you start your day as a product manager, ask, how could I be better if I were just thinking more broadly about something? And now, especially in this day where we’re not in offices nearly as much, you can turn to your AI assistant and have that over-the-cubicle-wall conversation, if you will, but now at the keyboard with a chatbot. Start using it for market research.
Start using it for competitive analysis. Start using it for other forms of discovery. Get comfortable with that. Then move up the ladder. Create a Copilot agent, like I was talking about earlier, as that assistant that can not only help you but also help your sales team, your customer support team, your legal team. Make it the place that they go first to ask the questions they would normally ask you. That’s going to be a huge labor saver for everybody involved. So you want to create a Copilot agent or a Gemini Gem.
Right. And then next up the ladder, start actually using purpose-specific tools: NotebookLM for really thoughtful market research, Descript or Synthesia for creating some models to experiment with and put out as lo-fi experiments for discovery. So I hope that gets people started. I’ve given them three or four steps up the ladder; jump in wherever you may be on that ladder, and make sure you’re using AI every day.
Inside the Classroom: First-Time AI Users and Experimentation
Rina Alexin | 41:57-42:27
For the, and I’m just gonna, what I’m thinking about is you now have taught our AI class a few times. And as you mentioned, there are people who come into this class without any prior experience. ⁓ Maybe you could make that fear kind of lessened by thinking about how do those learners, how do they, once they start using it for the first time, like,
Do see the emotions or the, like, ⁓ do they present themselves and how do they interact with the iron?
Roger Snyder | 42:27-44:08
yeah.
It is fun actually to watch in class people who haven’t used AI before and see the reactions like you were saying, right? Some people are like, whoa, that was amazingly good. Others are like, are you kidding me right now? That was crazy, right? And both are gonna be true for all of us, right? But that’s where you start breaking down that fear of like, you are always in charge.
and you’re going to learn some things. And then we give tips like in the class of like, how do you properly have a prompt construction? You got to give good context, the old adage of computer science garbage in, garbage out. So the more you provide great context, the better off you’re going to be in the outputs that you get. But just start having fun with it, right? Start with small little things that can’t have a big consequence. Start with a learning mind. The other thing I always tell my students that are new to this is like start subscribing to other voices.
Right. Listen to our webinar series, but subscribe to Lenny’s podcast or how I AI by Clair Vo and listen, I ⁓ can’t wait to hear each new podcast and get new ideas about things. But experiment, experiment, experiment. Right. In the early going in the class, it’s always fun. Like I said, to see people. Right. Doing different things and having both positive and negative experiences with negative learning experiences. Right.
little failures that allow you to learn from those things and keep going. That’s where you’ll gain that confidence. You’ll start getting over your fears, right? And then for every new task, every day, ask yourself, how can I make AI make this easier?
Roger’s 2026 Challenge to Product Managers and Closing Remarks
Rina Alexin | 44:08-44:30
So you all heard it on this podcast: Roger’s challenge for product managers and product leaders in 2026. You have to up your experimentation, you have to have more fun, and you have to start using this on a daily basis. So that is Roger’s challenge. Did I get it right, Roger?
Roger Snyder | 44:26-44:34
That’s right, you totally did. Do it. Have fun with it.
Rina Alexin | 44:30-44:53
It was so fun to talk to you today. I’m so glad we were able to do this. How can our listeners follow you or reach out after they listen to this podcast?
Roger Snyder | 44:34-45:32
Yeah.
Yeah, check me out on LinkedIn. Roger Snyder, right? I think I’m actually LinkedIn slash one Roger Snyder. S-N-Y-D-E-R. People sometimes make a mistake on my last name, but that should do it.
Rina Alexin | 44:53-45:32
Great, well, thank you so much. And thank you all for tuning into another episode of Productside Stories. If you found our conversation valuable today, don’t keep it to yourself. Share it with a friend and subscribe to Productside Stories so you don’t miss a future episode. I hope today’s insights inspire you and propel your AI product journey forward. Remember, every challenge is just a lesson waiting to be learned.
Visit us at productside.com for more free resources, including webinars, templates, playbooks, and other product wisdom repackaged for you. I’m Rina Alexin, and until next time, keep innovating, keep leading, and keep creating stories worth sharing.
Roger Snyder | 45:32-45:32
Thanks, Rina, very fun.