Productside Stories

How Typeform Became AI-First (Without Losing Its Human Touch)

Featured Guest:

Aleks Bass | Chief Product Officer (and interim CPTO) at Typeform
28/10/2025

Summary  

In this episode of Productside Stories, Rina Alexin speaks with Aleks Bass, Chief Product Officer at Typeform, about the journey of becoming an AI-first organization. They discuss the importance of defining key terms, engaging cross-functional teams, and the role of procurement in AI initiatives. Aleks shares insights on measuring success, establishing success criteria, and the transformative impact of AI on product development and organizational efficiency. He emphasizes the need for trust, clarity, and observability in the AI adoption process, providing valuable advice for product leaders embarking on their own AI transformation journeys.

  

Takeaways  

  • Stepping into the C-suite provides visibility into every decision. 
  • Product leadership is about connecting many dots across the organization. 
  • AI-first organizations focus on intentional product experiences. 
  • Engaging procurement early is critical for AI tool adoption. 
  • Standardizing evaluation criteria helps in comparing AI tools. 
  • Success in AI transformation requires clarity and observability. 
  • AI can dramatically accelerate processes like competitive intelligence. 
  • Trust and permission to experiment are essential for teams. 
  • AI tools can enhance collaboration between product, design, and engineering. 
  • Be honest about what you know and don’t know in AI initiatives.  

 

Chapters  

00:00 Introduction to AI First Organizations 

12:52 Transitioning to the C-Suite 

24:56 Defining AI First Organizations 

35:46 Standardizing Evaluation Processes for AI Tools 

36:43 Challenges in AI Adoption 

38:30 Defining Success Criteria for AI Adoption 

40:45 Measuring Impact and Success in AI Integration 

47:03 The Role of Procurement in AI Implementation 

49:58 Advice for Product Leaders on AI Transformation 

 

Keywords  

AI first, product leadership, Typeform, AI transformation, cross-functional collaboration, procurement, success criteria, product management, engineering, design 

Introduction to AI First Organizations

Rina Alexin | 00:04.843–01:13.315

Hi everyone, and welcome to Productside Stories, the podcast where we reveal the very real and raw lessons learned from product leaders and thinkers all over the world. I’m your host, Rina Alexin, CEO of Productside. And today I had the pleasure of talking with Aleks Bass. Aleks, you and I had our first conversation, I think it was over a year ago at this point, back when you were the VP of Product at SurveyMonkey.

And I just want to first start out by saying a huge congratulations to you. I know you had a wonderful run at SurveyMonkey and the team there was great. But congratulations for being the Chief Product Officer now at Typeform for the past, actually, two years. And I will even say you’re an even rarer creature, because you’re actually the interim Chief Product and Technology Officer of Typeform. And it’s super rare because, when I see CPTOs, they usually have the T background, and you come with the P background. So wonderful and great to have you back on the show.

Aleks Bass | 01:13.315–01:17.929

Thank you so much. I’m so excited to be here and excited to talk to you again, Rina.

Rina Alexin | 01:17.929–01:50.774

Yeah, and the reason why we are now having this conversation is there is a LinkedIn post that you made, which caught my eye and I think caught many people’s eye, around becoming an AI-first organization. And that’s really the topic I want to delve into deeply with you. But before we do that, how has it been entering the C-suite? I know there are a lot of product leaders that want to be in your shoes, and they want to hear a little bit more about your story. Tell us a little bit about how that transition into being the CPTO at Typeform has been for you.

Aleks Bass | 01:50.774–03:04.545

Yeah, it’s been a wild transition. Stepping into the C-suite was both exciting and grounding. On the one hand, you suddenly have visibility into every decision of the organization, strategy, people, finances, et cetera. On the other hand, it gives you an even stronger conviction that product leadership is about connecting many dots across the organization.

My role is to make sure that the way that we build and ship aligns with the company’s vision, and that at a PLG company, in some cases, product even drives the company vision, right? There’s a balance between supporting the company vision and driving it. And that the teams feel empowered and safe and not overburdened. So in the last year, we’ve launched three new teams, hired a bunch of people, and still maintained the highest R&D engagement score that we have ever seen at this company.

And so that tells me we’re building not just products, but a culture that people genuinely believe in, a strategy that people genuinely believe in. And that brings me so much joy and satisfaction. There’s nothing I could say to really fully express my gratitude to the team and this experience.

Rina Alexin | 03:04.545–04:05.749

So I love that you are leading with culture here. I think there’s friction points with product with just about every department because product, unfortunately or fortunately, has the role at an organization to be the change agent. So you often have to be the bearer of bad news as well as good news. And there is always friction between the technology department and the product department.

When I hear product managers and product leaders talk about a situation where, at their organization, they’re reporting into a CPTO with a big T, little P, I’ll say it that way, there tends to be maybe some frustration around whether or not that person has a true product sense. Have you found it the other way? Is there frustration from the technology side of the house on whether or not you have their best interests in mind?

Aleks Bass | 04:05.749–06:20.629

That’s a really good question. And I often think about this, Rina, because it’s an interesting topic. Sitting in a role like this, you should be the steward of the organization and what’s best for the team and the company and the customers, and just really balancing a lot of different decisions, information, investments, options, opportunities, et cetera.

And I think, regardless of whether you come from the product background, the design background, or the technology background, the biggest challenges that I’ve seen are not having respect for each other’s crafts and the true complexity and intention and investment that it takes to get good at each of those domains, and that they are fundamentally different.

The peak performance of any organization, any team, is those three being able to work together and have enough context in each other’s roles and responsibilities to be able to negotiate the right outcome for the customer, right? Because any of them individually, without the context from the other two, aren’t going to do what’s best for the business and what’s best for the customer. And so they really need enough context to be able to have those really intentional conversations, understand the trade-offs, understand the impact,

and do as best as they can for the team and the organization. And I would say, hopefully I’m doing that for this team, although we’d have to tap in some of our folks in engineering to get a true readout of how they feel. I wouldn’t want to speak for them, but I have a really close collaboration and relationship with the VP of Engineering at Typeform, Stefan, who is incredible. He has pioneered quite a few things in the engineering organization.

My job is to advocate for them and really engage their thought processes and what they need to be the best that they can possibly be at their jobs. And I think honestly, the biggest way that I’ve supported that team is just not getting in the way in some of those scenarios, right? In trusting and understanding what they need and why, and being willing to take a bet to try it, because we need to be different. We need to try new things. We need to invest in things that

Aleks Bass | 06:20.629–08:41.045

not everybody else is doing in order for us to really have that kind of impact, right? Just using the same playbook everybody else is using isn’t going to get you those results. And so putting trust in the team and having clear expectations and alignment around what we’re willing to bet on has been critical. And that trust, I think, is another pillar, Rina, that I think is really interesting to explore, because one of the challenges that I see is people getting into each other’s lanes because of trust.

They don’t have the trust. They don’t believe that the product manager is going to do the thing that they need to do in order to prioritize the right stuff for the business. Or they don’t believe that the designer is going to do the thing that they need to do. Or that they don’t believe that the engineer is going to build the right thing in the right way for the customer. And I would say we’ve done a ton of work to bridge those gaps through collaboration, open conversations, and really figuring out how to broker the right

open, honest conversations that the teams need in order to work through those open issues. And when we dig in, it’s often simpler than we think it’s going to be, right? Like the technology teams are often concerned that we’re going to force them into building a bunch of stuff really quickly without paying off the tech debt, or without having a strategy for managing the tech debt that’s going to ensue from that intentional decision to see whether something works, right? Because oftentimes,

You get the pressure in R&D teams to move fast and ship and do all these things so we can learn. But there’s not always time set aside at the end of that learning to actually, really productize that offering, because trade-offs have been made. And that communication pattern, between the teams knowing that they’re making trade-offs and the executive team realizing that that debt has to be paid off at some point, right, and that there has to be real investment in how these things are truly scaled to actually support the market,

isn’t always there. And then on the design side, oftentimes, you know, in their ideal world, they would have a lot more time and effort to explore what the right execution is, but there’s time pressures and constraints put on them too. So like, are we ever going to get a chance to actually build out what this experience should be? Is there going to be enough time for iteration or once we get this first version out, are we just putting a checkbox next to it and calling it done? And on the PM side, it’s often

Aleks Bass | 08:41.045–08:55.873

that they can’t have the impact, right, with the team that they want to have from a customer perspective fast enough because their concerns are that the other two teams want to make sure we build it right before we learn if it’s the right thing to build. And I think that by putting all of those things on the table and having those intentional conversations for each domain, the teams, you know, once they realize that those are the real issues that they’re grappling with, can have the right

Rina Alexin | 08:55.873–09:09.421

Yeah.

Aleks Bass | 09:09.421–09:30.397

collaborative conversations to figure out what’s the right thing to do now versus six months from now versus a year from now. And helping to support them to have those conversations, and to be honest and open in those conversations, I think has been the most successful tactic that we’ve taken on the R&D leadership team, which I’m really grateful for.

Rina Alexin | 09:30.397–10:16.545

And I think what you’re describing right now, I’m just hearing you talk, and yeah, it sounds so simple. But as you kind of almost mentioned, sometimes it really is just that simple. You have to almost, like, call out the elephant in the room. The elephant is quite big. You know what it is, but you have to have that conversation in order to address it. So to build that trust across these organizations, I just wonder why people fail if it’s so clear and simple.

Is it that there’s too much, I wanna say, like there’s too much hurt in some of these conversations, where historically there’s been more finger-pointing rather than understanding? Maybe that could be some of it. Do you have an opinion there?

Aleks Bass | 10:16.545–11:39.617

Yeah, I think human beings are complicated. And I think very rarely, if we really listen to people, do they actually say what they need, right? It’s often tied to individual situations or experiences or product features. And it’s often communicated not as the root cause of the challenge that they’re dealing with. It’s, I’m trying to think of a really good example.

I’m failing to come up with one right now, honestly, in the moment. But there are several situations in which teams are thinking about that specific feature, and the way they’re articulating what the real challenges are is not clear to the rest of the teams. And there are so many predictable taboos in R&D relationships, right? The tech teams always want to avoid tech debt. And the designers always want to spend

a lot of extra cycles to figure out what the right experience is. And the PMs always want to rush to whatever solution is going to get to some customer value and put some points on the board. And I think it’s those assumptions of the taboo scenarios, repeated through many R&D organizations over time, that have led to almost like blindness to the real issues that these teams are experiencing and these, you know,

functions and disciplines are experiencing. And so I think that we have to kind of erase those preconceived notions, biases, and assumptions based on what other engineers in the past have said to us, what other designers in the past have said to us, what other product managers in the past have said to us, and really lean in with curiosity to understand what’s the issue underneath the issue that’s being communicated for this one particular feature experience.

Rina Alexin | 11:39.617–12:09.441

Check it.

Aleks Bass | 12:09.441–12:14.945

We follow patterns. So as humans, we rarely invest the time to actually do that.

Rina Alexin | 12:14.945–13:28.363

So Aleks, the reason why I wanted you to maybe try and struggle with that question a little bit and articulate it is because what we’re about to discuss is how do you become an AI-first organization. And I think people often miss the people element of transformations, no matter how many times, again, this is like another elephant in the room. You need to have the right people and the right culture and mindset in order to accomplish any of these tasks.

And it sounds like you stepping into the leadership position at Typeform, and being able to have these kinds of conversations, actually created a foundation for you to build on to help create change. I mean, Typeform is not, you know, technically an AI-native company, right? Like you’ve been around for a while, even though I’m sure there have been elements of AI in your product. So, okay, so let’s actually start and have this discussion.

And before we get in too deep with it, I always like to start with a little bit of definitions, because I hear terms like AI first being thrown around quite a bit. So I’m curious what your definition of an AI-first organization really is.

Defining AI First Organizations

Aleks Bass | 13:28.363–15:50.378

Yeah, it’s an excellent question. We’ve been grappling with the definitions because the definitions are so important, right? Because as soon as you’re working off of two different definitions within an organization, it really bifurcates how you’re gonna essentially solve some of the challenges that you’re dealing with. So we’ve been having really interesting conversations even since that post on Being AI First to really define how we would look at being AI native, right? Which is…

from the product experience, that’s how we would categorize it, and then being AI first, which is from an internal-processes perspective and how we work. And for us, when we think about this, AI native is about creating that really intentional product experience where AI doesn’t feel bolted on. You have a co-pilot or some sort of assistive AI that is really driving adoption of your features, discoverability of your features, without people having to use the GUI to get to those things and understand every new thing that you’re adding into the product. For many, many companies, they get to that place where they’re listening to their customers and they’re evolving their product and they’re adding new capabilities and features, and discoverability and usage and all of those things start to become issues as you have more and more capabilities in the platform. And so that assistive AI investment can help

mitigate some of that, really drive that adoption, drive that discoverability, and help you address some of those challenges. And then as we think about being AI native, agentic solutions for differentiation are surfacing as well, right? So how can we get, even beyond what the in-product point-and-click experience can get you, solutions for our customers that make real meaningful differences for them, take action for them, give them information that’s not necessarily available solely within your own platform?

And then extensibility for scale is another thing that we’re thinking about right now. MCPs are, you know, all the rage, and people are continuing to talk about how you can get other platforms, like ChatGPT and Claude from Anthropic, to really interface with your product. So how do you start a workflow without even being in Typeform and kick off a form-generation, form-creation piece? We’re thinking about all of those things and those investments. And I think some are

Aleks Bass | 15:50.378–16:46.495

easier to understand how to take to market, how to get the right kind of pricing and packaging and consumption models oriented towards them. And some of them are a little trickier and riskier like some of the extensibility elements where we’re still early in our journey of exploring those and figuring out, you know, how can we take that to market and have it actually be supportive for the business model and not in conflict with the business model. Whereas when I think about

AI first, it’s about every workflow and every team using AI intentionally to move faster and to be smarter in the way that they’re moving. And so that shared language has really given us a vision that we could rally around, and it helps people understand, you know, when they’re working on either a process improvement or a new capability, where within that range it would sit and what pillar of our AI strategy they’re delivering on.

Rina Alexin | 16:46.495–16:56.822

When you started on this path to becoming, and I’m just going to focus right now on AI first, rather than the complexity of becoming truly, like, AI-foundational in your product, as you were describing. So, becoming an AI-first organization: was this a rapid aha, light-bulb moment? Was it gradual? And let’s actually start there. Like, how did it come about?


Rina Alexin | 17:14.901–17:22.709

Was it you driving it? Was it your leader, like the full leadership team? Walk us through that.

Aleks Bass | 17:22.709–19:44.106

Yeah, so it’s a good question. I think our Chief People Officer, Laura Daniels, who’s fantastic, happened to mention this in one of our LT sessions after she had attended an event where they were thinking about people strategies to support AI-native organizations and really what that means. She came back to an LT session and shared some really good context in terms of,

You know, the people function is thinking five, ten years ahead: what does this mean for organizations? How are they going to evolve? And she was surfacing this concept of a super IC, right, that could do a lot of different things, several of them assisted by AI. And I’ve gone to a few other events where, around this concept, specifically within the application of R&D organizations or product development, there are inflammatory things being said, right? Everywhere from the edge of

you can get by with a quarter or a tenth of the engineers that you have today if you just leverage AI, all the way through to, there aren’t going to be distinct functions in product, design, and engineering anymore because everyone’s going to meld into one by being able to go across all of these tools and do all of these things. And I think any extremist views, as I would describe them, that move in those directions are challenging, and…

I don’t know that that’s a realistic outcome for most organizations. Maybe there are some folks on the edges where those things would be pretty practical for them. But at least as I think about the strategy at Typeform, I don’t see that as a realistic outcome in the next couple of years. And who knows beyond that, because this domain moves so quickly. So that inspired us to think about this. And one of the interesting things is I work with an incredibly talented R&D organization, probably the most talented organization

that I’ve ever had the privilege of working with in my career. And so many of them, even without formal support for exploring some of these tools or being pushed to leverage certain AI in their daily work, have actually taken it upon themselves to start exploring tools and asking for access to things to try to accelerate workflows intentionally. And so this is one of those moments where, instead of pushing back,

Aleks Bass | 19:44.106–20:52.491

or overly relying on risks associated with, you know, I call it fake paranoia, right, where security risks and legal risks and all of those risks start to get surfaced in very generic terms without really looking at the proposal of what the individual is saying and seeing what the risks actually are. Because until you look at that proposed solution, any risk assessment you’re getting from legal or security is a blanket one toward all AI.

Right, and it’s just not specific enough, and no effort has been made to actually see what we could do to shore up those risks. So without saying no, enabling the teams to figure out what would actually be helpful for them, and investing in POCs to test out how they could improve their workflows and their jobs, both from an accuracy and an efficiency perspective, by leveraging tools available in the market, was one of the critical components that I think we did differently. Or maybe similarly, I don’t know. You’ve spoken to more folks who are trying to play with AI to see where those gaps are.

Rina Alexin | 20:52.491–22:24.255

Well, that’s a big challenge, actually. So what you’re talking about here, I’m hearing more often, is that they’re getting the blanket legal roadblock around, well, we don’t want our data to be fed into these models, there are security risks, and therefore there is no AI. And so what I’m actually seeing, and this, I think, plays out in many organizations, is a lot of people using their personal computers, buying apps personally,

and still putting in, like it or not, they’re still putting in some of the company information. It actually would be safer if they took a different approach of really understanding those risks and creating better conditions internally for people to really use the tools. People are going to use these tools. I wonder, though, given that you’re at an organization that I think is innovative, and so you’re attracting employees who

want to use the next new thing. So maybe you actually have an easier time of becoming AI first, because people are naturally, like, that’s what I’m hearing from you: our leadership is aware of it, and also we’re getting this bottom-up interest and pressure to do something with this. So we kind of have a forcing function both ways. Do you find that there are pockets in your organization where that doesn’t exist, where there is maybe a lot more resistance?

Is that maybe just the compliance part? You know, like, so talk to me about it as an organization, because Typeform is more than just the R&D department.

Aleks Bass | 22:24.255–24:40.577

It is, absolutely. I would say R&D is definitely pushing the boundaries of where we can leverage AI, just inherently by being people who build technology; it’s easier and faster for us to adopt that technology as well. But I think, actually, given all of the things that you’ve shared, Rina, the one thing that, as I reflect back on the journey, we probably did differently is, instead of assuming an oversimplification of

how AI tools can impact the organization, we actually did the opposite. We had an offsite, I think it was back in March of this year, where we brought together data, infra, security, PMs, designers, engineers, and some other cross-functional folks as well to really think about what needs to be true

in order for us to feel comfortable enabling key AI tools and processes within our systems. And I think, like, no one wants to act on or be honest about the fact that none of us really, truly know all of it, right? I’m going to say something inflammatory that maybe people may disagree with me about, but I think it’s impossible to know everything there is to know about every single facet

of how AI tools can impact every other part of the business, right? Like, the level of visibility you would need to have is so high, and the level of depth within each function that you would need to have is so high, that there is no getting around having to say, look, I don’t know what the impact is going to be on these functions that I’m not necessarily in 24/7. And so the best way to do it is to admit that, and to bring people together, and to ask them. And everybody in the room has to admit that they don’t know how it’s going to impact every other party.

Right, because it’s this false knowing that ends up getting communicated that turns these situations of exploration and innovation into scenarios where people get told no, because we know it’s going to impact you negatively, it’s going to impact the organization negatively, or it’s going to put things at risk, without even really looking at what the details of that proposal are. What I think can unlock a lot of doors is

Aleks Bass | 24:40.577–26:54.079

doing that and bringing people together and facilitating conversations. And some of these conversations were uncomfortable, right? Because people have heard of different tools that are more helpful in certain situations than others. And there are different ways to structure how you would purchase that tool. Are you going to buy it as a small company and get some of those protections? Or are you going to buy an enterprise plan? And then who’s going to get access? And how are you going to manage whether the access is being used appropriately or not? There are a lot of tricky and hard conversations.

And I would say the biggest place, from an industry perspective and an organizational-structure perspective, where I think most organizations are going to run into roadblocks is procurement for these tools, right? Because, and I think this was our learning specifically, there are so many tools.

It’s unlike anything I’ve ever seen. Normally you have a use case, right? You need to buy a tool to solve X problem. And there’s probably a handful of competitors and you can probably eliminate a couple just by virtue of the fact that they don’t quite do exactly what you need. And then you go through the procurement process for the rest of them. But when you’re looking at AI tools, especially in the last couple of years, the explosion of new products, new tools, new models, new sources of procuring those models.

is astronomically large, right? Much more than I think most procurement processes were built to withstand. And so that’s been the biggest challenge for us: unlocking at-scale POCs for, I mean, 10 to 20 AI tools at a given time in a safe environment, while also then following the rest of that procurement process through once we’ve decided what we actually want to purchase for what problem space or solution.

I have a ton of empathy for all of the teams involved in procurement right now because it is not just a small increment in terms of what they’re being asked to do as they’re being asked to evaluate these AI tools. It is a full step change for that flow, that function, and the speed at which procurement processes exist. And I think that’s where most folks are going to run into challenges as they start to bring AI tools into the fold and into what their organization is able to use.

The Role of Procurement in AI Implementation

Rina Alexin | 26:54.079–28:59.085

You know, Aleks, I just want to call this out, because I actually think what you’re talking about here has deep implications for all companies over the next, I don’t even know how long. Because I’m also hearing, understandably, this problem that you’re articulating from procurement. There’s also a problem, I’ll say, how do I put this, in Silicon Valley, where, you know, all of these tools are being founded.

I think there’s a perception now that this is just going to be the way it is, that there will be 10 competitors for anything because of just how fast things are changing. And so the other question that I don’t think we have a good answer for is, well, what’s the longevity of each solution that we buy into? Because there’s going to be massive competition, some of these are going to lose over time, and you make a bet. And so it is a real decision of how,

you know, it has serious implications as you build, especially if you build businesses on top of it. And I think that’s why, from what I’m understanding, a lot of companies are starting to think, maybe I do something more localized where there’s a limit on the dependency on, you know, on a third party. But, okay, so what I’m hearing from you is interesting.

Because I didn’t come into this conversation thinking about it this way. So you’ve changed my mind: your best friend, if you’re in an AI-first transformation, should actually be someone in procurement. Like, you have to bring them along for the journey sooner. It’s not necessarily even the selection committee in R&D. It’s also the people, because those people, to your point, are already

open to using new tools, or they’re kind of, they’re aware of what’s going on in the landscape, versus procurement, which maybe is not traditionally aware of these tools. So how did you bring procurement along? How did you do it?

Aleks Bass | 28:59.085–29:05.578

I think that’s such a good question, Rina, because I can see this going sideways, too, because it’s not enough to say to procurement, hey, we’re going to be POCing some new AI tools. Okay, great. Shake hands and move on. You have to really set an expectation, and you have to, I would almost say, lean in early and say, look, we have a fundamentally new need. And this new need means that we need to look at,


Aleks Bass | 29:25.229–31:42.926

You know, 10x the scale of tools we usually would. We want to do a POC to prove out what’s actually going to work, which means we’re going to have to have access, and people are going to have to use it. And we’re going to have to have some limits around what types of situations we can use it in, et cetera, and how much we’re willing to pay for trial periods and all of those kinds of things. But we need to prove out whether the things that we want to use this for are actually good quality or not. Because I think this is the other fundamental difference: whether the AI tools are helpful or not is so dependent on

the model structure and quality, the latency, the cost. There are so many different variables that are actually variable. That’s not necessarily true for traditional tool purchases. In other situations, maybe a POC is necessary, maybe not; the tool has enough domain expertise as it relates to what you’re trying to do, they’ve solved this problem for lots of other organizations, so you’re solid. But these tools have

some interesting criteria. One is that there’s a lot of variability in the quality, the speed, and the cost that you’re gonna pay for them based on different situations and use cases. And they’re also usually relatively new to the market, which means they haven’t been pressure tested through all of the different use cases, and they’re not necessarily specific to you and your type of business. They’re broader. And so the POC becomes non-negotiable. You must go through one.

in order to compare the output of the tools and validate whether something can actually meet your needs or not. And sometimes you have a front runner that you think is going to win at the beginning of those conversations, and somebody else completely overtakes them based on some of those other variables by the time you’re done with the POC. So you have to remain flexible. And having that conversation with procurement to set the expectation that…

fundamentally, you and they are going to have to come up with another process for these explorations, one that’s not the standard procurement process, is critical. Because what I’ve seen happen so often is: hey, we want to run all of these POCs, right? Great, here’s our procurement process. And they just send you to their standard approach, which is not going to meet your needs. You’ll be stuck in approval processes and reviews and documentation submissions and all of this stuff that

Aleks Bass | 31:42.926–33:09.439

I see in many of these cases may not exist for some of your potential providers at the level that your organization is going to need in order for it to be effective. And so pushing ten potential tools in the direction of getting you this very granular context is hard, right? It drags on for months and months versus if you can create a lightweight

POC-for-AI-tools approach in partnership with procurement, knowing that maybe you commit to not putting any sensitive company data in there. You constrain who has access to use it just for the purposes of the POC, and you involve them in that process. Then they’re much more flexible and creative, and they might actually like the process too, right? Because it’s forcing them to do something a little bit differently, to think about their domain in a little bit more of an innovative way that supports the business.

I would definitely make procurement your best friend through this process, but I would also make it very clear that whatever your current processes are, they are not going to be sufficient to support the POC needs for AI-related tools, and you are going to have to co-create a new process with them. Unless, of course, some of these organizations have somehow already figured out how to evaluate these tools fast.

In that case, kudos to them, but I haven’t come across one yet. So we’ll see.

Rina Alexin | 33:09.439–35:04.821

Yeah, you have me thinking. So I have two questions based on this. The first is actually more of a comment, I suppose, about procurement, but I’m also thinking of people in finance. What you said is true: this is a variable cost. It’s not fixed. It’s not the same as buying licenses, where you know how many licenses there are and what the monthly or annual cost is. This is a variable cost structure.

And I saw a 2025 benchmark study showing that companies adopting AI are seeing as large as a 6 to 15% margin hit. That’s like double the marketing budget of some companies: 15% of your revenue is now going into unforeseen AI costs. So this is not trivial

to get right. But also, to your point, companies have to pay; this is an investment. Companies have to pay this cost to really understand what new value they can be delivering, and how, in many cases, I don’t wanna say reward, how they can add additional points of leverage for each person that they have on the team. So this is value creation when done right, but you have to, to your point, do it right.


Standardizing Evaluation Processes for AI Tools

Rina Alexin | 33:09–35:04

My actual question to you is — given that there may be a need to test something like ten tools, which is not normal, right? It’s not two or three, it’s ten — how do you prevent that process from becoming completely chaotic, leading to paralysis or indecision? Or avoiding situations where one pocket of the org prefers Tool A and another prefers Tool B, and they just don’t mesh well together?

Aleks Bass | 35:04–37:11

Yeah, that’s a great question. From my perspective, the most success we’ve had is in standardizing what we’re evaluating. So, if the tool is—let’s say we explored a bunch of tools to support engineers through coding, right, coding assistance—the things that are being coded, the prompts that are being used are standardized and very similar or the same in many instances across different teams.

That way you can actually compare the output more or less apples-to-apples—maybe not fully, because there’s always a little bit of preferential treatment. We all have preferences for which apples we love to eat versus not, but enough that you’re getting an apple. It may have a slightly different flavor, consistency, or sourness profile, but you’re still getting an apple.

And the point is figuring out which apple is the right apple for your business needs relative to the other options that exist. That’s been really helpful and successful for us—to be able to standardize as much as possible, and also to include people with diverse needs in that ecosystem.

So, if we have four different teams that would want to use it in four different ways, they each have their standardized set of tools or prompts or needs that are going through the same evaluation flow across all the products we’re testing.

Then we figure out—maybe some needs are higher-value than others. Higher quality there would give us a higher score in terms of likelihood toward one product winning over another. But it’s really about aligning on what you’re going to test, making sure it’s consistent across the tools, and what the criteria are by which you decide which tool wins.

Because sometimes that involves weighting. Not all problems are created equal; not all solutions are created equal. So as long as you go into that process knowing which problem spaces are the most important to solve, the team can usually get to a really good place naturally.
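To make the weighting idea above concrete, the evaluation Aleks describes can be reduced to a simple weighted scorecard: score each candidate on the same standardized criteria, weight the criteria that matter most, and rank. This is a hypothetical sketch, not Typeform’s actual rubric; the criteria names, weights, and scores below are invented for illustration:

```python
# Illustrative weighted scorecard for comparing AI tool POCs.
# Criteria, weights, and scores are hypothetical examples.

def score_tool(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-criterion scores (0-10 scale)."""
    total_weight = sum(weights.values())
    return sum(scores[c] * w for c, w in weights.items()) / total_weight

# "Not all problems are created equal": weight what matters most.
weights = {"output_quality": 0.4, "latency": 0.2, "cost": 0.2, "fit": 0.2}

# Each tool is scored on the SAME standardized prompts/tasks.
candidates = {
    "tool_a": {"output_quality": 8, "latency": 6, "cost": 5, "fit": 7},
    "tool_b": {"output_quality": 7, "latency": 9, "cost": 8, "fit": 6},
}

ranked = sorted(candidates, key=lambda t: score_tool(candidates[t], weights),
                reverse=True)
print(ranked)  # highest-scoring tool first
```

With these invented numbers, a tool that is slightly weaker on raw quality can still win on speed and cost, which mirrors Aleks’s point that a front runner can be overtaken by the end of the POC.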

Challenges in AI Adoption

Rina Alexin | 37:11.499–37:48.876

So then I think that actually segues into my next question, which is success criteria. Because what you’re talking about is that you have to have good success criteria upfront to make the decision. I completely agree with you. It’s actually a shame that people don’t put in the work early; they end up in a situation where those criteria aren’t well defined, and then they can’t make a good decision. So,

going back to actually becoming an AI-first organization, how do you know that you’ve done it, or that you’re entering a new stage of maturity? How do you measure success here?

Aleks Bass | 37:48.876–39:42.252

Yeah, so for me, the necessary conditions for success, as we’re thinking about any of these processes where you’re selecting new tools, building new procurement processes, or leveraging AI to support your frameworks, are clarity and observability. Those are two variables that are incredibly important. So you need a clear strategy that people can repeat back.

And that’s critical, right? Because if you don’t understand something, then you can’t articulate it back to somebody else. It’s one of those old adages: if you truly understand something, or you want to understand something, teach it to someone else, and that’ll help you make sure you retain that context. So in our case, it’s understanding the AI strategy, understanding the processes and the problem spaces, where those tools or those opportunities for innovation exist, and what specifically you’re going to get

out of that innovation. Are you thinking that it’s going to be a cost savings? Are you thinking it’s going to be a quality improvement, that the output quality is going to be so much higher by using these tools? Is it going to be a speed benefit? Is it some combination of these three things? Is it something else? That’s really critical. And then you need the observability to know whether those things are actually true. Is it working? Usage data,

quality metrics, cost trade-offs, all of those things. Without the clarity about the problem space, what you’re working on, and how it’s going to support the business or your processes, and without the observability to understand whether those things are being impacted by the investment you’re making in some of these tools, it is next to impossible, in my experience, to make good decisions in this space. And then you end up with AI adoption as chaos.

Right, full chaos throughout the org, and no way to sort of rein it back in without shutting it down entirely, which is not good for the rest of the org.
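The observability Aleks describes, tracking usage, quality, and cost per tool, can start as something very lightweight. The sketch below is purely illustrative; the class, field names, and numbers are hypothetical, not Typeform’s actual instrumentation:

```python
# Minimal per-tool usage/quality/cost log, illustrative only.
from dataclasses import dataclass, field

@dataclass
class ToolMetrics:
    tool: str
    calls: int = 0
    total_cost_usd: float = 0.0
    quality_ratings: list[float] = field(default_factory=list)

    def record(self, cost_usd: float, quality: float) -> None:
        """Log one usage event: its cost and a 0-10 quality rating."""
        self.calls += 1
        self.total_cost_usd += cost_usd
        self.quality_ratings.append(quality)

    def summary(self) -> dict:
        """Roll up usage, average quality, and total cost for reporting."""
        avg_q = sum(self.quality_ratings) / len(self.quality_ratings)
        return {"tool": self.tool, "calls": self.calls,
                "avg_quality": avg_q, "cost": self.total_cost_usd}

# Hypothetical events for one tool under evaluation.
m = ToolMetrics("coding_assistant")
m.record(cost_usd=0.03, quality=8)
m.record(cost_usd=0.05, quality=9)
print(m.summary())
```

Even a toy roll-up like this answers the questions Aleks says will get asked: is it being used, is the output good, and what is it costing.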

Defining Success Criteria for AI Adoption

Rina Alexin | 39:42.252–40:03.297

So your answer, essentially, I mean, this is one of those things that generally does not change: you need to be aware of what you’re trying to accomplish, and you need to be able to measure whether or not you’re getting there. So then my question to you is, how are you doing it? What was your North Star goal, if you can share it, and how are you measuring it?

Aleks Bass | 40:03.297–42:28.543

Yeah, our North Star goal in becoming an AI-first organization is quality and speed for our R&D processes, right? A lot of teams and domains and disciplines are spending a lot of time doing things in a very manual way, and/or depleting resources in certain functions, like research, on things that AI can dramatically accelerate, like competitive intel.

Right, you might have to double-check and make sure that some of the things you’re getting back are true. But overall, it’ll generate a fairly accurate, good understanding of where your competitors stand in a lot of different domains. And maybe there’s a recency gap, right? Some of the new features they’ve released or some of the new moves they’ve made aren’t captured, but that’s an easier gap for you to close by just looking at those competitors’ pages, products, et cetera, to get a sense of what’s going on there.

So it’s being able to think about where we are spending time that is manually depleting resources from either key teams or key disciplines, and being able to ask yourself the question: could AI help me do this better or faster? And if the answer is yes, which AI tool can help me do this better and faster? And then going through that evaluation process to make sure that the tools you’re using are actually getting you to a better output that’s mostly accurate,

faster, and not cost-prohibitive from a usage perspective. And we’ve done that across disciplines, right? Because there are the cross-functional pieces, communication alignment, generating assets to help tell your strategy story, your roadmap story, et cetera, that the three groups can do together. And individually, each function needs different, distinct tools to help them

do things faster, better, et cetera. So we have PMs leveraging things like ChatGPT, Claude, and Gemini on product requirements, defining what those are and doing deep research for certain subsets of customers: what their pain points are, what their experiences are, et cetera. And of course we’re augmenting that with other research that we’re doing internally, but it’s by combining all of those things that we can get to, instead of it taking you

Measuring Impact and Success in AI Integration

Aleks Bass | 42:28.543–44:36.395

a week to generate a one-pager that’s just right on a lot of these things, you can have a thought partner in some of these AI tools that can help you get there faster, at least to 80%. And then you refine with the team moving forward from there. Similar things from a design perspective. We’ve actually seen some of our most innovative stuff come out of the design team, because we have a design system that they’ve now connected to an MCP and a coding tool. So basically what happens is, if

there’s a design in Figma that looks decent and the team’s pulled it together, we can actually accelerate the coding of that. 80% of the code actually gets written through the MCP, and it’s 80% right. So the engineers often have to tweak and adjust things, but in some of those instances they become more reviewers and editors than writers of that code from scratch, which has been a really interesting partnership between design and engineering. And then we don’t have to…

You know, it’s not generic code, because it’s using our existing design system. We’re not having to make sure it kind of looks similar but is fundamentally different. It’s very aligned with the rest of the product experience, which is really exciting. And then we’re rolling out those coding agents to the rest of the engineering org. We started with, I think, 20% adoption, 20% of our engineers were using these coding assistants, and now we’re at 80%, and we’re trying to get to

90% by the end of the year and then 100% in Q1. So it’s really about having an intentional approach to having the functions understand how AI can help them accelerate their core jobs, giving them access to POCs and tools that can help them do that, and then continuing to enable across the organization. Some of it is

standard change management, right? The things we’ve learned from change management in other disciplines can help with AI. But you do really have to make that benefit clear to the disciplines themselves in order for people, humans, to start to adopt these things, to be motivated to use them, and to accelerate their workflows.


Rina Alexin | 45:49.119–46:57.067


You know, Aleks, with what you’re saying, if you can save 80% of your time, you become just a much more effective, efficient person. So in terms of your adoption, it’s amazing that you’re at 80%, but I’m almost not surprised, because if I were an engineer and I’m not using these tools, I’ve now fallen behind my peers. And it’s probably very easy to see that, because of just how much more

people who are leveraging these tools are able to accomplish. So it almost drives itself in a way, right? That’s phenomenal. I’m just curious if you have a more concrete example of what that means for your business, what impact you’ve seen on the team.

Aleks Bass | 46:57.067–49:21.641

Absolutely. I actually have a couple. So in the engineering organization, we measure PRs that are submitted by engineers, which is essentially the code they’re submitting into the product. We measure it pretty consistently; I think we’ve had our measurements in place for a few years. And since the beginning of the year, when we really started this investment in AI,

we’ve had a 200% increase in the volume of PRs that are being submitted on a regular basis by our engineers. To me, that speaks to the level of speed and acceleration that these AI tools are able to provide for our teams, which is incredible, and I’m excited to see more to come. Now, there may be some critics out there who are like, well,

PRs aren’t always the cleanest metric for evaluating true engineering speed, because they can be manipulated by other variables, like the size of the tickets and all of that kind of stuff. True, but I have practical examples as well. A couple of years ago, when I started, I had questions about how long it took, for example, to get an integration out the door, and when the last time was that we built a new block within the builder that added net-new functionality.

And the answers were that the last block was built a few years before that, and some of the integrations took quite a while. We’re talking at least a quarter, if not, for the more complex ones, a couple of quarters, to get live for a team. And through resourcing changes, AI investment, architecture, and a few other investments that we’ve made as an organization,

we have now completely accelerated both of those domains. We are getting integrations out the door in a matter of hours at this point, instead of a matter of quarters, which is just wild; so much impact from the team, especially with support from AI. And from a blocks perspective, we are shipping multiple in a single quarter, which is also incredible. And this was a domain that was so complex that, you know,

The Role of Procurement in AI Implementation

Aleks Bass | 49:21.641–49:43.176

we really didn’t want to touch it for a while. It was one of those hot potatoes that just kept getting passed around. And now we have a really powerful team leveraging AI, leveraging expertise in architecture, to drive innovation in this space in a way that is sustainable and scalable. And it’s AI that has made the difference.

Rina Alexin | 49:43.176–50:50.157

Yeah, you know, I almost want to change the term now: instead of a 10x product manager, it sounds like you became a 100x organization. And to my previous point about each engineer needing to adopt this for themselves, on an organizational level, if an organization does not adopt it, it’s going to be easy to see, in terms of just what you’re describing here. Instead of

quarterly or annual changes, you’re doing it multiple times a quarter, and that’s only going to increase in terms of speed. So again, kudos to you for being at the forefront of this truly innovative step-change function in our lifetime. Just amazing, Aleks. So, given everything we’ve talked about today, and thank you for sharing your knowledge and wisdom here:

What would you say to a product leader who is maybe at the start of the journey? What advice would you have for them?

Advice for Product Leaders on AI Transformation

Aleks Bass | 50:50.157–52:39.561

I would say…

Start by defining what these terms mean to you: AI-native, AI-first, whatever terms you’re using within your organization. Start defining them and get ahead of what your company thinks they mean, because that’s going to be critical. Build a strategy that is so easy to understand that you can have people who are not in your function repeat it back to you, and it still makes sense and retains the values you think it should.

Invest in the observability, and do that from the start. Don’t wait until you’ve invested in these tools and then can’t answer questions about how much faster they’re helping you be, how much more accurate, and what the cost is. Those are questions that are going to be asked, and the longer you put them off, the more risk it puts your strategy under.

And then lead with trust. Give your teams permission to experiment within a framework, and separate that from the output your organization expects. Really position it as R&D, research and development. It shouldn’t be on your roadmap, right? It should be the bonus that isn’t counted on in your roadmap, that can bring additional acceleration and benefits. But if you put too much delivery requirement and expectation around when some of these benefits are going to ship,

you set the team up for failure, because that’s, I think, when some of the decision-making quality deteriorates over time. So that would be a critical one as well, because AI transformation isn’t just about flashy demos, or what you can show other people. It’s really about making AI that invisible accelerant, that 10x multiplier of everything your company does. And it can go beyond R&D as a whole as well. And so supporting others

Aleks Bass | 52:39.561–53:18.537

on their journey internally is critical. And the last piece I’ll say is: be honest about where you are. None of us knows all of it, and I know that’s incredibly uncomfortable to admit in some of these meetings where you’re with really intelligent peers who maybe have more expertise in some areas. Be honest about what you know and what you don’t know, and ask the question you think is a dumb question, because it’s those questions that are going to get you all to the right alignment, the right prioritization, and the right investments.

Avoiding those questions just gets you to suboptimal outcomes with blind spots that none of you saw.

Rina Alexin | 53:18.537–53:18.935

Yeah, that advice rings true no matter what kind of transformation you’re embarking on. The more you can admit that there’s a lot more we don’t know than we know, the better off everyone is. And it’s almost a relief to be able to say that out loud, for you and, I think, for everybody else on the team. Well, Aleks, thanks so much, like I said, for sharing your wisdom here. How can people follow you or find you after they hear this recording?

Aleks Bass | 53:48.725–54:38.081

Yeah, please reach out to me on LinkedIn and connect with me. I’m constantly sharing updates from the team. We have a new dynamic in the team that we’ve started, called Type Talks, that gets turned into blog posts and pieces of content, and they span every domain that’s touching AI. So you’ll hear from data, you’ll hear from design, from product, from engineering, from customer success. We’ve had some successes there as well: our customer success team is using AI to deflect

tons of tickets, and our customers are actually extremely satisfied with some of those experiences. So you’ll hear a lot more about all of the innovative ways that we’re leveraging AI, and hopefully it inspires you. And we’d love to hear stories of what other people are learning through their explorations with AI tools to accelerate their businesses as well.

Rina Alexin | 54:38.081–54:38.081

Well, there you have it. And we’ll make sure to add links to your profile. Like I said, this conversation started because of a LinkedIn post, so I absolutely recommend you follow Aleks Bass. And thank you all for tuning in to another episode of Productside Stories. If you found this conversation valuable, please do not keep it to yourself: share it with a friend and subscribe to Productside Stories so you don’t miss a future episode.

I hope today’s insights inspire you and propel your product journey or AI journey forward. Remember every challenge is just a lesson waiting to be learned. Visit us at Productside.com for more free resources, including webinars, templates, playbooks, and other product wisdom repackaged for you. I’m Rina Alexin, and until next time, keep innovating, keep leading, and keep creating stories worth sharing.