One of the things product leaders enjoy most about their role is investing in the personal growth of their teams. That task has become a new kind of challenge in a world where AI seems to have redefined what it means to work, create, and lead in product. How might we invest in the future skills of our team and set our product organization and the people we work with up for success?
I have previously written about “How Product Leadership must evolve in the age of AI” and also about “The core Jobs to be Done for a Product Leader“. This post builds on these ideas and goes a bit deeper. It proposes the following three areas of focus for product leaders today:
1) Preparing your organization to function well in uncertainty
2) AI upskilling
3) Ethics and commercial upskilling
Preparing your organization to function well in uncertainty
Uncertainty is the name of the game for almost any organization building technical products these days. As a result of AI, we are witnessing disruption in how we discover new ideas, prototype, build, and think about growth. No part of the typical product lifecycle is untouched in HOW we go about the related product activities.
In discovery, AI helps us analyze customer sentiment, industry forums, or interview insights. Based on that, we can quickly vibe code prototypes to close feedback loops with clients and showcase our ideas for internal alignment. Our development teams use coding agents like Claude Code, Codex, or Microsoft Copilot to quickly build out prioritized ideas. Adapting our websites to agentic buying agents is opening up new distribution channels beyond classic SEO tactics. All of the related tools and approaches are constantly improving. And what we thought we knew about any of them from trying them out a few months ago might be completely outdated just a few weeks later.
All of this, combined with a lot of hype language promoted by large foundation model providers and the consultants around them trying to pitch their services, leads to a lot of uncertainty. Team members report feeling overwhelmed, “not good enough”, scared about their jobs and uncertain where to start their learning journey about AI.
It is impossible to keep up with every new tool and development while also doing your own full-time role. And while a message of “We’ll be AI first now” might motivate some team members, it is not clear enough and might paralyze others, preventing them from even getting started with learning.
This creates a necessity for us as product leaders: to create a context where it is possible to stay functional, curious, playful, and creative despite a ton of uncertainty. This could take the form of a hackathon, an “AI Friday”, an “AI playground”, an “AI lab”, or any other format or space that regularly makes it inviting and fun to explore the boundaries of the new AI-enhanced world we are all participating and co-creating in.
As a product leader you should be able to answer the question: “What space and format am I creating that allows people to playfully explore and learn?” And if that space does not come close to at least 10% of your team’s time: “How can I make sure people regularly engage with experimentation and learning in a meaningful way that guarantees insight creation, experience gain, and joy?”
There are no “proven frameworks”, and likely no experts on your context and AI who know substantially more than your own team knows about your customers, industry, and context. The trick is to explore and learn together with your team, not to think that you as a leader should or can have all the answers ready. You’re in this new world together, tasked to explore, learn, adapt, and create product success out of it. The skills you as a product leader have to master: (1) role modeling vulnerability in uncertainty, (2) calmness in rapid change, (3) clarity in direction, decision criteria, and success definition, and (4) creating a psychologically safe and trusting container for exploration.
AI upskilling
Your team cannot possibly understand, apply, and create well with AI without getting hands-on with the topic. Obviously you’ll want your team to do this in a safe and compliant way. But just as you would not hire somebody into a product management role without any relevant experience, you also can’t think about and apply AI skills in a strategic and useful fashion without having gained some hands-on practical experience with them.
And it matters what kind of context you’re preparing your team for. I like to think of this like allowing somebody to use, drive, improve, or build a car. You could
(1) ignore the topic and do nothing
(2) help your team get the necessary AI literacy to utilize AI tools in your context
(3) set up a team to be capable to build AI features into your own product using 3rd party foundational models
(4) set up a team capable of developing their own AI models (potentially to license out for others to use).
Depending on your ambition level, you’ll follow a different approach to upskilling your team.
1) Ignore the topic (not recommended):
Anybody can be a passenger in a car without special skills. But as a passenger you don’t control the speed or direction of your car.
The following is part of what will likely happen if you do absolutely nothing about AI as a product leader:
People on your team will be using AI products to get something done. They may have no prompting skills, no context-setting skills, and lack critical judgement of outputs or structured ways to evaluate them, and they may not be thinking about data access, security, compliance, or trust. They may assume that their experience with a tool from a few months ago reflects how that tool operates today, and underestimate the rapid development of this technology and what it means for you and your business.
2) Using AI products to get product work done:
In order to drive a car legally, safely, and well, you need certain skills and capabilities, and you need to pass a driving test.
At a very minimum, this is where you as a product leader should start to take action. Your teams need to learn the foundations of GenAI and agentic tools, and understand how they work and what they can and cannot do, so they can safely use AI tools, judge which tool to use in which context, understand which applications are approved and licensed in your organization, know your AI usage policy, and know whether and how it is OK to access your organization’s proprietary data with the tools you have licensed.
3) Building AI experiences into your product (using 3rd party foundation models):
In order to improve and build a car you need deeper and more specialized skills. You obviously need to know how the car and its engine work and then build on that.
This may look like learning how to set up data and machine learning pipelines, designing eval systems, tracking model performance for optimization and enhancement, judging which parts of a product experience or internal process could benefit from AI, rethinking workflows and product experiences from an AI-centric perspective, and so on.
You’ll need this level of skill if you are building AI capabilities into your products. This goes way beyond the normal use of AI products and typically requires specialists in data science, machine learning, and AI engineering. The consultants and trainers who can get your team up to speed at this level of proficiency should be well vetted and experienced; there are not many of them, and they are expensive.
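To make “designing eval systems” a bit more concrete, here is a minimal sketch of what an eval harness for an AI feature might look like. All names and cases here are hypothetical; real harnesses add curated datasets, actual model calls, and regression tracking over time:

```python
# Minimal sketch of an eval harness for an AI feature (names hypothetical).
# Each case pairs an input with a check on the model's output; the harness
# reports the pass rate so regressions show up when models or prompts change.

def run_evals(model_fn, cases):
    """Run model_fn over eval cases; return the fraction of checks that pass."""
    passed = 0
    for case in cases:
        output = model_fn(case["input"])
        if case["check"](output):
            passed += 1
    return passed / len(cases)

# Example with a stand-in "model" that returns a canned support answer.
def fake_model(prompt):
    return "Our refund window is 30 days."

cases = [
    {"input": "What is the refund window?",
     "check": lambda out: "30 days" in out},
    {"input": "Do you offer refunds?",
     "check": lambda out: "refund" in out.lower()},
]

print(f"Pass rate: {run_evals(fake_model, cases):.0%}")  # prints "Pass rate: 100%"
```

Even a toy harness like this illustrates the key design choice: checks are written down explicitly, so tool or model swaps can be compared on the same cases instead of by gut feel.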
4) Building AI foundation model products:
In order to build a new style of car or a winning race car, you need even deeper and more specialized skills.
This may go into model training and inference, foundational ML and AI research, and following, contributing to, and shaping progress toward actual artificial general intelligence, e.g. by working on world models or other AI research directions.
You’ll need this level of skill if you are building models or foundational AI experiences. Very few teams will actually work at this level, as it takes serious investment in people, systems, hardware, and data. If you are, you’ll likely look for talent coming out of universities or try to hire people away from an existing foundation model provider. People with this level of skill will be very expensive to hire.
Questions that might guide your thinking:
No matter where you plan to play with AI, it’s probably useful to explore this ever-changing terrain together with your team rather than alone. Setting up some form of group learning and context sharing will help you make sense of where you are and where the next steps might take you.
Together you can explore questions such as:
– How might AI help us with our foundational product jobs to be done (Discovery, Prioritization, Strategy, Analysis, Delivery, Go to Market, Scaling, Continuous sense making, Value Delivery, Moats, etc.)?
– What skill is our team currently lacking that we need to train, learn and nurture?
– What new development in AI might mean yet another shift in our thinking about the work we do, the clients we serve and our business model?
– What will I commit to learning this coming week and share back with my team?
– What support might my team need to learn or even simply make time for learning?
– What product principles does my team need to learn more about (e.g. the five product risks, speed of value to market, commercial skills, ethical skills, product lifecycle thinking, market sizing, strategy and decision-making principles, etc.)?
Ethics and Commercial skills
I might be biased, but I see an increased need to think about ethics and commercial skills in everything we do with AI. Customers will pay us money for our products and services if they are seen as “good value for money” to solve a problem and/or be a delight for them. They will not give us their money, and will seek alternative solutions, if they don’t trust us or if they object to who we are and what we stand for based on the values they hold. This requires us to think about what “trustworthy” means. And that leads us straight into ethical considerations.
The role of trust, ethics and “trustworthy AI experiences”
What I mean by ethics is: your teams should have a solid understanding of the kind of impact you aim to have with your product (and what harms you want to avoid). You’ll want a deliberate and clear picture of what you consider good impact vs. bad impact, and you’ll want to equip your teams with easy-to-follow guidelines around that.
These guidelines should go beyond a legal perspective (e.g. in building safe and non-discriminatory AI experiences) and incorporate how you want to contribute to trustworthy, joyful, fair and reliable product experiences. I’ve previously written about how to operationalize ethical practices in a product team. If you’re just getting started with this, you might like this article and the related canvas you can download.
Keep in mind: the cost of publicly failing with ethical shortcomings is the trust, and therefore the future business, of your users. The moment a more ethically aligned alternative to your product becomes available, users will switch.
We see this today in how seriously European companies are evaluating their options to host and transact with European providers (after losing trust in the reliability of US solutions). In how many people cancelled their OpenAI subscription in favor of using Anthropic (after news broke of Anthropic taking an ethical stance against the US government’s ask not to put guardrails in place around mass surveillance or autonomous weapons systems). Or in how people reacted to the “undress people” feature in Grok. Humans don’t want these kinds of product behaviors; they would much rather have a product that brings them joy, certainty, and safety, and treats them with respect.
Customers will vote with their feet based on how you act and communicate your choices here. Building ethical alternative solutions has become easier than ever with the universal access we all have to AI based coding assistants.
Each of these public failures increases the pressure on regulatory bodies to come up with laws and regulations that enforce harm prevention and more ethical approaches. If you build trustworthy AI experiences to begin with, there will be no expensive refactoring needs down the road, and you are far more likely to be seen as the “ethical choice” customers want to go to. Which leads us directly to how we might think about the commercial impact of our ethical choices.
The role of commercial impact
The basics of business impact and commercial considerations will not change either. And while your board and executive team may be patient with your teams trying to figure out this new AI world, they will eventually want to see financial value creation. Learning how to calculate and deliver on the ROI of your AI initiatives will become part of your job, especially if you are planning to have AI use cases as part of your product experience. You can download this free template to start looking at what an AI business case might look like for you.
But regardless of all the new advances in technology, the success of a for-profit business will keep being measured by profitability, ROI, EBITDA, and growth. All investments in AI initiatives will eventually have to show up as positive contributions to your company’s P&L and financial footprint.
Your product teams will need to keep thinking about:
– What does meaningful commercial impact mean in our organization?
– What financial goals is my organization currently optimizing for? (Top-line revenue vs. bottom-line profit, EBITDA goals, margin goals, etc.)
– What are the main financial metrics our board, shareholders and investors are looking at? And how might we impact these?
– How do our product initiatives impact the P&L of our organization? How does what we do impact revenue generation, revenue protection, cost reductions or cost avoidance?
– How do we think about the Business case of the product initiatives we prioritize? How are we calculating our Business Cases and communicating them to decision makers?
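To illustrate the business-case thinking behind these questions, here is a deliberately simplified sketch. All figures are hypothetical, and a real business case would add discounting, risk adjustment, and scenario ranges:

```python
# Back-of-the-envelope AI business case (all figures hypothetical).
def ai_business_case(annual_benefit, build_cost, annual_run_cost, years=3):
    """Return net value and simple ROI over the evaluation horizon."""
    total_benefit = annual_benefit * years
    total_cost = build_cost + annual_run_cost * years
    net_value = total_benefit - total_cost
    roi = net_value / total_cost  # simple ROI; ignores discounting and risk
    return net_value, roi

# Example: an AI support assistant expected to save 200k/year,
# costing 150k to build and 50k/year to run (licenses, evals, maintenance).
net, roi = ai_business_case(200_000, 150_000, 50_000, years=3)
print(f"Net value over 3 years: {net:,}  ROI: {roi:.0%}")
# prints "Net value over 3 years: 300,000  ROI: 100%"
```

The point is less the arithmetic than the discipline: writing down benefit, build, and run assumptions makes them debatable with decision makers, which is exactly what the questions above are probing for.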
In conclusion
These are exciting times to be in tech. So much is changing, and with all that change come a lot of new opportunities. The best thing you can do for your product team today is to make it safe and fun for them to explore, learn, and build with AI, so that together you learn the future skills this new technology asks us to develop. In other words: make it fun to dance with uncertainty. You and your teams can use the questions and inspirations in this article as a starting point for your intentional learning journey. And you as a product leader can make sure that ethics and commercial skills stay in focus for your team alongside all the exciting new technological advances. We are in a world where we are asked to explore and exploit at the same time. Equip your teams with tools to build trustworthy AI. The faster your teams get value to your users AND ROI for your business, the more successful your product career will be.
___________________________
If you would like to explore this more: reach out for a free coaching session with me.
I coach, speak, do workshops and blog about #leadership, #product leadership, #AIEthics #innovation, the #importance of creating a culture of belonging and how to succeed with your #hybrid or #remote teams.
