The Five Skills I Actually Use Every Day as an AI PM (and How You Can Too)

This post first appeared in Aman Khan’s newsletter, The AI Products Playbook, and is republished here with the author’s permission.

Let me start with some honesty. When people ask me, “Should I become an AI PM?” I tell them they’re asking the wrong question.

Here’s what I’ve learned: Becoming an AI PM isn’t about chasing a trendy job title. It’s about developing tangible skills that make you more effective at building products in a world where AI touches everything.

Every PM is becoming an AI PM, whether they realize it or not. Your payments flow will detect fraud. Your search bar will gain semantic understanding. Your customer support will have chatbots.

Think of AI product management less as an “or” and more as an “and.” For example: AI x health tech PM, or AI x fintech PM.

The five skills I actually use every day

This post was adapted from a conversation on Akash Gupta’s Growth podcast. You can find the episode here.

After about nine years of building AI products (the last three spent going deep on LLMs and agents), here are the skills I use most frequently: not the ones that look good in a blog post, but the ones I literally used yesterday.

  • AI prototyping
  • Observability (traces and telemetry)
  • AI evals: the new PRD for AI PMs
  • RAG vs. fine-tuning vs. prompt engineering
  • Working with AI engineers

1. Prototyping: Why I code every week

Last month, our design team spent two weeks creating beautiful mockups of the AI agent interface. They looked perfect. Then I spent 30 minutes in Cursor building a functional prototype, and we immediately discovered three key user-experience issues that the mocks hadn’t revealed.

Skill: Using AI-powered coding tools to build rough prototypes.
Tool: Cursor. (It’s VS Code, but you can describe what you want in plain English.)
Why it matters: You can’t understand AI behavior from static mocks.

How to start this week:

  1. Download Cursor.
  2. Build something stupidly simple. (I started with a landing page for a personal website.)
  3. Show it to an engineer and ask what you got wrong.
  4. Repeat.

You’re not trying to become an engineer. You are trying to understand the limitations and possibilities.
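
To make that concrete, here’s roughly the shape of a first “stupidly simple” prototype: a throwaway command-line chat loop. Treat this as a sketch, not production code; it assumes the openai Python package and an API key in your environment, and the model name and system prompt are placeholders you’d swap for whatever your team uses.

    # A throwaway CLI chat prototype: enough to feel how the agent behaves,
    # not enough to ship. Assumes the `openai` package and an OPENAI_API_KEY
    # environment variable; the model name below is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    history = [{"role": "system", "content": "You are a concise support agent."}]

    while True:
        user_input = input("you> ")
        if user_input.strip().lower() in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": user_input})
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder: use whatever model your team has access to
            messages=history,
        )
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        print(f"agent> {answer}")

Twenty-odd lines like this will teach you more about latency, tone, and failure modes than any number of static mockups.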

2. Observability: black box debugging

Observability is how you actually peek under the hood and see how your agent is doing.

Skill: Using traces to understand what your AI actually did.
Tool: Any APM tool that supports LLM tracing. (We use our own product at Arize, but there are plenty out there.)
Why it matters: “The AI is broken” isn’t actionable. “Context retrieval returned the wrong document” is.

First observability exercise:

  1. Pick any AI product you use daily.
  2. Try to trigger an edge case or an error.
  3. Write down what you think went wrong internally.
  4. Building that mental model is 80% of the skill. (The sketch below shows what a trace actually records.)
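
To make “trace” less abstract, here’s a minimal sketch of what a trace record captures, using only the Python standard library. Real observability tools (including ours at Arize) do this automatically and in far more detail; the point is the mental model: each step of the pipeline becomes a span with its inputs, output, and timing. The pipeline steps below are stubs, not a real retrieval system.

    # Minimal hand-rolled "trace": one record per step of the agent pipeline.
    # Real tooling captures this automatically; this is just the mental model.
    import json
    import time
    import uuid

    trace_id = str(uuid.uuid4())
    spans = []

    def record_span(name, inputs, fn):
        """Run one pipeline step and record its inputs, output, and latency."""
        start = time.time()
        output = fn(inputs)
        spans.append({
            "trace_id": trace_id,
            "span": name,
            "inputs": inputs,
            "output": output,
            "latency_ms": round((time.time() - start) * 1000, 1),
        })
        return output

    # Illustrative steps standing in for retrieval and generation.
    docs = record_span("retrieve", {"query": "refund policy"}, lambda x: ["doc_42"])
    answer = record_span("generate", {"query": "refund policy", "docs": docs},
                         lambda x: "Refunds are processed within 5 business days.")

    print(json.dumps(spans, indent=2))  # this is what you read when "the AI is broken"

When something goes wrong, you read the spans instead of guessing: was it retrieval or generation?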

3. Evaluations: Your new definition of “done”

Vibe coding works if you’re shipping prototypes. It doesn’t really work if you’re shipping production code.

Skill: Turning subjective quality into measurable metrics.
Tool: Start with spreadsheets, then move to a proper eval framework.
Why it matters: You can’t improve what you can’t measure.

Build your first eval:

  1. Pick one dimension of quality (brevity, friendliness, accuracy).
  2. Collect 20 examples of good and bad outputs, and label each one (say, “concise” vs. “too long”).
  3. Score your current system against them. Set a target: for example, 85% of responses should earn the good label.
  4. That number is now your North Star. Repeat until you hit it. (The sketch below turns this into a short script.)
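
Here’s that exercise as a short script: the spreadsheet-grade version I’d actually start with. The file name, column names, and 85% target are illustrative; the only idea that matters is turning hand labels into one number you can track.

    # First eval as a script: read hand-labeled examples and compute a pass rate.
    # The CSV name, its columns ("response", "label"), and the target are placeholders.
    import csv

    TARGET = 0.85
    passes, total = 0, 0

    with open("eval_set.csv", newline="") as f:
        for row in csv.DictReader(f):  # expected columns: "response", "label"
            total += 1
            if row["label"].strip().lower() == "good":
                passes += 1

    pass_rate = passes / total if total else 0.0
    print(f"pass rate: {pass_rate:.0%} (target {TARGET:.0%})")
    print("ship it" if pass_rate >= TARGET else "keep iterating")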

4. Technical intuition: Knowing your options

Prompt engineering (1 day): Add brand voice instructions to the system prompt.

Few-shot examples (3 days): Include examples of on-brand responses.

RAG with Style Guide (1 week): Pull from our actual brand documents.

Fine-tuning (1 month): Train a model on our support scripts.

Each has different costs, timelines, and trade-offs. My job is to know what to recommend.
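
For the first two options, the difference is easiest to see in code: prompt engineering means editing the system message, and few-shot means appending example exchanges ahead of the user’s question. This is a sketch; “Acme,” the brand-voice text, and the example replies are all made up.

    # "Prompt engineering" vs. "few-shot examples" as code: both are just different
    # ways of building the messages list you send to the model.

    # Option 1: prompt engineering (about a day) - edit the system prompt.
    prompt_engineered = [
        {"role": "system",
         "content": "You are Acme support. Be warm, brief, and never use jargon."},
        {"role": "user", "content": "Where is my order?"},
    ]

    # Option 2: few-shot examples (a few days) - show the model on-brand exchanges.
    few_shot = [
        {"role": "system", "content": "You are Acme support."},
        {"role": "user", "content": "My package is late."},
        {"role": "assistant", "content": "So sorry about that! Can you share your order number so I can take a look?"},
        {"role": "user", "content": "Where is my order?"},
    ]

    # Options 3 and 4 (RAG, fine-tuning) change what gets retrieved into the prompt
    # or what sits inside the model itself - which is why they cost a week and a month.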

Building intuition without building models:

  1. When you see an AI feature you like, write down three ways they might have created it.
  2. Ask an AI engineer whether you’re right.
  3. Wrong guesses teach you more than correct guesses.

5. The new partnership between PMs and engineers

The biggest shift? How I work with engineers.

The old way: I write the requirements. They build it. We test it. Ship.

The new way: We label training data together. We define success metrics together. We debug failures together. We own the results together.

Last month, I spent two hours with an engineer labeling whether responses were “helpful” or not. We disagreed on a lot of them. That’s what taught me I needed to collaborate with AI engineers on evals.
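
One lightweight way to run that kind of session: both of you label the same responses independently, then look at where you disagree. A minimal sketch, assuming a hypothetical labels.csv with pm_label and eng_label columns:

    # After a joint labeling session: check how often PM and engineer agreed on "helpful".
    # The file and its columns ("response", "pm_label", "eng_label") are hypothetical.
    import csv

    agree, total = 0, 0
    disagreements = []

    with open("labels.csv", newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            if row["pm_label"] == row["eng_label"]:
                agree += 1
            else:
                disagreements.append(row["response"])

    print(f"agreement: {agree}/{total}")
    for response in disagreements[:5]:
        print("discuss:", response[:80])

The disagreement rows are the valuable part: arguing over them is how a shared definition of “helpful” actually gets written.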

Start collaborating differently:

  • Next feature: Ask to join a model evaluation session.
  • Offer to help label test data.
  • Share customer feedback regarding evaluation metrics.
  • Celebrate evaluation improvements just as you used to celebrate feature launches.

Your five-week transition plan

Week 1: Set up your tools

  • Install Cursor.
  • Get access to your company’s LLM playground.
  • Find where your AI logs/traces live.
  • Build a small prototype (it took me three hours to build the first one).

Week 2: Observability

  • Trace five AI interactions in the products you use.
  • Document what you think happened versus what actually happened.
  • Share your findings with an AI engineer for feedback.

Week 3: Measurement

  • Create your first eval set of 20 examples.
  • Score an existing feature against it.
  • Suggest one improvement based on the scores.

Week 4: Collaboration

  • Join an engineering model review.
  • Volunteer to label 50 examples.
  • Frame your next feature request as evaluation criteria.

Week 5: Iteration

  • Take what you’ve learned from prototyping and build it into a production proposal.
  • Set the quality bar with evals.
  • Use your AI intuition to iterate: Which knobs should you turn?

The uncomfortable truth

Here’s what I wish someone had told me three years ago: You’ll feel like a beginner again. After years of being the expert in the room, you’ll be the one asking the basic questions. This is exactly where you should be.

The PMs who succeed with AI are the ones who are comfortable being uncomfortable. They’re the ones who build bad prototypes, ask “stupid” questions, and treat every confusing model output as a learning opportunity.

Start this week

Don’t wait for the perfect path, the perfect role, or for AI to “settle down.” The skills you need are practical, learnable, and immediately applicable.

Pick one thing from this post, commit to doing it this week, and then tell someone what you learned. This is how you will start to accelerate your AI product management feedback loop.

The gap between PMs who talk about AI and PMs who build with AI is smaller than you think. It’s measured in hours of hands-on practice, not years of study.

See you on the other side.
