Framework

Jobs to be Done

A research, design, and prioritisation framework that uses the customer's job as the unit of analysis. Researched, launched, trained, and embedded across Product and UX, backed by a 700-job database; the framework is still how Pushpay thinks and talks about product design.

Origin
Pushpay UX, FY22 onwards
Applied at
  • Pushpay Product and UX practice
  • JTBD database of 700+ catalogued jobs across the ChMS portfolio
  • Standard vocabulary in design reviews, roadmap planning, and product strategy

Why JTBD

Most product orgs talk about features, personas, or user stories. Each is useful and each has the same blind spot: it describes the solution, not the underlying objective. Two customers can have the same persona and want different things; two features can solve the same job and not know it; two roadmap items can compete for the same outcome under different names.

JTBD is built on a different unit of analysis: the job the customer is trying to get done, independent of the product or technology used to do it. The benefit is not philosophical. It changes the conversation. Marketing, sales, support, research, product, and design get a single focal point and a shared vocabulary. Strategy decisions get easier because you can rank jobs by how important, frequent, and frustrating they are, not by who is shouting loudest.

I researched, launched, and trained Product and UX on JTBD at Pushpay. It is now the way the org talks about product design and has been for years.

Five principles

The framework rests on five principles. They are easy to remember and surprisingly hard to live by:

  1. People hire products to get their job done, not to interact with our business. Focus on the underlying objective, not the brand relationship.
  2. Jobs are stable over time, even when technology changes. Filing taxes is the same job today as it was a generation ago. The technology turns over; the job does not.
  3. People seek services that help them get more of their job done more easily and quickly. Map the job, not the buying journey.
  4. Making the job the unit of analysis makes innovation more predictable. You can identify unmet needs in advance rather than waiting for a market signal.
  5. JTBD is not limited to one discipline. Sales can use it in discovery calls, marketing can use it to write campaigns, support can use it to handle tickets, strategy teams can use it to spot acquisition opportunities. The vocabulary is the same across the org.

The unit of analysis

A job in the framework has a deliberate shape. The worksheet captures it:

  • Job performer. Who is trying to get the job done. The end user, not the buyer. One person can wear several hats and have a distinct job in each.
  • Main job. A goal with a clear end state, independent of any product. It never contains adjectives like fast, easy, or inexpensive; those are needs, not the job.
  • Related jobs. Supporting jobs that surround the main job and help define what it is and is not.
  • Emotional and social jobs. How the performer wants to feel and how they want to be perceived while doing the job.
  • Process. The stages the performer moves through to accomplish the job. Each stage is a small job inside the main one. This is the structure that organises everything else.
  • Needs. The performer’s requirements for getting the job done. Not feature requests; underlying conditions for success.
  • Circumstances. When and where the job happens. The context that changes how the job is done.

The worksheet is the artefact most teams touch first. It is also where most teams realise their personas were describing identity rather than work.
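The worksheet's shape can be made concrete as a small data structure. This is an illustrative sketch only; the field names track the bullets above, but the structure is my own, not the actual template:

```python
from dataclasses import dataclass, field

@dataclass
class JobWorksheet:
    """One worksheet entry. Fields mirror the worksheet's sections;
    the class itself is illustrative, not the real template."""
    job_performer: str        # the end user, not the buyer
    main_job: str             # clear end state, product-independent, no adjectives
    related_jobs: list[str] = field(default_factory=list)
    emotional_social_jobs: list[str] = field(default_factory=list)
    process: list[str] = field(default_factory=list)        # stages; each a small job
    needs: list[str] = field(default_factory=list)          # conditions for success, not features
    circumstances: list[str] = field(default_factory=list)  # when and where
```

Separating the main job (no adjectives) from needs (the adjectives) in the type itself is the point: the worksheet makes it structurally awkward to smuggle a solution into the job statement.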

How research works

JTBD research mixes qualitative and quantitative methods. Each one answers a different question.

Jobs interviews are the qualitative cornerstone. They are structured to surface the main job, the related jobs, the process the performer follows, the needs they are trying to satisfy, and the circumstances that shape the work. The conversation moves through five blocks: background and the job, the main job and its relations, the process of executing it, the needs hiding in the workarounds and frustrations, and the circumstances that change how the job is done. The critical-incident technique (recall a specific time, describe what happened, discuss the ideal) gets people out of generality and into evidence.

Switch interviews are the variant aimed at understanding why a customer hired a product in the first place. They are best suited to the buyer rather than the user, and they surface the underlying intent and motivation behind a switch.

Outcome-Driven Innovation surveys and job scoring are the quantitative half. Once the job set is identified qualitatively, a survey scores each job along three dimensions: how important it is, how frequently it is performed, and how frustrating it currently is. The scores roll up into an opportunity score that lets you rank jobs across the portfolio.

The pairing matters. Qualitative work without quantitative validation produces compelling stories that may not be representative. Quantitative work without qualitative grounding produces rankings of things you do not understand. JTBD insists on both.

Scoring and prioritisation

The opportunity score is the framework’s pricing model. For each job:

Opportunity = (Importance + Frequency) × Frustration

Importance and frequency are scored 0 to 5; frustration is scored 1 to 5. Multiplying by frustration is deliberate: a job that is important and frequent but already well served is not an opportunity, even if it is performed constantly. The opportunity is where the customer cannot do something they need to do.
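The arithmetic is small enough to pin down exactly. A minimal Python sketch (the function name and range checks are my own):

```python
def opportunity_score(importance: int, frequency: int, frustration: int) -> int:
    """Opportunity = (Importance + Frequency) x Frustration.

    Importance and frequency run 0 to 5; frustration runs 1 to 5, so a job
    that is already well served (frustration 1) cannot score high no matter
    how important and frequent it is.
    """
    if not (0 <= importance <= 5 and 0 <= frequency <= 5):
        raise ValueError("importance and frequency are scored 0 to 5")
    if not 1 <= frustration <= 5:
        raise ValueError("frustration is scored 1 to 5")
    return (importance + frequency) * frustration

# A maximally important and frequent but well-served job scores below a
# frustrating job that matters somewhat less:
well_served = opportunity_score(5, 5, 1)   # 10
frustrating = opportunity_score(4, 3, 5)   # 35
```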

Once jobs are scored, they group into projects. Projects are evaluated by summing the opportunity scores of the jobs they contain, and the sum becomes the Impact input in RICE scoring (Reach, Impact, Confidence, Effort):

  Project opportunity score    RICE impact
  201+                         3
  131 to 200                   2
  101 to 130                   1
  51 to 100                    0.5
  0 to 50                      0.25, or do not do

Within a project, jobs are prioritised against each other using a frustration-by-importance chart, with the opportunity score over 24 acting as the cutoff for serious consideration.
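The roll-up from job scores to a RICE impact value can be sketched in Python. The thresholds come from the table above; everything else (function names, the list-based interface) is illustrative:

```python
def project_opportunity(job_scores: list[int]) -> int:
    """A project's score is the sum of the opportunity scores of its jobs."""
    return sum(job_scores)

def rice_impact(project_score: int) -> float:
    """Map a summed project opportunity score onto the RICE Impact scale."""
    if project_score >= 201:
        return 3
    if project_score >= 131:
        return 2
    if project_score >= 101:
        return 1
    if project_score >= 51:
        return 0.5
    return 0.25  # or do not do the project at all

# Per-job floor for serious consideration within a project
SERIOUS_CONSIDERATION_CUTOFF = 24
```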

The reason this section matters is not the arithmetic. It is that the score gives leadership a defensible, customer-rooted way to compare projects against each other when the projects do not look alike. Two competing investments with disparate scopes become comparable through their opportunity scores.

The database

A framework only sticks if there is somewhere the work lives between projects. I built a JTBD database that holds the catalogued jobs across the portfolio. It now contains more than seven hundred jobs spanning aspirational, big, small, and micro sizes, tagged by parent job, customer type, product, product area, and project, with description and outcome statements where available.

The database does several things at once:

  • Compounds learning. Every research engagement adds jobs to the catalogue rather than starting from scratch. Teams searching for what is already known can find it.
  • Reveals patterns. Aspirational jobs and parent jobs surface relationships across product areas that would not appear in a single project.
  • Anchors prioritisation. Opportunity scores live with the jobs, so prioritisation has a memory.
  • Speeds onboarding. A new designer or PM can read the database and understand the customer’s world without waiting for a research cycle.

The database is the artefact most responsible for JTBD becoming durable practice rather than a project-level method.

The research-to-design pipeline

The framework also names how research becomes design without a handoff cliff. The pipeline runs:

  1. Prep. Gather existing research, the JTBD database, and qualitative interviews. Identify new jobs and gaps in knowledge.
  2. Plan. Set business goals and research goals. Form hypotheses to test. Align research goals to business goals. Submit the request.
  3. Qualitative round. Interviews using the JTBD interview template, cataloguing jobs and outcomes after each session, building an insights deck.
  4. Quantitative round. Use the qualitative jobs to build an outcome survey. Launch through research, collect responses.
  5. Opportunity landscape. Synthesise the survey into a ranked landscape. Align stakeholders on the JTBD that need to be solved to meet business goals.
  6. Design sprint. Move from validated job set into design.
  7. Stakeholder validation. Validate concepts with internal stakeholders (CS, sales).
  8. User validation. Validate designs with users.
  9. Sync and store. Sync findings with the team, vault data in Dovetail, update the database.

The pipeline is where the framework stops being a workshop and becomes the operating discipline.

What stuck

The piece I am proudest of is not any single artefact. It is the language. Years after launch, “what is the job?”, “what is the opportunity score?”, “what is the related job?” still come up unprompted in design reviews, roadmap conversations, and pitch decks. JTBD became how the team talks about product design, not just how it researches.

If I were starting again, I would invest earlier in making database hygiene easy. The catalogue grew faster than the conventions for tagging it, and after a few cycles of cleanup the data was sharper but the lift was bigger than it needed to be. A small investment in tagging guidelines and review cadences in year one would have paid back several times over.

What I would not change is the bet at the centre of the framework: the job is the right unit of analysis. Personas describe people. Features describe products. Jobs describe what customers are actually trying to do. Building from the job is what made the rest of the work coherent.