Applying Jobs-to-be-Done Theory to Healthcare Data
I believe that when a customer purchases a product, they are “hiring” that product to do a job. I am influenced by Clayton Christensen’s jobs-to-be-done theory, described in his book Competing Against Luck. In it, he recounts working with a fast-food chain (widely reported to be McDonald’s) to understand why so many customers purchased milkshakes at 8am. Through real-world observation and customer interviews, Christensen’s team discovered that commuters hired milkshakes to take the edge off the boredom of a long commute to the office. The chain could then optimize around this job (a thicker shake, so the experience lasts longer; chunks of fruit, to make the experience more interesting) to increase morning sales of milkshakes.
Data is the lifeblood of health tech solutions. Increasingly, data is a leading feature of health tech products or, sometimes, the product itself. Without data, or with poor-quality data, these products cannot deliver the results that customers and users expect. So perhaps we should pose the question: can the data that resides within health tech solutions be managed with a product mindset? That is, does healthcare data simply exist to represent a set of facts, or do those facts accomplish specific jobs for a customer? I’ve concluded that data should be managed with a product mindset and that the jobs the data is expected to perform should be clearly understood and defined. Otherwise, the data may not be optimized for product objectives, and the product fails to do the job that the customer has hired it to do.
Example #1: Health Plan Provider Directories
Let’s take the example of health plan provider directories. Most patients are covered by some type of health insurance plan, and they use directories published by those insurers to search for in-network care. The Centers for Medicare and Medicaid Services (CMS), which has analyzed the problem over the years through telephonic audits of these directories, found that roughly half of directory entries contained at least one inaccuracy. One root cause that CMS and others have identified is a misalignment in the “job mindset” of the submitter and the recipient of the data. Historically, practice location addresses were submitted as part of provider enrollment into the health plan’s network: practices submit the information, and health plans publish it. What is interesting is the primary “job” each party is seeking to accomplish. Practices primarily want to get into network so that they can submit claims and get paid for healthcare services; they also want to avoid denials for claims submitted from locations where clinicians only occasionally see patients. Payers primarily want to get providers enrolled so that they have a robust provider network to offer their customers and members. Both of these “jobs to be done” can result in the over-association of practitioners with practice locations they are affiliated with but where they rarely see patients. Innovators in the space are exploring interventions (e.g., tapping into the more accurate location information that health systems use to book appointments) to better align payers and providers on the patient’s job-to-be-done: identifying and accessing the right care in a timely manner.
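As a rough illustration of that kind of intervention, the sketch below compares a payer’s directory listings against appointment-booking activity to flag practitioner-location affiliations that may be stale. The data structures, field names, and the 12-month/5-appointment threshold are assumptions for illustration, not a reference to any specific product.

```python
# Hypothetical sketch: flag directory locations where a practitioner is listed
# but rarely (or never) books appointments, using scheduling data as a signal.
from dataclasses import dataclass

@dataclass
class DirectoryEntry:
    npi: str            # practitioner identifier
    location_id: str    # practice location listed in the payer directory

def flag_stale_affiliations(entries, appointment_counts, min_appointments=5):
    """Return directory entries unlikely to reflect where patients can actually be seen.

    appointment_counts maps (npi, location_id) -> appointments booked in the last 12 months.
    """
    flagged = []
    for entry in entries:
        seen = appointment_counts.get((entry.npi, entry.location_id), 0)
        if seen < min_appointments:
            flagged.append((entry, seen))
    return flagged

entries = [DirectoryEntry("1234567890", "loc-01"), DirectoryEntry("1234567890", "loc-02")]
appointment_counts = {("1234567890", "loc-01"): 42}  # no bookings at loc-02
for entry, seen in flag_stale_affiliations(entries, appointment_counts):
    print(f"Review {entry.npi} at {entry.location_id}: {seen} appointments in 12 months")
```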
Example #2: Patient Matching
Another example of healthcare data being asked to do a job is ‘patient matching’ across health IT systems and databases. As the need for providers, payers, and pharmacies to become interoperable increases, these parties also need to be more confident that they are referring to the same patient when exchanging data. In the absence of a national patient identifier, these organizations rely on a combination of patient names, dates of birth, and addresses as a substitute for a unique identifier. Within an individual organization, these attributes may be serviceable identifiers, and a single organization has more control over how patient records are identified and matched in its own systems. However, as a larger set of organizations exchange patient data across a national population, the probability of mismatches increases. This has unintended consequences such as adverse patient events, administrative mix-ups, and inadvertent disclosures of protected health information (PHI) to unauthorized individuals. It becomes more important to have data that is complete, accurate, and up-to-date. The “job to be done” of the data has expanded: the bar is higher, and the confidence thresholds, and the information needed to meet them, have increased. CMS, through its recent enforcement of the Interoperability and Patient Access final rule, is promoting the secure sharing of the U.S. Core Data for Interoperability (USCDI) data set to better support this job. Healthcare organizations have also collaborated in forums convened by the Office of the National Coordinator for Health IT (ONC) to define best practices for improving patient demographic data and supporting better patient matching.
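To make the matching problem concrete, here is a toy sketch of demographic-based matching: a weighted similarity score over name, date of birth, and ZIP code, compared against a confidence threshold. Real implementations rely on standardized fields, probabilistic (e.g., Fellegi-Sunter) weighting, and carefully tuned thresholds; the fields, weights, and threshold below are assumptions for illustration only.

```python
# Toy illustration of demographic-based patient matching across two systems.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough string similarity in [0, 1]; tolerates small typos like 'Jon'/'John'."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

# Assumed attribute weights; production systems derive these statistically.
WEIGHTS = {"last_name": 0.30, "first_name": 0.20, "dob": 0.35, "zip": 0.15}

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Weighted agreement across demographic attributes; higher = more likely the same patient."""
    return sum(w * similarity(rec_a.get(f, ""), rec_b.get(f, "")) for f, w in WEIGHTS.items())

a = {"first_name": "Jon",  "last_name": "Smith", "dob": "1980-04-12", "zip": "02139"}
b = {"first_name": "John", "last_name": "Smith", "dob": "1980-04-12", "zip": "02139"}
score = match_score(a, b)
print(f"match score = {score:.2f}")  # compare against a confidence threshold, e.g. 0.90
```

The point of the sketch is the tradeoff it exposes: when data is exchanged across organizations, missing or stale attributes drag the score down, so the threshold you can safely require depends directly on how complete, accurate, and up-to-date the demographic data is.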
Fitness for Intended Use
The CMMI Institute’s Data Management Maturity (DMM) Model presents the principle of ‘fitness for intended use’: business goals for the data are defined first, and those goals drive the data quality definitions, rules, and roles and responsibilities an organization uses to govern its data assets. Interventions to cleanse and improve data quality are then prioritized and optimized against those business goals. In the examples above, a data set that existed to do an initial job was “hired” to do new jobs as industry use cases for the data expanded. As a result, the business goals around the ‘intended uses’ of the data must be updated, and the supporting governance, rules, and interventions that manage and improve the data must evolve. A robust data management framework should be adaptable to changing business goals. However, if goals aren’t formally defined at the outset, and supporting processes aren’t managed with respect to those goals, data assets and processes may be unable to adapt to emergent needs.
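One lightweight way to make ‘fitness for intended use’ operational is to express each intended use as explicit, measurable data quality rules with thresholds, and to measure them continuously. The sketch below uses an assumed structure (not the DMM’s formal notation), with hypothetical rules and thresholds for a provider directory use case.

```python
# Sketch: 'fitness for intended use' as explicit, measurable data quality rules.
provider_records = [
    {"npi": "1234567890", "phone": "617-555-0100", "address_verified_days_ago": 45},
    {"npi": "9876543210", "phone": "",             "address_verified_days_ago": 400},
]

# Each rule: name, a predicate over a record, and the threshold the intended use demands.
RULES = [
    ("phone_present",   lambda r: bool(r["phone"]),                     0.95),
    ("address_recency", lambda r: r["address_verified_days_ago"] <= 90, 0.90),
]

def evaluate(records, rules):
    for name, predicate, threshold in rules:
        passing = sum(1 for r in records if predicate(r)) / len(records)
        status = "OK" if passing >= threshold else "NEEDS INTERVENTION"
        print(f"{name}: {passing:.0%} (target {threshold:.0%}) -> {status}")

evaluate(provider_records, RULES)
```

When the job expands, as in the examples above, the change shows up here as new rules or tighter thresholds, which in turn reprioritize the cleansing and improvement work.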
Organizing Around the Jobs-to-be-Done of Data in Healthcare
It is incumbent on organizations to clearly define business goals for their data, and these goals must be communicated and understood across the organization to realize positive impacts on how data is sourced, managed, and improved. If the data needs to accomplish a job, these goals and ‘job descriptions’ must be clearly articulated, not merely implied. The job(s) need to be spelled out, and the qualities and characteristics the data needs in order to accomplish the job must be defined and continuously measured. If data is used across organizations (as in both the patient matching and provider directory use cases described earlier), industry consensus on data quality definitions should be sought and established.
While traditional data scientist, data governance, and data steward roles have often been responsible for defining the “jobs” of data, an emergent role, the “data product manager,” is taking on a growing share of this responsibility. The role embodies a customer- and user-centric approach to managing data, understanding the jobs that customers and stakeholders expect it to do. Importantly, the data product manager is also clear-eyed about the jobs the data is not fit to do and communicates this within the organization. The data product manager can translate these “jobs” into product requirements for the data set and, working with data governance and data science colleagues, establish repeatable measurement approaches so the organization can iterate on improvement interventions and evaluate third-party data vendors against established data quality definitions. Whether or not your organization has a formal ‘data product manager’, somebody in the organization should define the data’s jobs-to-be-done and be empowered to ensure that the data can accomplish those jobs.
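As a final sketch, a data product manager might capture a data set’s ‘job description’ as a set of named quality metrics with target thresholds, then score candidate third-party vendors against that same bar. The metric names, targets, and vendor scores below are hypothetical.

```python
# Hypothetical 'job description' for a provider directory data set, used to
# compare candidate third-party data vendors against the same quality bar.
JOB_REQUIREMENTS = {  # job: "help members find in-network care they can actually book"
    "pct_phone_numbers_reachable": 0.95,
    "pct_addresses_verified_90d": 0.90,
    "pct_accepting_new_patients_populated": 0.85,
}

vendor_scorecards = {  # measured scores for each candidate vendor (assumed numbers)
    "vendor_a": {"pct_phone_numbers_reachable": 0.97, "pct_addresses_verified_90d": 0.88,
                 "pct_accepting_new_patients_populated": 0.91},
    "vendor_b": {"pct_phone_numbers_reachable": 0.99, "pct_addresses_verified_90d": 0.93,
                 "pct_accepting_new_patients_populated": 0.86},
}

for vendor, scores in vendor_scorecards.items():
    gaps = {metric: (scores[metric], target)
            for metric, target in JOB_REQUIREMENTS.items() if scores[metric] < target}
    verdict = "meets the job requirements" if not gaps else f"gaps: {gaps}"
    print(f"{vendor}: {verdict}")
```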
Conclusion
Data-centric products are proliferating within healthcare, and it is becoming increasingly important to manage the data itself with a product mindset. As government mandates and transparency trends motivate health plans and health systems to make more of their data available for secondary use cases, a product mindset will be critical (i.e., interoperability is not just about making data available in the right format; it’s also about ensuring that the data accomplishes the right job).
Here is a summary of the points that we covered:
1) Products do jobs, and it is important to define the job-to-be-done.
2) Healthcare data products are no different. You need to define the job-to-be-done.
3) If you don’t define the job(s) of your healthcare data assets, and organize around those jobs:
a. They might not do those jobs well.
b. If the job evolves, your data assets and processes may not be able to adapt.
4) The job-to-be-done will inform important data quality characteristics of the data.
5) And these requirements will inform how data is sourced, managed, and improved.
6) You need somebody to define the job-to-be-done. Consider a Data Product Manager.