HHS report considers the merits of centralized provider directories

In December 2023 the Office of Behavioral Health, Disability, and Aging Policy (within the US Department of Health and Human Services) published a report on ‘State Efforts to Coordinate Provider Directory Accuracy.’ The report presents a clear-eyed review of the problem of provider directory accuracy, the limitations of attempted solutions, and the options that federal and state policymakers could pursue to solve the problem. The report also goes into detail on California’s provider directory initiative, covering its history, key decisions, and current status.

This article summarizes key takeaways from the report and responds to each with our perspective and additional points to consider. If you want to learn more about previous and current provider directory initiatives, we encourage you to review the HHS report in its entirety.

Takeaway #1: “No evidence exists to assess whether the only currently operational centralized provider directory database leads to fewer health plan directory inaccuracies.”

The authors cite California’s Symphony as the only ‘operational centralized provider directory’ solution in the country, and conclude that because no evidence has been presented that the accuracy of California provider directories has improved since Symphony’s implementation, accuracy has not improved. The implicit reasoning is that if health plans or Symphony itself had conclusively improved provider directory accuracy, somebody would have told that story publicly and we would have evidence of improvement. In our opinion, however, the absence of such evidence does not mean a centralized provider directory cannot improve the accuracy of health plans’ directories.

The report presents findings from a limited AHIP pilot, noting that ‘when a vendor centralized health plan directories across multiple health plans in a state, few providers completed the validation process.’ The authors briefly mention efforts by LexisNexis and CAQH, describing them as ‘services that may act like centralized provider directories.’ Presumably these initiatives are mentioned to support the argument that ‘no evidence exists’ that a centralized directory can improve accuracy.

No solution implemented to date can truly be considered a ‘centralized provider directory,’ so it is premature to judge whether a centralized directory could improve accuracy. Even California’s Symphony does not qualify, because carriers in California use a variety of channels to receive directory data: phone, fax, e-mail, roster files, and the industry portal platforms the authors cite. When parallel channels exist alongside Symphony, how can the utility be considered a single source of truth? Even if the data within the centralized provider directory were perfect, continuing to ingest data from multiple sources perpetuates administrative burden and dilutes the impact of higher-quality data as it mixes with data from other sources.

Instead of making a premature judgment on whether a centralized provider directory can improve accuracy, let’s try a thought experiment: what conditions would need to exist for a true centralized provider directory to be implemented and for directory accuracy to improve?

  • Rationalize data submission channels – To maximize the impact of a new source, all other submission channels must be eliminated. This achieves two goals: 1) it delivers on the promise of reduced administrative burden around data submission, and 2) it creates a true ‘chokepoint’ where all directory data passes through a single channel. This gives the utility the opportunity to evaluate and correct any data that passes through it before delivery.
  • Maximize participation – To support the rationalization of submission channels, the utility needs to make it possible for payers to rely on it alone and not have to go elsewhere to obtain directory data. This requires maximum participation on both sides of the market: providers and payers. Only then can all other data submission channels be sunset.
  • Enforce mandates (incentives and disincentives) – Industry-leading accuracy requires a critical mass of participation to organically attract the entire market. If a utility cannot produce a breakthrough solution out of the gate that attracts that critical mass, government may need to issue and enforce participation mandates to catalyze adoption, combining incentives for participation with penalties for non-compliance.
  • Monitor accuracy continuously – The utility needs an efficient and scalable way to monitor accuracy across all data. This matters for three reasons: it allows the utility to correct data flowing through it and to make empirical claims to ‘industry-leading’ status; it provides a necessary feedback loop to providers who submit data so they can correct it upstream; and it enables monitoring of participating payers’ accuracy, ensuring that the data they post in directories matches what the utility knows to be accurate. Monitoring accuracy across the ‘data supply chain’ will ensure the integrity of the data as it becomes available to consumers. If the government decides to leverage the utility as its monitoring mechanism, that further elevates the importance of the accuracy monitoring capabilities the utility has built and promotes greater usage by payers.

After reviewing these conditions, a reasonable person may consider them improbable. That person may be right! A government mandate forceful enough to move both payers and providers would be disruptive and would require political will and multi-stakeholder coalition building.

However, questioning whether it is feasible to establish a true ‘centralized provider directory’ is different from questioning whether a centralized directory could achieve its goals. Defacto Health agrees with the authors when they say ‘there is no evidence,’ but we believe that if the pre-conditions were met, a centralized solution could improve directory accuracy.

Why split hairs over ‘feasibility to establish’ versus ‘ability to achieve goals’? Because it informs how to implement and evaluate pilots. A pilot, while it may not rationalize all data submission channels nationally, needs to do so locally so that effects on both accuracy and administrative burden can be credibly evaluated. Success or failure at a local level will better inform whether such a model could scale into a National Directory of Healthcare Providers and Services.

Takeaway #2: Providers need incentives to improve the accuracy of health plan provider directories.

Requirements around directory accuracy have been imposed on payers for years, but comparable requirements do not yet exist on the provider side. Reduced data submission burden is a limited incentive that may not attract enough providers to maximize participation, and in any case, significant reductions in submission burden have not yet been achieved. California law (SB 137) and the No Surprises Act give payers ways to shift more responsibility for directory data to providers, but payers have been reluctant to pull these levers. Even when payer participation is mandated or incentivized, if providers still use legacy channels, payers will continue to accept data through them. In the end, government mandates on both payers and providers to use a utility may be necessary to maximize participation.

Takeaway #3: Directory data audits are important, but phone calls are costly and abrasive.

The authors say that ‘One reason there is little data on the accuracy of provider directories is that few state regulators conduct regular surveys of all health plan products.’ They also suggest that an ‘alternative to a “secret shopper” method or phone survey method for monitoring may offer efficiency and value in identifying inaccurate listings.’ An alternative to phone calls is needed because calls are abrasive to providers. The authors mention analytics on claims data. While claims data can help validate ‘active’ status (i.e., that a provider is still active within a practice or group and is actively seeing patients covered by a plan), it does not validate elements such as phone numbers and addresses. There needs to be a holistic, scalable way to assess accuracy efficiently across all data elements. Payers can leverage such methods to benchmark their data against the industry, and regulators can use the same methods to monitor payer directory accuracy.
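To make the idea of field-level accuracy assessment concrete, here is a minimal sketch of how a directory could be scored against a reference source, element by element. All field names and data here are hypothetical; a real implementation would need normalization and fuzzy matching for addresses and phone numbers, and a trustworthy reference to compare against.

```python
# Hypothetical sketch: field-level accuracy scoring of a payer directory
# against a reference source, keyed by NPI. Illustrative only.

FIELDS = ["phone", "address", "accepting_new_patients"]

def field_accuracy(directory_rows, reference_by_npi):
    """Return per-field match rates for directory rows vs. a reference keyed by NPI."""
    matches = {f: 0 for f in FIELDS}
    total = 0
    for row in directory_rows:
        ref = reference_by_npi.get(row["npi"])
        if ref is None:
            continue  # provider absent from reference; would be flagged separately
        total += 1
        for f in FIELDS:
            if row.get(f) == ref.get(f):
                matches[f] += 1
    return {f: matches[f] / total for f in FIELDS} if total else {}

directory = [{"npi": "1", "phone": "555-0100", "address": "1 Main St",
              "accepting_new_patients": True}]
reference = {"1": {"phone": "555-0100", "address": "2 Oak Ave",
                   "accepting_new_patients": True}}
print(field_accuracy(directory, reference))
# → {'phone': 1.0, 'address': 0.0, 'accepting_new_patients': 1.0}
```

Exactly this kind of per-element breakdown is what claims analytics alone cannot supply: claims can confirm a provider is active, but only a field-by-field comparison can flag a stale phone number or address.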

We expand on novel methods to audit payer provider directories in our next post.

In conclusion

It is premature to judge whether centralized provider directories can improve accuracy because we have yet to see a true centralized directory implemented. A true centralized directory may require government mandates (or overwhelming voluntary adoption) to achieve its goals. Worthy milestones toward a centralized directory include explicit incentives for providers and more frequent scrutiny of directory data accuracy by regulators and payers, using less-abrasive audit methods.

We encourage you to review the HHS ‘State Efforts to Coordinate Provider Directory Accuracy’ report in its entirety, as it is the most complete review of industry-wide provider directory initiatives and contains important lessons for regulators, policymakers, innovators, and payers.