Are We Disregarding Privacy Rules Because They Are Hard? Part 3 of 3

Shouldn’t This Be Easier By Now?

Eventually, someone in Information Technology or Database Administration gets asked to extract data from a PHI-rich line of business system or data warehouse and deliver it as de-identified data.  Almost any data extraction approach allows data to be masked, redacted, suppressed or even randomized in some way.  This type of functionality gives us de-identified but often useless data for testing, analytics or development.

Since my company, The EDI Project™, was founded in 2001, we have been asked many times to de-identify or anonymize data for testing and development work.  Each time, we have written custom code for the project at hand.  That code is never transferable to another customer environment and must be redone for every scenario.  If we were doing this every time, we figured, there had to be other companies running into the same problem.

It turns out there are tools on the market that extract data from a line of business system or data warehouse and anonymize it so it stays useful, rather than being de-identified into useless “John Doe” records.

For example, one of the largest integration engines on the market offers this functionality as a $250,000 add-on to its existing, very expensive suite of products.  It is complicated to learn and use, and custom code must still be added if multiple systems need to be anonymized the same way (e.g., enrollment, eligibility and claims data all have to carry matching but anonymized names and dates of birth).

There are other tools in this space that sniff through vast data stores for PHI and attempt to automagically de-identify the data.  Usually this is a masking or redaction style approach, but even when it is not, many fields get marked as “suspect PHI” and left for human review.  I can’t blame them either: while Patient Name or Date of Birth fields are easy enough to identify, free-form fields can be a nightmare.  Either way, these tools are usually very expensive and often leave the job half done.

There are a lot of cases where certain files, like EDI 837 claims, or maybe an enrollment database, have to be de-identified for a test system.  Perhaps it is an ongoing extract from a data warehouse for an analytics study.  Most of the time, the work is either not done (an exemption is granted) or custom code is deployed (expensive and time consuming).  But technology is supposed to be faster, better and cheaper, isn’t it?

Since we are the guys who are often asked to do this work, we looked at our experience extracting health care data and designed the tool we would want to use.  No compromises.  We wanted it to be easy to learn and use, and powerful enough to handle big data environments without becoming a bottleneck to any extraction work.  It also had to anonymize data across multiple sources so that the matching but de-identified data maintained record integrity (i.e., all the records for one patient in the PHI data sources have corresponding records in the de-identified data sources).  Oh yeah – and since the main project is already expensive enough, the tool should be inexpensive.

People have been using ETL (Extract, Transform, Load) tools for decades and are familiar with how they work.  Thinking about the “T” in “Transform”, a common thing to do would be to change a date from MMDDYYYY format to DDMMYYYY format.  This type of common transformation logic doesn’t have to be rewritten every time you extract from a new source.  The integrator just picks it from a list when doing mapping work.  Anonymizing PHI should be that simple as well.
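To make that concrete, here is a rough Python sketch (the function name and transform registry are hypothetical, not any particular vendor’s API) of what a reusable “T” step looks like:

    from datetime import datetime

    def reformat_date(value, in_fmt="%m%d%Y", out_fmt="%d%m%Y"):
        # A reusable "T" step: convert a date string from one layout to another.
        return datetime.strptime(value, in_fmt).strftime(out_fmt)

    # The integrator picks a transform from a library instead of rewriting it per source.
    TRANSFORMS = {"MMDDYYYY_to_DDMMYYYY": reformat_date}

    print(TRANSFORMS["MMDDYYYY_to_DDMMYYYY"]("12311950"))  # -> 31121950

The point is not the date math; it is that the logic lives in a library the mapping tool can offer from a drop-down.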

Functions and drop-downs need to be available to anonymize every kind of PHI and handle it according to the special properties of that type of data.  Names are anonymized differently than ZIP codes.  More specifically, the anonymization routine for a Date of Birth (DOB) is different from the one for a Date of Service (DOS).  The software should already know that; the routines should not have to be defined by the integration team or a subject matter expert.
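As a sketch of the idea (hypothetical routine names and parameters, not the actual internals of any product), type-aware anonymization might look something like this, with a different routine behind each drop-down choice:

    import hashlib
    from datetime import date, timedelta

    def _patient_offset(patient_key, max_days):
        # Deterministic per-patient day offset derived from a hash (illustration only).
        digest = hashlib.sha256(patient_key.encode()).hexdigest()
        return (int(digest, 16) % (2 * max_days + 1)) - max_days

    def anonymize_dob(dob, patient_key):
        # DOB routine: small jitter so the member's age band survives but the
        # real birthday does not.
        return dob + timedelta(days=_patient_offset(patient_key, 14))

    def anonymize_dos(dos, patient_key):
        # DOS routine: shift every service date by one consistent per-patient
        # offset so the order of visits and the gaps between them are preserved.
        return dos + timedelta(days=_patient_offset(patient_key, 60))

    def anonymize_zip(zip_code):
        # ZIP routine: keep only the first three digits (Safe Harbor style).
        return zip_code[:3] + "00"

A real engine would also carry the edge-case rules (restricted ZIP3s, ages over 89, and so on) so the integration team does not have to define them.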

As a result, we developed and launched our own anonymization engine called “Don’t Redact!™”.  We’re integrators, so we built the tool an integrator would want in order to get this done quickly and easily.  Someone with integration tool experience can learn it in an afternoon, and your first sizeable anonymization effort can be deployed in a day or so after learning the ropes.

Under the spirit of no compromises and disruptive technology, the Don’t Redact!™ Anonymization Engine is $25,000.

While The EDI Project™ is a professional services organization and we would be happy to deploy the software for you or set up your first live anonymized environment, the tool is well thought out and easy enough that you won’t need any services at all.

Want to find out more?  http://theediproject.com/anonymization.html

Part 1: Minimum Necessary or Optional   

Part 2: A False Choice. . . 


Are We Disregarding Privacy Rules Because They Are Hard? Part 2 of 3

A False Choice

Imagine you work at a health insurance company.  Your title is “Claims Examiner” and you spend each day deciding whether bills sent by doctors for the insurance company’s members should be paid.  You must be sure the treatments match the diagnosis, the member is eligible for the payment and the amount being billed is correct. This work is performed in a “Claims System”.  Claims systems were one of the first widespread uses of computers in business and have been around for 40 years.  They are the lifeblood of a health insurance company, and seemingly all of its other systems relate to them.  The data the Examiner uses to pay or adjust the bills doesn’t need to be obscured in any way because it is part of TPO (treatment, payment or health care operations).

A covered entity may disclose PHI (Protected Health Information) to facilitate treatment, payment, or health care operations (TPO) without a patient’s express written authorization. Any other disclosure of PHI requires the covered entity to obtain written authorization from the individual. However, when a covered entity discloses any PHI, it must make a reasonable effort to disclose only the minimum necessary information required to achieve its purpose.

When we talk about privacy and security of data, even though claims systems have the most information about a patient / member, they are rarely if ever the place where a breach of PHI takes place.  Instead, breaches happen at the edges: new systems being stood up, test and development systems, and ancillary data stores for things like analytics are where PHI breaches tend to happen.  In most cases, however, these systems really should not have had PHI at all.

So why did these systems have PHI to begin with?  Usually it is because an exemption was created.

This isn’t a story of malice, indifference or even incompetence.  It is a story of real-life choices that are all very reasonable.

Imagine a new system being brought online for claims or another vital function.  There are outside vendors and subject matter experts helping employees ensure the environment will be capable and reliable when it replaces the existing system.  But if all the data being used to test is simple and looks like this:

 “John Doe, DOB 1/1/1950, DOS 1/1/2018, 15 Minute Office Visit, Common Cold”

the team will never uncover all the potential problems that come with complicated, real world scenarios.

While the organization knows where the PHI is in the data, de-identifying the real data in a way that keeps it useful can be a six-month project on its own.  How would one test whether the system can find duplicates if names are randomly replaced in the test data?  How can a test Examiner check eligibility if the names in the eligibility file are replaced differently than in the test claims data?  If dates are randomized, how would claims be paid when a Date of Service (DOS) ends up before the Date of Birth (DOB)?
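This is why naive randomization fails and consistent, keyed replacement matters.  As a minimal illustration (the name pool and hashing scheme are mine, purely to show the idea), the same real value should map to the same fake value in every extract:

    import hashlib

    FAKE_SURNAMES = ["Alder", "Birch", "Cedar", "Dogwood", "Elm", "Fir", "Hazel", "Juniper"]

    def pseudonym(real_name):
        # The same real surname always maps to the same fake surname, whether it
        # shows up in the claims extract, the eligibility file or the enrollment
        # database, so duplicate checks and eligibility matching still work.
        digest = hashlib.sha256(real_name.strip().upper().encode()).hexdigest()
        return FAKE_SURNAMES[int(digest, 16) % len(FAKE_SURNAMES)]

    assert pseudonym("Smith ") == pseudonym("SMITH")   # consistent across sources

A tiny pool like this would collide and merge distinct patients; a real tool needs a collision-free lookup, and the same “consistent key” principle applied to dates so a shifted DOS never lands before the shifted DOB.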

Usually an exemption is granted for the testing of the new system, allowing previously processed, real-world PHI data to be used.  This is very reasonable of course, and the systems and environments are all secured as they should be.  Even so, this is the type of place a breach happens: a port is left open, test data is left on a remote machine, or any number of other things happen to even careful, conscientious people.

Whether it is test or development systems, or an analytics project that is delayed or never happens while the PHI is scrubbed, this represents a false choice.  We have been dealing with this problem formally for 20 years, and realistically since before people started misspelling the HIPAA acronym.  Technology is getting faster, better and cheaper all the time.

So why is this so hard? 

FULL DISCLOSURE: My company, The EDI Project™ has developed a tool to address this problem and I’m not a disinterested party in my recommendation.

Link to Part 1: Minimum Necessary or Optional? 

Link to Part 3: Shouldn’t This Be Easier By Now? 

Are We Disregarding Privacy Rules Because They Are Hard? Part 1 of 3

Minimum Necessary or Optional?

One of the things that continues to excite me about the world of healthcare informatics is the opportunity to reduce the cost of care while providing better care and better overall outcomes.  People often think in terms of a zero-sum game, where reducing the cost of care always reduces quality and outcomes.  But the promise of technology is that it can make us more efficient; a person can dig a hole faster, and to more precise dimensions, with a shovel than with bare hands.


Having the right tool for the right job is important. . . 

 

Much attention has been paid of late to hospital readmission rates.  Hospital stays are expensive, and if a patient is sufficiently recovered from whatever put them there to begin with, they are usually eager to get home and continue recovering in a more familiar environment.  Both parties – the hospital and the patient – often want the stay to end as soon as possible.

But if the patient is released too early, it is always bad news.  At best, they must be re-admitted – often through the emergency room.  Worse, they could relapse and not make it back to the hospital at all.  Outcomes for patients who are released too early are both worse and more expensive than if they had simply stayed in the hospital.

Certainly, trusting our doctors is a first step, but they are often very busy and under the same pressures to discharge a patient discussed above.  There are simply too many variables to be perfect at this when practicing medicine.  While experience is a doctor’s most potent weapon, they can only draw from the experience available to them.  Patterns do exist, however, that indicate situations where additional caution is warranted when deciding to release a patient.  No one doctor could ever amass enough experience to recognize them all.

Today, there are powerful analytic tools that can take massive amounts of data and sift through it looking for patterns that simply would not or could not be seen otherwise.  Rather than taking a sample scenario and examining the data to see if that scenario is more likely to result in a readmission, these tools can compare millions or billions of situations to each other at the same time.  The result is finding comorbidities or patterns of care that no one would have thought to test on their own.

These types of comparisons were computational fairy tales just a few years ago but can be done today because of advancements in parallel processing.  The bad news is that no matter how good the tools are, they are only as good as the data they have to examine in the first place. . . What if no one can get the data?

Minimum Necessary is the standard defined in the HIPAA regulations:  When using or disclosing protected health information or when requesting protected health information from another covered entity, a covered entity must make reasonable efforts to limit protected health information to the minimum necessary to accomplish the intended purpose of the use, disclosure or request. 

 

Next: Part 2: A False Choice. . .  

Part 3: Shouldn’t This Be Easier By Now? 

Risk Adjustment Deletes Are Hard

A lot of questions are being asked about Medicare Advantage and Risk Adjustment lately, very likely due to the news on UnitedHealth and alleged over-billing.  While there are great conversations to be had about the proper nature of comprehensive chart reviews and best practices surrounding them, there has also been a renewed focus on the current state of the Encounter Data Processing System (EDPS) and the difficulties involved with deleting diagnosis codes.

The process is ugly due to a very complicated submission process, the difficulty of identifying what should and shouldn’t be deleted, and the chaotic matching health plans have to go through to mirror deletes between RAPS and EDPS submissions.

A delete by any other name. . .

A CMS delete isn’t really a delete per se.  It is removing a validly submitted code from consideration for risk adjustment.  One might say, “Well hold on, isn’t CMS in charge of determining what is risk adjustable in the EDPS process?”  You’d be right, except that CMS will still penalize the plan if a code THEY accepted shouldn’t have been.  How could this happen?  A million different ways, but consider:

  • Member has a sniffle and goes to the doctor.
  • Plan gets claim for an office visit and a full health evaluation is done.
  • In addition to diagnosing the cold, one of the diagnoses submitted is “Acute Myocardial Infarction” because the member had a heart attack two years ago.  The correct code would have been “history of AMI” instead.
  • Plan’s claims process pays the claim because office visits can be paid for just about any diagnosis and a valid one is there for the cold.  Even if the plan asks the doctor to correct and resubmit, it is unlikely to happen (the office is super busy, the claim is already paid, the resubmission gets rejected as a duplicate, etc.).
  • CMS accepts the code through the EDPS system even though the plan had a filter in place to make sure it was not submitted through the RAPS process.  EDPS does not allow the plan to “edit” the submission or filter results.
  • The submitted code then needs to be deleted from EDPS (but not until after it has been accepted).

So basically, plans are responsible even though CMS is determining what is risk adjustable in the EDPS process.

How Did We Get Here?

Many health plans and vendors took a “store and forward” approach to implementing Encounter Data submissions.  Basically, the store and forward approach takes data from a source system (e.g., claims), formats it as a message, and forwards it to CMS.  This might be fine if there were no other encounter sources (like charts, supplemental data, etc.) and no other submission methods.  However, the plans are also getting data into their RAPS process and sending RAPS submissions to CMS.  Most plans kept their legacy RAPS process in place as a separate system, assuming it was going away as CMS claimed.  The extract used in the RAPS process only asked for the data it needed from the source system.  This leads to two very different data stores doing a similar job.

There are a lot of problems that will manifest if two separate systems for submitting risk data to CMS are used long term.  They include having to correct problems in data twice (a missing NPI in one encounter now must be addressed twice – likely by separate teams), differences when data makes it to one system and not another (charts didn’t make it into the EDPS data store, but are in the RAPS data store) and general differences due to the content of the data (limited data set in RAPS vs. rich data set in EDPS).  While we could spend a lot of time on each of those areas and others, the challenge of ensuring the exact same risk data is reflected in both submissions to CMS is one of the most complicated, and the worst of it might be the delete process.

Technical Hurdles

For the purposes of this discussion, we’ll put aside the fact that CMS took an overly complicated and non-standard approach to submitting deletes via EDI.  However, the store and forward approach makes things a lot harder even if you know exactly what to delete.  Store and forward, in a nutshell, is: get stuff (encounter data) and forward that stuff on to CMS once it is formatted as a message.  Following this flow, what “stuff” is the system supposed to “get” in order to forward a message telling CMS to delete a code?  A new process has to be created to look through existing submissions for things to delete, and it needs to do complex matching and status queries to even have a chance of sending a delete.  But even if all that can be pulled off, what should be deleted?

Risk Adjustment Delete Sources (EDPS)

There are many sources of deletes, and each is difficult to act on for the reasons above, plus unique challenges of its own.  Here are a few of the sources to consider:

Mirror your RAPS deletes – seems like the most obvious one. If a plan saw fit to delete a code from their RAPS submission for whatever reason, it should also be deleted in the EDPS data.  Tough to do in practice.

If I were to hand a store and forward system the RAPS deletes, it would have no idea what to do with them.  A diagnosis cluster from RAPS does not equal an encounter from EDPS, so just finding the right records involves complicated queries; it is not as though there is a claim number in the RAPS data.  Plus, in most cases missing even one is just as bad as missing ten.  When are you done?  Hard to say, because one RAPS delete might correspond to many EDPS submissions that each need a delete sent.
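To give a feel for the matching problem (the field names here are assumptions, not the actual RAPS or EDPS layouts), the fan-out from one RAPS delete to many EDPS encounters looks roughly like this:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class RapsDelete:            # roughly what the RAPS side can tell you
        member_id: str
        from_dos: date
        thru_dos: date
        dx: str                  # diagnosis code being deleted

    @dataclass
    class EdpsEncounter:         # roughly what the EDPS submission history holds
        icn: str                 # CMS internal control number of the accepted encounter
        member_id: str
        dos: date
        dx_codes: tuple

    def find_edps_delete_targets(raps_delete, encounters):
        # One RAPS delete can fan out to many accepted encounters, each of which
        # needs its own delete tied to its ICN.
        return [
            enc for enc in encounters
            if enc.member_id == raps_delete.member_id
            and raps_delete.from_dos <= enc.dos <= raps_delete.thru_dos
            and raps_delete.dx in enc.dx_codes
        ]

Even this toy version assumes the plan keeps a queryable history of accepted encounters with their ICNs, which a pure store and forward design does not.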

Delete all codes that were filtered out by RAPS and never sent. How far a plan should go with this is a tough question.  In theory, CMS’s risk filter should heavily overlap with a plan’s own filter, so there is no need to delete codes that CMS is not using for risk adjustment anyway.  Then again, CMS has had a lot of problems processing encounters and returning the MAO-004 reports that show what is risk adjustable.  Plans certainly shouldn’t rely on CMS being able to follow its own process.

Ongoing loop back to check for corrected submissions needing deletions. There are a LOT more errors and rejections introduced by the switch to EDPS compared to RAPS.  It is not unheard of for health plans to have error queues containing 100k errors.  The good news is that plans are addressing these errors.  The bad news is that reintroducing previously deleted codes is now a real risk.

RAF score comparison: MAO-004 results vs. RAPS results. Even after doing all that work, doubling back to the risk scores will yield differences.  Comparing the calculated Risk Adjustment Factor (RAF) score from one submission process to the other will uncover discrepancies.  This is a difficult place to operate, however, due to the unreasonable lag between submission of EDPS data to CMS and the return of the MAO-004.

Day-forward deletes. When plans consolidate to a single data store for both submission types, any filters, delete logic or chart review data should be reflected in the outbound data on an ongoing basis, with the appropriate action taken based on the submission source.  RAPS filter says don’t send?  EDPS should automatically mark the code for a subsequent delete after submission (especially if the MAO-004 comes back showing it as risk adjustable), without all the matching and running around it takes to track these down after the fact.
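A minimal sketch of that day-forward decision, assuming a consolidated store and a callable version of the plan’s own RAPS filter (both assumptions of mine):

    def plan_dx_actions(encounter_dx_codes, raps_filter_allows):
        # Decide, per diagnosis on an outbound encounter, what the EDPS side should do.
        # raps_filter_allows(dx) -> True if the plan's own risk filter would have
        # let this code through to RAPS.
        actions = {}
        for dx in encounter_dx_codes:
            if raps_filter_allows(dx):
                actions[dx] = "submit"              # normal EDPS submission
            else:
                # EDPS cannot be filtered, so the code goes out with the claim,
                # but a delete is queued automatically for after acceptance
                # (or once the MAO-004 shows it as risk adjustable).
                actions[dx] = "submit_then_delete"
        return actions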

The worst news?

Yes, deleting diagnoses from encounter data for Medicare Advantage plans is time consuming, complicated and error prone . . . it is also mandatory.  Due to the issues above, many insurers are putting themselves at risk right when the government has renewed its focus on MAOs’ alleged over-reporting of risk.

Need more help or want to discuss this further?  Drop me a note.  I’d love to talk about your specific experiences, insights or challenges.

 

Industry Memo on Medicare Filtering: A To Do List

By now, MAO plans have had about a month to read and understand the July 21, 2015 “Industry Memo: Medicare Filtering” letter published by CMS.  The letter contained clarifications and confirmations of previously disclosed information as well as new information on the proposed rules CMS will use to conduct risk filtering.  Some highlights of the letter:

  • Diagnoses received from Encounter Data Processing System submissions will be used to calculate risk adjustment dollars for the 2015 payment year (2014 Dates of Service (DOS)), as previously disclosed.
  • CMS will apply their own filter to Encounter Data received from MAOs to determine if a diagnosis is risk adjustable.
  • After confirming an appropriate place of service, CMS will use a risk filter that is CPT-only for professional encounters (no specialty codes will be considered).  The codes for 2014 DOS can be found here.
  • Institutional Inpatient encounters will have all diagnoses accepted as long as the bill type is 11x or 41x, without treatment code filtering.
  • Institutional Outpatient encounters will also be filtered on bill type (8 types accepted) and will additionally be subject to the same CPT/HCPCS filtering as professional encounters.
  • Risk adjustment calculations for PY 2015 will use Encounter data as a source of additional codes.
  • Risk adjustment calculations for PY 2016 will be a weighted average of 90% RAPS and 10% EDPS scores.
  • Plans are responsible for deleting diagnosis codes, via chart reviews, from both RAPS and the encounter data collected and filtered by CMS.
  • The submission deadline for 2014 DOS is February 1st, 2016.

Some thoughts and recommendations

The wording of the approach for PY 2015 tells us that risk adjustment dollars won’t go DOWN as a result of the introduction of EDPS data.  While it is true that payments can only go up with the addition of EDPS diagnoses, every additional EDPS-sourced HCC represents additional RADV risk beyond what the plan allows today through its own risk filtering efforts.  2015 DOS / PY 2016 data will use a 90/10 weighted average on payments, meaning there can be both upside and downside to risk adjustment revenue.
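In other words (illustrative numbers only): a member whose RAPS-derived RAF is 1.20 but whose encounter-derived RAF is 1.05 would be paid on a blended 0.9 × 1.20 + 0.1 × 1.05 = 1.185 for PY 2016, so the encounter side can now pull revenue down as well as push it up.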

The biggest problem, however, is that there is a lot to do and not much time to do it.  Counting back from February 1, 2016, there are five months.  Plans will have to identify differences and decide whether those differences need to be deleted.

  • Plans should not wait for CMS to provide the MAO-004 report to indicate what codes have been used for risk adjustment from encounter data under the new rules.  It will take time to approve the proposed rules and more time to start applying the filter and actually send out the backlog of MAO-004 reports.
    • Start tracking, at the very least, the diagnoses submitted per encounter for 2014 DOS submissions.  Tracking individual diagnoses would be even better.
    • Apply the proposed CMS CPT filter to come up with a potential list of Encounter Data HCCs per encounter.
    • Use Encounter data HCCs to build a table of Encounter Data Member HCCs.
    • Compare Encounter data Member HCCs to RAPS Member HCCs and identify differences as top priorities for review (a minimal comparison sketch follows this list).  There may not be time or resources to delete every diagnosis submission difference, but if the difference does not involve an actual pick-up, the plan is a bit less exposed.
    • Use your own results as a known-good baseline to compare against the MAO-004 when it is finally delivered, to ensure CMS is applying the filter correctly.
    • Mine the RAPS process for automatic deletes and ensure these are done on both sides (e.g., professional AMI codes like 410.xx).
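As referenced above, the member-level HCC comparison itself is simple enough to sketch, assuming the plan can already derive member-to-HCC sets from each pipeline (which is the hard part):

    def hcc_differences(edps_hccs, raps_hccs):
        # edps_hccs / raps_hccs: dict of member_id -> set of HCC codes derived
        # from each submission path. Returns, per member, the HCCs only the
        # encounter data would pick up (new RADV exposure) and the HCCs only
        # RAPS has (possible gaps).
        diffs = {}
        for member in set(edps_hccs) | set(raps_hccs):
            edps_only = edps_hccs.get(member, set()) - raps_hccs.get(member, set())
            raps_only = raps_hccs.get(member, set()) - edps_hccs.get(member, set())
            if edps_only or raps_only:
                diffs[member] = {"edps_only": edps_only, "raps_only": raps_only}
        return diffs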

Another big problem has to do with the EDI process to be used to submit chart review deletes.  It is technically difficult, cumbersome to track and still unclear in some areas.

  • CMS has specified that chart review deletes use a REF segment to indicate that the diagnosis codes listed should be treated as deletes.  At the very least, this REF segment means that a chart review would need to be either an “ADD” or a “DELETE”.  While previous CMS presentations show examples of both in the same transaction, those examples are not X12 5010 compliant and I assume have since been abandoned.
  • These deletes are not like RAPS deletes, which delete at the member level.  Instead they are tied to specific encounters.  This is a problem because. . .
  • There is typically a one-to-many relationship between a single chart review and the encounters it covers.  If a plan can only delete codes tied to a specific ICN, many chart review deletes will have to be sent to actually delete a diagnosis.
    • Example: A chart is reviewed that spans eight encounters.  While the doctor’s notes indicate a history of a heart attack, the medical biller each time coded 410.01 (AMI, initial episode) instead of the 412.xx that would indicate the patient has a history of MI.  The chart review uncovered this mistake and recommended the 410.xx be deleted and the 412 added.  To do this, at least 9 chart review transactions would have to be sent: 8 of them matched to 8 different ICNs to delete the 410.xx codes, and at least one more to add the 412.xx.
  • Clarification on the EDI problems and Chart review delete process has been requested from CMS.

What are your thoughts?  What is your plan doing to address these issues?  Are there important things I missed or got wrong?  What has your analysis of the CPT filter turned up as a concern?  I’ll monitor comments closely and respond quickly.