Are We Disregarding Privacy Rules Because They Are Hard? Part 3 of 3

Shouldn’t This Be Easier By Now?

Eventually, someone in Information Technology or Database Administration gets asked to extract data from a PHI-rich line-of-business system or data warehouse, but to deliver it as de-identified data.  Almost any data extraction approach allows data to be masked, redacted, suppressed, or even randomized in some way.  This type of functionality can give us de-identified, but often useless, data for testing, analytics, or development.

Since my company, The EDI Project™, was founded in 2001, we have been asked many times to de-identify or anonymize data for testing and development work.  Each time, we have written custom code for that specific project.  This code is never transferable to another customer environment and must be redone for every scenario.  If we were writing this code every time, we figured, there had to be other companies with the same problem.

It turns out there are tools on the market that extract data from a line-of-business system or data warehouse and anonymize it so the result is useful, not just de-identified into useless “John Doe” records.

For example, one of the largest integration engines on the market offers this functionality as a $250,000 add-on to its existing, very expensive suite of products.  It is complicated to learn and use, and it requires custom code if multiple systems must be anonymized the same way (e.g., enrollment, eligibility, and claims data all need matching but anonymized names and dates of birth).

There are other tools in this space that sniff through vast data stores for PHI and attempt to automagically de-identify the data.  Usually this is a masking or redaction approach, but even when it is not, many fields are marked as “suspect PHI” and left for human review.  I can’t blame them either.  While Patient Name or Date of Birth fields are easy enough to identify, free-form fields can be a nightmare.  Either way, these tools are usually very expensive and often leave the job half done.

There are a lot of cases where a certain file, like an EDI 837 claims file, or maybe an enrollment database, has to be de-identified for a test system.  Perhaps it is an ongoing extract from a data warehouse for an analytics study.  This is where, most of the time, the work is either not done (exemption granted) or custom code is deployed (expensive and time consuming).  But technology is supposed to be faster, better, and cheaper, isn’t it?

Since we are the guys who are often asked to do this work, we looked at our experience extracting health care data and designed the tool we would want to use.  No compromises.  We wanted it to be easy to learn and use, and powerful enough to handle big-data environments without becoming a bottleneck to any extraction work.  It also had to anonymize data across multiple sources so that the matching, de-identified data maintained record integrity (i.e., all the records for one patient in the PHI data sources had corresponding records in the de-identified data sources).  Oh yeah, and since the main project is already expensive enough, the tool should be inexpensive.

People have been using ETL (Extract, Transform, Load) tools for decades and are familiar with how they work.  Thinking about the “T” (Transform), a common thing to do would be to change a date from MMDDYYYY format to DDMMYYYY format.  This type of common transformation logic doesn’t have to be rewritten every time you extract from a new source.  The integrator just picks it from a list when doing mapping work.  Anonymizing PHI should be that simple as well.
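To make that concrete, here is a minimal sketch in Python of the “pick it from a list” idea: a small library of named, reusable transforms.  The names and structure are illustrative, not any vendor’s actual API.

```python
# A reusable transform library: the integrator picks a transform by name
# during mapping work instead of rewriting the logic for each new source.
from datetime import datetime

TRANSFORMS = {
    # The example above: reformat a date from MMDDYYYY to DDMMYYYY.
    "date_mmddyyyy_to_ddmmyyyy":
        lambda v: datetime.strptime(v, "%m%d%Y").strftime("%d%m%Y"),
    "uppercase": str.upper,
    "trim": str.strip,
}

def apply_transform(name: str, value: str) -> str:
    """Look up a named transform and apply it to one field value."""
    return TRANSFORMS[name](value)

print(apply_transform("date_mmddyyyy_to_ddmmyyyy", "01152018"))  # 15012018
```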

Functions and drop-downs need to be available to anonymize every kind of PHI and handle each according to the special properties of that type of data.  Names are anonymized differently than zip codes.  More specifically, the anonymization routine for a Date of Birth (DOB) is handled differently than one for a Date of Service (DOS).  The software should already know this; it should not need to be defined by the integration team or a subject matter expert.
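As one illustration of what “the software should already know” means for dates, here is a hedged sketch (the principle, not the internals of any product): the same per-patient shift is applied to the DOB and to every DOS, so ages stay roughly right and no Date of Service can land before the Date of Birth.

```python
# Shift all of a patient's dates by one stable, key-derived offset: ages
# and intervals between services are preserved, and DOB < DOS ordering
# survives anonymization. The key and field names are assumptions.
import hashlib
from datetime import date, timedelta

SECRET = b"rotate-me"  # project-specific key, never shipped with the data

def patient_offset_days(patient_id: str, max_days: int = 364) -> int:
    """Derive a stable, non-reversible per-patient date shift."""
    digest = hashlib.sha256(SECRET + patient_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % max_days + 1

def anonymize_dates(patient_id: str, dob: date, dos: date):
    shift = timedelta(days=patient_offset_days(patient_id))
    return dob - shift, dos - shift  # same shift keeps ordering and intervals

print(anonymize_dates("M1001", date(1950, 1, 1), date(2018, 1, 1)))
```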

As a result, we developed and launched our own anonymization engine called “Don’t Redact!™”.  We’re integrators, so we built the tool an integrator would want in order to get this done quickly and easily.  Someone with integration-tool experience can learn it in an afternoon, and your first sizeable anonymization effort can be deployed within a day or so of learning the ropes.

In the spirit of no compromises and disruptive technology, the Don’t Redact!™ Anonymization Engine is $25,000.

While The EDI Project™ is a professional services organization and we would be happy to deploy the software for you or set up your first live anonymized environment, the tool is well thought out and easy enough that you won’t need any services at all.

Want to find out more?  http://theediproject.com/anonymization.html

Part 1: Minimum Necessary or Optional   

Part 2: A False Choice. . . 

Are We Disregarding Privacy Rules Because They Are Hard? Part 2 of 3

A False Choice

Imagine you work at a health insurance company.  Your title is “Claims Examiner,” and you spend each day deciding whether bills sent by doctors for the insurance company’s members should be paid.  You must be sure the treatments match the diagnosis, the member is eligible for the payment, and the amount being asked for is correct.  This work is performed in a “Claims System.”  Claims systems were among the first widespread uses of computers in business and have been around for 40 years.  They are the lifeblood of a health insurance company, and seemingly all of its other systems are related to them.  The data the Examiner uses to pay or adjust the bills doesn’t need to be obscured in any way because the work is part of TPO (treatment, payment, or health care operations).

A covered entity may disclose PHI (Protected Health Information) to facilitate treatment, payment, or health care operations (TPO) without a patient’s express written authorization.  Any other disclosure of PHI requires the covered entity to obtain written authorization from the individual.  However, whenever a covered entity discloses any PHI, it must make a reasonable effort to disclose only the minimum necessary information required to achieve its purpose.

When we talk about privacy and security of data, even though claims systems hold the most information about a patient/member, they are rarely, if ever, the place where a breach of PHI takes place.  Instead, breaches happen at the edges.  New systems being stood up, test and development systems, and ancillary data stores for things like analytics seem to be where PHI breaches tend to happen.  In most cases, however, these systems really should not have had PHI at all.

So why did these systems have PHI to begin with?  Usually it is because an exemption was granted.

This isn’t a story of malice, indifference or even incompetence.  It is a story of real life choices that are all very reasonable.

Imagine a new system being brought online for claims or another vital function.  There are outside vendors and subject matter experts helping employees ensure the environment will be capable and reliable when it replaces the existing system.  But if all the data being used to test is simple and looks like this:

 “John Doe, DOB 1/1/1950, DOS 1/1/2018, 15 Minute Office Visit, Common Cold”

the team will never uncover all the potential problems that come with complicated, real world scenarios.

While the organization knows where the PHI is in the data, just de-identifying the real data properly can be a six-month project on its own.  How would one test whether the system can find duplicates if names are randomly replaced in the test data?  How can a test Examiner check eligibility if the names in the eligibility file are randomly replaced in a different way than in the test claims data?  If dates are randomized, how would claims be paid for Dates of Service (DOS) that occur before the Date of Birth (DOB)?
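One common way out of that trap, sketched below with made-up name pools and a placeholder key, is to derive every replacement deterministically from the real identifier: the eligibility file and the claims file then get the same fake name for the same member, so duplicate-detection and eligibility tests still work.

```python
# Deriving the pseudonym from a keyed hash of the real member ID means any
# extract, run at any time, produces the same replacement for the same
# member, without keeping a lookup table of real-to-fake values around.
import hashlib
import hmac

KEY = b"per-project-secret"  # placeholder; a real key needs real management
FIRST = ["Alma", "Ben", "Carla", "Dev", "Elena", "Farid"]
LAST = ["Stone", "Rivera", "Okafor", "Lindqvist", "Patel", "Moreau"]

def pseudonym(member_id: str) -> str:
    h = hmac.new(KEY, member_id.encode(), hashlib.sha256).digest()
    return f"{FIRST[h[0] % len(FIRST)]} {LAST[h[1] % len(LAST)]}"

# The same member ID yields the same fake name in every file.
print(pseudonym("M1001"), pseudonym("M1001") == pseudonym("M1001"))
```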

Usually an exemption is granted for the testing of the new system, allowing previously run, real-world PHI data to be used.  This is very reasonable, of course, and the systems and environments are all secured as they should be.  Even so, this is the type of place where a breach happens.  A port is left open, test data is left on a remote machine, or any number of other things happen to even careful, conscientious people.

Whether for test or development systems, or for an analytics project that is delayed or never happens while the PHI is scrubbed, this represents a false choice.  We have been dealing with this problem formally for 20 years, and realistically since before people started misspelling the HIPAA acronym.  Technology is getting faster, better, and cheaper all the time.

So why is this so hard? 

FULL DISCLOSURE: My company, The EDI Project™ has developed a tool to address this problem and I’m not a disinterested party in my recommendation.

Link to Part 1: Minimum Necessary or Optional? 

Link to Part 3: Shouldn’t This Be Easier By Now? 

Are We Disregarding Privacy Rules Because They Are Hard? Part 1 of 3

Minimum Necessary or Optional?

One of the things that continues to excite me about the world of healthcare informatics is the opportunity to reduce the cost of care while providing better care and better overall outcomes.  Often people think in terms of a zero-sum game, where reducing the cost of care always reduces care and outcomes.  But the promise of technology is that it can make us more efficient; a man can dig a hole faster, and with more precise dimensions, with a shovel than with his bare hands.


Having the right tool for the right job is important. . . 

 

Much attention has been paid of late to re-admission rates for hospitals.  Hospital stays are expensive, and if a patient has sufficiently recovered from whatever put them there to begin with, they are usually eager to get home to continue recovering in a more familiar environment.  Both parties, the hospital and the patient, often want the stay to end as soon as possible.

But if the patient is released too early, it is always bad news.  At best, they must be re-admitted, often through the emergency room.  Worse, they could relapse and not make it back to the hospital at all.  Outcomes for patients who are released too early are both worse and more expensive than if they had stayed in the hospital instead of being released.

Certainly, trusting our doctors is a first step, but they are often very busy and under the same pressures to release a patient discussed above.  There are simply too many variables to be perfect at this when practicing medicine.  While experience gives a doctor her most potent weapon, she can only draw from the experience available to her.  Patterns do exist, however, that signal situations where additional caution is warranted when deciding to release.  No one doctor could ever amass enough experience to recognize them all.

Today, there are powerful analytic tools that can take massive amounts of data and sift through it looking for patterns that simply could not be seen otherwise.  Rather than taking a sample scenario and examining the data to see if that scenario is more likely to result in a readmission, these tools are capable of comparing millions or billions of situations to each other at the same time.  The result is finding co-morbidities or patterns of care that no one would have ever thought to test on their own.

These types of comparisons were computational fairy tales just a few years ago but can be done today because of advancements in parallel processing.  The bad news is no matter how good the tools are, they are only as good as the data they have to examine in the first place. . . What if no one can get the data?

Minimum Necessary is the standard defined in the HIPAA regulations: when using or disclosing protected health information, or when requesting it from another covered entity, a covered entity must make reasonable efforts to limit protected health information to the minimum necessary to accomplish the intended purpose of the use, disclosure, or request.

 

Next: Part 2: A False Choice. . .

Part 3: Shouldn’t This Be Easier By Now? 

Risk Adjustment Deletes Are Hard

 

A lot of questions are being asked about Medicare Advantage and Risk Adjustment lately, very likely due to the news about UnitedHealth and alleged over-billing.  While there are great conversations to be had about the proper nature of comprehensive chart reviews and the best practices surrounding them, there has also been a renewed focus on the current state of the Encounter Data Processing System (EDPS) and the difficulties involved in deleting diagnosis codes.

The process is ugly due to a very complicated submission process, difficulties in identifying what should and shouldn’t be deleted, and the chaotic matching process health plans have to go through to mirror deletes across RAPS and EDPS submissions.

A delete by any other name. . .

A CMS delete isn’t really a delete per se.  It is the removal of a previously submitted and accepted code from consideration for risk adjustment.  One might say, “Well, hold on, isn’t CMS in charge of determining what is risk adjustable in the EDPS process?”  You’d be right, except that CMS will still penalize the plan if a code is accepted by THEM that shouldn’t have been.  How would this happen?  A million different ways, but consider:

  • Member has a sniffle and goes to the doctor.
  • Plan gets claim for an office visit and a full health evaluation is done.
  • In addition to diagnosing the cold, one of the diagnoses submitted is “Acute Myocardial Infarction” (AMI) because the member had a heart attack two years ago.  This coding mistake should have been the code for “history of AMI” instead.
  • The plan’s claims process pays the claim, because office visits can be paid for just about any diagnosis and a valid one is there for the cold.  Even if the plan asks the doctor to correct and resubmit, it is unlikely to happen (super busy, already paid, resubmission rejected as a duplicate, etc.).
  • CMS accepts the code through the EDPS system even though the plan had a filter in place to make sure it was not submitted through the RAPS process.  EDPS does not allow the plan to “edit” the submission or filter results.
  • The submitted code then needs to be deleted from EDPS (but not until after it was submitted  and accepted).

So basically, plans are responsible even though CMS is determining what is risk adjustable in the EDPS process.

How Did We Get Here?

Many health plans and vendors took a “store and forward” approach to implementing encounter data submissions.  Basically, the store-and-forward approach takes data from a source system (e.g., claims) and forwards the formatted message to CMS.  This might be fine if there were no other encounter sources (like charts, supplemental data, etc.) and no other submission methods.  However, plans are also getting data into their RAPS process and sending RAPS submissions to CMS.  The majority of plans kept their legacy RAPS process in place as a separate system, assuming it was going away as CMS claimed.  The extract used in the RAPS process only pulled the data it needed from the source system.  This leads to two very different data stores doing a similar job.

There are a lot of problems that will manifest if two separate systems for submitting risk data to CMS are used long term.  They include having to correct problems in data twice (a missing NPI in one encounter now must be addressed twice, likely by separate teams), differences when data makes it into one system and not the other (charts didn’t make it into the EDPS data store but are in the RAPS data store), and general differences in the content of the data (a limited data set in RAPS vs. a rich data set in EDPS).  While we could spend a lot of time on each of those areas and others, the challenge of ensuring the exact same risk data is reflected in both submissions to CMS is one of the most complicated, and the worst of it might be the delete process.

Technical Hurdles

For the purpose of this discussion, we’ll put aside the fact that CMS took an overly complicated and non-standard approach to submitting deletes via EDI.  Even so, the store-and-forward approach makes things a lot harder, even when you know exactly what to delete.  The store-and-forward approach in a nutshell is: get stuff (encounter data), then forward that stuff on once it is formatted as a message to CMS.  Following this flow, what “stuff” is the system supposed to “get” so that it can be forwarded as a message letting CMS know to delete a code?  A new process needs to be created to look through existing submissions for things to delete.  This process needs to do complex matching and status queries to even have a chance of sending a delete.  But even if all that can be pulled off, what should be deleted?

Risk Adjustment Delete Sources (EDPS)

There are many sources of deletes, and each is difficult to act on, both for the reasons above and for challenges unique to each source.  Here are a few, but not all, of the sources to consider:

Mirror your RAPS deletes.  This seems like the most obvious one: if a plan saw fit to delete a code from its RAPS submission for whatever reason, it should also be deleted from the EDPS data.  Tough to do in practice.

If I were to hand a store-and-forward system the RAPS deletes, it would have no idea what to do with them.  A Diagnosis Cluster from RAPS does not equal an encounter from EDPS, so just finding the right submissions involves complicated queries (a rough sketch follows).  It is not as if there is a claim number in the RAPS data.  Plus, if you miss even one, it is just as bad as missing ten in most cases.  When are you done?  Hard to say, because one RAPS delete might correspond to many EDPS submissions that each need a delete sent.
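To give a feel for the problem, here is that sketch of the matching a plan is forced into; the record shapes are simplified stand-ins, not the real file layouts:

```python
# There is no shared claim number between a RAPS Diagnosis Cluster and an
# EDPS encounter, so the match has to run on member, service dates, and
# diagnosis, and one RAPS delete can fan out to many EDPS submissions.
def matches(raps_delete: dict, encounter: dict) -> bool:
    return (
        encounter["member_id"] == raps_delete["member_id"]
        and encounter["from_dos"] <= raps_delete["through_dos"]
        and encounter["through_dos"] >= raps_delete["from_dos"]
        and raps_delete["diagnosis"] in encounter["diagnoses"]
    )

def encounters_to_delete(raps_delete: dict, encounters: list) -> list:
    return [e for e in encounters if matches(raps_delete, e)]

raps_delete = {"member_id": "M1", "from_dos": "2015-01-05",
               "through_dos": "2015-01-05", "diagnosis": "410.01"}
encounters = [
    {"icn": "ICN1", "member_id": "M1", "from_dos": "2015-01-05",
     "through_dos": "2015-01-05", "diagnoses": {"410.01", "401.9"}},
    {"icn": "ICN2", "member_id": "M1", "from_dos": "2015-03-02",
     "through_dos": "2015-03-02", "diagnoses": {"410.01"}},
]
print([e["icn"] for e in encounters_to_delete(raps_delete, encounters)])  # ['ICN1']
```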

Delete all codes that were filtered out by RAPS and never sent.  How far a plan should go with this is a tough question.  In theory, CMS’s risk filter should overlap heavily with a plan’s own filter, so there should be no need to delete all the codes that CMS is not using for risk adjustment.  Then again, CMS has had a lot of problems with processing encounters and returning MAO-004 reports showing what is risk adjustable.  Plans certainly shouldn’t rely on CMS being able to follow its own process.

Ongoing loop-back to check for corrected submissions needing deletions.  There are a LOT more errors and rejections introduced by the switch to EDPS compared to RAPS.  It is not unheard of for health plans to have error queues containing 100k errors.  The good news is that plans are addressing these errors.  The bad news is that the potential to reintroduce previously deleted codes is now a real problem.

RAF score comparison: MAO-004 results vs. RAPS results.  Even after doing all that work, doubling back to the risk scores will yield differences.  Comparing the calculated Risk Adjustment Factor (RAF) score from one submission process to the other will uncover discrepancies.  This is a difficult place to operate, however, due to the unreasonable lag between submission of EDPS data to CMS and the return of the MAO-004.

Day-forward deletes.  When plans consolidate to a single data store for both submission types, any filters, delete-code logic, or chart review data should be reflected in the outbound data on an ongoing basis.  The system simply takes the appropriate action based on the submission source.  RAPS filter says don’t send?  EDPS should mark the code for a subsequent delete after submission automatically (especially if the MAO-004 comes back as risk adjustable), without all the matching and running around it takes to track these down after the fact.

The worst news?

Yes, deleting diagnoses from encounter data for Medicare Advantage plans is time consuming, complicated, and error prone . . . and it is also mandatory.  Due to the issues above, many insurers are putting themselves at risk right when the government has renewed its focus on MAOs’ alleged over-reporting of risk.

Need more help or want to discuss this further?  Drop me a note.  I’d love to talk about your specific experiences, insights or challenges.

 

Industry Memo on Medicare Filtering: A To Do List

By now, MAO plans have had about a month to read and understand the July 21, 2015 “Industry Memo: Medicare Filtering” letter published by CMS.  The letter contained clarifications and confirmations of previously disclosed information, as well as new information on the proposed rules CMS will use to conduct risk filtering.  Some highlights of the letter:

  • Diagnoses received from Encounter Data Processing System submissions will be used to calculate risk adjustment dollars for the 2015 payment year (2014 Dates of Service (DOS)), as previously disclosed.
  • CMS will apply their own filter to Encounter Data received from MAOs to determine if a diagnosis is risk adjustable.
  • After confirming an appropriate place of service, CMS will use a risk filter that is CPT-only for professional encounters (no specialty codes will be considered).  The codes for 2014 DOS can be found here.
  • Institutional inpatient encounters will have all diagnoses accepted, as long as they are Bill Type 11x or 41x, without treatment-code filtering.
  • Institutional outpatient encounters will also be filtered on bill type (8 types accepted), but will additionally be subject to the CPT/HCPCS filtering used for professional encounters.
  • Risk adjustment calculations for PY 2015 will use Encounter data as a source of additional codes.
  • Risk adjustment calculations for PY 2016 will be a weighted average of 90% RAPS and 10% EDPS scores.
  • Plans are responsible for deleting diagnosis codes from both RAPS and the Encounter data collected and filtered  by CMS by using chart reviews.
  • The submission deadline for 2014 DOS is February 1st, 2016.

Some thoughts and recommendations

The wording of the approach for the 2015 PY tells us that risk adjustment dollars won’t go DOWN as a result of the introduction of EDPS data.  While it is true that payments can only go up with the addition of EDPS diagnoses, every additional EDPS-sourced HCC represents additional RADV risk beyond what the plan allows today through its risk filtering efforts.  2015 DOS / 2016 PY data will use a 90/10 weighted average on payments, meaning there can be both upside and downside to risk adjustment revenue.

The biggest problem, however, is that there is a lot to do and not much time to do it.  Counting back from February 1, 2016, there are five months.  Plans will have to identify differences and decide whether those differences need to be deleted.

  • Plans should not wait for CMS to provide the MAO-004 report to indicate what codes have been used for risk adjustment from encounter data under the new rules.  It will take time to approve the proposed rules and more time to start applying the filter and actually send out the backlog of MAO-004 reports.
    • Start tracking, at the very least, the diagnoses submitted per encounter for 2014 DOS submissions.  Tracking the status of each individual diagnosis would be even better.
    • Apply the proposed CMS CPT filter to come up with a potential list of Encounter Data HCCs per encounter.
    • Use Encounter data HCCs to build a table of Encounter Data Member HCCs.
    • Compare encounter data member HCCs to RAPS member HCCs and identify differences as top priorities for review (a minimal sketch of this comparison follows this list).  There may not be time or resources to delete every diagnosis submission difference, but if the difference does not involve an actual pick-up, the plan is a bit less exposed.
    • Use your own results as a known good to compare to the results of the MAO-004 when it is finally delivered to ensure CMS is applying the filter correctly.
    • Mine the RAPS process for automatic deletes and ensure these are done on both sides (e.g., professional AMI codes like 410.xx).
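Here is that minimal sketch of the comparison step, treating the diagnosis-to-HCC mapping as an assumed input:

```python
# Build member -> HCC sets from each submission path and diff them.
# How HCCs are derived from diagnoses is out of scope here; the
# (member, HCC) rows are assumed to come from the steps above.
from collections import defaultdict

def member_hccs(rows):
    """rows: iterable of (member_id, hcc) tuples."""
    out = defaultdict(set)
    for member_id, hcc in rows:
        out[member_id].add(hcc)
    return out

def encounter_only_hccs(edps_rows, raps_rows):
    edps, raps = member_hccs(edps_rows), member_hccs(raps_rows)
    # HCCs present in encounter data but not in RAPS are the top review priority.
    diffs = {m: hccs - raps.get(m, set()) for m, hccs in edps.items()}
    return {m: d for m, d in diffs.items() if d}

print(encounter_only_hccs([("M1", "HCC18"), ("M1", "HCC85")], [("M1", "HCC18")]))
# {'M1': {'HCC85'}}
```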

Another big problem has to do with the EDI process to be used to submit chart review deletes.  It is technically difficult, cumbersome to track and still unclear in some areas.

  • CMS has specified that chart review deletes use a REF segment to indicate that the diagnosis codes listed should be treated as deletes.  At the very least, this REF segment means that chart reviews need to be either “ADDs” or “DELETEs”.  While previous CMS presentations show examples of both in the same transaction, those examples are not X12 5010 compliant, and I assume they have since been abandoned.
  • These deletes are not like RAPS deletes that delete on a member level.  Instead they are tied to specific encounters.  This is a problem because. . .
  • There is typically a many to one relationship between a single chart review and many encounters.  If a plan can only delete codes related to a specific ICN, many chart review deletes will have to be sent to actually delete a diagnosis.
    • Example: A chart is reviewed that spans eight encounters.  While the doctor’s notes indicate only a history of a heart attack, the medical biller each time coded 410.01 (AMI, initial episode) instead of the 412 code that indicates a history of AMI.  The chart review uncovered this mistake and recommended the 410.xx be deleted and the 412 added.  To do this, at least 9 chart review transactions would have to be sent: 8 matched to 8 different ICNs to delete the 410.xx codes, and at least one more to add back the 412 (see the sketch after this list).
  • Clarification on the EDI problems and Chart review delete process has been requested from CMS.
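Here is a rough sketch of the fan-out in that example, with transaction contents reduced to a few illustrative fields:

```python
# One chart finding becomes one delete per matched ICN plus one add-back.
# Real chart review transactions are 837 segments; these dicts are just
# stand-ins to show the volume involved.
def chart_review_transactions(matched_icns, delete_dx, add_dx):
    txns = [{"action": "DELETE", "icn": icn, "dx": delete_dx} for icn in matched_icns]
    txns.append({"action": "ADD", "icn": None, "dx": add_dx})
    return txns

icns = [f"ICN{i:04d}" for i in range(1, 9)]  # the eight matched encounters
txns = chart_review_transactions(icns, "410.01", "412")
print(len(txns))  # 9 transactions to correct a single finding
```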

What are your thoughts?  What is your plan doing to address these issues?  Are there important things I missed or got wrong?  What has your analysis of the CPT filter turned up as a concern?  I’ll monitor comments closely and respond quickly.

Understanding Diagnosis Pointers

Diagnosis Pointers Explained

[Image: diagnosis pointers on a CMS 1500 paper claim form]

In the last 17 years, I have been asked a number of times to explain diagnosis pointers.  While diagnosis pointers are simple once you understand them, they are sometimes difficult to explain, especially to those outside the claims world.  The best way I can think of for now is to put together this diagnosis pointer FAQ.  If you have any additions or corrections, or would like me to answer other questions, please leave a comment.

What are Diagnosis Pointers?

Diagnosis pointers are used to describe the sometimes complex many-to-many relationships between the submitted diagnoses and the service-line treatment information on health claims and encounters.

Where did diagnosis pointers come from?  Why are diagnosis pointers used?

Pointers originated with paper claims.  As you can see from the image, there is not a lot of room left in the service-line area for diagnosis codes.  Instead, the user just enters a number that corresponds to the diagnosis code they are “pointing” to.  When EDI started to be used for claims, pointers were a natural fit for two reasons: first, to keep things the same no matter how the data was submitted (electronic or paper), and second, to keep the EDI “lean.”  Transmitting data used to be expensive and charged by the character, and using pointers meant that no diagnosis code ever had to be listed and transmitted more than once.

Why not just list all the Diagnosis at the line? 

A properly coded claim often has diagnoses that are not pointed to but were still collected during the encounter.  For a service that is somewhat generic, like an office visit, the patient may have come in because they had the flu but ended up getting a full evaluation that showed a previous lower-leg amputation and perhaps diabetes management.  While the office visit did not address the leg specifically, capturing the diagnosis is still very important.
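A small sketch of that office-visit scenario shows the mechanics (codes are illustrative only): the claim carries one diagnosis list, and each service line points into it by position.

```python
# The claim-level diagnosis list carries everything documented during the
# encounter; each service line points only at what justifies that service.
claim_diagnoses = ["J11.1", "E11.9", "Z89.511"]  # flu, diabetes, leg amputation status

service_lines = [
    {"cpt": "99213", "pointers": [1]},  # office visit points at the flu only
]

for line in service_lines:
    pointed = [claim_diagnoses[p - 1] for p in line["pointers"]]  # pointers are 1-based
    print(line["cpt"], "->", pointed)  # 99213 -> ['J11.1']

# The diabetes and amputation codes ride along unpointed, which is exactly
# why HEDIS and risk work should not drop them (see below).
```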

Are Diagnosis Pointers used in Institutional Claims?

No.  Diagnosis pointers are only used in Professional Claims.

Who uses Diagnosis Pointers? 

Claims departments use them to determine whether they will pay the claim.  After loading the pricing for that provider and determining eligibility and coverage, claims decides whether the treatment is covered.  Among the decisions being made is whether the treatment is covered for the diagnosis.  For something simple like an office visit, almost any reason will do, but for something more specific, they must match.  If the diagnosis is a broken toe and the treatment is a removed kidney, the claim will not be paid.  This is a way to prevent fraud, and also a way to avoid paying expensive claims that are really the result of a keying error.

How many diagnosis pointers can there be?

On any given service line there can be up to 4.  In current EDI (version 5010 of the 837P), each pointer value must be between 1 and 12.

What if more than four (4) diagnosis relate to the treatment? 

The coder who is submitting the claim at the provider picks the 4 best and does not point to the others.  The idea is to give enough detail and justification for the service being claimed to actually be paid.  If one pointer will do, then there is very little reason to point to more codes.  In the off chance other diagnoses are relevant to the treatment, they are still available to the examiner at the insurance company who is doing the adjudication; they just are not specifically pointed to.

Why should HEDIS, Medicare Revenue efforts or the new Health Insurance Exchange ignore Diagnosis Pointers?

Pointers are limited to 4 or fewer per line and average around 1.3 per line.  This means that if HEDIS or Revenue used only the codes that were pointed to, codes that are crucial to HEDIS measures or HCC calculations would be dropped.  A doctor who did a proper, comprehensive E&M for a patient would almost certainly have that information ignored during processing.

Besides pointers, what other limitations are present on diagnosis code submission?

The total number of submittable codes varies by transmission type.

  • EDI 837 v4010 Professional: 8
  • EDI 837 v5010 Professional: 12
  • Current Paper Claim, Professional: 4
  • EDI 837 v4010 Institutional: 12
  • EDI 837 v5010 Institutional: 25
  • Current Paper Claim, Institutional: 18
  • ICE (no limit)

Is there any reason Medicare Revenue has to pay attention to pointers?

Certain systems may require them as part of the submittable data.  For example, CMS’s EDPS system, which replaces the RAPS system for risk adjustment, has them as a required field in order to submit to the system.

What does it mean when an insurance company asks for numeric diagnosis pointers?

The latest paper form, the CMS 1500 required after April 2014, has switched from numbers to letters.  Meanwhile, the EDI (Electronic Data Interchange) files still require a number from 1-12.  This creates a small disconnect between the paper data and the electronic data.  If one were to put a letter into the pointer field of the EDI file, it would reject.  Many payers import native EDI, or a flattened form of it, to put claims into their system.  Even if the claim came in on paper, many times it is automatically converted to EDI using OCR/scanning.  Done correctly, the OCR vendor should apply a crosswalk from alpha (A-L) to numeric (1-12).  This means that if there is an “A,” a “1” is put into the EDI field, and if there is a “C,” a “3” is sent.  Most claims systems will not be updated either, so any hand-entered claims will have to be converted as well.

The new form can be found here: http://www.cms.gov/Medicare/CMS-Forms/CMS-Forms/Downloads/CMS1500.pdf

 Does cross-walking data from a letter pointer to a numeric pointer “change” the data? 

Short answer: no.  Compliance officers at health plans are often very worried about having a source of truth for the claim.  Crosswalks are used throughout data integration projects for a number of reasons.  Sometimes it is something as simple as reformatting a date from MMDDCCYY to CCYYMMDD.  Other times it might be reason codes, so that internal codes used in the claims payment process can be understood by those outside, by converting them to CARC codes.  It is a good idea to document any crosswalks or formatting, but the fundamental data has not changed at all.

2014 Diagnosis pointer crosswalk:
A – 1
B – 2
C – 3
D – 4
E – 5
F – 6
G – 7
H – 8
I – 9
J – 10
K – 11
L – 12
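For an OCR or EDI conversion step, the crosswalk above reduces to a few lines.  A minimal sketch, not any particular vendor’s implementation:

```python
# Map the 2014 CMS 1500 letter pointers (A-L) to the numeric pointers
# (1-12) that the 837P v5010 requires; numeric input passes through.
ALPHA_TO_NUMERIC = {chr(ord("A") + i): str(i + 1) for i in range(12)}  # A->1 ... L->12

def crosswalk_pointer(pointer: str) -> str:
    return ALPHA_TO_NUMERIC.get(pointer.upper(), pointer)

print([crosswalk_pointer(p) for p in ["A", "C", "L", "4"]])  # ['1', '3', '12', '4']
```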
I’m always happy to answer questions here as quickly as possible.  For EDI or Healthcare Data Integration projects, feel free to visit my company at www.theEDIproject.com

Encounter Data or Fishing Expedition?

Recently, I mentioned to my wife that I needed new skis for this winter. Her response? “Define Need.” When it comes to collecting Encounter Data for CMS, perhaps I should consider sending my wife to Baltimore to help smooth things out.

If you have not heard of Encounter Data Processing for CMS, you could go here, or just go ahead and skip this article entirely.

So while health plans have been busy for more than two years trying to comply with EDPS and prepare to switch over from RAPS (Risk Adjustment Processing System), some involved with the process have lost sight of why we are doing this in the first place. CMS isn’t out to make things more difficult or simply to see how high plans will jump. EDPS exists to settle some issues that can’t be addressed without more complete data. The problem is that data collection requirements can easily get out of hand.

Background: A Disagreement

In 2009, Medicare Advantage cost CMS roughly 14% more per patient than Fee For Service (FFS). In 2010, that number dipped to 9% more, but still represented billions of dollars in additional cost to Medicare. The Medicare Advantage Organizations (MAOs) have pointed out that they have sicker patients on average and provide more services than FFS patients receive. CMS claimed that since MAOs are paid a Risk Adjustment Factor (RAF) based on what is wrong with patients, instead of on the services they provide as in FFS, they are simply better at reporting than doctors who see FFS patients. In fact, there is already an adjustment to RAF for the effect of coding intensity.

Measuring outcomes such as re-admission rates or patient satisfaction shows MAO patients are better off than in FFS Medicare. MAO plans also claim that they do a better job of managing complex conditions such as diabetes, and that costs money. Since current reporting (RAPS) does not show all the steps taken to provide the care, there is no way to reconcile whether CMS or the MAOs are right, or even who is “more” right.

Reasons and Realignment

To sort out how to fix the model in a fair way, EDPS uses the full data set of an 837 claim file as the source data, instead of the 7 or so fields found in RAPS. Essentially, if CMS can get a picture not only of what is wrong with the patients today (as in RAPS) but also of what services were provided in the course of care, they can try to reconcile the model. Are the patients truly sicker on average? Are the MAOs actually being good stewards of the funds they are given, providing equal or even more care than a FFS patient gets? To get to the bottom of this, they would need the following information:

1. Clear understanding of services rendered – what are all the things that are being provided to the patients in an MAO plan? With this data, a patient with the same exact condition can be compared from MAO to FFS to determine the level of care received.

2. Complete data – every visit, procedure, test etc. must be submitted rather than the subset of risk adjustable data that is found in RAPS. In RAPS, submitting additional instances of the same diagnosis really didn’t do anything to the RAF calculation. To be able to compare utilization across the models, care provided that is unrelated to HCCs and RAF also must be submitted in total.

In order to make valid 837 files for submission to CMS, every encounter must include Member ID info, Provider Identifiers for both Billing and Rendering, and service line information such as DOS, CPT, Modifiers, REV Codes, Specialties, POS and charges. The problem comes in with how to use this data once it is received by CMS.

Not Claims Processing

While I was not a party to any of the discussions behind how to implement EDPS at CMS, I imagine the reason they went with 837s as the model is that they already receive these today for FFS processing, and perhaps that some state Medicaid systems collect 837s for their models today. The thought was probably that they could take the FFS system that already processes 837s and modify it to take in encounter data for EDPS instead. The problem is that claims processing requirements don’t always line up with EDPS. It is easy to look back and say that collecting 835s (which every MAO in America can already output, and which contain a clear record of what took place in the course of care) would have been a better way to go, but that won’t help us here.

In FFS processing, certain data may be required in order to pay a claim. If the data is not present, the claim is denied. If a FFS provider wants to get paid, they will get the needed data and resubmit. With MAO plans, however, there isn’t any requirement to follow FFS submission rules. If a plan wants to work with a particular doctor or facility, their contract dictates what needs to be submitted. For example, skilled nursing facilities (SNFs) must submit 837 claims to CMS for FFS payment. Another SNF may work with MAO plans and submit claims via paper form, which may not have all the data elements needed to make a valid SNF claim. If that MAO then tries to submit EDPS data showing the SNF encounters, they will be rejected due to missing data elements. The encounter certainly happened and the MAO paid the claim; there is nothing to “fix” in the system of record (e.g., the claims system) to make it submittable to CMS. If data is made up to make it submittable, the head of the plan’s compliance efforts would likely be less than pleased, to say the least. If the data is not submitted to CMS, utilization will seem lower than it actually is. I typically refer to these types of claims as the “encounter grey zone”: claims that are correctly processed by the plan according to its business rules and yet are unsubmittable to CMS.

In the above example, RAF scores would likely not suffer greatly, if at all. The direct impact is not felt because other encounters would likely be present to cover any related HCC diagnosis. Of course, this is going to be a revenue department’s first concern at a plan. However, even if only small numbers of encounters are unsubmittable at each plan, utilization across all plans will appear lower, and therefore there will be an indirect but definite impact on plan payment when utilization is calculated by CMS and applied to the new reimbursement model.

One option, which would take a great deal of time and effort to come to fruition, would be to make sure the same rules that apply to CMS FFS submission are followed by providers and then enforced by the plans’ claims-system processing rules. While this is possible, it essentially means that CMS’s rules and system become a de facto way to enforce payment practices on MAO plans. There are a lot of attractive reasons to work with an MAO rather than FFS Medicare, but those reasons start to go away as MAOs have to add layers of rules and bureaucracy.

There is a lot of data in an 837. When you take into account the fact that all encounters must be submitted to CMS, plans are looking at 500-1000 times as much data as they submitted under RAPS. While balancing claim lines for amounts claimed, paid, and denied (not to mention coordination-of-benefits payments) is not part of the stated goals of EDPS, balanced claims are needed to make a processable 837 file. Due to the nature of contracts and the variability of services provided within identical CPTs, this data won’t likely prove statistically significant to CMS even if they are able to collect and data mine it.

Reexamine the stated goals of Encounter Data Collection

I am sure there is lots of data that would be nice to have for some data miner at CMS someday. Now that we are all quite far into this thing, there are certain things that would be painful to undo. However, there is still an opportunity to take a step back and reexamine why we are doing this in the first place. In many cases, CMS is still running the submitted data through a system designed to pay or deny claims before it reaches their data store. That means a lot of edits and a lot of reasons why an encounter might reject. To their credit, CMS has turned a lot of edits off, but when the starting point was a full claims environment, there is still a long way to go.

If CMS were to reexamine the edits involved in the EDPS process, they would find that turning off many edits is not only in the plans’ best interest, but in their own as well. If an edit doesn’t fit the following criteria, it should be turned off.

  1. Can the member be identified? Doing a good job so far on this one.
  2. Can the provider be identified? After a positive NPI match, there should not be rejections for mismatched addresses, zip codes, names, etc. If it is a valid NPI and CMS still rejects, then the table CMS is using for this process MUST be shared with the plans so they can do look-ups prior to submission. Plans can’t be expected to guess this information. There are a lot of kinds of provider edits out there that need to be relaxed.
  3. Is it a valid 837 v5010? If the standard is not followed and the required fields according to the TR3 are not present, all bets are off. However, this may mean that certain fields should be able to be defaulted, in the same way that ambulance mileage and pick-up/drop-off defaults have been allowed.  There are lots of segments and elements in the TR3 that are situational unless your trading partner requires them.  Most of these are simply not required to realign the model.

Finally, ask the following: does a rejection indicate doubt that the encounter happened, or just that CMS doesn’t normally pay it? If an encounter line doesn’t have a valid DOS, CPT, units where required, modifiers where needed, or diagnosis code(s), then it may be unclear what happened and when. Barring that, the decision on whether to accept the encounter data should be to accept. Whether CMS would normally pay without that data in a FFS environment is irrelevant. A rough sketch of this acceptance test follows.
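Pulling those criteria together, here is a hedged sketch of that acceptance test; the field names and checks are simplified illustrations, not the actual EDPS edit set:

```python
# A simplified illustration of the proposed edit philosophy: accept any
# encounter where the member and provider can be identified and each line
# says what happened and when. "CMS wouldn't normally pay this" is not a
# reason to reject. Field names here are assumptions for the sketch.
REQUIRED_LINE_FIELDS = ("dos", "cpt")

def should_accept(encounter: dict) -> bool:
    if not encounter.get("member_id"):       # 1. Can the member be identified?
        return False
    if not encounter.get("rendering_npi"):   # 2. Can the provider be identified?
        return False
    for line in encounter.get("lines", []):  # 3. Enough valid 5010 data to know what/when?
        if any(not line.get(f) for f in REQUIRED_LINE_FIELDS):
            return False
    return True  # payment-style edits are deliberately not consulted

print(should_accept({"member_id": "M1", "rendering_npi": "1234567893",
                     "lines": [{"dos": "2014-06-01", "cpt": "99213"}]}))  # True
```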

What do you think?  I’ll monitor the comments to hear your thoughts.

Paper Claims and Encounter Data

Addressing the Confusion Related to CMS’s Encounter Data Mandate

Whether it is the various industry calls, conferences, or even CMS’s own meetings, the question of what to do with paper claims keeps coming up. This is a short look at the issue, including background, unknowns, and an attempt to surmise what is most likely to happen.

Background

CMS has decided to replace the current method of reporting risk adjustment information, called RAPS, with a new process called the Encounter Data Processing System (EDPS). This process requires plans to submit EDI transactions that contain much more information about members, providers, dates of service, diagnoses, treatments, and amounts. The EDPS mandate also requires every encounter to be submitted, even if denied. Compared to the 7 or so fields required under RAPS, EDPS means that plans have to submit somewhere around 500 times as much data as they did before.

Many MAOs are frustrated with various areas of the effort from slipping deadlines to changing requirements, but this article will focus on unique problems associated with paper claims.

EDPS calls for data post-adjudication

Perhaps this is the most important thing to remember when trying to figure out any problem relating to paper claims. The data CMS has asked for is not simply a forwarded copy of whatever the provider submitted to the MAO. Instead, CMS is asking for not only the “claim” data but also the results of the claim through the adjudication process. One simple way to look at the EDPS requirements is as a combination of the originally submitted claim and the explanation of benefits/payments. CMS wants to see which lines were accepted, and for how much, as well as which lines were denied. Even in the best case, the claim, paper or electronic, only tells half the story.

Paper claims typically get into the claims system in one of two ways:

  •  Paper Claims are entered directly into the system.
  •  Paper claims are scanned, OCRed and typically converted to EDI.

The “paper” EDI is injected into the same EDI stream, or a parallel one, to be processed in much the same way as any other inbound electronic claim. Either way, the claims are paid using the same system and rules after the data is entered. This means that to comply with CMS’s Encounter Data Mandate, the data for the outbound file should be pulled from a common pool (usually the plan’s claims system), regardless of submission type. When paper claims are done being processed (paid or denied), they are hard to distinguish from claims that were entered electronically. As such, we need to ask ourselves: what is so different about a paper claim? To answer that, we should look at three things:

  1. Identify what data cannot be put on a paper claim form
  2. Examine what data can be put on a form but generally is NOT by the submitter
  3. Find out if any information is present on the paper claim, but not entered into the system for any reason.

After looking at these three areas, we can see in which situations a valid 5010 that complies with the CMS requirements can NOT be created from a claim that originally came in on paper. For area one, it usually turns out that the paper claim form can contain all the information needed. Most of the deltas reflect the fact that a paper claim can make a valid but DATA-POOR transaction when submitted. For example, there are only 4 spaces available for diagnosis codes. This is a limitation of the paper form when conducting business with an MAO plan, but certainly there is no requirement that there be MORE than 4 codes when submitting 5010. While not all the data needed by CMS to make a valid transaction is available on the form, most of it can be added, such as the submitter and receiver information. One good example of data that really doesn’t have a place on a professional paper claim is a Coordination of Benefits (COB) claim.

As it turns out, areas 2 and 3 are where the real problems come up. To make a valid submission, plans need to submit NPI numbers for billing and rendering providers. If providers are not accustomed to providing things like NPI over the years, they may not give it to the plan. In this case, the data CAN be put on the form, but the MAO has never complained about it not being there, and so it is skipped. In very much the same way, even when data is put on the form, it might not be entered. On the UB-04, there is actually a place to note an ambulance pick-up and drop-off location for institutional claims. This field is rarely if ever filled out on an ambulance claim, but even if it were, there may be no place to put that data in the claims system. If the plan doesn’t require it of the ambulance company for payment and the data is not tracked, it simply won’t be keyed in.

Enforcement of missing data on a paper claim is much tougher than with electronic claims. With an EDI claim, rules can be put in place to reject the claim at the EDI gateway. Paper claims are already in the door, and rejecting one for something like a missing NPI is most likely more work than simply looking it up. NPI can often be populated from internal provider tables already integrated into the claims process. If claims tables still don’t have the data, integrating an NPI look-up from the NPPES database would certainly help.

Most of the data required for EDPS would be required by the claims system to begin with, paper or EDI. For example, EDPS requires treatment codes where RAPS did not. This has no impact, though, since every claims system on the planet requires treatment codes anyway at this point. The areas that are required under EDPS but often not available on the paper claim (for any of the three reasons listed) happen to be areas that are not required or provided in the EDI file either. This means the problems with paper claims may not be unique to paper claims. Instead, they are issues with the EDPS requirements as a whole.

Could CMS Offer Eased Requirements for Paper Claims if They Wanted To?

While anything is possible, it is highly unlikely. First off, CMS vastly underestimates the amount of paper being submitted to plans today. Medicare FFS sees very little if any paper, and so they can’t seem to imagine why it would be any different at a commercial plan. As such, this fix is less important to them than all the other areas making noise in encounter data processing. Second, even if we assumed CMS was interested in doing something about paper claims, there isn’t really a reliable way for CMS to know the claim was originally paper in the first place. This is due to:

  • CMS EDPS / EDI 837 v5010 does not have a paper claims indicator
  • Even if there were an indicator, most plans’ claims systems would not be able to populate it reliably

If these two issues were ever worked out, though, the resulting requirements should be a very reasonable bar to hit. Paper claims contain all the fields needed to report risk, show utilization, and price the claims: the stated goals of collecting encounter data in the first place. If paper-claim data requirements are ever defined by CMS, treating them as de facto standards for other data types would likely yield a comprehensive, reasonable set of requirements for EDPS as a whole, instead of the reach that many current requirements are.

What should plans do?

In almost every case, claims that were submitted on paper to the MAO can be turned into a valid, submittable EDPS transaction for CMS. At worst, these transactions may be “data poor” due to the reduced ability to submit diagnosis codes; however, that would be true under both RAPS and EDPS. The claims that are truly unsubmittable to CMS due to a missing data element, such as COB data or an ambulance pick-up address, are very likely problems for native EDI claims as well.

Perhaps instead of trying to work out every possible scenario where a paper claim would present a problem, plans should push back at CMS on the requirements that don’t directly relate to the purpose of EDPS.

The Office of the Future

Scientific American has an interview with David Biegelsen, a research fellow at Xerox’s Palo Alto Research Center (PARC) who has been at the lab since the beginning.  It is a really interesting look back at 40 years of “The Office of The Future.”  If you are unfamiliar with PARC (as I was), from the article:

Xerox established its Palo Alto Research Center (better known as Xerox PARC) in June 1970 as a West Coast extension of its research and development laboratories. PARC researchers proved wildly successful in pioneering many contemporary business technologies—the PC (the first was called the “Alto”), graphical user interface (GUI), Ethernet local area computer network (LAN) and laser printing, to name just a few. Xerox, however, was considerably less successful (and less interested) in commercializing much of PARC’s technology itself, leaving the door open for Apple, IBM, Microsoft and others to capitalize on PARC’s innovations.

This is a good reminder for me that being right is not enough.  These folks were ahead of the curve by a long shot, and they were on target about how and what technologies would develop and become useful.  (Imagine for a moment having email as a regular part of your day in 1970.)  The thing is, a lot of other areas had to catch up before they could capitalize on it.

About 10 years ago, I remember speaking to a vertical-market analyst who told me that, when pursuing vertical markets, companies usually over-estimate short-term results and under-estimate long-term results.  That rings true here as well.  Having a clear vision of what the future holds may mean you have to keep pressing for a very long time before you really see the fruits of your labor.  Just because you are not seeing results overnight doesn’t mean your vision is wrong.