Are We Disregarding Privacy Rules Because They Are Hard? Part 3 of 3

Shouldn’t This Be Easier By Now?

Eventually, someone in Information Technology or Database Administration gets asked to extract data from a PHI-rich line-of-business system or data warehouse and deliver it as de-identified data. Almost any data extraction approach allows data to be masked, redacted, suppressed, or even randomized in some way. This type of functionality can give us de-identified but often useless data for testing, analytics, or development.

Since my company, The EDI Project™, was founded in 2001, we have been asked to de-identify or anonymize data for testing and development work many times. Each time, we have written custom code for the project at hand. That code is never transferable to another customer environment and must be redone for every scenario. If we were writing this code every time, we figured there must be other companies having the same problem.

It turns out there are tools on the market that extract data from a line-of-business system or data warehouse and anonymize it so it is actually useful, rather than just de-identified into useless "John Doe" records.

For example, one of the largest integration engines on the market offers this functionality as a $250,000 add-on to its existing, very expensive suite of products. It is complicated to learn and use, and custom code must still be added if multiple systems need to be anonymized the same way (e.g., enrollment, eligibility, and claims data must have matching but anonymized names and dates of birth).

There are other tools in this space that sniff through vast data stores for PHI and attempt to automagically de-identify the data. Usually this is a masking or redaction approach, but even when it is not, many fields are marked as "suspect PHI" and left for human review. I can't blame them, either. While Patient Name or Date of Birth fields are easy enough to identify, free-form fields can be a nightmare. Either way, these tools are usually very expensive and often leave the job half done.

There are a lot of cases where certain files, like EDI 837 claims, or maybe an enrollment database, have to be de-identified for a test system. Perhaps it is an ongoing extract from a data warehouse for an analytics study. This is where, most of the time, the work is either not done (exemption granted) or custom code is deployed (expensive and time-consuming). But technology is supposed to be faster, better, and cheaper, isn't it?

Since we are the people who are often asked to do this work, we looked at our experience extracting health care data to design a tool we would want to use. No compromises. We wanted it easy to learn and use, and powerful enough to handle big data environments without becoming a bottleneck to any extraction work. Finally, it would need to anonymize data across multiple sources so that the matching but de-identified data maintained record integrity (i.e., all the records for one patient in the PHI data sources had corresponding records in the de-identified data sources). Oh yeah, and since the main project being done is already expensive enough, the tool should be inexpensive.

People have been using ETL (Extract, Transform, Load) tools for decades and are familiar with how they work. Thinking about the "T" (Transform), a common task would be changing a date from MMDDYYYY format to DDMMYYYY format. This type of common transformation logic doesn't have to be rewritten every time you extract from a new source; the integrator just picks it from a list when doing mapping work. Anonymizing PHI should be that simple as well.
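As a sketch of that kind of reusable transform, here is what the date reformatting step might look like (a minimal illustration in Python, not any particular vendor's tool):

```python
from datetime import datetime

def mmddyyyy_to_ddmmyyyy(value: str) -> str:
    """A typical reusable "T" step: reformat a date from MMDDYYYY to DDMMYYYY."""
    return datetime.strptime(value, "%m%d%Y").strftime("%d%m%Y")

print(mmddyyyy_to_ddmmyyyy("12312015"))  # -> 31122015
```

Once written, the same function can be picked from a library and applied to any source, which is exactly the pattern anonymization should follow.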

Functions and drop-downs need to be available to anonymize every kind of PHI and handle it according to the special properties of that type of data. Names are anonymized differently than ZIP codes. More specifically, the anonymization routine for a Date of Birth (DOB) is handled differently than one for a Date of Service (DOS). The software should already know that; it should not need to be defined by the integration team or a subject matter expert.
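To make that concrete, here is a minimal sketch of the idea (my own illustration, not the actual product's routines): names get a deterministic pseudonym so the same patient still matches across systems, while DOB and DOS are shifted by the same per-patient offset so ages and the gaps between visits stay realistic.

```python
import hashlib
from datetime import date, timedelta

def pseudonym(name: str, salt: str) -> str:
    """Deterministic fake ID: the same input name always maps to the
    same pseudonym, so records still link across source systems."""
    digest = hashlib.sha256((salt + name).encode()).hexdigest()[:8]
    return f"PATIENT-{digest.upper()}"

def shift(d: date, offset_days: int) -> date:
    """Shift DOB and every DOS by the same per-patient offset so the
    interval between any two dates is preserved."""
    return d + timedelta(days=offset_days)

offset = 37  # would be generated per patient in practice
dob = shift(date(1950, 3, 14), offset)
dos = shift(date(2015, 6, 1), offset)
print(pseudonym("JANE DOE", "demo-salt"), dob, dos)
```

The design choice worth noting: because both routines are deterministic per patient, enrollment, eligibility, and claims extracts anonymized separately still line up on the same fake identity and consistent dates.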

As a result, we developed and launched our own Anonymization Engine called "Don't Redact!™". We're integrators, so we built the tool an integrator would want in order to get this done quickly and easily. It can be learned in an afternoon by someone with integration tool experience, and your first sizeable anonymization effort can be deployed in a day or so after learning the ropes.

In the spirit of no compromises and disruptive technology, the Don't Redact!™ Anonymization Engine is $25,000.

While The EDI Project™ is a professional services organization and we would be happy to deploy the software for you or set up your first live anonymized environment, the tool is well thought out and easy enough that you won't need any services at all.

Want to find out more?  http://theediproject.com/anonymization.html

Part 1: Minimum Necessary or Optional?

Part 2: A False Choice. . . 

Are We Disregarding Privacy Rules Because They Are Hard? Part 1 of 3

Minimum Necessary or Optional?

One of the things that continues to excite me about the world of healthcare informatics is the opportunity to reduce the cost of care while providing better care and overall better outcomes. Often, people think in terms of a zero-sum game, where reducing the cost of care always reduces care and outcomes. But the promise of technology is that it can make us more efficient; a man can dig a hole faster, and with more precise dimensions, with a shovel than with his bare hands.

Having the right tool for the right job is important. . .

Much attention has been paid of late to re-admission rates for hospitals. Hospital stays are expensive, and if a patient has sufficiently recovered from whatever put them there to begin with, they are usually eager to get home and continue recovering in a more familiar environment. Both parties, the hospital and the patient, often want the stay to end as soon as possible.

But if the patient is released too early, it is always bad news.  At best, they must be re-admitted – often through the emergency room process.  Worse, they could relapse and not make it back to the hospital at all.  Outcomes for patients who are released too early are both worse and more expensive than if they had stayed in the hospital instead of being released.

Certainly, trusting our doctors is a first step, but they are often very busy and under the same pressures to release a patient discussed above. There are simply too many variables to be perfect at this when practicing medicine. While experience is a doctor's most potent weapon, they can only draw from the experience available to them. Patterns do exist, however, that indicate situations where additional caution is warranted when deciding to release. No one doctor could ever amass enough experience to recognize them all, though.

Today, there are powerful analytic tools available that can take massive amounts of data and sift through it, looking for patterns that simply would not, or could not, be seen otherwise. Rather than taking a sample scenario and examining the data to see if that scenario is more likely to result in a readmission, these tools are capable of comparing millions or billions of situations to each other at the same time. The result is finding co-morbidities or patterns of care that no one could ever have thought to test on their own.

These types of comparisons were computational fairy tales just a few years ago but can be done today because of advancements in parallel processing.  The bad news is no matter how good the tools are, they are only as good as the data they have to examine in the first place. . . What if no one can get the data?

Minimum Necessary is the process defined in the HIPAA regulations: "When using or disclosing protected health information or when requesting protected health information from another covered entity, a covered entity must make reasonable efforts to limit protected health information to the minimum necessary to accomplish the intended purpose of the use, disclosure or request."

 

Next: Part 2: A False Choice. . .

Part 3: Shouldn’t This Be Easier By Now? 

Industry Memo on Medicare Filtering: A To Do List

By now, MAO plans have had about a month to read and understand the July 21, 2015 "Industry Memo: Medicare Filtering" letter published by CMS. The letter contained clarifications and confirmations of previously disclosed information, as well as new information on the proposed rules CMS will be using to conduct risk filtering. Some highlights of the letter are:

  • Diagnoses received from Encounter Data Processing System submissions will be used to calculate risk adjustment dollars for the 2015 payment year (2014 Dates of Service (DOS)), as previously disclosed.
  • CMS will apply its own filter to Encounter Data received from MAOs to determine whether a diagnosis is risk adjustable.
  • After confirming an appropriate place of service, CMS will use a risk filter that is CPT-only for professional encounters (no specialty codes will be considered).  The codes for 2014 DOS can be found here.
  • Institutional Inpatient encounters will have all diagnoses accepted as long as they are Bill Type 11x or 41x, without treatment code filtering.
  • Institutional Outpatient encounters will also filter on bill type (8 types accepted), but will additionally be subjected to the CPT/HCPCS filtering used for professional encounters.
  • Risk adjustment calculations for PY 2015 will use Encounter data as a source of additional codes.
  • Risk adjustment calculations for PY 2016 will be a weighted average of 90% RAPS and 10% EDPS scores.
  • Plans are responsible for deleting diagnosis codes, identified through chart reviews, from both RAPS and the Encounter data collected and filtered by CMS.
  • The submission deadline for 2014 DOS is February 1st, 2016.

Some thoughts and recommendations

The wording of the approach for the 2015 PY tells us that risk adjustment dollars won't go DOWN as a result of the introduction of EDPS data. While it is true that payment can only go up with the addition of EDPS diagnoses, every additional EDPS-sourced HCC represents additional RADV risk beyond what the plan allows today through its risk filtering efforts. 2015 DOS / 2016 PY data will use a 90/10 weighted average on payments, meaning there can be both upside and downside to risk adjustment revenue.

The biggest problem, however, is that there is a lot to do and not much time to do it. Counting back from February 1, 2016, there are about five months. Plans will have to identify differences and decide whether those differences need to be deleted.

  • Plans should not wait for CMS to provide the MAO-004 report to indicate which codes have been used for risk adjustment from encounter data under the new rules.  It will take time to approve the proposed rules, and more time to start applying the filter and actually send out the backlog of MAO-004 reports.
    • Start tracking, at the very least, diagnoses submitted by encounter for 2014 DOS submissions.  Tracking individual diagnoses would be even better.
    • Apply the proposed CMS CPT filter to come up with a potential list of Encounter Data HCCs per encounter.
    • Use Encounter data HCCs to build a table of Encounter Data Member HCCs.
    • Compare Encounter data Member HCCs to RAPS Member HCCs and identify differences as top priorities for review.  There may not be time or resources to delete every diagnosis submission difference, but if the difference does not involve an actual pick-up, the plan is a bit less exposed.
    • Use your own results as a known good against which to compare the MAO-004 when it is finally delivered, to ensure CMS is applying the filter correctly.
    • Mine the RAPS process for automatic deletes and ensure these are done on both sides (e.g., professional AMI codes like 410.xx).
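The comparison step above can be sketched roughly like this (hypothetical member IDs and HCCs; a real pipeline would build these sets from the tracked diagnoses):

```python
# Member-level HCC sets built from each pipeline (hypothetical data).
raps_hccs = {"M001": {"HCC18", "HCC85"}, "M002": {"HCC111"}}
edps_hccs = {"M001": {"HCC18"}, "M002": {"HCC111", "HCC19"}}

def hcc_differences(raps, edps):
    """Return, per member, the HCCs that appear in only one pipeline;
    these members are the top priorities for review."""
    diffs = {}
    for member in raps.keys() | edps.keys():
        raps_only = raps.get(member, set()) - edps.get(member, set())
        edps_only = edps.get(member, set()) - raps.get(member, set())
        if raps_only or edps_only:
            diffs[member] = {"raps_only": raps_only, "edps_only": edps_only}
    return diffs

print(hcc_differences(raps_hccs, edps_hccs))
```

Members with an empty difference can be deprioritized; the rest are the review queue, worked in order of revenue exposure.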

Another big problem has to do with the EDI process to be used to submit chart review deletes.  It is technically difficult, cumbersome to track and still unclear in some areas.

  • CMS has specified that chart review deletes use a REF segment to indicate that the diagnosis codes listed should be treated as deletes.  At the very least, this REF segment means that chart reviews would need to be either "ADDs" or "DELETEs".  While previous CMS presentations show examples of both in the same transaction, those examples are not X12 5010 compliant and I assume have since been abandoned.
  • These deletes are not like RAPS deletes, which delete at the member level.  Instead, they are tied to specific encounters.  This is a problem because. . .
  • There is typically a many-to-one relationship between a single chart review and many encounters.  If a plan can only delete codes tied to a specific ICN, many chart review deletes will have to be sent to actually delete a diagnosis.
    • Example: A chart is reviewed that spans eight encounters.  While the doctor's notes indicate a history of a heart attack, the medical biller each time coded 410.01, AMI, initial episode, instead of the 412 (old myocardial infarction) that would indicate the patient had a history of MI.  The chart review uncovered this mistake and recommended the 410.xx be deleted and the 412 added.  To do this, at least 9 chart review transactions would have to be sent: 8 of them matched to 8 different ICNs to delete the 410.xx codes, and at least one more to add back the 412.
  • Clarification on the EDI problems and Chart review delete process has been requested from CMS.

What are your thoughts?  What is your plan doing to address these issues?  Are there important things I missed or got wrong?  What has your analysis of the CPT filter turned up as a concern?  I’ll monitor comments closely and respond quickly.

Understanding Diagnosis Pointers

Diagnosis Pointers Explained

In the last 17 years, I have been asked a number of times to explain diagnosis pointers. While diagnosis pointers are simple once you understand them, they can be difficult to explain, especially to those outside the claims world. The best way I can think of is to put together this diagnosis pointer FAQ. If you have any additions or corrections, or would like me to answer other questions, please leave a comment.

What are Diagnosis Pointers?

Diagnosis pointers describe the sometimes complex many-to-many relationships between submitted diagnoses and service line treatment information on health claims and encounters.

Where did diagnosis pointers come from?  Why are diagnosis pointers used?

Pointers originated with paper claims. As you can see from the form, there is not a lot of room left in the service line area for diagnosis codes. Instead, the user just enters a number that corresponds to the diagnosis code they are "pointing" to. When EDI started to be used for claims, pointers were a natural fit for two reasons: first, to keep things the same no matter how the data was submitted (electronic or paper); and second, to keep EDI "lean". Transmitting data used to be expensive and charged by the character, and using pointers meant that no diagnosis code ever had to be listed and transmitted more than once.

Why not just list all the Diagnosis at the line? 

A properly coded claim often has diagnoses that are not pointed to but were still collected during the encounter. For a somewhat generic service like an office visit, the patient may have come in because they had the flu, but ended up getting a full evaluation that noted a previous lower leg amputation and perhaps diabetes management. While the office visit did not address the leg specifically, capturing those diagnoses is still very important.
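Using an office visit like the one above as a toy example, pointer resolution works roughly like this (hypothetical codes and data layout; a real 837P carries the diagnoses in HI segments and the pointers in a composite on the service line):

```python
# Claim-level diagnosis list; pointers are 1-based positions into it.
diagnoses = ["487.1", "250.00", "V49.75"]  # flu, diabetes, amputation status

service_lines = [
    {"cpt": "99213", "pointers": [1, 2]},  # office visit points to flu + diabetes
]

def resolve(line, dx_list):
    """Turn a service line's pointers back into actual diagnosis codes."""
    return [dx_list[p - 1] for p in line["pointers"]]

print(resolve(service_lines[0], diagnoses))  # -> ['487.1', '250.00']
```

Note that the third diagnosis is never pointed to, yet it still rides along on the claim, which is exactly why it matters for HEDIS and risk adjustment later in this FAQ.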

Are Diagnosis Pointers used in Institutional Claims?

No.  Diagnosis pointers are only used in Professional Claims.

Who uses Diagnosis Pointers? 

Claims departments use them to determine whether they will pay the claim. After loading the pricing for that provider and determining eligibility and coverage, claims decides if the treatment is covered. Among the decisions being made is whether the treatment is covered for the diagnosis. For something simple like an office visit, almost any reason will do, but for something more specific they must match. If the diagnosis is a broken toe and the treatment is a kidney removal, the claim will not be paid. This is a way to prevent fraud and also a way to avoid paying expensive claims that are really the result of a keying error.

How many diagnosis pointers can there be?

On any given service line, there can be up to 4. In current EDI (version 5010 of the 837P), each pointer value must be between 1 and 12.

What if more than four (4) diagnosis relate to the treatment? 

The coder submitting the claim at the provider picks the 4 best and does not point to the others. The idea is to give enough detail and justification for the service being claimed to actually be paid. If one pointer will do, then there is very little reason to point to more codes. In the off chance other diagnoses are relevant to the treatment, they are still available to the examiner at the insurance company who is doing the adjudication; they just are not specifically pointed to.

Why should HEDIS, Medicare Revenue efforts, or the new Health Insurance Exchange ignore Diagnosis Pointers?

Pointers are limited to 4 or fewer per line and average around 1.3 per line. This means that if HEDIS or Revenue used only the codes that were pointed to, codes that are crucial to HEDIS measures or HCC calculations would be dropped. A doctor who did a proper, comprehensive E&M for a patient would almost certainly have that information ignored during processing.

Besides pointers what other limitations are present on Diagnosis Code Submission?

The total number of submittable codes varies by transmission type.

  • EDI 837 v4010 Professional: 8
  • EDI 837 v5010 Professional: 12
  • Current Paper Claim, Professional: 4
  • EDI 837 v4010 Institutional: 12
  • EDI 837 v5010 Institutional: 25
  • Current Paper Claim, Institutional: 18
  • ICE (no limit)

Is there any reason Medicare Revenue has to pay attention to pointers?

Certain systems may require them as submittable data. For example, CMS's EDPS system, which replaces the RAPS system for risk adjustment, has them as a required field for submission.

What does it mean when an insurance company asks for numeric diagnosis pointers?

The latest paper form, the CMS-1500 required after April 2014, switched from numbers to letters. Meanwhile, the EDI (Electronic Data Interchange) files still require a number from 1-12. This creates a small disconnect between the paper data and the electronic. If one puts a letter into the pointer field of the EDI file, it will reject. Many payers import native EDI, or a flattened form of it, to load claims into their systems. Even if the claim came in on paper, many times it is automatically converted to EDI using OCR/scanning. Done correctly, the OCR vendor should apply a crosswalk from alpha (A-L) to numeric (1-12): if there is an "A", a "1" is put into the EDI field, and if there is a "C", a "3" is sent. Most claims systems will not be updated either, so any hand-entered claims will have to be converted as well.

The new form can be found here: http://www.cms.gov/Medicare/CMS-Forms/CMS-Forms/Downloads/CMS1500.pdf

Does cross-walking data from a letter pointer to a numeric pointer "change" the data?

Short answer: no. Compliance officers at health plans are often very worried about having a source of truth for the claim. Crosswalks are used throughout data integration projects for a number of reasons. Sometimes it is something as simple as formatting a date from MMDDCCYY to CCYYMMDD. Other times it might be reason codes, so that internal codes used in the claims payment process can be understood by those outside by converting them to CARC codes. It is a good idea to document any crosswalks or formatting, but the fundamental data has not changed at all.

2014 Diagnosis pointer crosswalk:
A – 1
B – 2
C – 3
D – 4
E – 5
F – 6
G – 7
H – 8
I – 9
J – 10
K – 11
L – 12
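That table is simple enough to implement directly; a minimal sketch:

```python
import string

# A-L -> 1-12, per the 2014 CMS-1500 crosswalk above.
CROSSWALK = {letter: str(n)
             for n, letter in enumerate(string.ascii_uppercase[:12], start=1)}

def to_numeric_pointer(value: str) -> str:
    """Convert a paper-form letter pointer to the numeric pointer that
    5010 EDI expects; already-numeric values pass through unchanged."""
    return CROSSWALK.get(value.strip().upper(), value)

print(to_numeric_pointer("A"), to_numeric_pointer("C"))  # -> 1 3
```

Because the mapping is a pure one-to-one lookup, it is trivially reversible and documentable, which is the point made above about crosswalks not "changing" the data.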
Always happy to help answer questions here as soon as possible.  For EDI or Healthcare Data Integration projects, feel free to visit my company at www.theEDIproject.com

Encounter Data or Fishing Expedition?

Recently, I mentioned to my wife that I needed new skis for this winter. Her response? “Define Need.” When it comes to collecting Encounter Data for CMS, perhaps I should consider sending my wife to Baltimore to help smooth things out.

If you have not heard of Encounter Data Processing for CMS, you could go here, or just go ahead and skip this article entirely.

So while health plans have been busy for more than two years trying to comply with EDPS and preparing to switch over from RAPS (Risk Adjustment Processing System), some involved with the process have lost sight of why we are doing this in the first place. CMS isn't out to make things more difficult or simply to see how high plans will jump. EDPS exists to settle issues that can't be addressed without more complete data. The problem is that data collection requirements can easily get out of hand.

Background: A Disagreement

In 2009, Medicare Advantage cost CMS roughly 14% more per patient than Fee For Service (FFS). In 2010, that number dipped to 9%, but still represented billions of dollars in additional cost to Medicare. Medicare Advantage Organizations (MAOs) have pointed out that they have sicker patients on average and provide more services than FFS patients receive. CMS claimed that since MAOs are paid a Risk Adjustment Factor (RAF) based on what is wrong with patients, instead of on services provided as in FFS, they are simply better at reporting than doctors who see FFS patients. In fact, there is already an adjustment to RAF for the effect of coding intensity.

Measuring outcomes such as re-admission rates and patient satisfaction shows MAO patients are better off than in FFS Medicare. MAO plans also claim that they do a better job of managing complex conditions such as diabetes, and that costs money. Since current reporting (RAPS) does not show all the steps taken to provide the care, there is no way to reconcile whether CMS or the MAOs are right, or even who is "more" right.

Reasons and Realignment

To sort out how to fix the model in a fair way, EDPS uses the full data set of an 837 claim file as the source data, instead of the 7 or so fields found in RAPS. Essentially, if CMS can get a picture not only of what is wrong with the patients today (as in RAPS) but also of what services were provided in the course of care, they can try to reconcile the model. Are the patients truly sicker on average? Are the MAOs actually being good stewards of the funds they are given, providing equal or even more care than a FFS patient gets? To get to the bottom of this, they would need the following information:

1. Clear understanding of services rendered – what are all the things that are being provided to the patients in an MAO plan? With this data, a patient with the same exact condition can be compared from MAO to FFS to determine the level of care received.

2. Complete data – every visit, procedure, test etc. must be submitted rather than the subset of risk adjustable data that is found in RAPS. In RAPS, submitting additional instances of the same diagnosis really didn’t do anything to the RAF calculation. To be able to compare utilization across the models, care provided that is unrelated to HCCs and RAF also must be submitted in total.

In order to make valid 837 files for submission to CMS, every encounter must include Member ID info, Provider Identifiers for both Billing and Rendering, and service line information such as DOS, CPT, Modifiers, REV Codes, Specialties, POS and charges. The problem comes in with how to use this data once it is received by CMS.

Not Claims Processing

While I was not a party to any of the discussions behind how to implement EDPS at CMS, I imagine the reason they went with outbound 837s as the model is that they already receive these today for FFS processing, and perhaps that some state Medicaid systems collect 837s for their models today. The thought was probably that they could take the FFS system that already processes 837s and modify it to take in encounter data for EDPS instead. The problem is that claims processing requirements don't always line up with EDPS. It is easy to look back and say that collecting 835s, which every MAO in America can already output and which contain a clear record of what took place in the course of care, would have been a better way to go, but that won't help us here.

In FFS processing, certain data may be required in order to pay a claim. If the data is not present, the claim is denied. If a FFS provider wants to get paid, they will get the needed data and resubmit. With MAO plans, however, there isn't any requirement to follow FFS submission rules. If a plan wants to work with a particular doctor or facility, their contract dictates what needs to be submitted. For example, skilled nursing facilities (SNFs) must submit 837 claims to CMS for FFS payment. Another SNF may work with MAO plans and submit claims on paper forms that may not have all the data elements needed to make a valid SNF claim. If that MAO then tries to submit EDPS data showing the SNF encounters, they will be rejected due to missing data elements. The encounter certainly happened and the MAO paid the claim; there is nothing to "fix" in the system of record (e.g., the claims system) to make it submittable to CMS. If data is made up to make it submittable, the head of the plan's compliance efforts would be less than pleased, to say the least. If the data is not submitted to CMS, utilization will seem lower than it actually is. I typically refer to these types of claims as the "encounter grey zone": claims that are correctly processed by the plan according to its business rules, and yet are unsubmittable to CMS.

In the above example, RAF scores would likely not suffer much, if at all. The direct impact is not felt because other encounters would likely be present to cover any related HCC diagnosis; of course, this is going to be a revenue department's first concern at a plan. However, even if small numbers of encounters are unsubmittable at each plan, utilization across all plans will appear lower, and there will therefore be an indirect but definite impact on plan payment when utilization is calculated by CMS and applied to the new reimbursement model.

One option, which would take a great deal of time and effort to come to fruition, would be to make sure the same rules that apply to CMS FFS submission are followed by providers and then by the plan's claim system processing rules. While this is possible, it essentially means that CMS's rules and system become a de facto way to enforce payment practices on MAO plans. There are a lot of attractive reasons to work with an MAO rather than FFS Medicare, but those reasons start to go away as MAOs have to add layers of rules and bureaucracy.

There is a lot of data in an 837. When you take into account that all encounters must be submitted to CMS, plans are looking at 500-1000 times as much data as submitted under RAPS. While balancing claim lines for amounts claimed, paid, and denied (not to mention coordination-of-benefit payments) is not part of the stated goals of EDPS, balanced claims are needed to make a processable 837 file. Due to the nature of contracts and the variability of services provided within identical CPTs, this data won't likely prove statistically significant to CMS even if they are able to collect and mine it.

Reexamine the stated goals of Encounter Data Collection

I am sure there is lots of data that would be nice to have for some data miner at CMS someday. Now that we are all quite far into this, there are certain things that would be painful to undo; however, there is still an opportunity to step back and reexamine why we are doing this in the first place. In many cases, CMS is still running the submitted data through a system designed to pay or deny claims before it reaches their data store. That means a lot of edits, and a lot of reasons why an encounter might reject. To their credit, CMS has turned a lot of edits off, but when the starting point was a full claims environment, there is still a long way to go.

If CMS were to reexamine the edits involved in the EDPS process, they would find that it is not only in the plans' best interest to turn off many edits, but in their own as well. If an edit doesn't fit the following criteria, it should be turned off.

  1. Can the member be identified? CMS is doing a good job so far on this one.
  2. Can the provider be identified? After a positive NPI match, there should not be rejections for mismatched addresses, ZIP codes, names, etc. If an NPI is valid and CMS still rejects, then the table CMS uses for this process MUST be shared with the plans so they can do look-ups prior to submission. Plans can't be expected to guess this information. There are many kinds of provider errors out there for which edits need to be relaxed.
  3. Is it a valid 837 v5010? If the standard is not followed and the fields required by the TR3 are not present, all bets are off. However, this may mean that certain fields should be allowed to be defaulted, in the same way that ambulance mileage and pick-up/drop-off defaults have been allowed.  There are lots of segments and elements in the TR3 that are situational unless your trading partner requires them; most of these are simply not needed to realign the model.

Finally, ask the following: does a rejection indicate doubt that the encounter happened, or just that CMS doesn't normally pay it? If an encounter or line doesn't have a valid DOS, CPT, units where required, modifiers where needed, or diagnosis code(s), then it may be unclear what happened and when. Barring that, the decision should be to accept the encounter data. Whether CMS normally pays without that data in a FFS environment is irrelevant.
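As a thought experiment, the decision rule above could be written down like this (my own hypothetical edit flags, not CMS's actual edit catalog):

```python
def keep_edit(edit: dict) -> bool:
    """Keep an edit only if failing it casts doubt on who was treated,
    who treated them, or whether the encounter happened as described.
    Payment-oriented edits should be turned off."""
    return (
        edit.get("identifies_member", False)
        or edit.get("identifies_provider", False)
        or edit.get("required_by_tr3", False)
        or edit.get("casts_doubt_on_encounter", False)
    )

print(keep_edit({"identifies_member": True}))      # -> True
print(keep_edit({"ffs_payment_rule_only": True}))  # -> False
```

Anything that falls through all four tests is, by this article's argument, a claims-payment rule masquerading as a data-quality rule.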

What do you think?  I’ll monitor the comments to hear your thoughts.

Paper Medical Records Are Here to Stay

Seems Permanent . . .

About 14 years ago, I got involved with automating medical claims. For those not familiar with the process, as it turns out, doctors still lick stamps and send paper medical bills (or claims) to health insurance companies for payment. Sure, they can submit electronic bills as EDI, but many don't. There are a couple of big reasons (and a million small ones) that lots of paper claims are still out there:

– Loose standards (the 837 EDI format is implemented in lots of different ways)

– Addressing / delivery (imagine a doctor needing a separate phone line for every payer; while it is not quite this bad, it certainly isn't like dropping an envelope in a mailbox, or sending an email for that matter, and knowing it will arrive despite the fact that you have never talked to the recipient)

So while the above could be overcome, in lots of cases it is easier to just keep doing what you are doing. When it comes down to it, there is a utility to paper that is hard to beat in the short term. This is a common theme on PaperInbox, but in this case I want to discuss how it applies to medical records.

Whether it is industry news or mainstream news covering the new healthcare bill, people talk a lot about the EMR, or Electronic Medical Record. EMRs are slated to give us all kinds of great efficiencies, from better care due to access to patient history at the point of care, to huge administrative savings from eliminating clerical work. These are pretty great things and somewhat inevitable in the long term. In the short term, I think something quite different will take place.