Healthcare Informatics 6/8/12
Healthcare Infrastructure Data Models
Let’s assume for a moment that the current craze surrounding EHRs pans out completely and that every physician in America is meaningfully using one in the near future (hurray for blind optimism!). Of the numerous reasons that have been thrown out there, there are really two purposes for doing this.
One is the public health motivation: we can query all healthcare providers and come up with aggregated metrics to better understand the health status of those who seek care. This would probably be as close to real-time analysis as we can get for a while. Post-analysis, we can provide better recommendations for best practices and get those implemented a lot faster than the current glacial pace. This is in fact why most other industries went digital: to measure, analyze, and make improvements based on the analysis.
The other motivation is record portability. This one gets all the press, probably because journalists can tie it to the ‘P’ in HIPAA, and it is one of those things the general public is confused by: why isn’t this already possible in the first place?
Given these two purposes, aggregated counts and portability of whole records (or at least a CCD, the Continuity of Care Document), how are we going to achieve both at the same time? Three options have been knocked around for quite a few years, and we’re finally getting to a point where they may be realizable. Needless to say, this is a very exciting time, but what are we getting ourselves into?
In this three-part series, I’ll be taking a look at the top contenders, but I will warn you now: the solution is always a combination of the options available.
Option 1: The Centralized Repository
All data gets sent to a single local database, which then passes it up to a larger database, and so on and so forth. Honestly, I think this is the model a large number of people assume will be put in place when we talk about “information” or “data.” Yet keep in mind that this is not how the Internet works.
This model better resembles the mainframe days of yore, or a wagon wheel, where the information all flows to the central axle. Visually it is clean, but perhaps not in line with today’s infrastructure reality.

The big positive of this model is that it would be insanely easy to analyze the data out of the database and to send aggregated numbers up the chain.
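To make the “aggregated numbers up the chain” idea concrete, here is a minimal sketch in Python. The clinic names, record shape, and diagnoses are all made up for illustration; the point is simply that each repository reduces its records to counts and the level above sums what it receives. (Whether raw records or only the counts travel up is an implementation choice; this sketch passes only the counts.)

```python
from collections import Counter

# Hypothetical encounter records sitting in two local repositories.
clinic_a = [{"diagnosis": "influenza"}, {"diagnosis": "influenza"},
            {"diagnosis": "pertussis"}]
clinic_b = [{"diagnosis": "influenza"}, {"diagnosis": "measles"}]

def aggregate(records):
    """Reduce raw records to counts; only the counts travel up the chain."""
    return Counter(r["diagnosis"] for r in records)

# Each local repository aggregates its own data...
local_counts = [aggregate(clinic_a), aggregate(clinic_b)]

# ...and the repository one level up simply sums what the locals send it.
state_counts = sum(local_counts, Counter())
print(state_counts)  # Counter({'influenza': 3, 'pertussis': 1, 'measles': 1})
```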
Public health departments generally work from disparate registries that are nothing more than centralized repositories specific to a concentration (e.g., cancer, vaccinations, STDs). This is why you send your vaccination data to the state vaccination registry and not just to the public health department’s main office.
So why not just have a big ol’ relational database that everything gets sent to, and pull what you need from there? The allure of easy analysis is probably why the ONC has started a number of Beacon Programs across the nation. The negative is that… well, no one really trusts the government to do these sorts of things.
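To be fair to the allure: with everything in one relational database, the “pull what you need” step really is a single query. A quick sketch, using an in-memory SQLite database and a completely invented schema:

```python
import sqlite3

# An in-memory stand-in for the central repository; the schema is invented.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE encounters (provider TEXT, diagnosis TEXT, year INT)")
db.executemany(
    "INSERT INTO encounters VALUES (?, ?, ?)",
    [("Clinic A", "influenza", 2012),
     ("Clinic A", "pertussis", 2012),
     ("Clinic B", "influenza", 2012)],
)

# An aggregated public health metric is one GROUP BY away.
for diagnosis, n in db.execute(
    "SELECT diagnosis, COUNT(*) FROM encounters "
    "WHERE year = 2012 GROUP BY diagnosis"
):
    print(diagnosis, n)  # influenza 2, then pertussis 1
```

Every aggregated metric the health department might want becomes a variation on that GROUP BY.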
Additionally, record sharing between these centralized repositories is still a bit of a hang-up. The Beacon Program in SE Minnesota, for example, connects various healthcare organizations through an HIE (health information exchange) and the NwHIN (Nationwide Health Information Network) to pass records throughout the area, in addition to dumping everything into a centralized repository.
In the end, this model embodies a Bon Jovi song, only putting us halfway there. Analysis: yes. Record portability: no.
Aaron Berdofe is an independent health information technology contractor specializing in Meditech’s Medical and Practice Management Suite and EHR design and development.