When I first heard the term “legacy data”, I felt like I needed to adopt a stance of reverence. When I hear that someone is a “legacy”, it sounds like a good thing – like we will be sad when that person passes.
I soon learned that once you understand legacy data correctly, you might actually wish it would go away. “Legacy data systems” are very old data systems that still need to be supported by the new applications being built. That’s not so bad if the legacy system was built within the last 20 years or so. However, it can be a huge problem if the legacy data system is older than that.
From the point of view of a data scientist or statistician trying to analyze data from these systems, it can be very hard to analyze trends using legacy data, even when it is compliant with the structure of newer systems. I demonstrate this in several case studies in my online boot camp course, “How to do Data Close-out”.
Why is Understanding Legacy Data so Problematic to the Analyst?
Basically, it has to do with technology. Before 2000, most of our legacy databases were flat in structure, because the technology to support the kinds of queries we could finally run routinely in the early 2000s using structured query language (SQL) was not widely available. Those SQL queries assume a relational structure that follows normal form – which is totally different from a flat structure.
So querying flat databases is totally different from querying relational databases. My mom is a flat database person, and she could never adapt to relational queries. On my end, I prefer to put all my data in relational form before I prepare my research analytic datasets. That way, I can ensure the data comply with the structure I expect.
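To make that difference concrete, here is a minimal sketch in Python. The hospital and visit data are invented for illustration; the point is only the contrast between hand-scanning a flat file and asking a relational database a declarative question with a join.

```python
import csv
import io
import sqlite3

# --- Flat structure: one wide file, values repeated on every row ---
# Hypothetical flat extract; the hospital name repeats on each visit row.
flat_data = io.StringIO(
    "visit_id,hospital_name,charge\n"
    "1,General Hospital,100\n"
    "2,General Hospital,250\n"
    "3,County Medical,75\n"
)
# "Querying" a flat file means scanning every row yourself.
total_by_hospital = {}
for row in csv.DictReader(flat_data):
    name = row["hospital_name"]
    total_by_hospital[name] = total_by_hospital.get(name, 0) + int(row["charge"])
print(total_by_hospital)  # {'General Hospital': 350, 'County Medical': 75}

# --- Relational structure: normalized tables, joined with SQL ---
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE hospital (hospital_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE visit (visit_id INTEGER PRIMARY KEY,
                        hospital_id INTEGER REFERENCES hospital(hospital_id),
                        charge INTEGER);
    INSERT INTO hospital VALUES (1, 'General Hospital'), (2, 'County Medical');
    INSERT INTO visit VALUES (1, 1, 100), (2, 1, 250), (3, 2, 75);
""")
# The same question becomes a declarative join, not a hand-written scan.
for name, total in con.execute(
    "SELECT h.name, SUM(v.charge) FROM visit v "
    "JOIN hospital h ON v.hospital_id = h.hospital_id "
    "GROUP BY h.name"
):
    print(name, total)
```

Notice that in the relational version, the hospital name lives in exactly one place (normal form), while the flat file repeats it on every row – which is exactly the kind of structural difference that trips up analysts moving between the two worlds.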
But regardless of what structure you prefer for your analytics, my point is that we will always have that difference between data coming from a flat structure and data coming from a relational structure. You might be thinking that I’m referring to “old data” from flat structures – but actually, a lot of these databases are still running!
Is Understanding Legacy Data Critical in Data Science?
I want to say “no”, but the answer is actually “yes”. The reason I want to say “no” is that understanding legacy data is very difficult – but I don’t think you can get away with not doing it. The best way I can explain it is to have you take a look at the Healthcare Cost and Utilization Project (HCUP) documentation web site, which I analyze extensively in my online course, “How to do Data Close-out”.
The reason I cover HCUP so carefully in that course is that HCUP does a really awesome job of documenting their data. Even modern HCUP data originate in many legacy systems, because the original data come from hospitals that are often using old systems. As an example of their terrific documentation, they have a table listing the data elements and the years in which each is available.
Looking at this table will give you a good idea about why it is necessary to understand legacy data – even from old databases, even in 2023 – if you want to be a good data scientist.
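This kind of availability table can even be checked programmatically before you commit to a trend analysis. Here is a small sketch in the same spirit; the element names and year ranges below are made up for illustration and are not actual HCUP data elements.

```python
# Hypothetical availability map, in the spirit of an availability-by-year
# documentation table: data element -> years in which it is available.
availability = {
    "AGE": range(1988, 2024),
    "SEX": range(1988, 2024),
    "OLD_INCOME_CODE": range(1988, 1998),      # retired after 1997
    "NEW_INCOME_QUARTILE": range(1998, 2024),  # introduced in 1998
}

def usable_for_trend(start, end, avail=availability):
    """Return the data elements available in every year of [start, end]."""
    wanted = set(range(start, end + 1))
    return sorted(name for name, years in avail.items()
                  if wanted <= set(years))

print(usable_for_trend(1995, 2005))  # ['AGE', 'SEX']
```

A check like this makes the legacy problem visible: an element that was retired or introduced mid-stream silently drops out of any trend that spans the changeover year.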
Updated June 12, 2023.
Understanding legacy data is necessary if you want to analyze datasets extracted from old systems. This knowledge is still relevant, because many of these old systems remain in use today.