Data migration is a key element to consider when adopting any new system, whether through purchase or new development. One would think that any two systems maintaining the same sort of data must perform similar tasks, and that information from one should therefore map to the other with ease. However, this is rarely the case.
Although migrating data can be a fairly time-consuming process, the benefits can be worth the cost for organizations that "live and die" by trends in data. An added benefit is that the old application no longer needs to be maintained.
We include a discussion of the following topics: what data migration is, the decision to migrate legacy data, and how to migrate data.
Some key terms in understanding data migration are:
The "Do we migrate legacy data?" question has been asked ever since the first companies put data in one repository and decided to change systems. Here are some commonly asked questions:
When deciding on data migration, examine all factors before assuming that either the whole dataset or none of it should be moved to the new system. The real test is whether the records will be used and acted upon once the new system and process are in place. Two key variables to consider in the decision are data volume and data value.
Data volume is the easiest variable in the decision process. How many data records are we talking about: 1,000; 10,000; 100,000; 250,000? How many are expected to come into the new system weekly or monthly to replenish this supply? Check whether there are any technical barriers to bringing over a certain amount of data, and whether a large database will affect the performance of system functions such as searching. If not, then 10 records or 100,000 records should make no difference.
If volume is low, a migration may well be worth doing so that users have some database to work with and trends can be analyzed. If volume is high, it may make sense to examine the age and value of the data and start filtering on certain criteria.
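If the legacy data sit in a queryable database, this sizing exercise can be a single query. The following sketch counts records by year so that volume and age can be weighed together; it assumes the data are reachable through Python's sqlite3 module and uses a hypothetical "orders" table with a "created_date" column, so substitute your own schema.

    # A minimal sketch of profiling legacy data volume by age
    # ("legacy.db", "orders", and "created_date" are hypothetical names).
    import sqlite3

    conn = sqlite3.connect("legacy.db")
    cur = conn.cursor()
    cur.execute(
        """
        SELECT strftime('%Y', created_date) AS yr, COUNT(*) AS n
        FROM orders
        GROUP BY yr
        ORDER BY yr
        """
    )
    for year, count in cur.fetchall():
        print(f"{year}: {count} records")
    conn.close()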
Data value is a much harder variable in the decision process. There are often differing perceptions of the value the existing data provide. If users are not working with older data in the current system, chances are they will not work with older data in the new system, even with improved search functionality. If migrating, you may want to consider shorter-term date parameters; why bog down a system's performance with data that are never used?
Criteria, as discussed in the questions above, can be date parameters but can also include other factors. Extracting exactly the right data based on these factors will depend on the capabilities of your current system and database, as well as on your ability to write the detailed extraction script. Keeping it simple where possible is the best approach; however, there are circumstances where filtering the data makes sense.
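As one possible shape for such an extraction script, the sketch below pulls only records newer than a two-year cutoff; the table, column, and cutoff are illustrative assumptions, and the criterion could just as well be a status code or record type.

    # A minimal sketch of a filtered extraction (same hypothetical "orders"
    # table as above; the two-year cutoff is an example criterion).
    import sqlite3
    from datetime import date, timedelta

    CUTOFF = (date.today() - timedelta(days=2 * 365)).isoformat()

    conn = sqlite3.connect("legacy.db")
    cur = conn.cursor()
    # Extract only records recent enough to be acted upon in the new system.
    cur.execute("SELECT * FROM orders WHERE created_date >= ?", (CUTOFF,))
    rows = cur.fetchall()
    print(f"{len(rows)} records meet the extraction criteria")
    conn.close()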
Once you have determined which data you want to migrate, determining which parts of each data record to migrate will also be important.
Once the decision is made to perform data migration, and before migration can begin, the following analyses must be performed:
To analyze and define source and target structures, both the existing system and the new system must be analyzed to understand how each works, who uses it, and what they use it for. A good starting point for gathering this information is the existing documentation for each system. This documentation could take the form of the original specifications for the application, as well as the systems design and documentation produced once the application was completed. With legacy applications this information is often missing or incomplete, because considerable time may have passed since the application was first developed.
You may also find crucial information in other forms of documentation, including guides, manuals, tutorials, and training materials that end-users may have used. Most often this type of material will provide background information on the functionality exposed to end-users but may not provide details of how the underlying processes work.
For this part of the analysis, you may actually need to review and document the existing code. This may be difficult, depending on the legacy platform. For example, if data are being migrated from an AS/400 application written in RPG, and RPG skills are not available in-house, assistance from an experienced RPG programmer will be required. This can be an expensive part of the analysis, because you may need to bring in a resource to perform it for you, but it is a vital step, especially if the application has been running and maintained over a long period. There may be undocumented code, or fixes critical to the application, that are recorded nowhere else.
Another key area to examine is how the data in the system are stored (e.g., in flat files or in database tables). Which fields are included in those files or tables, and which indexes are in use? Any server processes related to the data must also be analyzed in detail (e.g., a nightly process that runs across a file and updates it from another system).
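Much of this structural inventory can be scripted. The sketch below lists tables, fields, and indexes for a SQLite store; this is an illustrative assumption, since a real legacy system would expose the same facts through its own catalog (e.g., INFORMATION_SCHEMA on most SQL databases).

    # A minimal sketch of documenting source structures, assuming the
    # legacy data sit in a SQLite file named "legacy.db" (hypothetical).
    import sqlite3

    conn = sqlite3.connect("legacy.db")
    cur = conn.cursor()
    cur.execute("SELECT name FROM sqlite_master WHERE type = 'table'")
    for (table,) in cur.fetchall():
        print(f"Table: {table}")
        # Fields in the file/table, with their declared types.
        for col in cur.execute(f"PRAGMA table_info({table})").fetchall():
            print(f"  field: {col[1]} ({col[2]})")
        # Indexes currently in use on the table.
        for idx in cur.execute(f"PRAGMA index_list({table})").fetchall():
            print(f"  index: {idx[1]}")
    conn.close()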
Now that the source and target structures are defined, mapping from the legacy system to the target should fall into place fairly easily. The mapping should be documented so that it specifically identifies each field in the legacy system, the field it maps to in the target system, and any necessary conversion or cleansing.
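One lightweight way to make such a mapping document executable is to express it as a lookup table. The sketch below uses hypothetical legacy field names (CUSTNM, PHONE, ORDDT) and target field names; every name here is an assumption for illustration, not part of any real system.

    # A minimal sketch of a field mapping: each legacy field is paired with
    # its target field and the conversion/cleansing step the mapping calls for.
    FIELD_MAP = {
        "CUSTNM": ("customer_name", str.strip),
        "PHONE":  ("phone_number", lambda v: "".join(c for c in v if c.isdigit())),
        "ORDDT":  ("order_date",   lambda v: f"{v[:4]}-{v[4:6]}-{v[6:]}"),  # YYYYMMDD -> ISO
    }

    def convert(legacy_record: dict) -> dict:
        """Apply the documented mapping to one legacy record."""
        return {
            target_field: cleanse(legacy_record[legacy_field])
            for legacy_field, (target_field, cleanse) in FIELD_MAP.items()
        }

    # Example: one cleansed, converted record ready for the new system.
    print(convert({"CUSTNM": " Acme Co ", "PHONE": "(555) 010-2000", "ORDDT": "20120417"}))

Keeping the mapping in a single table makes it easy to review against the mapping document and to extend as new fields are identified.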
Once the analysis and mapping steps are completed, the process of importing the data into the new system must be defined. This process may be completely automated, completely manual, or a combination of the two. For example, a process may extract records from the legacy system automatically, apply the documented conversions, and load the results, while staff manually review any records that fail along the way.
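Continuing the illustration, a sketch of the automated portion of such a load follows. It assumes the convert() function from the mapping sketch above and a hypothetical "customers" table in the new system's database; records that fail conversion or insertion are set aside for manual review rather than silently dropped.

    # A minimal sketch of an automated load step (assumes convert() from the
    # mapping sketch above and a hypothetical target table "customers").
    import sqlite3

    def load(records, target_db="new_system.db"):
        conn = sqlite3.connect(target_db)
        cur = conn.cursor()
        rejected = []
        for rec in records:
            try:
                row = convert(rec)  # mapping/cleansing step defined earlier
                cur.execute(
                    "INSERT INTO customers (customer_name, phone_number, order_date) "
                    "VALUES (:customer_name, :phone_number, :order_date)",
                    row,
                )
            except (KeyError, ValueError, sqlite3.Error):
                rejected.append(rec)  # route to the manual review queue
        conn.commit()
        conn.close()
        return rejected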
Bottom Line. Data migration is a key element to consider when adopting any new system, whether through purchase or new development. It is not a simple task, and if it is not given due consideration early in the process of developing or purchasing a new system, it can become a very expensive one.
Migrating PM Data from the TIMS Legacy System. In TIMS, information is linked by the patient (known as a client) to PM and Surveillance data; TIMS is a client-centered (or patient-centered) application. The following is a diagram of the data for TIMS PM activities associated with a patient.
The following types of medical data can be entered and maintained for a patient:
Basic demographic information is stored in the following tables in TIMS:
TIMS provides two options for extracting data: an export facility and an ad hoc query system built into the system module of the application.
Comma-Separated Values (CSV) is a text format in which commas separate field values. It is recognized by the greatest number of software products of any common format and is therefore recommended.
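Because CSV is so widely supported, reading an export is straightforward in most languages. As a small sketch, assuming a hypothetical export file named tims_export.csv with a header row:

    # A minimal sketch of reading a CSV export ("tims_export.csv" is a
    # hypothetical file name); the csv module handles quoting and any
    # commas embedded inside field values.
    import csv

    with open("tims_export.csv", newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)  # first row supplies the field names
        for row in reader:
            print(row)  # each row is a dict keyed by column header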