Migrating-in-Place for ECM Consolidation
We recently covered why organizations are moving swiftly to Next Gen Repositories. Modern organizations are no longer satisfied with the old “system of record,” with all of its upkeep, overhead, and user charges. They want a system from which relevant, actionable data can be pulled by Gen X and Gen Y users on demand. A perfect example of this new “system of engagement” is delivered as part of the actual migration from the old repository to the new. Let me explain.
Instead of simply mass-migrating the existing data to the new target system, many organizations are choosing to “Migrate in Place.” In other words, they leave the existing archive in place and let users continue to pull and view content. The result is a process in which users dictate the content that is actually required in the new system: behind the scenes, the data being actively selected is migrated dynamically to the new environment. If you come from the document-scanning world, this is similar to “scan on demand,” which left paper archives in place until a document was retrieved and scanned. Once retrieved, it lives again as a digital asset and is ingested into workflow and repository systems with a specific retention period.
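The retrieval-driven flow above can be sketched in a few lines. This is a minimal illustration, not a real product: `LegacyRepo`, `TargetRepo`, and the document IDs are hypothetical stand-ins for actual repository clients.

```python
# Migrate-in-place sketch: user retrievals drive the migration, so only
# content someone actually asks for is copied into the new system.

class LegacyRepo:
    """Read-only view of the legacy archive (hypothetical client)."""
    def __init__(self, docs):
        self._docs = docs  # doc_id -> content

    def fetch(self, doc_id):
        return self._docs[doc_id]

class TargetRepo:
    """Next-gen repository that receives migrated content (hypothetical)."""
    def __init__(self):
        self._docs = {}

    def ingest(self, doc_id, content):
        self._docs[doc_id] = content

    def has(self, doc_id):
        return doc_id in self._docs

    def fetch(self, doc_id):
        return self._docs[doc_id]

def retrieve(doc_id, legacy, target):
    """Serve a document; on first access, migrate it to the target."""
    if target.has(doc_id):
        return target.fetch(doc_id)
    content = legacy.fetch(doc_id)
    target.ingest(doc_id, content)  # dynamic, behind-the-scenes migration
    return content

legacy = LegacyRepo({"policy-001": b"terms...", "claim-042": b"scan..."})
target = TargetRepo()
retrieve("policy-001", legacy, target)  # first access migrates the doc
print(target.has("policy-001"))         # True: migrated on retrieval
print(target.has("claim-042"))          # False: never requested, stays put
```

The key design point is that the migration logic lives entirely in the retrieval path, so no bulk-migration project has to run up front.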
One big assumption here is that the organization has decided that keeping the legacy system in place for some period is desirable. This may be due to factors such as:
· The legacy system can still be accessed, with or without APIs
· Legacy system maintenance contracts are no longer required just to keep the system accessible
· Knowledgeable staff are no longer around to properly administer the legacy system (“How can we shut down what we don’t even understand?”)
What happens to data that never gets selected? It simply ages off, or is migrated based on specific, configurable requirements. For example, a life insurance company may keep policies in force for the life of the insured (100 years?) plus seven years, so those policy documents would be migrated to the new system even if never chosen through user retrievals. This type of “Migrate Based on Retention” process can be configured and automated.
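A retention-driven rule like the one described could be configured along these lines. The rule table and content types below are invented examples, not a real configuration format:

```python
# Hedged sketch of "Migrate Based on Retention": documents still under
# retention are migrated even if no user ever retrieved them; expired
# documents simply age off.
from datetime import date

# Hypothetical retention rules per content type
RETENTION_RULES = {
    "life_policy": {"retain_until_event": True},  # life of insured + 7 yrs
    "marketing":   {"retention_years": 3},
}

def should_migrate(content_type, created, today):
    """Decide whether an unselected document still needs migration."""
    rule = RETENTION_RULES.get(content_type, {})
    if rule.get("retain_until_event"):
        return True  # retention outlives any cutoff, so always migrate
    years = rule.get("retention_years", 0)
    expiry = date(created.year + years, created.month, created.day)
    return expiry > today  # under retention -> migrate; otherwise age off

print(should_migrate("life_policy", date(1990, 5, 1), date(2015, 1, 1)))  # True
print(should_migrate("marketing", date(2005, 5, 1), date(2015, 1, 1)))    # False
```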
One other important note: it is not necessary to own APIs to the legacy system, thanks to newer access methods such as the CMIS interface (LINK TO INFO ON CMIS), an open access protocol supported by many ECM systems, and to repository adapters that savvy integrators can easily build, with or without a CMIS interface.
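To make the CMIS route concrete: CMIS 1.1 defines a Browser Binding in which a query can be issued as a simple HTTP request against the repository URL. The sketch below only builds such a request; the endpoint URL is a made-up placeholder, and a real integration would check the target repository's documentation for its exact binding support.

```python
# Illustrative only: construct a CMIS 1.1 Browser Binding query URL.
# The repository endpoint is hypothetical.
from urllib.parse import urlencode

def cmis_query_url(repo_url, statement):
    """Return a GET URL that runs a CMIS QL statement via the Browser Binding."""
    params = {"cmisselector": "query", "q": statement}
    return repo_url + "?" + urlencode(params)

url = cmis_query_url(
    "https://legacy.example.com/cmis/browser/repo1",  # placeholder endpoint
    "SELECT cmis:objectId, cmis:name FROM cmis:document",
)
print(url)
```

Because the protocol is open and HTTP-based, an integrator can reach the legacy archive without owning any vendor API.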
One of the dangers of any mass data migration is discovering entire libraries of data with no apparent disposition requirements. “What is this data, and to whom does it belong?” Worse: “Whom do we even ask to find out who owns the data?” Vital to any successful migration is maintaining a regular communication channel with the key user communities that access this data. Consider setting up a spreadsheet or table with content type, user group, and the contact details of “go-to” individuals so that you can make these important disposition decisions expeditiously. The cost of this process is not in running it but in deciding what to do with “orphan” data, and in bringing the business and IT together long enough to decide. At the end of the day, you have far fewer orphan decisions to make when you migrate in place.
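The disposition table suggested above can start as something very simple. The content types, user groups, and contacts below are invented examples of the kind of records such a table would hold:

```python
# Hedged sketch of a disposition-contact table; all entries are invented.
CONTACTS = [
    {"content_type": "life_policy", "user_group": "Underwriting",
     "go_to": "J. Smith <jsmith@example.com>"},
    {"content_type": "claim_scan", "user_group": "Claims Ops",
     "go_to": "A. Lee <alee@example.com>"},
]

def owner_for(content_type):
    """Look up who can make a disposition decision for a content type."""
    for row in CONTACTS:
        if row["content_type"] == content_type:
            return row["go_to"]
    return None  # orphan data: nobody to ask, so escalate

print(owner_for("claim_scan"))       # the Claims Ops go-to contact
print(owner_for("mystery_archive"))  # None -> orphan data
```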
“Isn’t setting all this up really expensive?” is one obvious question. The answer is: not really. The key is deploying a relatively lightweight repository with excellent process-flow and decision capabilities. Done right, you can easily access both the legacy and new systems with zero coding.
Here are the Do’s and Don’ts for Migrating-in-Place:
· Do implement a lightweight repository with excellent process-flow capabilities so that decision making can be automated
· Don’t worry about using or buying APIs to the legacy system; there are other ways to access the content
· Do avoid orphan-document situations by establishing regular touch points with business users
· Do set up simultaneous access to both systems so that the split is transparent to users
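The last point, transparent simultaneous access, amounts to a facade in front of both systems. In this sketch the dict-backed “repositories” and class name are stand-ins for real clients:

```python
# Transparent dual-repository access: one facade checks the new system
# first and silently falls back to the legacy archive.

class RepositoryFacade:
    """Single access point hiding which system actually holds a document."""
    def __init__(self, target: dict, legacy: dict):
        self.target = target  # new system: doc_id -> content
        self.legacy = legacy  # legacy archive: doc_id -> content

    def get(self, doc_id):
        # Users never need to know where the document currently lives.
        if doc_id in self.target:
            return self.target[doc_id]
        return self.legacy[doc_id]

facade = RepositoryFacade(
    target={"policy-001": b"migrated copy"},
    legacy={"claim-042": b"legacy scan"},
)
print(facade.get("policy-001"))  # served from the new system
print(facade.get("claim-042"))   # served transparently from legacy
```

Because every retrieval goes through one entry point, the legacy system can eventually be switched off without users changing anything.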