At one point or another you might find your organisation in a position where data migration is critical. Businesses spend millions migrating data between information-intensive applications, yet up to 75 percent of data migrations fail to meet expectations. Because data migration in itself is not seen as an investment, the effort involved is often underestimated and the procedure oversimplified.
Data migrations generally result from the introduction of a new system, whether that means migrating applications, replacing legacy systems, or deploying additional systems alongside existing applications. Whatever the specific nature of any data migration, the ultimate aim is to improve business performance and deliver a competitive advantage. Therefore, be on the lookout for data inaccuracies, unknowns, and duplicate material, because these can jeopardise the success of your project. Furthermore, although the data in the source system may be perfectly adequate for its current use, it may be wholly inadequate, in terms of content and structure, to meet the objectives of the target system. Without an in-depth understanding of both the source and target environments, your exposure to risk increases.
To manage risk, understand your data environments and be on the lookout for:
- Compliance requirements
- Data volumes
- Data diversity
- Data decay
- Data quality
- Technical advances
Data migration strategies
To migrate data successfully, organisations can choose from several migration strategies depending on project requirements and available processing windows. Nonetheless, there are two principal types of migration:
- Big bang migrations
- Trickle migrations
Big bang migrations involve completing the entire migration in a small, defined processing window, whereas trickle migrations take an incremental approach to migrating data. Rather than aiming to complete the whole event in a short time window, a trickle migration involves running old systems in parallel and migrating the data in phases. This means there is zero downtime and mission-critical applications stay operational 24/7. It often makes trickle migrations preferable to big bang migrations: few organisations can tolerate a core system being unavailable for long, and a compressed window can compromise the quality of the data.
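The trickle approach can be sketched as a resumable batch loop. This is a minimal illustration, not any specific tool's API: `fetch_batch`, `load_batch`, the checkpoint file name, and the batch size are all assumed names chosen for the example.

```python
# Sketch of a trickle (incremental) migration: the source system stays
# live while data moves across in small batches, and a checkpoint makes
# the run resumable after an interruption.
import json
import os

CHECKPOINT = "migration_checkpoint.json"  # hypothetical checkpoint file
BATCH_SIZE = 3                            # illustrative batch size

def fetch_batch(source, offset, size):
    """Read the next slice of source records; the old system stays live."""
    return source[offset:offset + size]

def load_batch(target, records):
    """Write one batch into the target system."""
    target.extend(records)

def trickle_migrate(source, target):
    # Resume from the last committed offset so an interrupted run
    # neither re-migrates nor skips records.
    offset = 0
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            offset = json.load(f)["offset"]
    while offset < len(source):
        batch = fetch_batch(source, offset, BATCH_SIZE)
        load_batch(target, batch)
        offset += len(batch)
        with open(CHECKPOINT, "w") as f:
            json.dump({"offset": offset}, f)
    return target
```

The checkpoint is the design point: because each batch commit is recorded, the phases of a trickle migration can be paused and resumed around business hours without losing track of progress.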
After choosing your strategy, follow the essential steps to success below.
- Planning: the key to success here is defining the scope of the data migration to filter out any surplus data and understand what is achievable and reasonable. Aim to migrate the smallest amount of data required to run the target system effectively. Put together a budget and timeline estimate, including all time and material costs of auditing and profiling data, writing migration code, building data transformation and cleansing rules, and loading and testing data. A typical project for medium to large businesses is managed over six months to two years. Last but not least, establish data governance: data migration should be led by business data owners and users.
- Understanding the data: apply a top-down, target-driven rationale, prioritising scoping decisions in line with value to your organisation using criteria such as region, line of business, and product type. This stage is also for detecting possible conflicts and drilling down into any issues and inconsistencies. It involves creating a single repository for all analysis, regardless of source. Gain clear visibility into the data and assess all problems, with the ability to investigate anomalies to any required depth. Then carry out a cost-benefit analysis for all data marked for migration; the more data to be migrated, the higher the project cost.
- Design and building: results from the data audit should be used as the foundation for developing rules that transfer all designated source data and ensure it is manipulated to fit the target. Before any code is written, the mapping specifications and supporting documentation must be clearly understood. The design should also embrace the principle that 'small is beautiful'. Data migration projects run more efficiently when segmented into increments, as few projects genuinely require a big bang approach. Traditional ETL tools are powerful, but they are limited when it comes to free-text fields and more complex requirements. In such cases, a data quality tool with parsing and matching capabilities is required to separate and restructure content for the target.
- Executing: Data is extracted from the source system, transformed, cleansed, and loaded into the target system, using the migration rules.
- Testing: aim to volume-test all data in scope as early as possible. The key objective is to reach the full-volume upload and online application test stage early for each unit of work, allowing time for volume testing and issue resolution.
- Follow-up and maintenance: once established within the new environment, a data audit can be run at any time, on any set of data, and at any point in the data migration cycle to assess whether the project is still on track and within its scope. Data quality tools can be deployed to maintain quality and a state of readiness.
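The data audit described in the understanding and follow-up steps can be sketched as a small profiling routine. The field names and report structure here are hypothetical, chosen purely to illustrate the kind of completeness and duplicate checks an audit performs.

```python
# Sketch of a data audit: count rows, missing values per field, and
# duplicate keys - the inaccuracies, unknowns, and duplicates that can
# jeopardise a migration.
from collections import Counter

def profile(records, key_field):
    """Summarise completeness and duplication for a list of dict records."""
    report = {"rows": len(records), "missing": Counter(), "duplicates": 0}
    seen = set()
    for rec in records:
        for field, value in rec.items():
            if value in (None, ""):
                report["missing"][field] += 1
        key = rec.get(key_field)
        if key in seen:
            report["duplicates"] += 1
        seen.add(key)
    return report
```

Running such a profile on every candidate dataset before migration gives the evidence base for the cost-benefit analysis: fields with heavy decay or duplication may not be worth carrying into the target system.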
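The execute and test steps above can likewise be sketched as a minimal extract-transform-load run with a reconciliation check. The cleansing rule shown (trimming and title-casing names, rejecting rows without an id) is an assumed example, not a prescribed standard.

```python
# Sketch of the execute step: extract, transform/cleanse, load, then
# reconcile counts so every source row is accounted for.
def extract(source_rows):
    yield from source_rows

def transform(row):
    # Assumed cleansing rule: drop rows with no id, trim whitespace,
    # and normalise names to title case.
    if not row.get("id"):
        return None
    return {"id": row["id"], "name": row["name"].strip().title()}

def load(target, row):
    target.append(row)

def migrate(source_rows, target):
    rejected = 0
    for row in extract(source_rows):
        clean = transform(row)
        if clean is None:
            rejected += 1
        else:
            load(target, clean)
    # Reconciliation: loaded plus rejected must equal the source total -
    # a basic volume test of the migration run.
    assert len(target) + rejected == len(source_rows)
    return rejected
```

Keeping the rejected count visible, rather than silently dropping rows, is what makes the later testing and follow-up audits meaningful.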
Above all, keep a total focus on the business objectives and cost/benefits throughout the migration process. This will act as a guiding star for your data migration project.