How Can Legacy Healthcare Systems Transition to Cloud-Native Solutions?

November 20, 2024

In the rapidly evolving landscape of healthcare technology, transitioning from legacy on-premises systems to cloud-native services is not just a trend but a necessity for scalability, sustainability, and improved performance. The journey is often complex, requiring a blend of immediate problem-solving and strategic long-term planning. This article delves into the challenges and strategies involved in digitizing legacy healthcare systems, with a particular focus on Livi’s modernization of its MJog patient messaging product. Livi, a pioneering digital healthcare service, connects patients with General Practitioners (GPs) through online appointments, serving both National Health Service (NHS) patients and private consultations. The company’s acquisition of MJog, a patient relationship management system designed to improve GP-patient communications, presented an opportunity to integrate with legacy Electronic Medical Records (EMRs) and enhance data synchronization.

Necessity and Complexity of Migration

Transitioning MJog from an on-premises model to a cloud-based architecture was driven by the need for scalability and better performance. The process was multifaceted: it involved a thorough assessment of migration needs, a redesign of the architecture for a cloud framework, a balance between cloud-native features and legacy system requirements, robust monitoring for visibility and reliability, and clear requirements that mapped the legacy system’s functionality to its cloud counterpart. MJog’s original system was a complex two-layer architecture built on a mix of outdated technologies: Delphi, Java, PHP 5, and both SQLite and MS SQL databases. This Windows-based infrastructure, with services accessing databases directly, scattered business logic across the stack and produced inefficiencies that contributed to significant technical debt.

Post-COVID, there was heightened demand for remote access without VPNs and for compatibility with non-Windows devices. This urgency was compounded by the impending departure of a key Delphi developer, necessitating immediate modernization to retain customers and improve the user experience. Livi faced two primary strategies for this modernization: a quick lift-and-shift to the cloud or a comprehensive cloud-native rewrite. The former, while expedient, was expensive and did not address long-term issues, particularly the reliance on Delphi. The latter promised cost reductions and future-proofing but risked customer churn due to the extended timeline.

Strategic Approach

Given the complexity of the overhaul, Livi adopted a hybrid approach, combining a swift lift-and-shift to meet immediate demands and a subsequent cloud-native rewrite to ensure scalability and sustainability. This dual strategy allowed for immediate operational stability while laying the groundwork for future enhancements. Post lift-and-shift, the focus shifted to rewriting MJog’s cloud architecture. An initial step was categorizing existing services by language and purpose, revealing that Delphi handled critical EMR connections, Java managed task scheduling, and PHP controlled the user interface. This understanding facilitated a logical redesign, allowing for a streamlined development process.

The recognition that sync services could be templated into reusable components helped streamline future development. Task scheduling was simplified using AWS EventBridge, and EMR connections were isolated as standalone services. This modularization reduced system complexity and separated data services into APIs, giving easier access and a clearer separation of concerns. Rethinking integration, data synchronization, and workload management proved essential to moving MJog’s legacy system to the cloud.
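To make the scheduling piece concrete, here is a minimal sketch of how a recurring sync task could be registered with EventBridge using the AWS SDK for Java v2. The rule name, schedule expression, and target ARN are illustrative assumptions, not details from Livi’s actual setup.

```java
import software.amazon.awssdk.services.eventbridge.EventBridgeClient;
import software.amazon.awssdk.services.eventbridge.model.PutRuleRequest;
import software.amazon.awssdk.services.eventbridge.model.PutTargetsRequest;
import software.amazon.awssdk.services.eventbridge.model.Target;

public class SyncScheduler {
    public static void main(String[] args) {
        try (EventBridgeClient events = EventBridgeClient.create()) {
            // Create (or update) a rule that fires on a fixed schedule.
            // The 15-minute rate is a placeholder, not MJog's real cadence.
            events.putRule(PutRuleRequest.builder()
                    .name("emr-sync-schedule")
                    .scheduleExpression("rate(15 minutes)")
                    .build());

            // Point the rule at the worker that performs the sync.
            // The Lambda ARN below is a hypothetical placeholder.
            events.putTargets(PutTargetsRequest.builder()
                    .rule("emr-sync-schedule")
                    .targets(Target.builder()
                            .id("emr-sync-worker")
                            .arn("arn:aws:lambda:eu-west-1:123456789012:function:emr-sync")
                            .build())
                    .build());
        }
    }
}
```

Keeping the schedule in EventBridge rather than in application code means a sync cadence can be changed, paused, or retargeted without redeploying the service.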

Cloud-Native Services Development

Integrating MJog with legacy EMRs posed unique challenges, primarily due to their entrenched and varied architectures. The first EMR required HTTP connections wrapped in a 32-bit Windows DLL, maintaining reliance on expensive Windows infrastructure; its responses arrived in intricate XML structures that needed further processing, impacting application performance. The second EMR demanded an on-premises connection through direct TCP calls and customer-specific VPNs. Although both systems managed similar data types, their differing XML formats and data models complicated any unified integration strategy. Managing these disparate systems required developing a proxy API with adapters to standardize data models and query languages, facilitating consistent EMR connections.
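The adapter idea can be sketched as follows. Everything here is hypothetical, interface and class names included; it only illustrates how two very different EMR protocols can sit behind one contract and one shared data model.

```java
// Hypothetical sketch of a proxy-API adapter layer; not Livi's actual code.

/** Shared data model the rest of the platform works with. */
record PatientRecord(String nhsNumber, String fullName) {}

/** Common contract every legacy EMR integration must satisfy. */
interface EmrAdapter {
    PatientRecord fetchPatient(String nhsNumber);
}

/** EMR 1: HTTP calls wrapped in a 32-bit Windows DLL, nested XML responses. */
class XmlOverHttpAdapter implements EmrAdapter {
    public PatientRecord fetchPatient(String nhsNumber) {
        // A real adapter would invoke the DLL wrapper and parse the XML;
        // it is stubbed here to keep the sketch self-contained.
        return new PatientRecord(nhsNumber, "Stub Patient");
    }
}

/** EMR 2: direct TCP calls over a customer-specific VPN. */
class TcpVpnAdapter implements EmrAdapter {
    public PatientRecord fetchPatient(String nhsNumber) {
        // A real adapter would open a socket to the on-premises system.
        return new PatientRecord(nhsNumber, "Stub Patient");
    }
}

/** The proxy picks the right adapter per customer and exposes one model. */
class EmrProxy {
    private final EmrAdapter adapter;
    EmrProxy(EmrAdapter adapter) { this.adapter = adapter; }
    PatientRecord patient(String nhsNumber) { return adapter.fetchPatient(nhsNumber); }
}
```

With this shape, routing per customer reduces to choosing an adapter at configuration time, while the rest of the platform only ever sees the shared data model.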

Synchronization processes varied widely in performance and load requirements. Messaging services managed data synchronization efficiently, while fragile EMR integrations depended on intermittently available on-premises systems. Adopting a microservices architecture allowed each sync process to scale independently, ensuring adaptive load handling without system-wide impact. A critical decision involved choosing between orchestration and choreography. Livi primarily implemented orchestration for scheduled tasks, with specific functions managed through event-driven choreography. This blend allowed for flexibility and adaptability in adding or removing functionalities as requirements evolved.
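To illustrate the choreography side, a sync service might publish a domain event when a batch completes, leaving downstream services to subscribe via EventBridge rules rather than being invoked directly. This is a sketch under assumed names: the event source, detail type, and payload fields are invented for the example.

```java
import software.amazon.awssdk.services.eventbridge.EventBridgeClient;
import software.amazon.awssdk.services.eventbridge.model.PutEventsRequest;
import software.amazon.awssdk.services.eventbridge.model.PutEventsRequestEntry;

public class SyncEventPublisher {
    private final EventBridgeClient events = EventBridgeClient.create();

    /** Emit a domain event when a sync batch completes. Consumers attach or
        detach via EventBridge rules, so functionality can be added or removed
        without touching this publisher. */
    public void publishSyncCompleted(String practiceId, int recordCount) {
        PutEventsRequestEntry entry = PutEventsRequestEntry.builder()
                .source("mjog.sync")            // illustrative source name
                .detailType("EmrSyncCompleted") // illustrative event type
                .detail("{\"practiceId\":\"" + practiceId
                        + "\",\"records\":" + recordCount + "}")
                .build();
        events.putEvents(PutEventsRequest.builder().entries(entry).build());
    }
}
```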

Observability and Monitoring

Establishing robust observability was critical. Previous systems lacked comprehensive observability, relying on audit logs that were difficult to analyze. Implementing correlation IDs for API calls and structured logging made it far easier to trace system behavior and generate metrics, and monitoring tools and dashboards were updated regularly to reflect the evolving needs of the cloud environment.
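A plausible sketch of such a correlation ID mechanism, assuming a Java servlet stack with SLF4J, is a filter that stamps every incoming request; the header name below is a widespread convention rather than a confirmed detail of Livi’s system.

```java
import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;
import jakarta.servlet.http.HttpServletRequest;
import org.slf4j.MDC;
import java.io.IOException;
import java.util.UUID;

/** Attaches a correlation ID to every request so all log lines emitted while
    handling it can be tied together across services. */
public class CorrelationIdFilter implements Filter {
    private static final String HEADER = "X-Correlation-Id";

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String id = ((HttpServletRequest) req).getHeader(HEADER);
        if (id == null || id.isBlank()) {
            id = UUID.randomUUID().toString(); // start a new trace at the edge
        }
        MDC.put("correlationId", id);          // field picked up by the log layout
        try {
            chain.doFilter(req, res);
        } finally {
            MDC.remove("correlationId");       // avoid leaking into pooled threads
        }
    }
}
```

Because the ID rides in the MDC, every structured log line in the request’s lifetime carries it automatically, which is what makes cross-service tracing and metric generation tractable.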

Effective Requirement Definition

Defining clear and effective requirements is essential for any modernization project. Initial attempts to replicate legacy Delphi code directly led to widespread errors, revealing the complexity of understanding implicit business logic built into dated systems. Livi’s team needed to focus on thoroughly understanding underlying requirements before implementation. This phased approach involved several stages, from identifying critical workflows to engaging with stakeholders to validate their needs and expectations. This deep understanding enabled the creation of more precise user stories, facilitating smoother development phases and ultimately resulting in substantial performance improvements.

An essential part of the requirement definition was continuous communication with the development team, ensuring everyone was aligned on project goals and expectations. Involving cross-functional teams early on, including those responsible for delivering new features, helped avoid miscommunications and minimized the risk of requiring extensive rework later in the process. Learning from early mistakes, the project team adapted by ensuring comprehensive documentation and incremental releases that allowed for ongoing adjustments based on real-time feedback. This adaptive methodology is mirrored in Agile practices, emphasizing flexibility, collaboration, and constant iteration.

Key Takeaways

Two lessons stand out. First, robust observability is non-negotiable: the legacy setup depended on audit logs that were tough to analyze, and introducing correlation IDs for API calls and structured logging significantly improved the team’s ability to track system behavior and generate metrics, with monitoring tools and dashboards updated regularly to keep pace with the evolving cloud environment. Second, requirements must be understood, not transcribed: early attempts to replicate legacy Delphi code directly led to repeated errors, while shifting focus to deeply understanding the underlying requirements before reimplementing them in Java produced more precise user stories and marked performance enhancements, underscoring the critical importance of comprehensive requirement analysis.

In addition to the technical improvements, the team recognized the need for better documentation and clear communication among team members. Regular review meetings helped identify areas that required further clarification or optimization. Automated testing and continuous integration (CI) pipelines ensured that new changes were promptly verified, significantly reducing the risk of regressions and enhancing overall system reliability. This holistic approach not only improved the system’s observability but also fostered a culture of continual improvement and learning within the team.
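As a small illustration of the kind of automated check such a CI pipeline would run on every change, here is a JUnit 5 test against the hypothetical adapter sketch from earlier; real tests would of course also cover the XML mapping and failure modes.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class EmrProxyTest {

    @Test
    void proxyReturnsNormalisedRecordRegardlessOfBackend() {
        // Both adapters satisfy the same contract, so the proxy's behavior
        // can be verified without touching a live EMR. The identifier is a
        // placeholder, not a real NHS number.
        EmrProxy proxy = new EmrProxy(new XmlOverHttpAdapter());
        PatientRecord record = proxy.patient("0000000000");
        assertEquals("0000000000", record.nhsNumber());
    }
}
```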
