ISMT S-599 Capstone Seminar in Enterprise Systems, Summer 2015, Team 3




3.2 Software Solution


The proposed software solution uses a microservices-based composite application, which will utilize both the existing system and the new services. It will consist of six components and an interface element.

| Component | Datastore | Location | Functionality |
|---|---|---|---|
| Patient Enrollment | Local | Frontend | Creates and updates patient profiles with billing information. It takes a feed from the existing patient system to fill in common fields, so the enrollment agent does not have to re-enter the patient data. |
| Provider Enrollment | Local | Frontend | Registers, updates, and manages billing for providers. This component will capture external provider information such as address, contact information, and data interface types. |
| Monitoring | Remote | Frontend | Collects patient records based on scheduled updates from the EMR. The component will connect to the external aggregation service and search for new data from the enrolled patients (e.g., a new blood pressure reading). It will be built to support multiple aggregators for future expansion. |
| Analytics | Remote | Backend | Provides analytics for two operational issues: scheduling of patient data polling, and analysis of received data for notifications. The component uses both physician orders (i.e., rules) and heuristics about the patient's previous readings to determine health issues. Rules are set by domain experts; machine-learning models will be generated using Mahout, and metrics will be generated for continuous monitoring. |
| Notification | Local | Frontend | Sends notifications based on alarms from the analytics system. It will adapt to multiple alarm severities and take actions based on the rules written for those severities. Examples of notifications include call-outs to the patient, notifications to AHC agents, notifications to ambulatory care providers, and data transfer to a receiving Emergency Room system. |
| Interfaces | N/A | Both | RESTful interfaces to the existing system, as well as future-proofing to allow integration of new microservices. |
| Billing | Remote | Backend | To keep a non-invasive approach, the project will use the existing billing system, but will develop its own store to record the extra fields required for the AHC service. |

Development Platform



Figure : AHC Development Cycle
GlocoHCP needs a continuous software development and delivery platform that enables it to seize anywhere-healthcare market share and improve patient care in the shortest possible time. The development platform will be matched with continuous business planning, and it should support collaborative development by providing continuous testing, release, and deployment. The platform should be flexible enough to change quickly. Development will be linked with monitoring of the system: faults and defects will be found by the continuous monitoring platform, enabling quick fixes and releases that improve user satisfaction.

For the development and deployment of microservices, the project will use Vert.x 3 (http://vertx.io/), an open source, polyglot, event-driven application framework running on a Java Virtual Machine (JVM), contained within a Docker container. It uses point-to-point and publish/subscribe messaging to integrate the enterprise microservices, which in the Vert.x world are called 'verticles'. GlocoHCP will be able to leverage the skills of both frontend and backend software developers when building these modules, as Vert.x supports both Java and JavaScript, along with Groovy and Ruby.
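The difference between the two messaging styles can be illustrated with a minimal in-memory event bus sketch (plain Python for illustration, not the actual Vert.x API; address and handler names are invented):

```python
import itertools

class EventBus:
    """Minimal in-memory event bus sketch: publish/subscribe and point-to-point."""
    def __init__(self):
        self.handlers = {}     # address -> list of registered handlers
        self.round_robin = {}  # address -> cycling iterator used by send()

    def consumer(self, address, handler):
        self.handlers.setdefault(address, []).append(handler)
        self.round_robin[address] = itertools.cycle(self.handlers[address])

    def publish(self, address, message):
        # publish/subscribe: every registered handler receives the message
        for handler in self.handlers.get(address, []):
            handler(message)

    def send(self, address, message):
        # point-to-point: exactly one handler receives the message (round robin)
        if address in self.round_robin:
            next(self.round_robin[address])(message)

bus = EventBus()
received = []
bus.consumer("patient.reading", lambda m: received.append(("analytics", m)))
bus.consumer("patient.reading", lambda m: received.append(("storage", m)))

bus.publish("patient.reading", {"bp": "120/80"})  # both consumers fire
bus.send("patient.reading", {"bp": "130/85"})     # only one consumer fires
```

Vert.x's event bus follows the same semantics: `publish` fans a message out to all consumers on an address, while `send` delivers to a single consumer.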

Each microservice will have a resource layer, a service layer, application logic, and domain components. These will be interfaced with a gateway utilizing a sub-HTTP client and a live HTTP client. An event bus will be implemented to manage intra-microservice event signaling. The status of each method execution will be sent to the event bus using Vert.x, and AMQP will be used to exchange data among services. A RESTful API will allow external services or a UI to access the microservices. Each microservice will have its own local data store; however, a few microservices, such as monitoring and analytics, will be connected to larger external data sources. A Data Mapper or ORM (Object-Relational Mapping) layer will be used to interface data with the application logic.
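The Data Mapper role described above can be sketched as follows (a hand-rolled illustration with invented field names; a production system would use an established ORM library):

```python
from dataclasses import dataclass

@dataclass
class Patient:
    """Domain object used by the application logic."""
    patient_id: str
    name: str
    severity_rule: int

class PatientMapper:
    """Data Mapper sketch: translates between datastore rows and domain
    objects, keeping persistence details out of the application logic."""
    def __init__(self, store):
        self.store = store  # a dict standing in for the service's local datastore

    def find(self, patient_id):
        row = self.store[patient_id]
        return Patient(patient_id=row["id"], name=row["name"],
                       severity_rule=row["severity_rule"])

    def save(self, patient):
        self.store[patient.patient_id] = {
            "id": patient.patient_id,
            "name": patient.name,
            "severity_rule": patient.severity_rule,
        }

store = {}
mapper = PatientMapper(store)
mapper.save(Patient("P-001", "Jane Doe", 5))
p = mapper.find("P-001")
```

The application logic only ever sees `Patient` objects; swapping the dict for a real database changes the mapper, not the services.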

Initially, for the medical data aggregators, GlocoHCP will take advantage of the Microsoft HealthVault web platform, which maintains health and fitness information retrieved from a wide array of physical devices, applications, and services. HealthVault provides the necessary APIs to build applications in .NET, Java, Python, and PHP, which gives GlocoHCP's IT team flexibility in choosing the right technology for the job.



Figure : Deployment platform



System development and deployment proceeds as follows: when the software engineers have finished a development task, they push their local branch to a remote Git repository. This triggers Jenkins, the Continuous Integration server, to build and test the code. The team will integrate Selenium with Jenkins for testing any web-facing application. After the build, Jenkins stores the built artifact in an artifact repository. At this point, Chef (the configuration management tool) will first create the Docker container image with the newly built artifact, then provision and configure the host environment, and finally launch the Docker instance.
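The push-build-test-package-deploy sequence can be sketched as an ordered pipeline that stops at the first failing stage (stage names and the context fields are illustrative, not the actual Jenkins or Chef configuration):

```python
def run_pipeline(stages, context):
    """Run CI/CD stages in order; stop at the first failing stage."""
    completed = []
    for name, stage in stages:
        if not stage(context):
            return completed, name  # name of the failed stage
        completed.append(name)
    return completed, None

context = {"branch": "feature/monitoring", "artifact": None}

stages = [
    # each stage returns True on success
    ("build",     lambda ctx: ctx.update(artifact="monitoring-1.0.jar") or True),
    ("test",      lambda ctx: True),  # e.g. Jenkins running the Selenium suite
    ("package",   lambda ctx: True),  # store the artifact in the repository
    ("provision", lambda ctx: True),  # Chef builds the Docker image and host config
    ("deploy",    lambda ctx: True),  # launch the Docker instance
]

completed, failed = run_pipeline(stages, context)
```

In the real platform each lambda would shell out to Jenkins, the artifact repository, and Chef; the point is that a failed stage halts the release before it reaches production.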

Architecture Views


In the following architecture diagram there are 19 components, including 6 microservices:

  1. Patient health monitoring devices are connected to a data aggregator (e.g., Microsoft HealthVault).

  2. The patient monitoring service pulls down the patient records.

  3. The scheduler receives a list of patients that need to be monitored from the data aggregators. It utilizes analytics to prepare the list based on billing records and the polling frequency required.

  4. The Analytics service tracks each patient record and its data frequency. For example, a diabetic patient may take readings three times a day: before breakfast, after lunch, and before sleep. The analytics will schedule polling based on the patient's data type, medical expert rules, and the expected time to receive the data; this avoids unnecessary data requests. The project will use Apache Mahout to build scalable machine learning and data mining algorithms for analyzing the events retrieved from Microsoft HealthVault. For any messaging functionality not covered by the Event Bus within a Vert.x instance, GlocoHCP will use RabbitMQ, which includes JMS 1.1 and J2EE 1.4 support for legacy systems, failover transport capabilities for enhanced reliability, and full support for the Enterprise Integration Patterns (EIP).

  5. The Monitoring service stores all patient records in long-term data storage for the machine learning algorithms that run in the analytics service.

  6. Once a patient record is received by the monitoring service, it is sent to the Patient Records service, where it is encrypted as required by regulations. The context-aware analytics use the patient readings to detect and decide whether the data requires a notification, based on the physician's orders and the machine-learning routines. When a patient is enrolled in the system, a Contact Center agent enters the physician's instructions about the monitoring frequency and severity levels for the patient. If a reading requires a notification, the system creates an alarm. Additionally, the system will alarm if the patient has not had a measurement taken within a physician-defined time period; in that case, an alarm is generated to have an agent contact the patient and ask them to take a measurement.
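The polling schedule and missed-reading alarm described above can be sketched as follows (field names, the 15-minute polling offset, and the example times are illustrative assumptions, not the production rule set):

```python
from datetime import datetime, timedelta

def next_poll_times(patients, now):
    """Schedule the next poll per patient shortly after each expected reading,
    and flag patients whose last reading exceeds the physician-defined gap."""
    polls, overdue = {}, []
    for p in patients:
        # poll shortly after the next expected reading to avoid empty requests
        upcoming = [t for t in p["expected_times"] if t > now]
        if upcoming:
            base = upcoming[0]
        else:  # no more readings today: roll to the first slot tomorrow
            base = p["expected_times"][0] + timedelta(days=1)
        polls[p["id"]] = base + timedelta(minutes=15)
        # physician-defined window with no reading -> generate a contact alarm
        if now - p["last_reading"] > p["max_gap"]:
            overdue.append(p["id"])
    return polls, overdue

now = datetime(2015, 6, 1, 12, 0)
patients = [
    {"id": "P-001",  # diabetic: readings before breakfast, after lunch, before sleep
     "expected_times": [datetime(2015, 6, 1, 7, 0),
                        datetime(2015, 6, 1, 13, 0),
                        datetime(2015, 6, 1, 22, 0)],
     "last_reading": datetime(2015, 6, 1, 7, 5),
     "max_gap": timedelta(hours=12)},
    {"id": "P-002",
     "expected_times": [datetime(2015, 6, 1, 8, 0)],
     "last_reading": datetime(2015, 5, 30, 8, 0),
     "max_gap": timedelta(hours=24)},
]

polls, overdue = next_poll_times(patients, now)
```

Patient P-001 is polled just after the 13:00 post-lunch reading, while P-002, silent for two days against a 24-hour rule, generates a contact alarm.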

All alarms are managed in the alarm service, which communicates with the notification service. If the issue is not resolved, the alarm will persist. The status of an event can be managed in the alarm service. Alarms are generated based on severity levels, on a 1 to 10 scale:

      • Severity of over 8 will send a notification to the ambulatory care services.

      • Severity 4 to 8 will request a clinical partner to attend the case.

      • Severity 1 to 4 will send a notification to the patient directly for an action.

The patient calling service (contact center) will receive all notifications for each patient.
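The severity routing above can be sketched as a simple dispatch function (assuming the shared boundary at severity 4 routes to the clinical partner, and "over 8" means 9-10):

```python
def route_alarm(severity):
    """Map a 1-10 alarm severity to its notification target."""
    if not 1 <= severity <= 10:
        raise ValueError("severity must be on the 1 to 10 scale")
    if severity > 8:
        return "ambulatory_care"    # dispatch ambulatory care services
    if severity >= 4:
        return "clinical_partner"   # request a clinical partner to attend
    return "patient"                # notify the patient directly for an action
```

In the notification service this target would select the rule set to execute: a call-out, an agent notification, or a data transfer to the receiving system.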

  7. Metering and billing provides the current list of subscribers, services delivered by partners, and system usage by patients. It also provides revenue reports for the business.

  8. Notifications go to all parties: patients, nurses, ambulatory care, and the contact center.

  9. The Contact center receives and places calls to subscribers to help them with their health issues (e.g., blood pressure) and to modify their accounts (e.g., change a phone number).

  10. The Provider management system enrolls new providers and manages existing providers. A mobile-based provider management system is connected to the provider self-service system.

  11. The patient enrollment system is used to enroll new patients. It is expected that existing patients of the hospitals will be added to the AHC system by fetching their data directly from the existing system. If the patient does not exist in the existing system, then the system will register them as a new AHC patient.

  12. This element enables further interfacing with any other systems by giving secure access to the microservices gateway.

  13. Ambulatory care services will have access to the service using the mobile ambulatory care application.

  14. User interfaces are mobile or desktop applications. Patients will access the services via their mobile phones, as will the providers. A web interface will be developed for contact center assistance.

  15. Home-healthcare nurses can access the system using their mobile application, which will enable them to schedule regular visits to patients and resolve notification alarms.

  16. Legacy system integration connects to an XML-based SOA gateway and provides a data connection to any microservice.

  17. Existing systems that will be part of future phases; they will, however, be interfaced with the AHC services.

  18. The application container management system (Docker) will perform resource scheduling and move loads from failed hosts to live hosts.

  19. The host management and cluster management function will utilize a hub-and-spoke method to manage a large number of clusters. The deployment calls for 8 hosts, two blade chassis, and two storage controllers. The host management system will manage them and will create a private cloud environment that provides high availability and the elasticity to create or remove services as needed.

Process Flow


The flow of data through the environment for a medical device reading is detailed below.


1. The process starts with the patient device connecting to HealthVault (HV). If it cannot connect, the process ends.

2. HV pushes a notification to the monitoring service. Data can also be pulled.

3. Received data triggers a notification via business rules if a threshold is met.

4. Analytics uses the patient record for context-aware notifications.






Figure : Process Flow

Deployment Model


For the purposes of this project, the goal of release and deployment management is to minimize operational risk by controlling software, infrastructure, and services while making sure the right services are implemented in the right environment at the right time. TIP is recommending GlocoHCP undertake a “Canary Release” strategy (See Glossary of Terms).

A “Canary Release” will make new and updated software and services available to a limited, random set of users to ensure the software does not affect the entire population. This serves to ensure that “most” of the users are unaffected by the initial change, but that the group is representative of the overall population.
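One common way to realize this is deterministic bucketing: hash each user ID so that a fixed, random-looking fraction of users consistently sees the new version. The sketch below illustrates the idea (the 5% fraction, the version labels, and the hashing choice are illustrative assumptions):

```python
import hashlib

def in_canary(user_id, percent=5):
    """Deterministically bucket a user: the same user always gets the same
    answer, and roughly `percent` of users land in the canary group."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

def route(user_id):
    return "v2-canary" if in_canary(user_id) else "v1-stable"
```

Because the bucketing is deterministic, a user does not flip between old and new versions across sessions, and widening the rollout is just raising `percent`.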

Both the old and the new systems will coexist for the first year. Eventually, all monolithic applications will be converted to microservices in a phased conversion. The new system will utilize an HL7 interface with the legacy medical record system to fetch customer details and load data into the new billing and monitoring systems. The following are the steps of the migration:



  1. Patient Enrollment: A global GlocoHCP patient profile microservice will be created under patient enrollment and shared across all services. Any microservice can request patient demographic information using a globally unique Patient ID, and the patient master record will return the details for that patient.

  2. Monitoring: The monitoring service will utilize patient enrollment. The hospital has a legacy in-patient monitoring system; however, the new monitoring service will not be linked to it.

  3. Electronic Medical Records: Electronic Medical Records will be created by storing patient monitoring information in NoSQL-based Hadoop clusters. The data will be protected in accordance with regulations for patient data. Patient demographic information will be linked with the old system but new information will be kept only in the new system.

  4. Billing System: TIP will not migrate the existing billing system. However, AHC revenue streams will be interfaced with hospital billing system for improved reporting.

  5. Other Services: New services such as analytics with rule engine, scheduler, alarms and notification system, and events management system will be deployed.

Logical Model

The following are the key elements of the proposed architecture:



  1. Service Platform Management Element: Cluster management and a distributed task scheduler for the microservices management platform. The technologies include:

    1. Cluster Management: Apache Mesos as the distributed kernel for cluster management, together with Apache ZooKeeper. Apache Mesos supports multiple frameworks, including both Docker containers and a Hadoop cluster, and will provide unified and improved management and resource allocation for both. Multiple masters and slaves in a hub-and-spoke structure will improve the scalability and performance of the microservices.

    2. Distributed Infrastructure Scheduling: Apache Aurora will be used to run Docker containers across a shared pool of machines and is responsible for keeping them running. When machines fail, Aurora intelligently reschedules those jobs onto healthy machines.

    3. vSwitch: The project will utilize the vSwitch technology supported by Docker, which implements VXLAN (Virtual Extensible LAN) protocols and can be managed by GlocoHCP's physical switches and firewall.

Figure : Logical Architecture (Deployment)



  2. Service API Gateway: A service API gateway will provide authentication, rate limiting, logging, and plugin facilities. Each service will be registered to a Kong (NGINX-based) service API gateway to receive requests from clients; Kong acts as an intermediary between clients and the microservices. An NGINX load balancer for the REST API will also be used in this element of the architecture.

  3. System Internals: The system internals will host the microservices to provide scalability, availability, and easy integration with the legacy system, as well as among the services. Except for blob storage, all components of the system internals will utilize the Docker toolchain: Machine, Engine, Swarm, Registry, Compose, and Kitematic. The following are the components of the system internals:

    1. Front Services: These services will have two roles: quick responses to the UI/UX, and fetching records from device data aggregators. The front-end services will expose a REST API for the UI and other provider integration. A RabbitMQ-based message queue will forward long-running jobs to backend services.

    2. Backend Services: Backend, or long-running, services will perform queries against the Hadoop data store. They will also generate reports and monthly billing. The backend services will receive requests via the message queue and interact with other services via the REST API.

    3. Schema Store: This is shared cache storage used for blob data. The front services will receive a request from external elements and either respond immediately or transfer the request by serializing it to a message queue using a common shared schema for messages. The backend service will receive the data from the queue and utilize the same schema for deserializing the message into an object. This will reduce the complexity of change management.

    4. Message Queue: The proposed solution will utilize HTTP and AMQP message queues using the Java Jersey framework, Web API, and RabbitMQ. There will be two channels for each queue: one channel will be used to transmit data, and the other serves as a control channel. Once a message is received by a service, it can, due to data packet size limitations, query the control channel to determine whether it has the full message or part of a larger message that needs to be reassembled.

  4. Long-term data store: Each service contains its own data and logic; this creates a need for long-term data storage for analytics and machine learning. Hadoop clusters for long-term data storage will be used with the Cloudera cluster management system. Apache Sqoop will be utilized to exchange data between Hadoop and RDBMSs, and Apache Kafka will be utilized for streaming data to long-term storage when necessary.

  5. Legacy system: The legacy system will be interfaced with the microservices using RESTful APIs.

  6. External Elements: Three elements are considered:

    1. User Interfaces for the client applications (e.g., Mobile Nurses)

    2. Other Services

    3. Data Aggregators
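The data-channel/control-channel design of the Message Queue component can be sketched as follows (the chunking metadata format and field names are illustrative assumptions):

```python
def split_message(msg_id, payload, max_size):
    """Split a payload into data-channel chunks; the control channel
    carries the part count so receivers know when a message is complete."""
    chunks = [payload[i:i + max_size] for i in range(0, len(payload), max_size)]
    data_channel = [{"msg_id": msg_id, "seq": i, "body": c}
                    for i, c in enumerate(chunks)]
    control_channel = {"msg_id": msg_id, "parts": len(chunks)}
    return data_channel, control_channel

def reassemble(data_channel, control_channel):
    """A receiver consults the control channel to learn whether it holds
    the full message; if parts are missing it keeps waiting."""
    parts = sorted((m for m in data_channel
                    if m["msg_id"] == control_channel["msg_id"]),
                   key=lambda m: m["seq"])
    if len(parts) != control_channel["parts"]:
        return None  # incomplete: wait for the remaining chunks
    return "".join(m["body"] for m in parts)

data, ctrl = split_message("rec-42", "x" * 25, max_size=10)
```

A 25-byte payload with a 10-byte packet limit arrives as three chunks; the receiver only reassembles once the control channel's part count is satisfied.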

Physical Model

The project will use both private and public cloud deployments:




Figure : Deployment models

Cloud-based servers will provide scalability and high availability. In this model, the underlying physical hardware is not a limit: if the internal capacity of the hosting platform is exhausted by an increase in demand, additional capacity can be rented from a public cloud by establishing a secured Virtual Private Cloud (VPC) connection.

The system will be deployed on two blade servers with a Storage Area Network (SAN). They will use VMware vCloud to manage vMotion and high-availability features, and Chef for deployment and configuration. The services will run in containers on VMware.


System Metrics





| # | Performance Issue | System Capacity |
|---|---|---|
| 1 | Transaction Volume | Each subscriber will utilize 100 MB of data storage, which requires 10 TB of storage in total. Redundancy will be achieved with two blade chassis. Based on the hardware sizing, we propose 8 blade servers, each with 128 GB RAM and dual 8-core processors. This gives a limit of 1 TB of memory and 128 virtual sockets, which will manage at least 100,000 subscribers, depending on the network bandwidth supporting the traffic. |
| 2 | Concurrent Users | AHC has the ability to support 50,000 concurrent users. |
| 3 | Availability | The proposed private cloud environment provides high availability by virtualizing the platform inside a data center. If required, further availability can be provided by utilizing a public cloud environment. It is feasible to provide 99.99% availability of the proposed system. |
| 4 | Response Time | The internal microservices will have low-latency response times: a 10 Gbps server-side network and a 1 Gbps external network. The separation of the web role and worker role in services will provide quick responses to the User Interface and Experience (UI/UX), while long-running processes continue in the background. |
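The sizing figures above can be checked with simple arithmetic (a sketch of the stated numbers using decimal TB for storage and binary TB for RAM, not a formal capacity model):

```python
# storage: 100,000 subscribers at 100 MB each
subscribers = 100_000
storage_per_subscriber_mb = 100
total_storage_tb = subscribers * storage_per_subscriber_mb / 1_000_000  # MB -> TB

# memory: 8 blade servers at 128 GB each
servers = 8
ram_per_server_gb = 128
total_ram_tb = servers * ram_per_server_gb / 1024  # GB -> TB (binary)

# compute: dual 8-core processors per server
cores_per_server = 2 * 8
total_cores = servers * cores_per_server
```

The totals match the table: 10 TB of storage, 1 TB of aggregate memory, and 128 cores (virtual sockets) across the cluster.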




