ISMT S-599 Capstone Seminar in Enterprise Systems, Summer 2015, Team 3




3.3 Integration


Integration is the major challenge in healthcare technologies16. In the proposed solution, integration ability is considered at every stage of service development. End users connect to the system via HTTPS, with user identification and password authentication. The Security module provides access to the services and functions offered by the AHC. All user interfaces connect to the frontend microservices through the microservices gateway. An event bus is proposed to manage asynchronous process orchestration. The Advanced Message Queuing Protocol (AMQP)17 will be used to exchange data between frontend services and backend services. A Health Level 7 (HL7) gateway is used to communicate with the existing hospital systems, and a DICOM18 (Digital Imaging and Communications in Medicine) gateway will be used to connect to PACS. Integration will use simple RESTish protocols19 rather than complex protocols such as WS-Choreography and BPEL (Business Process Execution Language), or orchestration by a central tool. This integration reflects a plan to capture and leverage more data at the point of care, to standardize care, to improve clinician and patient satisfaction, and to realize financial gains. The integration and interfacing roles are separated and will function in the following way:

  1. Data integration essentially begins at the “frontend” with the user entering data into the system. The frontend role is to provide quick feedback to the user interface and external services, and to receive external data quickly from users’ devices through the aggregators. It creates message queue data using RabbitMQ20. As AMQP messages are limited in size, it will create a control queue if the data needs to be split into chunks. It will fetch the schema from a blob store and use this schema to serialize the data (see the sketch following this list).

  2. The backend service will receive data from the queue and check the control queue for chunk information. It will combine the chunks if there are multiple parts in the message and it will use the same schema for de-serializing the data.

  3. The backend system will receive data from AMQP and stream the data to long-term Hadoop data stores. Big data APIs will be used to execute different Map-Reduce algorithms in the Hadoop cluster.

  4. The Hadoop cluster will execute commands and produce output files, which microservices will access using the Hadoop WebHDFS REST API. Hadoop files can be browsed and downloaded through the same REST API.

  5. The microservices will communicate with GlocoHCP’s legacy systems using a new integration service based on a REST API with JSON payloads.
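The chunking behaviour described in steps 1 and 2 can be illustrated with a minimal sketch. The snippet below uses the Python RabbitMQ client (pika); the queue names (ahc.patient.data, ahc.patient.control), the broker host, and the chunk-size threshold are illustrative assumptions, and the schema lookup from the blob store is simplified to plain JSON serialization.

```python
import json
import uuid

import pika  # RabbitMQ client library

CHUNK_SIZE = 128 * 1024  # assumed threshold; the real limit comes from broker configuration


def publish_patient_record(record: dict) -> None:
    """Publish a (possibly large) serialized record, splitting it into chunks
    and describing the chunks on a separate control queue (steps 1-2)."""
    # NOTE: queue names and broker host are illustrative assumptions.
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="mq.ahc.local"))
    channel = connection.channel()
    channel.queue_declare(queue="ahc.patient.data", durable=True)
    channel.queue_declare(queue="ahc.patient.control", durable=True)

    # In the proposed design the schema would be fetched from the blob store;
    # here the record is simply serialized to JSON for illustration.
    payload = json.dumps(record).encode("utf-8")
    message_id = str(uuid.uuid4())

    if len(payload) <= CHUNK_SIZE:
        channel.basic_publish(exchange="", routing_key="ahc.patient.data", body=payload,
                              properties=pika.BasicProperties(message_id=message_id))
    else:
        chunks = [payload[i:i + CHUNK_SIZE] for i in range(0, len(payload), CHUNK_SIZE)]
        # The control message tells the backend how many chunks to expect and how to reassemble them.
        control = {"message_id": message_id, "chunk_count": len(chunks), "total_bytes": len(payload)}
        channel.basic_publish(exchange="", routing_key="ahc.patient.control",
                              body=json.dumps(control).encode("utf-8"))
        for index, chunk in enumerate(chunks):
            channel.basic_publish(
                exchange="", routing_key="ahc.patient.data", body=chunk,
                properties=pika.BasicProperties(message_id=message_id,
                                                headers={"chunk_index": index}))
    connection.close()
```

On the backend side (step 2), a consumer would read the control queue first, collect the indicated number of chunks by message_id, and de-serialize them with the same schema. For step 4, the output files can be fetched over the documented WebHDFS endpoints, e.g. http://<namenode>:<port>/webhdfs/v1/<path>?op=OPEN.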

Legacy Enterprise System Integration


In the GlocoHCP setup there are two types of systems from an integration point of view: the hospital systems based on HL7 standards and the systems based on DICOM standards. Figure 8 below shows the proposed legacy system integration diagram. The project proposes two new interfacing modules to integrate the existing legacy systems.

HL7 standards used in Electronic Medical Records utilize Electronic Data Interchange (EDI). TIP will develop an EDI mapper with a JSON data formatting option, so that any REST query received in JSON format can be converted to EDI format and the result returned through the REST API. This will provide flexibility in interfacing with the existing legacy systems.
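As a rough illustration of the EDI mapper idea, the sketch below converts a JSON request body into a pipe-delimited, HL7 v2 style PID segment. The field choices and helper names are assumptions for illustration; a production mapper would be driven by the full HL7 message definitions held by the legacy systems.

```python
def json_to_hl7_pid(patient: dict) -> str:
    """Map a JSON patient record to a simplified HL7 v2 PID segment (illustrative only).

    Example input (hypothetical field names):
        {"id": "12345", "family_name": "Doe", "given_name": "Jane",
         "dob": "19800101", "sex": "F"}
    """
    fields = [
        "PID",                                                                 # segment identifier
        "1",                                                                   # set ID
        "",                                                                    # patient ID (external) - unused here
        patient.get("id", ""),                                                 # patient identifier list
        "",                                                                    # alternate patient ID
        f"{patient.get('family_name', '')}^{patient.get('given_name', '')}",   # patient name
        "",                                                                    # mother's maiden name
        patient.get("dob", ""),                                                # date of birth (YYYYMMDD)
        patient.get("sex", ""),                                                # administrative sex
    ]
    return "|".join(fields)


# A REST endpoint would accept JSON, run the mapper, forward the EDI message to the
# legacy HL7 gateway, and wrap the response back into JSON.
print(json_to_hl7_pid({"id": "12345", "family_name": "Doe",
                       "given_name": "Jane", "dob": "19800101", "sex": "F"}))
# -> PID|1||12345||Doe^Jane||19800101|F
```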

The project will also utilize a DMW (DICOM Modality Worklist) engine to receive requests via the REST API, perform the DMW operation, and respond in JSON format. The microservices' HTTP clients will thus be able to reach both HL7- and DICOM-based standard systems through a REST API.
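A minimal sketch of the DMW engine's query path is shown below, using the pynetdicom library for the DICOM C-FIND against a Modality Worklist SCP. The SCP host/port, AE title, and the selection of returned fields are assumptions; a thin REST layer would wrap this function and return the resulting list as JSON.

```python
import json

from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import ModalityWorklistInformationFind


def query_worklist(host: str = "pacs.example.local", port: int = 104) -> str:
    """Query a Modality Worklist SCP and return the matches as a JSON string.

    Host, port, and AE title are illustrative; real values would come from the
    legacy PACS configuration.
    """
    ae = AE(ae_title="AHC_DMW")
    ae.add_requested_context(ModalityWorklistInformationFind)

    # Query identifier: empty values act as return keys / wildcards.
    query = Dataset()
    query.PatientName = ""
    query.PatientID = ""
    query.ScheduledProcedureStepSequence = [Dataset()]
    query.ScheduledProcedureStepSequence[0].Modality = ""
    query.ScheduledProcedureStepSequence[0].ScheduledProcedureStepStartDate = ""

    results = []
    assoc = ae.associate(host, port)
    if assoc.is_established:
        for status, identifier in assoc.send_c_find(query, ModalityWorklistInformationFind):
            # Pending statuses (0xFF00/0xFF01) carry a matching worklist entry.
            if status and status.Status in (0xFF00, 0xFF01) and identifier is not None:
                results.append({
                    "patient_name": str(identifier.get("PatientName", "")),
                    "patient_id": str(identifier.get("PatientID", "")),
                })
        assoc.release()
    return json.dumps(results)
```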

Critical elements will be interfaced with a REST API at the beginning, with more elements and functions added as the transformation of the hospital system continues.



Figure 8: Legacy Hospital System Integration


3.4 Data Design and Management

Entities, Attributes and Relationships


The proposed Anywhere Healthcare system is positioned between two systems: the current monolithic enterprise systems of GlocoHCP and the data aggregators (e.g. Microsoft HealthVault). HealthVault supports a number of health vocabularies, such as Systematized Nomenclature of Medicine - Clinical Terms (SNOMED-CT) and the International Classification of Diseases (ICD), which are accessible through API calls. HealthVault also supports the HIPAA act and customers' right to access their data21. The services only need to build the patient medical information on top of the standard medical vocabularies provided by the data aggregator(s). There are four major groups of entities in the project:

  1. Institutions: Hospitals, insurance companies, government agencies, and ambulatory care providers are the main institutional entities, with entity name and contact details as the major attributes.

  2. Process Entities: All processes, from patient registration through discharge, are process entities.

  3. Monitoring Info Entities: The data aggregators provide these entities. Details for the initial aggregator, Microsoft HealthVault, can be found at https://developer.healthvault.com/Vocabularies.

  4. People Entities: Patients, care providers (doctors, nurses, and support staff), and partner nurses are the people entities.

The project proposes dynamic schema management for the data structure of each entity, so that further attributes of each group of entities can be added as they are identified during system development. The following diagram shows some of the entities (a schema-validation sketch follows the figure):

Figure: AHC Data Entities
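To make the dynamic schema idea concrete, the sketch below keeps an entity's schema as data (here a JSON Schema document validated with the Python jsonschema package), so a new attribute can be introduced by updating the stored schema rather than migrating a fixed table. The entity chosen and its fields are illustrative assumptions.

```python
from jsonschema import validate, ValidationError

# Schema stored as data (e.g. in the blob store); adding an attribute such as
# "preferred_language" later only requires updating this document.
PATIENT_SCHEMA = {
    "type": "object",
    "properties": {
        "patient_id": {"type": "string"},
        "name": {"type": "string"},
        "date_of_birth": {"type": "string", "format": "date"},
        "contact": {
            "type": "object",
            "properties": {
                "phone": {"type": "string"},
                "email": {"type": "string"},
            },
        },
    },
    "required": ["patient_id", "name"],
}

record = {"patient_id": "P-1001", "name": "Jane Doe", "date_of_birth": "1980-01-01"}

try:
    validate(instance=record, schema=PATIENT_SCHEMA)  # raises if the record does not fit the schema
    print("record accepted")
except ValidationError as err:
    print(f"record rejected: {err.message}")
```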


Data Types, Storage and Transaction Flow


The aggregator provides the medical data types; the Microsoft HealthVault data types are listed at https://developer.healthvault.com/DataTypes. For example, the blood pressure data type carries attributes such as systolic and diastolic pressure, pulse, and the time of measurement.

Three types of data are used in this project: configuration data, transaction data, and master data. Configuration data define the custom functionality of the system. Transaction data are held in the microservices temporarily and streamed to the Hadoop cluster on a schedule. At this point, patient demographics with a unique identifier are the primary master data.



Data transaction flow starts when the patient record is received by the aggregator and pulled by the AHC monitoring service. This data is XML based, which is the standard format for HealthVault information. The patient monitoring service then pushes the data to the patient record system as a JSON formatted record, storing the data types received from HealthVault.
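A minimal sketch of that XML-to-JSON step is shown below, using only the Python standard library. The XML element and attribute names are hypothetical stand-ins for whatever the aggregator actually returns, not the real HealthVault schema.

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical XML payload pulled from the aggregator (element names are illustrative).
xml_payload = """
<reading patient-id="P-1001" type="blood-pressure" when="2015-06-01T08:30:00">
  <systolic>128</systolic>
  <diastolic>84</diastolic>
  <pulse>72</pulse>
</reading>
"""


def reading_to_json(xml_text: str) -> str:
    """Convert an aggregator XML reading into the JSON record stored by the patient record system."""
    root = ET.fromstring(xml_text)
    record = {
        "patient_id": root.get("patient-id"),
        "type": root.get("type"),
        "when": root.get("when"),
        # Copy each child element as a numeric measurement.
        "values": {child.tag: float(child.text) for child in root},
    }
    return json.dumps(record)


print(reading_to_json(xml_payload))
```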

Data is then forwarded to the business (physician) rules to determine whether an alarm is needed. If so, an alert is sent to the notification service in JSON format. The notification service then sends the JSON formatted data to the user interfaces of the patient, nurse, and hospital care users.
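As a final sketch, the snippet below shows how a simple physician rule could be evaluated against the JSON record and, when it fires, forwarded to the notification service. The threshold values, the notification URL, and the alert payload shape are assumptions for illustration; the HTTP POST uses the requests library.

```python
import requests

# Illustrative thresholds; real physician rules would be configurable per patient.
SYSTOLIC_LIMIT = 140
DIASTOLIC_LIMIT = 90

NOTIFICATION_URL = "https://ahc.example.local/notifications"  # hypothetical endpoint


def evaluate_blood_pressure(record: dict) -> None:
    """Send an alert to the notification service if the reading breaches the configured limits."""
    values = record.get("values", {})
    breached = (values.get("systolic", 0) > SYSTOLIC_LIMIT
                or values.get("diastolic", 0) > DIASTOLIC_LIMIT)
    if breached:
        alert = {
            "patient_id": record.get("patient_id"),
            "reading": values,
            "severity": "high",
            "message": "Blood pressure above configured limits",
        }
        # The notification service fans this out to the patient, nurse, and hospital UIs.
        requests.post(NOTIFICATION_URL, json=alert, timeout=5)


evaluate_blood_pressure({"patient_id": "P-1001",
                         "values": {"systolic": 152, "diastolic": 95, "pulse": 72}})
```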


