Camunda under FHIR

In the medical domain, one of the most important interoperability topics is FHIR (https://www.hl7.org/fhir/). It standardizes formats for different kinds of medical data, such as patient information. With the help of FHIR, information can easily be exchanged between different medical systems.

On the other hand, the medical domain is highly process driven. The medical treatment of a patient involves different activities (medical examination, therapies, operations, rehab …). These activities have to be coordinated and orchestrated in a meaningful way. More administrative processes, like a billing process, can easily be identified as well. Often these processes will be automated to gain extra benefits.

If you are involved in process automation within the medical domain, you will face FHIR sooner or later for sure.

For process automation, Camunda (https://camunda.com) is a strong player in the field. It offers a comprehensive solution for the process automation lifecycle based on standardized BPMN 2.0 process definitions.

In this blog post, we will show how easy it is to integrate a FHIR data source within Camunda 8.

But first, let’s have a look at the general example BPMN process definition we will rely on:

The demo process definition focuses on a typical integration task. It fetches some patient data from a FHIR server. This data is then pushed into a third-party system such as SAP.

In this blog post we will show you two different approaches to fetch patient data from a FHIR source within a Camunda process.

  • Implementing a job handler using the popular HAPI FHIR client (https://hapifhir.io)
  • A no-code solution using the provided Camunda REST connector

Implementing a Custom Job Handler

This approach relies on implementing a customized job handler which exchanges the data with a FHIR server. In this example we rely on HAPI, a very common FHIR library in the Java ecosystem.

Our process definition consists of two service tasks, each implemented by a corresponding job handler. The first one fetches patient data from a FHIR server and populates the data as process variables. The second service task simulates writing the patient data into another third-party system such as SAP. For demo purposes, it only logs the patient data.

The custom job handler in our example is implemented in Kotlin within a Spring Boot environment. We will provide a Java version at the end of this post.

package de.akquinet.camunda.fhir.handler
 
import de.akquinet.camunda.fhir.service.PatientData
import de.akquinet.camunda.fhir.service.PatientFHIRService
import io.camunda.zeebe.spring.client.annotation.JobWorker
import io.camunda.zeebe.spring.client.annotation.VariablesAsType
import org.slf4j.Logger
import org.slf4j.LoggerFactory
import org.springframework.stereotype.Component
 
@Component
class GetPatientAddressHandler(val patientService: PatientFHIRService) {
 
    private val log: Logger = LoggerFactory.getLogger(GetPatientAddressHandler::class.java)
 
    @JobWorker(type = "get-patient-address")
    fun getPatientAddress(@VariablesAsType patientData: PatientData): PatientData {
        log.info("patientId: {}", patientData.patientId)
 
        return patientService.getPatientData(patientData)
    }
 
}

This handler registers itself as a JobWorker under the job type get-patient-address. This type is referenced in the BPMN diagram in the corresponding service task.

To get a patient’s data, we have to know the ID of this patient. The handler reads this ID from a process instance variable; in our case, the variable is provided when the process instance is started. The handler implementation uses Camunda’s typed variables feature, which allows us to model the process data as a bean class (here a Kotlin data class). So we can fully rely on typed process variables. The handler delegates the FHIR logic to a service which performs the actual FHIR communication. The result is a fully populated PatientData instance, which is written back to the workflow engine as the new process instance variables.
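The PatientData class itself is not shown here. A minimal sketch, with field names following those that PatientFHIRService populates (the defaults and nullability are our assumption), could look like this:

```kotlin
// Sketch of the typed process variables used by the handler.
// Field names follow those populated by PatientFHIRService;
// the actual class in the example project may differ.
data class PatientData(
    var patientId: String = "",
    var street: String? = null,
    var city: String? = null,
    var state: String? = null,
    var zip: String? = null,
    var country: String? = null,
    var use: String? = null
)
```

Because it is a plain data class, the worker can both receive it via @VariablesAsType and return it to propagate the updated variables.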

The PatientFHIRService class will do all the FHIR handling:

package de.akquinet.camunda.fhir.service
 
import ca.uhn.fhir.context.FhirContext
import org.hl7.fhir.r4.model.Patient
import org.slf4j.Logger
import org.slf4j.LoggerFactory
import org.springframework.beans.factory.annotation.Value
import org.springframework.stereotype.Service
 
@Service
class PatientFHIRService(val fhirContext: FhirContext,
                         @Value("\${hapi.fhir.serverbase}") val serverBase: String) {
 
    private val log: Logger = LoggerFactory.getLogger(PatientFHIRService::class.java)
 
 
    fun getPatientData(patientData: PatientData): PatientData {
        // Create a client
        val client = fhirContext.newRestfulGenericClient(serverBase)
 
        // Read a patient with the given ID
        val patient = client.read().resource(Patient::class.java).withId(patientData.patientId).execute()
 
        // Print the output
        val string = fhirContext.newJsonParser().setPrettyPrint(true).encodeResourceToString(patient)
        log.info("Patient from FHIR: {}", string)
 
        val firstAddress = patient.addressFirstRep
        patientData.city = firstAddress.city
        patientData.country = firstAddress.country
        patientData.state = firstAddress.state
        patientData.zip = firstAddress.postalCode
        patientData.street = firstAddress.line.joinToString { stringType -> stringType.value }
        patientData.use = firstAddress.use.name
 
        return patientData
    }
}

We are using the popular HAPI FHIR client library (https://hapifhir.io). Please note that we inject an instance of FhirContext. We create it in a separate provider bean to ensure that only one instance of the context is available. This approach is preferred due to the resource-intensive nature of creating a FhirContext (https://hapifhir.io/hapi-fhir/apidocs/hapi-fhir-base/ca/uhn/fhir/context/FhirContext.html). The rest is straightforward querying of patient data via the HAPI library. For demo purposes, we transform the multi-line street values into a single line here. We will handle this again in the second approach.
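Such a provider bean could, for instance, look like the following sketch (the class name is our assumption; FhirContext.forR4() creates a context for FHIR release R4, matching the org.hl7.fhir.r4 model used above):

```kotlin
package de.akquinet.camunda.fhir.config

import ca.uhn.fhir.context.FhirContext
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration

@Configuration
class FhirConfiguration {

    // Creating a FhirContext is expensive, so we expose a single
    // application-wide instance as a Spring bean and inject it
    // wherever FHIR parsing or client creation is needed.
    @Bean
    fun fhirContext(): FhirContext = FhirContext.forR4()
}
```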

Using The Camunda REST Connector

Implementing a custom job handler might not be necessary every time. You may not always need complete freedom in processing and querying the FHIR server. For simple FHIR access you can just use the available Camunda 8 REST connector (https://docs.camunda.io/docs/components/connectors/protocol/rest/). Camunda connectors are out-of-the-box parameterizable service tasks for specific use cases. FHIR is in fact a REST-based service and thus a perfect match for the REST connector. With this connector no code is necessary; everything is done via configuration.

The process definition for this approach looks like this:

We have one additional service task at the very beginning which fetches the FHIR server URL from the environment. This makes it very easy to adjust the URL over time.

The second task is a REST connector task. To use the REST connector, you have to convert your task into a REST connector task within the Camunda Modeler.

After that the connector has to be configured:

There are two configurations for this example. The first one is the URL to be called. The second one handles the response by mapping it to process variables.

The URL parameter is not statically defined but dynamically set via process variables. For such dynamic values, Camunda supports the expression language FEEL (https://docs.camunda.io/docs/components/modeler/feel/what-is-feel/). With the help of FEEL we can implement simple logic for interacting with data. In our case we combine two different process variables to build the full FHIR URL.
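For illustration, assuming the two process variables are called serverUrl and patientId (the variable names are our assumption), the connector’s URL field could be filled with a FEEL expression such as:

```
= serverUrl + "/Patient/" + patientId
```

FEEL concatenates strings with the + operator, so the expression evaluates to the full URL of the patient resource on the configured server.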

The response handling is a little trickier. Here is the entire result expression:

{
  use: response.body.address[1].use,
  street: string join(response.body.address[1].line),
  city: response.body.address[1].city,
  state: response.body.address[1].state,
  zip: response.body.address[1].postalCode,
  country: response.body.address[1].country
}

Camunda itself treats all process variables as JSON values. In the result expression block, we define a JSON object whose fields are then populated as process variables for later use, for instance in subsequent process tasks.
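With the mapping above, the resulting process variables could, for example, look like this (the values are purely hypothetical):

```
{
  "use": "home",
  "street": "534 Erewhon St",
  "city": "PleasantVille",
  "state": "Vic",
  "zip": "3999",
  "country": "Australia"
}
```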

The REST connector exposes several variables in this scope. In our case, these are the response object and the body object, which is part of the response itself. The body contains what the FHIR server responded to the call. Since FHIR is REST based, the result is itself a JSON object. The complete FHIR JSON result is available here and can easily be used to fill our own variables.

FEEL provides several additional functions, which come in handy for result mappings that involve some transformation logic.

In our case, the FHIR server models the street value as an array of strings, but we would like to map it into a single string for ease of further processing. To achieve that, we use the FEEL function string join:

...
street: string join(response.body.address[1].line), 
...

You are good to go with the REST connector as long as your desired transformations can be expressed in FEEL. Otherwise, you can easily switch to a customized job handler implementation and have all the freedom you need.

Wrapping Up

We have presented two easy approaches to interact with a FHIR server within a process automation in the Camunda 8 environment. Both have their pros and cons and can be exchanged easily at any time if required.

If you want to take a closer look, feel free to check out our example projects on GitHub. There you can find an implementation of each of the approaches discussed in this blog post. For further details, please have a look at the corresponding README files.
