project-monai / monai-deploy-informatics-gateway

MONAI Deploy Informatics Gateway facilitates integration with DICOM-compliant systems, enables ingestion of imaging data, triggers workflows with the MONAI Deploy Workflow Manager, and pushes output to PACS systems.

Home Page: https://monai.io/monai-deploy-informatics-gateway/

License: Apache License 2.0

Dockerfile 0.07% C# 98.92% Gherkin 0.72% Shell 0.30%
dicom fhir fhir-client dotnet healthcare medical-imaging ai fo-dicom csharp dicomweb-client

monai-deploy-informatics-gateway's People

Contributors

coco-ben, dbericat, dependabot[bot], ericspod, greyseawolf, lillie-dae, mbaltrimas, migle-markeviciute, mocsharp, neildsouth, samrooke, woodheadio


monai-deploy-informatics-gateway's Issues

Export services to consume Export Request messages

Is your feature request related to a problem? Please describe.
The export services were designed to poll export tasks from WM. However, given architectural changes, IG needs to be updated to subscribe to the Export Request events.

Describe the solution you'd like

  • Define the message body
  • Update IG to consume the messages (see the sketch below)
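
A minimal sketch of the consumer side; the message body, the subscriber interface, and the md.export.request.monaiscu topic name are all illustrative assumptions rather than the final definition:

using System;
using System.Collections.Generic;

// Hypothetical message body; defining the actual schema is part of this issue.
public class ExportRequestEvent
{
    public string ExportTaskId { get; set; }
    public string CorrelationId { get; set; }
    public string Destination { get; set; }
    public IList<string> Files { get; set; } = new List<string>();
}

// Illustrative stand-in for the message broker abstraction.
public interface IMessageSubscriber
{
    void Subscribe<T>(string topic, Action<T> handler);
}

// Sketch of the consumer that replaces the old polling loop.
public class ExportRequestConsumer
{
    public ExportRequestConsumer(IMessageSubscriber subscriber) =>
        subscriber.Subscribe<ExportRequestEvent>("md.export.request.monaiscu", OnExportRequest);

    private void OnExportRequest(ExportRequestEvent request)
    {
        // ... dispatch to the matching export service (DIMSE SCU, DICOMweb, ...) ...
    }
}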

Describe alternatives you've considered
NA

Additional context
NA

Storage Abstraction Layer - Local FS

Is your feature request related to a problem? Please describe.
The MONAI Deploy Platform utilizes shared storage that could be a local file system, a mounted volume, a NAS device, or a cloud storage service. MIG needs an abstraction layer defined that allows users to configure the storage that fits their needs or to extend the APIs for other storage services.

Describe the solution you'd like
Define an abstraction layer as described in the SRS and implement the default storage access using the local FS. This default option supports the local FS, Docker-mounted volumes, and any volume accessible with the local file scheme.
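
A rough sketch of what the abstraction and its local-FS default could look like; the interface and method names here are illustrative, with the real definition belonging to the SRS:

using System.IO;
using System.Threading;
using System.Threading.Tasks;

// Illustrative abstraction over the storage backend.
public interface IStorageService
{
    Task PutObjectAsync(string bucket, string name, Stream data, CancellationToken ct = default);
    Task<Stream> GetObjectAsync(string bucket, string name, CancellationToken ct = default);
}

// Default implementation backed by the local file system (also covers
// Docker-mounted volumes and anything reachable via a local file path).
public class LocalFileSystemStorageService : IStorageService
{
    private readonly string _root;

    public LocalFileSystemStorageService(string root) => _root = root;

    public async Task PutObjectAsync(string bucket, string name, Stream data, CancellationToken ct = default)
    {
        var path = Path.Combine(_root, bucket, name);
        Directory.CreateDirectory(Path.GetDirectoryName(path)!);
        using var file = File.Create(path);
        await data.CopyToAsync(file, ct);
    }

    public Task<Stream> GetObjectAsync(string bucket, string name, CancellationToken ct = default) =>
        Task.FromResult<Stream>(File.OpenRead(Path.Combine(_root, bucket, name)));
}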

Describe alternatives you've considered
Direct access to the storage layer may not be extensible and therefore we are adding an abstraction layer.

Enable logging to file

Enable logging to file with the ability to set file size limit and roll over to new files.

Uppercase MonaiScu

Is your feature request related to a problem? Please describe.
At Connectathon, the community mentioned that uppercasing the MONAI SCU AE Title, i.e., MONAISCU, would be best practice. We believe this only needs changing in appsettings.json.

Describe the solution you'd like
As described above.

Include additional fields in the md.workflow.request message

Is your feature request related to a problem? Please describe.
In order for Workflow Manager to properly process an incoming request, the Calling AET and Called AET shall be included in the md.workflow.request payload.

Describe the solution you'd like
Include 2 additional properties in the payload:

Property    Type    Description
CallingAet  string  Sender/Calling AE Title of the DICOM dataset. For ACR requests, this is the transaction ID.
CalledAet   string  The Informatics Gateway AE Title that received the DICOM dataset. For ACR requests, this field is empty.
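
As a sketch, the two properties could be added to the workflow request message body like this; the class name and the serialized property names are illustrative:

using Newtonsoft.Json;

public partial class WorkflowRequestMessage   // illustrative class name
{
    // Sender/Calling AE Title of the DICOM dataset; for ACR requests, the transaction ID.
    [JsonProperty("calling_aet")]
    public string CallingAet { get; set; }

    // The Informatics Gateway AE Title that received the DICOM dataset; empty for ACR requests.
    [JsonProperty("called_aet")]
    public string CalledAet { get; set; }
}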

Ref: Project-MONAI/monai-deploy-workflow-manager#70

DICOM SCU to handle non-DICOM file types better

Is your feature request related to a problem? Please describe.
When an export.request is sent to MIG, it contains files[]. This list may include unspecified file types because MWM exports .dcm, .DCM, and unspecified file types to cater for model outputs. However, sometimes these unspecified file types will not be DICOM files, and they currently cause the Export Task to be marked as failed in the workflow.

MIG should be updated to cater for these files not being DICOM.

Describe the solution you'd like
Open to a discussion on this; needs to go through refinement.

  1. Suppress those errors completely and send an export.complete as succeeded.
    Q. What happens if all the files are unspecified and none of them are DICOM?

  2. Suppress those errors completely and send an export.complete as succeeded, but include the unsuccessful files in the metadata which is appended to the task.

Allow dynamic key-value pair configuration section for the storage service

Is your feature request related to a problem? Please describe.
The current configuration section designed for MinIO is hardcoded in a data structure, which may prevent developers from extending it with their own storage services.

Describe the solution you'd like
Use a key-value pair (dictionary) similar to the message broker configuration.
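
As a sketch, the storage section could bind provider-specific settings into a dictionary instead of a fixed MinIO-shaped class (property names are illustrative):

using System.Collections.Generic;

// Illustrative configuration class: everything under "settings" is provider-specific.
public class StorageServiceConfiguration
{
    // e.g. the MinIO storage service type name, or a user-supplied plug-in type.
    public string ServiceAssemblyName { get; set; }

    // Free-form key-value pairs ("endpoint", "accessKey", ...), so a custom
    // storage plug-in can define its own keys without code changes here.
    public Dictionary<string, string> Settings { get; set; } = new();
}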

Describe alternatives you've considered

Additional context
Meeting action from January 25, 2022

Export data via FHIR

Is your feature request related to a problem? Please describe.
Given that ACR API supports fetching FHIR resources and algorithms may be producing results in FHIR format, IG shall be able to export those results to an external FHIR server.

Describe the solution you'd like
Enable exporting of FHIR data.

Describe alternatives you've considered
FHIR is a standard that is widely used in hospital environments, so it's better to follow the standard than to implement a proprietary service.

Additional context

Export DICOM via DICOMweb (STOW)

Is your feature request related to a problem? Please describe.
Given that the ACR API currently supports pulling of DICOM via DICOMweb, IG shall also support export via DICOMweb.

Describe the solution you'd like
Enable DICOMweb STOW as an option for exporting DICOM results.

Describe alternatives you've considered
IG currently supports export over C-STORE SCU but in some environments, traditional DIMSE service may not be available.

Additional context

Use allow-list for expected SOP classes

Maybe we should consider specifying the expected SOP Class(es) instead of the ignored/disallowed ones, simply because the ignore list would need many storage SOP classes added to make it close to complete. A workflow typically accepts only a single SOP class or a few.

Originally posted by @MMelQin in #31 (comment)
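
A sketch of the association-level check an allow-list would imply, using fo-dicom types; the allow-list itself would come from the AE Title configuration (a hypothetical property):

using System.Collections.Generic;
using FellowOakDicom;

public static class SopClassFilter
{
    // Hypothetical allow-list loaded from the AE Title configuration.
    public static bool IsAccepted(DicomUID sopClassUid, ISet<string> allowedSopClasses) =>
        allowedSopClasses.Count == 0               // empty list = accept everything
        || allowedSopClasses.Contains(sopClassUid.UID);
}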

ExportRequestMessage missing list of files & workflows

Description

The ExportRequestMessage is missing payloads (list of files) and workflows specified in the AET configuration.

Expected behavior

The Payload property shall include a list of files and workflows (if any) in the payload.

Actual behavior

Both files & workflows are empty.

Configuration

  • Informatics Gateway version/commit: 0.1.0

Message Broker Abstraction Layer - RabbitMQ

Is your feature request related to a problem? Please describe.
As the MONAI Deploy Platform utilizes RabbitMQ as the default publish-subscribe messaging service, users with different environment setups may want to use their existing message broker service or a service provided by cloud vendors. Adding an abstraction layer allows users to configure the message broker service and extend the abstraction layer to fit their needs.

Describe the solution you'd like
Define an abstraction layer as described in the SRS and implement the default messaging broker service, RabbitMQ.

Describe alternatives you've considered
Direct integration with RabbitMQ is possible but would not be easy to switch to another service if needed.

Enable workflow triggering with a patient, a study or a series

Is your feature request related to a problem? Please describe.
There may be times when DICOM data is received in bulk, containing multiple patients, studies, and series. However, oftentimes a model may be designed for only a single study or series. Therefore, MIG must be able to sort and group the incoming data and trigger workflows based on the user's configuration.

Describe the solution you'd like
Allow users to group data and submit them by patient, study, or series for DICOM files.
If grouping is set to study and 3 studies are received, then 3 jobs are triggered.
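
A sketch of the grouping step, assuming instances have already been parsed with fo-dicom; each resulting group would trigger one workflow request:

using System.Collections.Generic;
using System.Linq;
using FellowOakDicom;

public enum DataGrouping { Patient, Study, Series }

public static class PayloadGrouper
{
    // Groups received datasets so each group triggers one workflow request;
    // e.g. grouping by Study over 3 received studies yields 3 jobs.
    public static IEnumerable<IGrouping<string, DicomDataset>> Group(
        IEnumerable<DicomDataset> datasets, DataGrouping grouping) =>
        datasets.GroupBy(ds => grouping switch
        {
            DataGrouping.Patient => ds.GetSingleValueOrDefault(DicomTag.PatientID, string.Empty),
            DataGrouping.Study => ds.GetSingleValueOrDefault(DicomTag.StudyInstanceUID, string.Empty),
            _ => ds.GetSingleValueOrDefault(DicomTag.SeriesInstanceUID, string.Empty),
        });
}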

Describe alternatives you've considered
The other option is to have Workflow Manager handle the data grouping.

Enable logging to ECS

Is your feature request related to a problem? Please describe.

To aggregate logs from all MONAI Deploy services, MIG shall provide a mechanism to deliver logs to external services.

Describe the solution you'd like

An option to deliver logs to ECS

Investigation: Performance with uploading instances to MinIO

Description

In one environment, we are currently seeing that saving a study to MinIO takes around 1 second per slice. This ticket tracks the investigation; it is not yet clear where the issue lies.

Steps to reproduce

  1. Deploy MIG and MinIO to an environment
  2. Send a study to benchmark

Expected behavior

Study is uploaded to storage within an acceptable amount of time

Actual behavior

In some cases, a study takes more than 10 minutes to save.

HL7v2 MLLP Server

An HL7v2 MLLP listener that receives & stores HL7 messages for triggering a workflow. A framing sketch follows the acceptance criteria below.

Tasks

  • HL7v2 MLLP Listener (#103)
  • Notifies payload assembly & uploads messages to storage backend
  • Integration test
  • User documentation

Acceptance Criteria

  • Ability to receive HL7v2 MLLP messages on the specified port
  • Ability to receive single or multiple messages on a single connection
  • Ability to send acknowledgment per message received
  • Allow concurrent connections and ability to control maximum number of connections
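
MLLP framing itself is simple: each message is preceded by a vertical-tab start byte (0x0B) and terminated by a file-separator byte (0x1C) followed by a carriage return (0x0D). A minimal framing sketch, with connection handling and buffering across reads simplified:

using System.Collections.Generic;
using System.Text;

public static class MllpFraming
{
    private const byte StartBlock = 0x0B;          // <VT>
    private const byte EndBlock = 0x1C;            // <FS>
    private const byte CarriageReturn = 0x0D;      // <CR>

    // Extracts complete HL7 messages from a raw byte buffer; multiple
    // messages may arrive on a single connection.
    public static IEnumerable<string> ExtractMessages(byte[] buffer, int length)
    {
        var start = -1;
        for (var i = 0; i < length; i++)
        {
            if (buffer[i] == StartBlock)
            {
                start = i + 1;
            }
            else if (buffer[i] == EndBlock && i + 1 < length && buffer[i + 1] == CarriageReturn && start >= 0)
            {
                yield return Encoding.UTF8.GetString(buffer, start, i - start);
                start = -1;
            }
        }
    }
}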

Collect Requirements to make IG compatible with other MONAI projects

Is your feature request related to a problem? Please describe.
There are other projects in the MONAI community that consume DICOM data but do not currently have a compatible DICOM listener.

Describe the solution you'd like
Provide an extensible architecture in the current IG design so other MONAI projects may reuse IG to accept DICOM instances over DIMSE plus other protocols.

Failure to upload to storage service

Description

The upload service occasionally fails to upload data received via C-STORE.

<4> 10:35:18 Monai.Deploy.InformaticsGateway.Services.Storage.ObjectUploadService[4000] => File ID=1.2.826.0.1.3680043.2.1125.1.19616861412188316212577695277886020/1.2.826.0.1.3680043.2.1125.1.34918616334750294149839565085991567/1.2.826.0.1.3680043.2.1125.1.68845315069119806578612768053737857, Correlation ID=8c3a300c-aca0-45fd-9907-4b8f77f6f4c1 Failed to upload file 1.2.826.0.1.3680043.2.1125.1.19616861412188316212577695277886020/1.2.826.0.1.3680043.2.1125.1.34918616334750294149839565085991567/1.2.826.0.1.3680043.2.1125.1.68845315069119806578612768053737857; added back to queue for retry. System.NullReferenceException: Object reference not set to an instance of an object.
   at Minio.MinioClient.PutObjectAsync(PutObjectArgs args, CancellationToken cancellationToken) in /root/.q/sources/minio-dotnet/Minio/ApiEndpoints/ObjectOperations.cs:line 710
   at Monai.Deploy.Storage.MinIO.MinIoStorageService.PutObjectUsingClient(MinioClient client, String bucketName, String objectName, Stream data, Int64 size, String contentType, Dictionary`2 metadata, CancellationToken cancellationToken)
   at Monai.Deploy.Storage.MinIO.MinIoStorageService.PutObjectAsync(String bucketName, String objectName, Stream data, Int64 size, String contentType, Dictionary`2 metadata, CancellationToken cancellationToken)
   at Monai.Deploy.InformaticsGateway.Services.Storage.ObjectUploadService.<>c__DisplayClass22_0.<<UploadData>b__1>d.MoveNext() in /app/src/InformaticsGateway/Services/Storage/ObjectUploadService.cs:line 226
   --- End of stack trace from previous location ---
   at Polly.AsyncPolicy.<>c__DisplayClass40_0.<<ImplementationAsync>b__0>d.MoveNext()
   --- End of stack trace from previous location ---
   at Polly.Retry.AsyncRetryEngine.ImplementationAsync[TResult](Func`3 action, Context context, CancellationToken cancellationToken, ExceptionPredicates shouldRetryExceptionPredicates, ResultPredicates`1 shouldRetryResultPredicates, Func`5 onRetryAsync, Int32 permittedRetryCount, IEnumerable`1 sleepDurationsEnumerable, Func`4 sleepDurationProvider, Boolean continueOnCapturedContext)
   at Polly.AsyncPolicy.ExecuteAsync(Func`3 action, Context context, CancellationToken cancellationToken, Boolean continueOnCapturedContext)
   at Monai.Deploy.InformaticsGateway.Services.Storage.ObjectUploadService.UploadData(String identifier, StorageObjectMetadata storageObjectMetadata, String source, List`1 workflows, CancellationToken cancellationToken) in /app/src/InformaticsGateway/Services/Storage/ObjectUploadService.cs:line 228
   at Monai.Deploy.InformaticsGateway.Services.Storage.ObjectUploadService.ProcessObject(FileStorageMetadata blob) in /app/src/InformaticsGateway/Services/Storage/ObjectUploadService.cs:line 179

Steps to reproduce

  1. C-STORE a DICOM dataset
  2. Review the logs
    ...

Expected behavior

All files are uploaded to storage service

Actual behavior

One or more files failed to upload

Need support for Google Cloud

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

Describe the solution you'd like
A clear and concise description of what you want to happen.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

Allow processing of data before export

Sometimes an export destination will require data to undergo some processing before it is sent. It would be useful to be able to write plug-ins that can edit export data before it is sent to the export destination. It should then be possible to configure which plug-ins are active for which export destination (and maybe data type).

Note: it is important that the plug-in has access, as part of the same instantiation, to all data that needs to be exported as part of the export request. This is so that it can use the same study ID for all items, the same series ID for all items within each series, and so on.

Need
This is required because the internal PACS system used within AIDE requires IDs to be randomised, so that different executions that send different data for the same patient don't end up being aggregated.

Describe alternatives you've considered
There are two other options:

  1. Doing this on the receiving side. For AIDE, this means having a DICOM proxy between the Informatics Gateway and the internal PACS system. This would be less effective: it would duplicate a lot of effort, or compromise quality, because we would lose the retry and failure-handling functionality that the Informatics Gateway already has.
  2. Doing this within the workflow. This means creating another task plug-in that can edit the data in this way. The downside of this approach is that all data would need to be copied, causing a duplication of data; in the case of large series, this will be significant. The internal AIDE PACS system expects to receive the original data as well as any output from AI applications, meaning that a lot of data would be duplicated.

The benefit of doing this in the Informatics Gateway is that it is possible to edit the DICOM series while they are already in memory, between reading them from disk and sending them to the DICOM destination. This will offer the best performance.
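
A sketch of what such a plug-in contract could look like; the names are illustrative, and the key point is that a single plug-in invocation sees the whole export request so regenerated UIDs stay consistent across all items:

using System.Collections.Generic;
using System.Threading.Tasks;
using FellowOakDicom;

// Illustrative plug-in contract: one invocation handles the entire export
// request so it can, e.g., randomize StudyInstanceUID once and reuse it
// for every instance belonging to that study.
public interface IOutputDataPlugin
{
    Task<IReadOnlyList<DicomFile>> ExecuteAsync(IReadOnlyList<DicomFile> exportData);
}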

CI: Create Release pipeline

Ahead of the first release, something should be in place to package IG and publish it.
Currently, the GitHub Actions only build and test.

For a decent reference, look at the MONAI Label GitHub Action used to publish.

Ideally, MONAI releases are never done manually.

Use K8s DNS Service name for endpoint in Appsettings.json instead of IP address

Is your feature request related to a problem? Please describe.
The problem with an IP address is that it renders K8s horizontal scaling moot. We would like to make sure all services that provide an endpoint, such as RabbitMQ Cluster and MinIO tenants, can scale up and down. If IG only connects to an IP address, that limits which pod it reaches.

Describe the solution you'd like
The endpoint field in appsettings.json should be in the form <my-svc.my-namespace.svc.cluster-domain.example>, as described in the K8s documentation.

cc @mocsharp

ExportRequestEvent looping

Description

When MIG tries to action an ExportRequestEvent and the destination port is incorrect, the action seems to loop excessively; in addition, other ExportRequestEvents are not picked up from the queue and sit in Ready status.

Steps to reproduce

  1. Register a destination with the correct IP but an incorrect port for the receiving DICOM server
  2. Create a workflow with an export task that references that destination
  3. Send data to MIG

Expected behavior

Task is marked as failed after some attempts to create an association, and the next message on the queue is picked up.

Actual behavior

Task is still in the Dispatched state, and the logs show the association attempt repeating excessively.

Configuration

Regression?

Other information

AD shared the workflow and logs. @mocsharp one for us to test next week to investigate.

DICOM SCU - C-FIND/C-MOVE Support

Is your feature request related to a problem? Please describe.
In order to retrieve DICOM studies triggered by an HL7/FHIR order, IG needs to be able to retrieve DICOM studies from other DICOM devices.

Describe the solution you'd like
Implement C-FIND and C-MOVE services.

Describe alternatives you've considered

Additional context
Meeting action from February 8, 2022

MIG gives null pointer exception and does not retry when MinIO is down

Description

We currently have an e2e test for resiliency that requires briefly putting MinIO down and seeing how the retry is handled. Right now, MIG seems to throw an exception and then not retry.

Steps to reproduce

  1. Ensure all required containers are up minus MinIO
  2. Send a DICOM file through MIG

Expected behavior

MIG would not be able to proceed but would retry the request a configurable number of times with a configurable timeout

Actual behavior

The exception below is thrown and the payload is deleted instantly

2022-09-29T10:26:55.600071009Z [INFO] ORTHANC -> C-Store response [1]: Success
2022-09-29T10:26:55.601679439Z [INFO] ORTHANC <- Association release request
2022-09-29T10:26:55.601900042Z <6> 10:26:55 MONAI[209] => Association=#f1eac5db-c506-4857-bf62-68e9dd8c24b1 10.233.119.56:58240 Association release request received.
2022-09-29T10:26:55.602642291Z [INFO] ORTHANC -> Association release response
2022-09-29T10:26:55.603977867Z [INFO] Connection closed
2022-09-29T10:26:58.361016184Z <4> 10:26:58 Monai.Deploy.InformaticsGateway.Services.Storage.ObjectUploadService[4000] => File ID=1.3.6.1.4.1.5962.99.1.2968617883.1314880426.1493322302363.3.0/1.3.6.1.4.1.5962.99.1.2968617883.1314880426.1493322302363.4.0/1.3.6.1.4.1.5962.99.1.2968617883.1314880426.1493322302363.2.0, Correlation ID=f1eac5db-c506-4857-bf62-68e9dd8c24b1 Failed to upload file 1.3.6.1.4.1.5962.99.1.2968617883.1314880426.1493322302363.3.0/1.3.6.1.4.1.5962.99.1.2968617883.1314880426.1493322302363.4.0/1.3.6.1.4.1.5962.99.1.2968617883.1314880426.1493322302363.2.0; added back to queue for retry. System.NullReferenceException: Object reference not set to an instance of an object.
   at Minio.MinioClient.PutObjectAsync(PutObjectArgs args, CancellationToken cancellationToken) in /root/.q/sources/minio-dotnet/Minio/ApiEndpoints/ObjectOperations.cs:line 710
   at Monai.Deploy.Storage.MinIO.MinIoStorageService.PutObjectUsingClient(MinioClient client, String bucketName, String objectName, Stream data, Int64 size, String contentType, Dictionary`2 metadata, CancellationToken cancellationToken)
   at Monai.Deploy.Storage.MinIO.MinIoStorageService.PutObjectAsync(String bucketName, String objectName, Stream data, Int64 size, String contentType, Dictionary`2 metadata, CancellationToken cancellationToken)
   at Monai.Deploy.InformaticsGateway.Services.Storage.ObjectUploadService.<>c__DisplayClass22_0.<<UploadData>b__1>d.MoveNext() in /app/src/InformaticsGateway/Services/Storage/ObjectUploadService.cs:line 226
   --- End of stack trace from previous location ---
   at Polly.AsyncPolicy.<>c__DisplayClass40_0.<<ImplementationAsync>b__0>d.MoveNext()
   --- End of stack trace from previous location ---
   at Polly.Retry.AsyncRetryEngine.ImplementationAsync[TResult](Func`3 action, Context context, CancellationToken cancellationToken, ExceptionPredicates shouldRetryExceptionPredicates, ResultPredicates`1 shouldRetryResultPredicates, Func`5 onRetryAsync, Int32 permittedRetryCount, IEnumerable`1 sleepDurationsEnumerable, Func`4 sleepDurationProvider, Boolean continueOnCapturedContext)
   at Polly.AsyncPolicy.ExecuteAsync(Func`3 action, Context context, CancellationToken cancellationToken, Boolean continueOnCapturedContext)
   at Monai.Deploy.InformaticsGateway.Services.Storage.ObjectUploadService.UploadData(String identifier, StorageObjectMetadata storageObjectMetadata, String source, List`1 workflows, CancellationToken cancellationToken) in /app/src/InformaticsGateway/Services/Storage/ObjectUploadService.cs:line 228
   at Monai.Deploy.InformaticsGateway.Services.Storage.ObjectUploadService.ProcessObject(FileStorageMetadata blob) in /app/src/InformaticsGateway/Services/Storage/ObjectUploadService.cs:line 179
2022-09-29T10:27:01.079697046Z <3> 10:27:01 Monai.Deploy.InformaticsGateway.Services.Connectors.PayloadAssembler[3014] => Correlation ID=f1eac5db-c506-4857-bf62-68e9dd8c24b1 Payload deleted due to upload failure(s) 1.3.6.1.4.1.5962.99.1.2968617883.1314880426.1493322302363.3.0.

Add ability to update destinations via PUT request

Describe the solution you'd like
Updates to destinations currently have to be made by deleting the existing destination and then adding another with the updated details. It would be ideal if a simple PUT request could handle this as a single operation.

Implement 409 Conflict where AETitle exists - POST /config/ae

Is your feature request related to a problem? Please describe.
For AIDE, it would be useful for the POST /config/ae endpoint to return a 409 Conflict when the AE Title already exists. Otherwise, we wouldn't have a way to check whether it exists aside from making another call.

Describe the solution you'd like
When AETitle exists, a 409 is returned.
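
In ASP.NET Core terms, a sketch of the check the endpoint could perform; the repository interface and entity shape here are hypothetical:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public class MonaiApplicationEntity { public string AeTitle { get; set; } }

public interface IMonaiAeRepository  // hypothetical persistence layer
{
    Task<bool> ExistsAsync(string aeTitle);
    Task AddAsync(MonaiApplicationEntity item);
}

[ApiController]
[Route("config/ae")]
public class AeTitleController : ControllerBase
{
    private readonly IMonaiAeRepository _repository;

    public AeTitleController(IMonaiAeRepository repository) => _repository = repository;

    [HttpPost]
    public async Task<IActionResult> Create(MonaiApplicationEntity item)
    {
        if (await _repository.ExistsAsync(item.AeTitle))
        {
            // Report 409 so callers don't need a separate existence check.
            return Conflict($"AE Title {item.AeTitle} already exists.");
        }
        await _repository.AddAsync(item);
        return CreatedAtAction(nameof(Create), item);
    }
}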

Informatics Gateway CLI

User Story
As a (PACS) administrator, I would like to configure the Informatics Gateway and check its status via a command-line interface so that I don't have to call the RESTful APIs directly.

Background
Informatics Gateway provides RESTful APIs to configure AE Titles, DICOM sources, and destinations, as well as to check the health of the application and its internal services. However, it would be ideal if these tasks could be done via simple commands instead of composing RESTful calls, e.g., using curl.

Success Criteria
The MONAI Informatics Gateway CLI shall support the following tasks:

  • Configure MONAI SCP Application Entities
  • Configure DICOM Sources
  • Configure DICOM Destinations
  • Check application/service health

Commands

# Configures API endpoint
$ mig config --endpoint http://localhost:5000

# Start the MONAI Informatics Gateway with custom configuration
$ mig start 

# Restart the MONAI Informatics Gateway
$ mig restart [-y | --yes]

# Stop the MONAI Informatics Gateway
$ mig stop  [-y | --yes]

# Add (SCP) AE Title (with optional application mapping)
$ mig aet add [-n NAME] -a AE_TITLE [--apps liver,brain,ABC123]

# Delete (SCP) AE Title
$ mig aet rm -n NAME

# List all (SCP) AE Title
$ mig aet ls

# Add DICOM Source for SCP
$ mig source add [-n NAME] -a AE_TITLE -i HOSTNAME_IP

# Delete DICOM Source of SCP
$ mig source rm -n NAME

# List all DICOM Sources of SCP
$ mig source ls 

# Add DICOM Destination for SCU
$ mig dest add [-n NAME] -a AE_TITLE -i HOSTNAME_IP -N NAME -p PORT

# Delete DICOM Destination of SCU
$ mig dest rm -n NAME

# List all DICOM Destinations of SCU
$ mig dest ls

# Get MONAI Informatics Gateway health and status
$ mig status

Allow customizing temporary storage to use storage service

Is your feature request related to a problem? Please describe.
In some deployments, storage space may not be sufficient.

Describe the solution you'd like
Allow users to customize temporary storage similarly to the storage service.

Describe alternatives you've considered
Map a network volume for temporary storage.

Additional context
Meeting action from January 25, 2022.

Requirements

  • Allow users to use the storage service for storing incoming objects before they are assembled by the payload assembler.
  • Users shall be able to configure the bucket and the path for storing temporary objects.
  • The storage service must support moving objects between buckets.

Affected Components: Storage Service, Admin API, IG StorageInfoProvider.

With the change to use the storage service, the Storage Admin API must provide an API to get the available and total disk space so IG can determine whether it shall accept incoming data or whether the data retrieval service shall retrieve data.
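
With MinIO as the default backend, a "move" between buckets would be a server-side copy followed by a delete; a sketch using the minio-dotnet client (argument builder names as in recent minio-dotnet releases; verify against the pinned version):

using System.Threading.Tasks;
using Minio;

public static class StorageMover
{
    // Moves an object from the temporary bucket to its final bucket:
    // MinIO has no native move, so copy server-side, then delete the source.
    public static async Task MoveAsync(MinioClient client,
        string sourceBucket, string destinationBucket, string objectName)
    {
        var source = new CopySourceObjectArgs()
            .WithBucket(sourceBucket)
            .WithObject(objectName);
        await client.CopyObjectAsync(new CopyObjectArgs()
            .WithBucket(destinationBucket)
            .WithObject(objectName)
            .WithCopyObjectSource(source));
        await client.RemoveObjectAsync(new RemoveObjectArgs()
            .WithBucket(sourceBucket)
            .WithObject(objectName));
    }
}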

Generate study & series level JSON files

Is your feature request related to a problem? Please describe.

The Workflow Manager needs the series or the study level DICOM tags available for filtering. However, MIG currently provides only instance-level DICOM models.

Describe the solution you'd like

Provide a [study-instance-uid].json and a [series-instance-uid].json file for each study/series with study/series level tags respectively.

For private tags, we have a few options:

  1. Let the user define and register the private tags at each level so the MIG knows which tags to include at each level.
  2. For each level, scan all private tags; if the values are the same across all instances at that level, assume the tag is associated with that level.
  3. Simply include all private tags at all levels; if values differ, change the VM to multiple.
  4. Output a single JSON that includes all instances for that payload.
  5. Have a workflow step handle the filtering (out of scope)
  6. Read the first JSON file from instance level given that attributes are the same across study-level & series-level. (unless there are multiple series, aka grouping by patient or study)
  7. Parse workflow definition and find out which tags are needed, register them with MIG to parse them.
    e.g.
study-instance-uid-1:
   series-instance-uid-1:
      instance-instance-uid-1:
         slice-thickness: 5mm
   series-instance-uid-2:
      instance-instance-uid-2:
         slice-thickness: 3mm

Configuration

Allow configuration override to turn on/off JSON generation at each level (study, series, instance).

Notes

Even though most DICOM libraries include a list of all DICOM tags/attributes, none of them provides information on which tags are at the patient/study/series levels, so we may need to define a dictionary and allow users to customize it. Moreover, attributes may differ across modalities; e.g., (0012,0050) Clinical Trial Time Point ID is a study-level attribute that exists for CR but not for CT.

The output of the current instance-level JSON files can be configured through DicomConfiguration.WriteDicomJson, with options to export all attributes or only non-binary ones.
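
A sketch of option 1 above: copy only the user-registered study-level tags from the first instance of a study into a study-level dataset, then serialize it with fo-dicom's JSON converter (the tag-registry parameter is the assumption here):

using System.Collections.Generic;
using FellowOakDicom;
using FellowOakDicom.Serialization;

public static class StudyJsonGenerator
{
    // Builds the content of [study-instance-uid].json from registered study-level tags.
    public static string Generate(DicomDataset firstInstance, IEnumerable<DicomTag> studyLevelTags)
    {
        var studyDataset = new DicomDataset();
        foreach (var tag in studyLevelTags)
        {
            if (firstInstance.Contains(tag))
            {
                studyDataset.Add(firstInstance.GetDicomItem<DicomItem>(tag));
            }
        }
        return DicomJson.ConvertDicomToJson(studyDataset);
    }
}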

Support DICOMweb (STOW-RS) as a server

Is your feature request related to a problem? Please describe.
DICOMweb is the DICOM standard for web-based medical imaging. It provides a set of RESTful services that allow users & devices to access healthcare images over the web, easing integration for on-prem, cloud, and especially hybrid deployments.

Describe the solution you'd like
Provide STOW-RS functionalities for triggering workflows.

Design

The STOW-RS (RESTful) service in MIG will be implemented based on the DICOM specifications to enable storing DICOM instances for inference workloads. Moreover, the service will provide a couple of base URIs to enable different use cases, e.g., associating the dataset with a workflow.

Base URI / Endpoints

  • POST /dicomweb/studies
  • POST /dicomweb/workflow-name/studies

The [workflow-name] segment is optional; when provided, MIG issues an md.workflow.request with the provided workflow name. Otherwise, the Workflow Manager will figure out which workflow(s) to trigger.

Content Type

The initial implementation will support multipart/related; type=application/dicom, where each part in the multipart body represents a DICOM SOP Instance with the content-type application/dicom in its HTTP headers.

Assumptions

For this release, we assume the entire DICOM dataset is pushed to MIG in a single POST request. MIG will not provide the option to wait for additional instances like the C-STORE service.
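
A sketch of the endpoint shape in ASP.NET Core; multipart parsing and workflow dispatch are elided, and the route attributes simply mirror the base URIs above:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

[ApiController]
public class StowController : ControllerBase
{
    // POST /dicomweb/studies and POST /dicomweb/{workflowName}/studies
    [HttpPost("dicomweb/studies")]
    [HttpPost("dicomweb/{workflowName}/studies")]
    public IActionResult StoreInstances(string workflowName = null)
    {
        // Only multipart/related; type=application/dicom is supported initially.
        if (Request.ContentType is null || !Request.ContentType.StartsWith("multipart/related"))
        {
            return StatusCode(StatusCodes.Status415UnsupportedMediaType);
        }

        // ... read each application/dicom part, store it, then publish an
        // md.workflow.request (carrying workflowName when one was provided) ...
        return Ok();
    }
}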

Make retry logic configurable

Is your feature request related to a problem? Please describe.
MIG performs retries on operations that are likely to fail, especially when interacting with external systems/services. MIG already retries upon any failure, but it would be nice if users could define their own retry policy.

Describe the solution you'd like
Allow users to define the number of retries and how long to wait between each retry.
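
Polly, which MIG already uses for retries (see the stack traces elsewhere on this page), makes this straightforward; a sketch where both knobs come from user configuration:

using System;
using Polly;
using Polly.Retry;

public static class RetryPolicyFactory
{
    // Builds a retry policy from user configuration: how many retries and
    // how long to wait between attempts.
    public static AsyncRetryPolicy Create(int retryCount, TimeSpan delay) =>
        Policy
            .Handle<Exception>()
            .WaitAndRetryAsync(retryCount, _ => delay);
}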

Describe alternatives you've considered
NA

Auth & LDAP integration

Is your feature request related to a problem? Please describe.

  • Access to the APIs and the use of the IG CLI do not currently require authentication.
  • Access to MinIO & RabbitMQ does not currently require authentication.
  • RabbitMQ integration currently requires users to configure a user account with the service, and the credentials are stored in clear text in the config file.

Describe the solution you'd like
Access to the APIs provided by IG shall require authentication and shall authenticate users with LDAP.
Access to MinIO & RabbitMQ shall be authenticated and the services shall be configured to use specified credentials.

The ideal scenario is having Workflow Manager integrate with the LDAP server and let the users authenticate to retrieve a token to access IG-provided APIs.

Support FHIR as a server

Is your feature request related to a problem? Please describe.
A method to receive FHIR resources to trigger workflows is required, given that models requiring both DICOM & EHR data are on the rise. E.g., the EXAM model takes both DICOM & EHR data to predict the likelihood that a person showing up in the emergency room will need supplemental oxygen.

Describe the solution you'd like
An FHIR server that accepts & stores FHIR resources, as is, in either JSON or XML (based on the incoming payload).
Provide minimal functions for reading required data fields, e.g., the ID of the resource, patient ID, etc., for cross-reference only.

Tasks

  • FHIR listener #118
  • Unit test
  • Integration test
  • Documentation
  • Changelog

Improve upload speed

Is your feature request related to a problem? Please describe.
MIG uploads files to the storage service one at a time, which can be slow.

Describe the solution you'd like
Enable concurrent uploads and let the user configure the maximum number of concurrent uploads.
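
A sketch of bounded concurrency using SemaphoreSlim, with the maximum taken from user configuration; the uploadAsync delegate stands in for the actual storage call:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static class ConcurrentUploader
{
    // Uploads files concurrently, bounded by maxConcurrentUploads.
    public static async Task UploadAllAsync(
        IEnumerable<string> files,
        Func<string, Task> uploadAsync,
        int maxConcurrentUploads)
    {
        using var gate = new SemaphoreSlim(maxConcurrentUploads);
        var tasks = files.Select(async file =>
        {
            await gate.WaitAsync();
            try { await uploadAsync(file); }
            finally { gate.Release(); }
        });
        await Task.WhenAll(tasks);
    }
}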

Simple HL7 Interface for orders

The use case would be simple HL7 order messages being sent by legacy systems. For example, an order can identify the imaging study as well as the AI algorithm that needs to be executed (on the study).

Hence, an order-based workflow to run AI algorithms could be connected to legacy systems that might lack FHIR capabilities.

When RabbitMQ goes down, a DICOM file is sent, and RabbitMQ comes back up, MIG cannot reconnect

Description

I have been testing resiliency; one of the tests is to temporarily disconnect RabbitMQ. The behaviour while it is disconnected is correct, but when I bring RabbitMQ back up, MIG cannot reconnect to it.

Steps to reproduce

  1. Have MIG/MWM and all required containers running
  2. Disconnect RabbitMQ
  3. Try to send a DICOM file through the system
  4. Reconnect RabbitMQ
  5. Try to send a DICOM file through the system

Expected behavior

The DICOM file would go through the workflow as it did before RabbitMQ was disconnected

Actual behavior

MIG complains that it cannot connect to RabbitMQ, and the payload is deleted

Other information

Weirdly, this seems to happen only if a DICOM file is sent whilst RabbitMQ is offline.

2022-09-29T15:39:52.436062946Z <4> 15:39:52 Monai.Deploy.InformaticsGateway.Services.Connectors.PayloadNotificationActionHandler[706] => Payload=faa61cbd-5a5c-4a7a-9c6b-578a2be600ac, Correlation ID=c9bf0c3b-e9d8-49ae-ad53-3346f436febf => Payload=faa61cbd-5a5c-4a7a-9c6b-578a2be600ac, Correlation ID=c9bf0c3b-e9d8-49ae-ad53-3346f436febf => Payload=faa61cbd-5a5c-4a7a-9c6b-578a2be600ac, Correlation ID=c9bf0c3b-e9d8-49ae-ad53-3346f436febf Failed to publish workflow request for payload faa61cbd-5a5c-4a7a-9c6b-578a2be600ac; added back to queue for retry. RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the specified endpoints were reachable ---> System.AggregateException: One or more errors occurred. (Connection failed) ---> RabbitMQ.Client.Exceptions.ConnectFailureException: Connection failed ---> System.Net.Sockets.SocketException (111): Connection refused
   at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error, CancellationToken cancellationToken)
   at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.System.Threading.Tasks.Sources.IValueTaskSource.GetResult(Int16 token)
   at System.Threading.Tasks.ValueTask.ValueTaskSourceAsTask.<>c.<.cctor>b__4_0(Object state)
   --- End of stack trace from previous location ---
   at RabbitMQ.Client.Impl.TcpClientAdapter.ConnectAsync(String host, Int32 port)
   at RabbitMQ.Client.Impl.TaskExtensions.TimeoutAfter(Task task, TimeSpan timeout)
   at RabbitMQ.Client.Impl.SocketFrameHandler.ConnectOrFail(ITcpClient socket, AmqpTcpEndpoint endpoint, TimeSpan timeout)
   --- End of inner exception stack trace ---
   at RabbitMQ.Client.Impl.SocketFrameHandler.ConnectOrFail(ITcpClient socket, AmqpTcpEndpoint endpoint, TimeSpan timeout)
   at RabbitMQ.Client.Impl.SocketFrameHandler.ConnectUsingAddressFamily(AmqpTcpEndpoint endpoint, Func`2 socketFactory, TimeSpan timeout, AddressFamily family)
   at RabbitMQ.Client.Impl.SocketFrameHandler.ConnectUsingIPv4(AmqpTcpEndpoint endpoint, Func`2 socketFactory, TimeSpan timeout)
   at RabbitMQ.Client.Impl.SocketFrameHandler..ctor(AmqpTcpEndpoint endpoint, Func`2 socketFactory, TimeSpan connectionTimeout, TimeSpan readTimeout, TimeSpan writeTimeout)
   at RabbitMQ.Client.Framing.Impl.IProtocolExtensions.CreateFrameHandler(IProtocol protocol, AmqpTcpEndpoint endpoint, ArrayPool`1 pool, Func`2 socketFactory, TimeSpan connectionTimeout, TimeSpan readTimeout, TimeSpan writeTimeout)
   at RabbitMQ.Client.ConnectionFactory.CreateFrameHandler(AmqpTcpEndpoint endpoint)
   at RabbitMQ.Client.EndpointResolverExtensions.SelectOne[T](IEndpointResolver resolver, Func`2 selector)
   --- End of inner exception stack trace ---
   at RabbitMQ.Client.EndpointResolverExtensions.SelectOne[T](IEndpointResolver resolver, Func`2 selector)
   at RabbitMQ.Client.Framing.Impl.AutorecoveringConnection.Init(IEndpointResolver endpoints)
   at RabbitMQ.Client.ConnectionFactory.CreateConnection(IEndpointResolver endpointResolver, String clientProvidedName)
   --- End of inner exception stack trace ---
   at RabbitMQ.Client.ConnectionFactory.CreateConnection(IEndpointResolver endpointResolver, String clientProvidedName)
   at RabbitMQ.Client.ConnectionFactory.CreateConnection(String clientProvidedName)
   at RabbitMQ.Client.ConnectionFactory.CreateConnection()
   at Monai.Deploy.Messaging.RabbitMQ.RabbitMQConnectionFactory.<>c__DisplayClass6_0.<CreatConnection>b__1()
   at System.Lazy`1.ViaFactory(LazyThreadSafetyMode mode)
   --- End of stack trace from previous location ---
   at System.Lazy`1.CreateValue()
   at Monai.Deploy.Messaging.RabbitMQ.RabbitMQConnectionFactory.<>c__DisplayClass5_0.<CreateChannel>b__1(String updateKey, Lazy`1 updateConnection)
   at System.Collections.Concurrent.ConcurrentDictionary`2.AddOrUpdate(TKey key, Func`2 addValueFactory, Func`3 updateValueFactory)
   at Monai.Deploy.Messaging.RabbitMQ.RabbitMQConnectionFactory.CreateChannel(String hostName, String username, String password, String virtualHost, String useSSL, String portNumber)
   at Monai.Deploy.Messaging.RabbitMQ.RabbitMQMessagePublisherService.Publish(String topic, Message message)
   at Monai.Deploy.InformaticsGateway.Services.Connectors.PayloadNotificationActionHandler.NotifyPayloadReady(Payload payload) in /app/src/InformaticsGateway/Services/Connectors/PayloadNotificationActionHandler.cs:line 137
   at Monai.Deploy.InformaticsGateway.Services.Connectors.PayloadNotificationActionHandler.NotifyAsync(Payload payload, ActionBlock`1 notificationQueue, CancellationToken cancellationToken) in /app/src/InformaticsGateway/Services/Connectors/PayloadNotificationActionHandler.cs:line 76
2022-09-29T15:39:52.441135463Z <6> 15:39:52 Monai.Deploy.InformaticsGateway.Services.Connectors.PayloadNotificationActionHandler[711] => Payload=faa61cbd-5a5c-4a7a-9c6b-578a2be600ac, Correlation ID=c9bf0c3b-e9d8-49ae-ad53-3346f436febf => Payload=faa61cbd-5a5c-4a7a-9c6b-578a2be600ac, Correlation ID=c9bf0c3b-e9d8-49ae-ad53-3346f436febf => Payload=faa61cbd-5a5c-4a7a-9c6b-578a2be600ac, Correlation ID=c9bf0c3b-e9d8-49ae-ad53-3346f436febf Publishing workflow request message ID=f754afd5-08fd-4828-bff8-bcec6f07010f...
2022-09-29T15:39:52.442214223Z <6> 15:39:52 Monai.Deploy.Messaging.RabbitMQ.RabbitMQMessagePublisherService[10000] => Payload=faa61cbd-5a5c-4a7a-9c6b-578a2be600ac, Correlation ID=c9bf0c3b-e9d8-49ae-ad53-3346f436febf => Payload=faa61cbd-5a5c-4a7a-9c6b-578a2be600ac, Correlation ID=c9bf0c3b-e9d8-49ae-ad53-3346f436febf => Payload=faa61cbd-5a5c-4a7a-9c6b-578a2be600ac, Correlation ID=c9bf0c3b-e9d8-49ae-ad53-3346f436febf => Message ID=f754afd5-08fd-4828-bff8-bcec6f07010f. Application ID=16988a78-87b5-4168-a5c3-2cfc2bab8e54. Publishing message to rabbitmq-monai/monaideploy. Exchange=monaideploy, Routing Key=md.workflow.request.
2022-09-29T15:39:52.442280732Z <3> 15:39:52 Monai.Deploy.InformaticsGateway.Services.Connectors.PayloadNotificationActionHandler[728] => Payload=faa61cbd-5a5c-4a7a-9c6b-578a2be600ac, Correlation ID=c9bf0c3b-e9d8-49ae-ad53-3346f436febf => Payload=faa61cbd-5a5c-4a7a-9c6b-578a2be600ac, Correlation ID=c9bf0c3b-e9d8-49ae-ad53-3346f436febf => Payload=faa61cbd-5a5c-4a7a-9c6b-578a2be600ac, Correlation ID=c9bf0c3b-e9d8-49ae-ad53-3346f436febf Reached maximum number of retries for notifying payload faa61cbd-5a5c-4a7a-9c6b-578a2be600ac ready, giving up.
2022-09-29T15:39:52.474908911Z <6> 15:39:52 Monai.Deploy.InformaticsGateway.Services.Connectors.PayloadNotificationActionHandler[720] => Payload=faa61cbd-5a5c-4a7a-9c6b-578a2be600ac, Correlation ID=c9bf0c3b-e9d8-49ae-ad53-3346f436febf => Payload=faa61cbd-5a5c-4a7a-9c6b-578a2be600ac, Correlation ID=c9bf0c3b-e9d8-49ae-ad53-3346f436febf => Payload=faa61cbd-5a5c-4a7a-9c6b-578a2be600ac, Correlation ID=c9bf0c3b-e9d8-49ae-ad53-3346f436febf Payload faa61cbd-5a5c-4a7a-9c6b-578a2be600ac deleted.

Test Verification System

As a software engineer, I would like to create an automated test suite that tests all available features so we can reduce the amount of manual testing.

The test suite shall be able to:

  • launch RabbitMQ
  • launch MinIO
  • utilize a test agent that
    • sends DIMSE C-STORE requests to IG
    • confirms workflow request events
    • sends ACR inference requests to IG
    • issues export requests to the SCU export & DICOMweb export services
    • verifies stored files

The automated test suite should be hosted in an environment where the community can see the test reports/results.

Extra DICOM metadata as JSON for storage

Is your feature request related to a problem? Please describe.
By extracting DICOM headers as JSON, the Workflow Manager could reduce the amount of data being loaded when doing DICOM header-based data filtering.

Describe the solution you'd like
Extract the DICOM headers into JSON format, using the same filename schemes, and store the JSON next to the original DICOM file.

Describe alternatives you've considered
Store them in the database; however, IG & WM may be using different database solutions, and it's better to use separate databases.

Additional context
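
With fo-dicom (which IG already uses), the extraction step could look roughly like this; writing the sidecar as <file>.json and dropping pixel data are assumptions for illustration:

using System.IO;
using FellowOakDicom;
using FellowOakDicom.Serialization;

public static class DicomJsonExtractor
{
    // Writes <file>.json next to the original DICOM file so the Workflow
    // Manager can filter on headers without loading the full dataset.
    public static void WriteSidecarJson(string dicomFilePath)
    {
        var dataset = DicomFile.Open(dicomFilePath).Dataset.Clone();
        dataset.Remove(DicomTag.PixelData);  // skip bulk data in the sidecar
        File.WriteAllText(dicomFilePath + ".json", DicomJson.ConvertDicomToJson(dataset));
    }
}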
