medical_CLIP

This medical Contrastive Language-Image Pre-Training (CLIP) model will be used on the MIMIC dataset to predict classes.

lstm_clip

  1. Define our custom image and text encoders. In this example, we will use ResNet50 for image encoding and a simple LSTM for text encoding (see the first sketch after this list). These encoders will be passed to the CLIP model later.

  2. Load our dataset. In this example, we will use CIFAR10.

  3. Initialize our model with the custom encoders. Here we initialize the CLIP model with the image and text encoders defined in step 1.

  4. Define our loss function and optimizer. In this example, we will use the contrastive loss function and AdamW optimizer.

  5. Train our model. In this example, we will train the model for 10 epochs.

  6. In the training code, after each epoch we evaluate the model on the validation set and compute the F1 score (see the training sketch after this list). We set the model to evaluation mode with model.eval(), loop over the validation set, and compute the cosine similarity between the image and text embeddings. We then threshold the similarities at 0.5 to obtain binary predictions and compute the F1 score from these predictions and the ground-truth labels. Finally, we set the model back to training mode with model.train().
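
A minimal sketch of what steps 1 and 3 could look like in PyTorch. The class names (`ImageEncoder`, `TextEncoder`, `CLIPModel`) and the `embed_dim`/`vocab_size` parameters are illustrative assumptions, not necessarily the repository's actual API:

```python
import torch.nn as nn
from torchvision import models


class ImageEncoder(nn.Module):
    """ResNet50 backbone whose classifier head is replaced by a projection."""

    def __init__(self, embed_dim=512):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
        self.backbone = backbone

    def forward(self, images):
        return self.backbone(images)


class TextEncoder(nn.Module):
    """Simple LSTM encoder: embed tokens, run the LSTM, project the last hidden state."""

    def __init__(self, vocab_size, embed_dim=512, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden_dim)
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, embed_dim)

    def forward(self, token_ids):
        _, (hidden, _) = self.lstm(self.embedding(token_ids))
        return self.proj(hidden[-1])


class CLIPModel(nn.Module):
    """Pairs the two encoders and L2-normalizes their embeddings."""

    def __init__(self, image_encoder, text_encoder):
        super().__init__()
        self.image_encoder = image_encoder
        self.text_encoder = text_encoder

    def forward(self, images, token_ids):
        image_emb = nn.functional.normalize(self.image_encoder(images), dim=-1)
        text_emb = nn.functional.normalize(self.text_encoder(token_ids), dim=-1)
        return image_emb, text_emb
```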
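
And a training sketch covering steps 2 and 4–6, using synthetic tensors in place of the real CIFAR10/MIMIC loaders so it runs standalone. The symmetric cross-entropy contrastive loss and the 0.5 similarity threshold follow the description above; the temperature value and batch shapes are assumptions:

```python
import torch
import torch.nn.functional as F
from sklearn.metrics import f1_score
from torch.utils.data import DataLoader, TensorDataset


def make_loader(n=32, batch_size=8, vocab_size=10_000):
    # Synthetic stand-in batches; real code would wrap CIFAR10 / MIMIC here.
    images = torch.randn(n, 3, 224, 224)
    tokens = torch.randint(0, vocab_size, (n, 20))
    labels = torch.randint(0, 2, (n,))
    return DataLoader(TensorDataset(images, tokens, labels), batch_size=batch_size)


def contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Symmetric InfoNCE: matching image/text pairs lie on the diagonal.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2


train_loader, val_loader = make_loader(), make_loader()
model = CLIPModel(ImageEncoder(), TextEncoder(vocab_size=10_000))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for epoch in range(10):
    model.train()
    for images, token_ids, _ in train_loader:
        loss = contrastive_loss(*model(images, token_ids))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Step 6: validation F1 from cosine similarities thresholded at 0.5.
    model.eval()
    preds, truths = [], []
    with torch.no_grad():
        for images, token_ids, labels in val_loader:
            image_emb, text_emb = model(images, token_ids)
            sims = F.cosine_similarity(image_emb, text_emb, dim=-1)
            preds.extend((sims > 0.5).long().cpu().tolist())
            truths.extend(labels.cpu().tolist())
    print(f"epoch {epoch}: val F1 = {f1_score(truths, preds):.3f}")
```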

Roberta_clip_optuna

To use the Roberta model for text encoding and add a hyperparameter search using Optuna, we proceed as follows:

  1. Define our custom image and text encoders. In this example, we will use ResNet50 for image encoding and Roberta for text encoding (see the encoder sketch after this list). These encoders will be passed to the CLIP model later.

  2. Load our dataset. In this example, we will use CIFAR10.

  3. Initialize our model with the custom encoders. Here we initialize our CLIP model with our custom image and text encoders.

  4. Define our loss function and optimizer. In this example, we will use the contrastive loss function and the AdamW optimizer.

  5. Define a function to train the model with an Optuna hyperparameter search (see the Optuna sketch after this list). We will search over the number of epochs, the learning rate, and the batch size.

  6. Inside the training loop, we will train the model using the current hyperparameters.

  7. At the end of each epoch, we will compute the validation loss and return it as the objective value for Optuna to minimize.

  8. Run the Optuna hyperparameter search.

  9. Get the best hyperparameters found by Optuna and retrain the model using those hyperparameters.

  10. Finally, we calculate the F1 score on the test set by comparing the binary predictions to the ground-truth labels with the compute_f1_score function (sketched after this list). Note that the predictions and labels must be moved back to the CPU before computing the F1 score.
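
A sketch of the Roberta text encoder from step 1, assuming the Hugging Face transformers package; the roberta-base checkpoint and first-token pooling are assumptions:

```python
import torch.nn as nn
from transformers import RobertaModel


class RobertaTextEncoder(nn.Module):
    """Roberta backbone with a linear projection into the shared embedding space."""

    def __init__(self, embed_dim=512):
        super().__init__()
        self.roberta = RobertaModel.from_pretrained("roberta-base")
        self.proj = nn.Linear(self.roberta.config.hidden_size, embed_dim)

    def forward(self, input_ids, attention_mask=None):
        out = self.roberta(input_ids=input_ids, attention_mask=attention_mask)
        # Pool with the first (<s>) token, Roberta's analogue of [CLS].
        return self.proj(out.last_hidden_state[:, 0])
```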
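
Steps 5–9 might then look like the following, reusing `CLIPModel`, `contrastive_loss`, and `make_loader` from the earlier sketches; the search ranges and trial count are illustrative, not the repository's settings:

```python
import optuna
import torch


def objective(trial):
    # Step 5: hyperparameters to search over.
    lr = trial.suggest_float("lr", 1e-5, 1e-3, log=True)
    batch_size = trial.suggest_categorical("batch_size", [16, 32, 64])
    epochs = trial.suggest_int("epochs", 3, 10)

    model = CLIPModel(ImageEncoder(), RobertaTextEncoder())
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    train_loader = make_loader(batch_size=batch_size)
    val_loader = make_loader(batch_size=batch_size)

    # Step 6: train with the current trial's hyperparameters.
    for _ in range(epochs):
        model.train()
        for images, token_ids, _ in train_loader:
            loss = contrastive_loss(*model(images, token_ids))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    # Step 7: the mean validation loss is the value Optuna minimizes.
    model.eval()
    val_loss = 0.0
    with torch.no_grad():
        for images, token_ids, _ in val_loader:
            val_loss += contrastive_loss(*model(images, token_ids)).item()
    return val_loss / len(val_loader)


# Steps 8-9: run the search, then read off the best hyperparameters.
study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
print(study.best_params)
```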
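
The repository's compute_f1_score is not reproduced here; a version consistent with step 10's description (threshold the similarities, then move predictions and labels to the CPU before scoring) could look like:

```python
import torch
import torch.nn.functional as F
from sklearn.metrics import f1_score


def compute_f1_score(model, test_loader, threshold=0.5):
    """Threshold cosine similarities, move everything to the CPU, then score."""
    model.eval()
    preds, truths = [], []
    with torch.no_grad():
        for images, token_ids, labels in test_loader:
            image_emb, text_emb = model(images, token_ids)
            sims = F.cosine_similarity(image_emb, text_emb, dim=-1)
            # Tensors may live on the GPU; f1_score needs CPU values.
            preds.extend((sims > threshold).long().cpu().tolist())
            truths.extend(labels.cpu().tolist())
    return f1_score(truths, preds)
```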

Stacking models

In stack_model.py, we define a new ImageEncoder class that loads ResNet152, ResNet18, and DenseNet from the torchvision.models module, removes their last fully connected layer, and concatenates their features, as sketched below. We freeze all parameters of these models so they are not updated during training.
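
A sketch of that idea, named StackedImageEncoder here to avoid clashing with the earlier ResNet50 encoder. The DenseNet variant (densenet121) and the trainable projection onto the shared embedding size are assumptions; stack_model.py may differ in detail:

```python
import torch
import torch.nn as nn
from torchvision import models


class StackedImageEncoder(nn.Module):
    """Concatenates features from frozen ResNet152, ResNet18, and DenseNet backbones."""

    def __init__(self, embed_dim=512):
        super().__init__()
        resnet152 = models.resnet152(weights=models.ResNet152_Weights.DEFAULT)
        resnet18 = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        densenet = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)

        feat_dim = (resnet152.fc.in_features + resnet18.fc.in_features
                    + densenet.classifier.in_features)

        # Remove the final fully connected / classifier layers.
        resnet152.fc = nn.Identity()
        resnet18.fc = nn.Identity()
        densenet.classifier = nn.Identity()
        self.backbones = nn.ModuleList([resnet152, resnet18, densenet])

        # Freeze the pretrained backbones so they are not updated during training.
        for param in self.backbones.parameters():
            param.requires_grad = False

        # Assumed projection of the concatenated features to the CLIP embedding size.
        self.proj = nn.Linear(feat_dim, embed_dim)

    def forward(self, images):
        features = torch.cat([backbone(images) for backbone in self.backbones], dim=-1)
        return self.proj(features)
```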

We can reuse the same dataset and the Roberta text encoder as before, together with the same training and evaluation code, with the new ImageEncoder class.

To do so, import stack_model.ImageEncoder and replace the image_encoder in roberta_clip_optuna.py (line 63) so that it uses the stacking model instead of ResNet50.
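
A hypothetical swap, assuming the variable names in roberta_clip_optuna.py match the earlier sketches:

```python
from stack_model import ImageEncoder  # the stacking encoder

model = CLIPModel(ImageEncoder(), RobertaTextEncoder())
```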
