
atten_deep_mil's Introduction

utayao.github.io

atten_deep_mil's People

Contributors

thomashenckel, utayao


atten_deep_mil's Issues

Any reason for taking out one image?

Hi,

I noticed that there was one image from the original dataset that you didn't include in the processed data, any reason for that?

Cheers,
Mark

feeding the learnt embedding as input to a new CNN

Thanks a lot for the code.

I would like to learn an embedding from the MIL attention network and feed this as input to another CNN.

Essentially, I would like to chop off the last sigmoid right after computing the sum

x = K.sum(x, axis=0, keepdims=True)

in the custom_layers.py

However, when I compute the sum, the dimension gets reduced to (1, 27, 27, 3) from (?, 27, 27, 3), which is not accepted by the Conv2D layer, and it throws an error:

ValueError: Input 0 is incompatible with layer conv2d_34: expected ndim=4, found ndim=3

How can I get rid of this problem?

Will reshaping or expanding the dimensions help?
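For what it's worth, a minimal numpy sketch of the shape logic (an assumption about where the axis is lost, not the repo's actual code): if the pooled tensor has dropped its leading axis, re-expanding it restores the 4-D input Conv2D expects. In Keras the same step would be `K.expand_dims(x, axis=0)`, e.g. inside a `Lambda` layer.

```python
import numpy as np

# numpy stand-in for the Keras tensors (assumed shapes, not the repo's code):
# summing a bag of shape (n, 27, 27, 3) over the instance axis WITHOUT
# keepdims yields ndim=3, which matches the "found ndim=3" error; re-adding
# the axis restores the (batch, H, W, C) layout Conv2D expects.
bag = np.random.rand(5, 27, 27, 3)         # 5 instances in one bag
pooled = bag.sum(axis=0)                   # shape (27, 27, 3) -> ndim=3
restored = np.expand_dims(pooled, axis=0)  # shape (1, 27, 27, 3) -> ndim=4
print(pooled.shape, restored.shape)
```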

Kind regards,

how to input data

I want to know how to create the datasets, build the bags, and feed those bags into the neural network. It is so difficult to figure out how to present MIL input data.
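A hypothetical sketch of the usual MIL input convention (names and shapes are illustrative, not the repo's loader): each bag is one array of instance patches with a single binary bag label, and each bag is fed to the model as one batch.

```python
import numpy as np

# Hypothetical MIL bag construction (illustrative shapes, not the repo's
# loader): a bag is (n_i, H, W, C) patches plus ONE binary label; n_i may
# differ per bag.
rng = np.random.default_rng(0)

def make_bag(n_instances, positive, size=27):
    patches = rng.random((n_instances, size, size, 3)).astype("float32")
    label = int(positive)  # 1 if the bag contains at least one positive instance
    return patches, label

bags = [make_bag(int(rng.integers(3, 10)), bool(rng.random() < 0.5))
        for _ in range(4)]
# Training-loop sketch (assumed API usage, one bag per batch):
#   for patches, label in bags:
#       model.train_on_batch(patches, [label] * len(patches))
```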

about the mil-layer

Would you please help me to solve the problem I encountered when training? Thank you very much!
```
File "/home/test/anaconda3/envs/smy-ab2/lib/python2.7/site-packages/keras/engine/saving.py", line 83, in _serialize_model
    model_config['config'] = model.get_config()
File "/home/test/anaconda3/envs/smy-ab2/lib/python2.7/site-packages/keras/engine/network.py", line 860, in get_config
    layer_config = layer.get_config()
File "/home/test/abnormal/utl/custom_layers.py", line 94, in get_config
    'v_initializer':initializers.serialize(self.V.initializer),
File "/home/test/anaconda3/envs/smy-ab2/lib/python2.7/site-packages/keras/initializers.py", line 496, in serialize
    return serialize_keras_object(initializer)
File "/home/test/anaconda3/envs/smy-ab2/lib/python2.7/site-packages/keras/utils/generic_utils.py", line 117, in serialize_keras_object
    raise ValueError('Cannot serialize', instance)
ValueError: ('Cannot serialize', <tf.Operation 'alpha/v/Assign' type=Assign>)
```
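The traceback shows `get_config` trying to serialize `self.V.initializer`, which in TF1 graph mode is a `tf.Operation` (the variable's assign op), not a Keras initializer object. A pure-Python sketch of the usual fix (hypothetical class name, not the repo's code) is to keep the serializable initializer spec passed to `__init__` and return that instead:

```python
# Hypothetical sketch of the fix: store the serializable initializer spec
# (a string or config dict) given to __init__, and return THAT from
# get_config(), instead of self.V.initializer, which is a tf.Operation and
# raises ValueError('Cannot serialize', ...).
class AttentionSketch:
    def __init__(self, v_initializer="glorot_uniform"):
        self.v_initializer_spec = v_initializer  # plain, serializable spec
        # in the real layer, the weight would be created from the same spec:
        # self.V = self.add_weight(..., initializer=self.v_initializer_spec)

    def get_config(self):
        return {"v_initializer": self.v_initializer_spec}

print(AttentionSketch().get_config())  # {'v_initializer': 'glorot_uniform'}
```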

Data

I am new to deep learning. I wonder if you could show how to attach the data and how to run the code properly. I would really appreciate it!

Result score

Thank you for sharing such great work. I tried to run it on the processed data that you provided in the Drive; it works, but it gives me a much lower result (AUC). Can you tell me why there is a big gap between my implementation and yours? Your reply and help would be highly appreciated.

Create the patches

Thanks for sharing your code. I am able to run the code with the patches downloaded from google drive.

Can you point me to the function in your code which produces the patches from the original Colon cancer images?

Thank you.
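For reference, a hypothetical sketch of how such patches are typically produced for the ColonCancer images (fixed-size crops centred on annotated cell coordinates); the function name and the use of cell centres are assumptions, not a pointer to the repo's actual code:

```python
import numpy as np

# Hypothetical patch extraction (assumed convention: 27x27 crops centred on
# annotated cell coordinates, clipped so every crop stays inside the image).
def extract_patches(image, centers, size=27):
    half = size // 2
    h, w = image.shape[:2]
    patches = []
    for cx, cy in centers:
        x0 = int(np.clip(cx - half, 0, w - size))
        y0 = int(np.clip(cy - half, 0, h - size))
        patches.append(image[y0:y0 + size, x0:x0 + size])
    return np.stack(patches)

img = np.zeros((500, 500, 3), dtype=np.uint8)
print(extract_patches(img, [(10, 10), (250, 250)]).shape)  # (2, 27, 27, 3)
```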

Question about handling of bag and instance labels

Hello,

I've got a short question about the network data input shapes during training (and testing).
Regarding the original PyTorch implementation by Ilse et al., it seems there is a bag of instances [n, x, y, z] with a single label attached to it as a single value. You then have m bags and m labels in total. That makes sense given the description in the original publication.

However, in this implementation we have the same bag of instances [n, x, y, z], but instead of a single label there is an array of instance labels of shape [n,]. These instance labels are indeed derived from the binary bag label (0 or 1), extended to every instance.

Why is that the case? Is it not possible in TensorFlow to train the model in a similar way to the original publication? Maybe I am missing something here, but I would be happy if anyone could help me out.
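The label handling described above can be sketched in a few lines (a numpy stand-in with illustrative shapes): the single bag label is simply tiled across the bag's instances so Keras receives one target per input row, while the supervision stays bag-level because every entry is identical.

```python
import numpy as np

# The bag label is replicated per instance so y matches the input's leading
# (instance/batch) axis; it still carries only bag-level information.
bag = np.random.rand(6, 27, 27, 3)                     # one bag, 6 instances
bag_label = 1                                          # single bag-level label
instance_labels = np.full((bag.shape[0],), bag_label)  # shape (6,)
print(instance_labels)  # [1 1 1 1 1 1]
```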

Thank you for your work on this implementation!

Kind regards,
Joshua

input shape

Hello,
I want to deeply thank you for your work on this implementation.
I can make it work, but there is one thing I can't wrap my head around: the difference between the input of the model and the output of DataGenerator.
DataGenerator outputs a list of (H, W, 3) ndarrays, right?
The model input expects an (H, W, 3) shape, so how can it work? Does the model infer that there are N images and feed them through sequentially? But in that case, how can the Attention layer work?
I thought that to input a sequence of images you had to use the TimeDistributed class from Keras.
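For context, the usual trick in this kind of Keras MIL model (a numpy stand-in, not the repo's actual graph): the bag's instance axis is fed as the batch axis, so a model with `input_shape=(27, 27, 3)` accepts an `(n, 27, 27, 3)` array for any `n`, and the attention layer pools over axis 0. That is why no `TimeDistributed` wrapper is needed.

```python
import numpy as np

# numpy stand-in for attention pooling over the batch (= instance) axis.
rng = np.random.default_rng(1)
bag = rng.random((5, 8))                        # 5 instance embeddings, dim 8
scores = bag @ rng.random((8, 1))               # per-instance attention logits
alpha = np.exp(scores) / np.exp(scores).sum()   # softmax over the 5 instances
pooled = (alpha * bag).sum(axis=0, keepdims=True)  # bag embedding, shape (1, 8)
print(alpha.shape, pooled.shape)  # (5, 1) (1, 8)
```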
Kind regards.

AUC Metric

Hello!

I've been using your code and I would like to know which changes or additions must be done in order to obtain the AUC metric at least for the testing phase.
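One common approach (an assumption about how the test loop is organised, not the repo's code) is to collect one score per bag at test time and compute the AUC over all bags; `sklearn.metrics.roc_auc_score` does this directly, and the rank-based formula below is a dependency-free equivalent:

```python
import numpy as np

# Dependency-free AUC: fraction of (positive, negative) score pairs ranked
# correctly, counting ties as 0.5 (equivalent to sklearn's roc_auc_score).
def auc(y_true, y_score):
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    pos, neg = y_score[y_true == 1], y_score[y_true == 0]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

# e.g. bag-level test labels vs. the model's per-bag scores:
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```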

Best regards,
João Moura

Val loss always lower than train loss

I tried 4-fold cross validation on the processed patches.

In all 4 folds, the validation loss is almost always lower than the training loss, which is weird (I used 60% for the train set and 40% for the validation set; the result was the same with a 90/10 split too).

But the validation accuracy is always lower than the training accuracy, which makes sense.
Why is the validation loss always lower than the training loss in all the folds?
