
gregwchase / eyenet

Identifying diabetic retinopathy using convolutional neural networks

Home Page: https://www.youtube.com/watch?v=pMGLFlgqxuY

License: MIT License

Python 98.43% Shell 1.57%
deep-learning keras machine-learning neural-network retinopathy tensorflow

eyenet's Introduction

Technology Stack

This is my current technology stack; it's what I use in my professional and personal development.

Package   Use Cases                                    Notes
AWS       Cloud computing services                     EC2, S3, ECS
Dask      Distributed computing                        Lighter than Spark; follows the Pandas API; GPU compatible
FastAPI   REST APIs                                    Asynchronous; faster than Flask; easier to test
Poetry    Project package management and publishing    Resolves package dependencies
RAPIDS    GPU data science libraries                   cuDF, cuML, Dask cuDF
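
As a small, hedged illustration of the "follows the Pandas API" note above, the sketch below reads a CSV lazily with Dask and aggregates it with Pandas-style calls; the file path and the "level" column are assumptions, not files shipped with this repo.

import dask.dataframe as dd

# Hedged sketch: Dask reads lazily and in partitions, so files larger than
# memory are fine. The path and column name are assumptions.
df = dd.read_csv("../labels/trainLabels*.csv")

# Familiar Pandas-style operations; nothing executes until .compute().
counts = df.groupby("level").size().compute()
print(counts)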

Contact Information

Website    Username
LinkedIn   gregwchase
Medium     gregorywchase
Twitter    @gregwchase

eyenet's People

Contributors

gregwchase


eyenet's Issues

Next Steps

Sir, can you help me out with the commands to run after running the conversion script, please?

Layer conv2d_51 was called with an input that isn't a symbolic tensor

cnn.py is raising a ValueError; while running the main block, it shows the following error:

Splitting data into test/ train datasets
Reshaping Data
X_train Shape: (422, 256, 256, 3)
X_test Shape: (106, 256, 256, 3)
Normalizing Data
y_train Shape: (422, 2)
y_test Shape: (106, 2)
Training Model
Traceback (most recent call last):
  File "", line 45, in
    nb_classes, nb_gpus=8)
  File "", line 24, in cnn_model
    input_shape=(img_rows, img_cols, channels), activation="relu"))
  File "C:\Users\naveenn\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\models.py", line 467, in add
    layer(x)
  File "C:\Users\naveenn\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\engine\topology.py", line 575, in __call__
    self.assert_input_compatibility(inputs)
  File "C:\Users\naveenn\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\engine\topology.py", line 448, in assert_input_compatibility
    str(inputs) + '. All inputs to the layer '
ValueError: Layer conv2d_51 was called with an input that isn't a symbolic tensor. Received type: <class 'tensorflow.python.framework.ops.Tensor'>. Full input: [<tf.Tensor 'conv2d_51_input:0' shape=(?, 256, 256, 3) dtype=float32>]. All inputs to the layer should be tensors.
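
This error usually means the tensor reaching the first Conv2D layer came from a different graph than the one Keras is building on, most often from mixing keras and tensorflow.keras imports, from incompatible Keras/TensorFlow versions, or from re-running model construction in a stale session. A minimal, hedged sketch, assuming a standalone Keras install, that keeps the imports consistent and clears the backend session before rebuilding:

from keras import backend as K
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Hedged sketch: consistent standalone-Keras imports plus a session reset;
# shapes mirror the traceback above (256 x 256 x 3 input, 2 classes assumed).
K.clear_session()  # drop any stale graph left over from a previous run

img_rows, img_cols, channels, nb_classes = 256, 256, 3, 2

model = Sequential()
model.add(Conv2D(32, (3, 3), padding="same",
                 input_shape=(img_rows, img_cols, channels),
                 activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(nb_classes, activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])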

Some questions about rotate_images.py

Hi! I find that the DR dataset has been rotated by 90, 120, 180, and 270 degrees, but the DM dataset hasn't been rotated. Does it matter whether the images are rotated or not?
Or would something like random rotation help more for training?
I would appreciate it if you could answer my questions. Thanks a lot!
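
One hedged alternative to pre-rotating files on disk is to rotate randomly at training time. The sketch below uses Keras' ImageDataGenerator; the class-per-folder directory layout, batch size, and rotation range are assumptions rather than this repo's defaults.

from keras.preprocessing.image import ImageDataGenerator

# Hedged sketch: on-the-fly random rotation instead of saving rotated copies.
datagen = ImageDataGenerator(rotation_range=180,   # random rotation up to 180 degrees
                             horizontal_flip=True,
                             vertical_flip=True,
                             rescale=1.0 / 255)

# flow_from_directory expects one sub-folder per class (hypothetical layout).
train_gen = datagen.flow_from_directory("../data/train-resized-256-by-class/",
                                        target_size=(256, 256),
                                        batch_size=32,
                                        class_mode="categorical")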

System Requirements for Model Training

Hi Greg,
I have a few queries; please do answer:

  1. I am running your code and I always get an insufficient-memory or other error. I'd like to know what configuration your model code needs. I am opting for Google Cloud, so I need the exact configuration: RAM, GPU, processor, HDD?
  2. I am running it on my laptop with 8 GB RAM and a 4 GB Nvidia 940MX. When executing cnn.py, it shows a CUDA allocation error (insufficient memory). Is it possible to run it on this laptop?
  3. cnn.py, cnn_class.py, cnn_multi.py: what are the differences? What are their individual system requirements? How many GPUs are the minimum for running the CNN model?
  4. Is a confusion matrix implementation possible with your code? (A hedged sketch covering this and question 5 follows below.)
  5. Is it possible to give an input sample image after training the model completely and get predict and predict_classes? How do I do it?
  6. After training the model and saving it, can we copy the model and run it on a lower-configuration laptop like the one above? Can you add Trained_Complete_Model.h5 to the repo?

Please do reply.
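
A hedged sketch for questions 4 and 5: loading a saved model, predicting on a single image, and building a confusion matrix with scikit-learn. The saved-model name and the held-out .npy files are assumptions, not files from this repo.

import numpy as np
from PIL import Image
from keras.models import load_model
from sklearn.metrics import confusion_matrix

# Hedged sketch: file names below are assumptions.
model = load_model("DR_Two_Classes.h5")            # assumed saved-model name

# Question 5: predict on a single image after training.
img = np.array(Image.open("sample.jpeg").resize((256, 256))) / 255.0
probs = model.predict(img[np.newaxis, ...])        # shape (1, nb_classes)
print("Predicted class:", int(probs.argmax(axis=1)[0]))

# Question 4: confusion matrix on a held-out split (hypothetical arrays).
X_test = np.load("../data/X_test.npy") / 255.0
y_test = np.load("../data/y_test.npy")             # one-hot encoded labels
y_pred = model.predict(X_test).argmax(axis=1)
y_true = y_test.argmax(axis=1)
print(confusion_matrix(y_true, y_pred))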

`X_train_256_v2.npy` saved?

In image_to_array.py I see you saved X_train.npy, but I don't see where you saved X_train_256_v2.npy, even though you end up loading it as the training data in all of your CNN models. Is the code to save X_train_256_v2.npy missing?

I feel like you meant to save X_train.npy as X_train_256_v2.npy, since you ended up using the labels from trainLabels_master_256_v2.csv when you called convert_images_to_arrays_train, but I am not sure.
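
If the only gap is the file name, a hedged one-off fix is to re-save the array that image_to_array.py already produced under the name the CNN scripts load; the paths below are assumptions.

import numpy as np

# Hedged sketch: re-save the existing array under the name that cnn.py,
# cnn_class.py, and cnn_multi.py expect. Paths are assumptions.
X_train = np.load("../data/X_train.npy")
np.save("../data/X_train_256_v2.npy", X_train)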

File missing: the file X_train_256_v2.npy can't be built?

Sir, I'm trying to run eyenet-2018; here is the order I ran the scripts in:

resize_images.py → preprocess_images.py → rotate_images.py → reconcile_labels.py → image_to_array.py

When I finished running image_to_array.py, it built a file, X_train.npy,

but when I turned to cnn.py, I found that it needs X_train_256_v2.npy instead of X_train.npy,

and I can't find any .py file that builds this file. How can I build it?
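
A hedged workaround, rather than the author's intended step, is to point cnn.py at the array the pipeline actually produced; the paths and the "level" column name are assumptions based on the file names mentioned in these issues.

import numpy as np
import pandas as pd

# Hedged sketch: load the file the pipeline built instead of the missing
# X_train_256_v2.npy. Paths and column name are assumptions.
X = np.load("../data/X_train.npy")
labels = pd.read_csv("../labels/trainLabels_master_256_v2.csv")
y = labels["level"].values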

GPU problem

ValueError: To call multi_gpu_model with gpus=8, we expect the following devices to be available: ['/cpu:0', '/gpu:0', '/gpu:1', '/gpu:2', '/gpu:3', '/gpu:4', '/gpu:5', '/gpu:6', '/gpu:7']. However this machine only has: ['/cpu:0', '/xla_cpu:0', '/xla_gpu:0', '/gpu:0']. Try reducing gpus
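
The script asked Keras for 8 GPUs while only one is visible. A hedged sketch that derives the GPU count from the devices TensorFlow actually reports, assuming the standalone keras.utils.multi_gpu_model helper and an already-built model:

from tensorflow.python.client import device_lib
from keras.utils import multi_gpu_model

# Hedged sketch: count visible GPUs and only parallelize when there is
# more than one; `model` is assumed to be the compiled CNN from cnn.py.
gpu_names = [d.name for d in device_lib.list_local_devices()
             if d.device_type == "GPU"]
nb_gpus = len(gpu_names)

parallel_model = multi_gpu_model(model, gpus=nb_gpus) if nb_gpus > 1 else model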

Files missing 'X_train_256_v2.npy'

Hi Greg, I'm trying eyenet-2020, but when I run cnn.py, I'm getting this error:

No such file or directory: X_train_256_v2.npy. Where can I find it or generate it?
First I downloaded the data, then preprocessed the images, and now I'm trying to run cnn.py. Am I doing this in the correct order? Thanks.

rotation script

Hello gregwchase, can you please tell me what data is stored in trainLabels_master.csv?

question

Which algorithm can be used for detecting diabetic retinopathy using image processing, in Python?

Not getting enough accuracy

When I tried to train on the whole dataset, which is almost 20 GB, I ran out of memory, so I split the dataset into 4 batches of roughly 26,600 images each (26,600 + 26,600 + 26,600 + 26,586 = 106,386) and made a slight adjustment to the code.

To load the saved model with the trained weights from the previous batch, I use the Keras load_weights() method. Here, I'm working on cnn.py for all 5 classes:

model.add(Dense(nb_classes, activation='softmax'))
model.summary()

model.load_weights(model_name + '.h5')

model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])

When I train on the first batch, which contains 26,600 images, I get:

loss: 1.0042 - acc: 0.6248 - val_loss: 1.0625 - val_acc: 0.6029

For the second batch of 26,600 images, I get:

loss: 0.9026 - acc: 0.6563 - val_loss: 1.1008 - val_acc: 0.6114

For the third batch of 26,600 images, I get:

loss: 0.8860 - acc: 0.6666 - val_loss: 0.9988 - val_acc: 0.6330

For the fourth batch of 26,586 images, I get:

loss: 0.8227 - acc: 0.6888 - val_loss: 1.0289 - val_acc: 0.6356

Question 1: As you can see, there is not a significant change in the score. Can you identify where the problem is occurring? If you want, I can provide the code that I slightly altered from the original.

Question 2: As I have split the dataset into individual .npy arrays, could this be a reason for not seeing much improvement in the score?

Question 3: You mentioned in previous issues that you trained on a p2.8xlarge AWS instance. If I train on the same instance, how much time does it take to train the whole network?

Question 4: You have also mentioned that you used the VGG architecture, but VGG contains more layers than you have used in cnn.py or cnn_multi.py; could that be the reason the model is not extracting enough features to learn?

Question 5: When I train cnn.py for binary classification on the first batch of 26,600 images, I get 99% accuracy after one epoch, which suggests the model is obviously overfitting. Again, as I have split the dataset into individual arrays, could this be the reason for getting 99% accuracy?

Output after the first epoch using binary classification:

loss: 0.0088 - acc: 0.9934 - val_loss: 8.1185e-05 - val_acc: 1.0000

Thanks! Please do answer Sir! :)
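
On the memory problem behind the four-batch split above, one hedged alternative is to stream batches from a memory-mapped array so the full ~20 GB never has to sit in RAM. This is a sketch of a generator-based approach, not the repo's method; the file names and the already-compiled model are assumptions.

import numpy as np

# Hedged sketch: mmap keeps the array on disk; only the indexed batch is
# materialized in memory. File names are assumptions.
X = np.load("../data/X_train_256_v2.npy", mmap_mode="r")
y = np.load("../data/y_train.npy")                  # one-hot labels, assumed

def batch_generator(X, y, batch_size=32):
    while True:
        idx = np.random.permutation(len(X))
        for start in range(0, len(idx), batch_size):
            sel = idx[start:start + batch_size]
            yield X[sel].astype("float32") / 255.0, y[sel]

# `model` is assumed to be the compiled CNN from cnn.py (older Keras API).
model.fit_generator(batch_generator(X, y),
                    steps_per_epoch=len(X) // 32,
                    epochs=10)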

X_train_256_v2.npy

Hi Greg,
X_train_256_v2.npy is loaded in all three classification files, but it was never created in the code that you provided; I only created the X_train.npy file while preprocessing. Using that in cnn.py, the model is overfitting to the training data. There are different implementations, like the categorical cnn_class.py and cnn_multi.py, but we never separated the images into 5 classes; we only did it for two classes. I am so confused, Greg. Can you please help me?
Thanks,
Vamsi.
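
On the two-classes versus five-classes point, a hedged sketch of keeping all five DR severity levels (0 to 4) as one-hot labels instead of binarizing them; the CSV path and the "level" column name are assumptions based on the Kaggle label file.

import pandas as pd
from keras.utils import to_categorical

# Hedged sketch: one-hot encode the five severity levels for multi-class
# training. Path and column name are assumptions.
labels = pd.read_csv("../labels/trainLabels_master_256_v2.csv")
y = to_categorical(labels["level"].values, num_classes=5)   # shape (n, 5)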

MemoryError

Hi, my configuration is:
Ubuntu 10.04
Python 3
But when I run the script image_to_array.py, the following error occurs:

Writing Train Array
Traceback (most recent call last):
  File "image_to_array.py", line 62, in <module>
    X_train = convert_images_to_arrays_train('../data/train-resized-256/', labels)
  File "image_to_array.py", line 38, in convert_images_to_arrays_train
    return np.array([np.array(Image.open(file_path + img)) for img in lst_imgs])
MemoryError

Thanks in advance.
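
The traceback shows every image being stacked into one in-memory array at once, which is what exhausts RAM. A hedged sketch of an alternative: write the images into a pre-allocated memory-mapped .npy file so memory use stays near one image at a time. The 256x256x3 shape and the paths are assumptions.

import numpy as np
from PIL import Image

def convert_images_memmap(file_path, lst_imgs, out_path, size=256, channels=3):
    """Hedged sketch: write each image into a disk-backed .npy array."""
    arr = np.lib.format.open_memmap(out_path, mode="w+", dtype=np.uint8,
                                    shape=(len(lst_imgs), size, size, channels))
    for i, img in enumerate(lst_imgs):
        arr[i] = np.array(Image.open(file_path + img))
    arr.flush()
    return out_path

# Usage (paths assumed): lst_imgs holds names like "10_left.jpeg".
# convert_images_memmap("../data/train-resized-256/", lst_imgs,
#                       "../data/X_train.npy")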

IOError: [Errno 2] No such file or directory: '../data/train-resized-256/19_left.jpeg'

Hi, I am running !python preprocess_images.py.
I used training.zip.001 and test.zip.001 only, as per the script resize_images.py.
Do I have to process/rotate all the train and test images before running the preprocessing?
Trace of running preprocess_images.py:

Started Image preprocess
Traceback (most recent call last):
File "preprocess_images.py", line 31, in
trainLabels['black'] = find_black_images('../data/train-resized-256/', trainLabels)
File "preprocess_images.py", line 21, in find_black_images
return [1 if np.mean(np.array(Image.open(file_path + img))) == 0 else 0 for img in lst_imgs]
File "/usr/local/lib/python2.7/dist-packages/PIL/Image.py", line 2312, in open
fp = builtins.open(filename, "rb")
IOError: [Errno 2] No such file or directory: '../data/train-resized-256/19_left.jpeg'
Sat Oct 20 08:42:36 UTC 2018
Ended Image preprocess
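
The missing 19_left.jpeg suggests the labels CSV still lists images that were never resized or were dropped earlier in the pipeline. A hedged sketch that filters the label list to files actually present before calling find_black_images; the paths and the "image" column name are assumptions.

import os
import pandas as pd

# Hedged sketch: keep only label rows whose image file exists on disk, so
# preprocess_images.py never opens a missing path. Paths are assumptions.
file_path = "../data/train-resized-256/"
trainLabels = pd.read_csv("../labels/trainLabels.csv")

exists = trainLabels["image"].apply(
    lambda name: os.path.exists(os.path.join(file_path, name + ".jpeg")))
trainLabels = trainLabels[exists].reset_index(drop=True)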
