Comments (4)
I ran brats2015.py and got this error.
from brain-tumor-segmentation-using-deep-learning.
Try the following code in place of the `unet_model` definition:
from keras.models import Model
from keras.layers import Input, Conv2D, Conv2DTranspose, MaxPooling2D, BatchNormalization, concatenate
from keras.optimizers import Adam

def unet_model():
    inputs = Input((2, img_size, img_size))
    # Contracting path
    conv1 = Conv2D(64, (3, 3), activation='relu', padding='same')(inputs)
    batch1 = BatchNormalization(axis=1)(conv1)
    conv1 = Conv2D(64, (3, 3), activation='relu', padding='same')(batch1)
    batch1 = BatchNormalization(axis=1)(conv1)
    pool1 = MaxPooling2D((2, 2))(batch1)
    conv2 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool1)
    batch2 = BatchNormalization(axis=1)(conv2)
    conv2 = Conv2D(128, (3, 3), activation='relu', padding='same')(batch2)
    batch2 = BatchNormalization(axis=1)(conv2)
    pool2 = MaxPooling2D((2, 2))(batch2)
    conv3 = Conv2D(256, (3, 3), activation='relu', padding='same')(pool2)
    batch3 = BatchNormalization(axis=1)(conv3)
    conv3 = Conv2D(256, (3, 3), activation='relu', padding='same')(batch3)
    batch3 = BatchNormalization(axis=1)(conv3)
    pool3 = MaxPooling2D((2, 2))(batch3)
    conv4 = Conv2D(512, (3, 3), activation='relu', padding='same')(pool3)
    batch4 = BatchNormalization(axis=1)(conv4)
    conv4 = Conv2D(512, (3, 3), activation='relu', padding='same')(batch4)
    batch4 = BatchNormalization(axis=1)(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(batch4)
    # Bottleneck
    conv5 = Conv2D(1024, (3, 3), activation='relu', padding='same')(pool4)
    batch5 = BatchNormalization(axis=1)(conv5)
    conv5 = Conv2D(1024, (3, 3), activation='relu', padding='same')(batch5)
    batch5 = BatchNormalization(axis=1)(conv5)
    # Expanding path with skip connections
    up6 = Conv2DTranspose(512, (2, 2), strides=(2, 2), padding='same')(batch5)
    up6 = concatenate([up6, conv4], axis=1)
    conv6 = Conv2D(512, (3, 3), activation='relu', padding='same')(up6)
    batch6 = BatchNormalization(axis=1)(conv6)
    conv6 = Conv2D(512, (3, 3), activation='relu', padding='same')(batch6)
    batch6 = BatchNormalization(axis=1)(conv6)
    up7 = Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same')(batch6)
    up7 = concatenate([up7, conv3], axis=1)
    conv7 = Conv2D(256, (3, 3), activation='relu', padding='same')(up7)
    batch7 = BatchNormalization(axis=1)(conv7)
    conv7 = Conv2D(256, (3, 3), activation='relu', padding='same')(batch7)
    batch7 = BatchNormalization(axis=1)(conv7)
    up8 = Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(batch7)
    up8 = concatenate([up8, conv2], axis=1)
    conv8 = Conv2D(128, (3, 3), activation='relu', padding='same')(up8)
    batch8 = BatchNormalization(axis=1)(conv8)
    conv8 = Conv2D(128, (3, 3), activation='relu', padding='same')(batch8)
    batch8 = BatchNormalization(axis=1)(conv8)
    up9 = Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(batch8)
    up9 = concatenate([up9, conv1], axis=1)
    conv9 = Conv2D(64, (3, 3), activation='relu', padding='same')(up9)
    batch9 = BatchNormalization(axis=1)(conv9)
    conv9 = Conv2D(64, (3, 3), activation='relu', padding='same')(batch9)
    batch9 = BatchNormalization(axis=1)(conv9)
    conv10 = Conv2D(1, (1, 1), activation='sigmoid')(batch9)
    model = Model(inputs=[inputs], outputs=[conv10])
    model.compile(optimizer=Adam(lr=LR), loss=dice_coef_loss, metrics=[dice_coef])
    return model
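Note that the model above uses channels-first tensors throughout (`Input((2, img_size, img_size))` and `BatchNormalization(axis=1)`), so Keras must be configured for `channels_first`; otherwise the skip-connection shapes will not line up. A minimal sketch, assuming a TensorFlow-backed Keras (standalone Keras 2.x exposes the same backend call):

```python
# Put Keras in channels-first mode before building the model, so that
# axis=1 really is the channel axis in every BatchNormalization/concatenate.
from tensorflow import keras

keras.backend.set_image_data_format('channels_first')
print(keras.backend.image_data_format())  # → channels_first
```

Alternatively, leave the default `channels_last` and change every `axis=1` to `axis=-1` and the input shape to `(img_size, img_size, 2)`.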
Thank you
I changed img_size = 240 and changed the create_data calls:

create_data('/E:/brats2015/BRATS2015_Training/HGG/', '/Flair.mha', label=False, resize=(155, img_size, img_size))
create_data('/E:/brats2015/BRATS2015_Training/HGG/', '/OT.mha', label=True, resize=(155, img_size, img_size))

#%%
# load numpy array data
x = np.load('E:/tamrins_python/x_{}.npy'.format(img_size))
y = np.load('E:/tamrins_python/y_{}.npy'.format(img_size))

#%%
# training
num = 31100
model = unet_model()
history = model.fit(x, y, batch_size=16, validation_split=0.2, nb_epoch=num_epoch, verbose=1, shuffle=True)
But then I get another error:
ValueError: Error when checking input: expected input_8 to have 4 dimensions, but got array with shape (0, 1)
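A shape of (0, 1) suggests the loaded arrays are empty, i.e. create_data matched no files under the given path (the leading '/' before 'E:' looks suspect on Windows). A hedged sanity check, with a hypothetical helper name (not from the repo), that makes this failure obvious before fit is ever called:

```python
import numpy as np

def check_training_array(x):
    """Raise a clear error if x cannot feed a 4-D channels-first U-Net."""
    if x.size == 0:
        raise ValueError('array is empty: create_data probably matched no '
                         'files (check the path, e.g. the leading "/" before "E:")')
    if x.ndim != 4:
        raise ValueError('expected (samples, channels, height, width), '
                         'got {}'.format(x.shape))
    return x.shape

# An empty array reproduces the reported situation:
try:
    check_training_array(np.empty((0, 1)))
except ValueError as err:
    print('caught:', err)

print(check_training_array(np.zeros((10, 2, 128, 128))))  # → (10, 2, 128, 128)
```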
Use an input size that is a power of 2 (2^x, where x is a positive integer); for example, take the input size as 128.
ValueError: "concat" mode can only merge layers with matching output shapes except for the concat axis. Layer shapes: [(None, 512, 14, 14), (None, 256, 15, 15)]
This can be solved by downgrading Keras to version 2.0.5 (#22), or by using:
from keras.layers import concatenate
concatenate([up6, conv4], axis=1)
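The power-of-2 advice and the shape-mismatch error above have the same root cause: with four MaxPooling2D layers the side length is floor-halved four times, so unless the input size is divisible by 2**4 = 16, some Conv2DTranspose output will not match its skip connection. A small illustrative check (the helper name is mine, not from the repo):

```python
def fits_unet(size, depth=4):
    """True if `size` survives `depth` floor-halvings without losing pixels,
    so each 2x upsampling exactly mirrors the corresponding pooling."""
    return size % (2 ** depth) == 0

print(fits_unet(128))  # → True  (128 = 2**7, a power of 2)
print(fits_unet(240))  # → True  (240 = 16 * 15, divisible by 16 also works)
print(fits_unet(232))  # → False (232 % 16 == 8, shapes drift, e.g. 15 vs 14)
```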
Related Issues (14)
- Missing file
- How to cite
- How to Cite this ?
- Parameters for model Training
- CT Image
- training set for tumor core
- please help in this regard.
- pretrained models
- Problem of weights file
- "TypeError": 'module' object is not callable......
- Not able to load weight file.
- train the u-net model tumor core and et