
Regarding subimage patch and label size #23

Closed
pranavkantgaur opened this issue Oct 5, 2018 · 7 comments

Comments

@pranavkantgaur

Hi,

I was curious how you selected the sub-image patch size of [19, 144, 144, 4]. Is it based on cross-validation? Further, as already asked in #20, why does the corresponding label have 11 units along the depth axis ([11, 144, 144, 1]) as opposed to 19 in the input sample?

@wellescastro

+1

@HowieMa

HowieMa commented Oct 7, 2018

+1
That is so weird!
I use the BRATS15 dataset; the original data shape is 155 × 240 × 240, but the sub-image shape is 19 × 144 × 144.

According to this function,

    center_point = get_random_roi_sampling_center(volume_shape, sub_label_shape, batch_sample_model, boundingbox)

and also

    sub_data_moda = extract_roi_from_volume(transposed_volumes[moda], center_point, sub_data_shape)

it seems that you randomly get a sub-image by cropping the original image, which I think may miss some information about the tumor.
How do you guarantee that this cropping method captures all the tumor information we want?
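
For reference, a minimal numpy sketch of this kind of random patch sampling (the random_patch helper below is a hypothetical, simplified stand-in for get_random_roi_sampling_center / extract_roi_from_volume; unlike the repo's functions it does not pad at borders or restrict sampling to a foreground bounding box):

    import numpy as np

    def random_patch(volume, sub_shape, rng):
        # Pick a start index uniformly so the patch lies fully inside the
        # volume, then crop it out. (Simplified stand-in; the repo samples
        # a centre point and can also pad or use a bounding box.)
        starts = [rng.integers(0, dim - sub + 1)
                  for dim, sub in zip(volume.shape, sub_shape)]
        slices = tuple(slice(s, s + sub) for s, sub in zip(starts, sub_shape))
        return volume[slices]

    rng = np.random.default_rng(0)
    volume = np.zeros((155, 240, 240), dtype=np.float32)  # one BRATS15 modality
    patch = random_patch(volume, (19, 144, 144), rng)
    print(patch.shape)  # (19, 144, 144)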

@leigaoyi

I love this question. I ran MSNet and got 91.4% on whole-tumor segmentation on BRATS2015 with this (19, 144, 144, 4), but I don't understand why 19 and 11.

@HowieMa

HowieMa commented Oct 10, 2018

> I love this question. I ran MSNet and got 91.4% on whole-tumor segmentation on BRATS2015 with this (19, 144, 144, 4), but I don't understand why 19 and 11.

Well, it is a property of the model itself!
You could revise util/MSNet.py like

    if __name__ == '__main__':
        x = tf.placeholder(tf.float32, shape = [1, 96, 96, 96, 1])
        y = tf.placeholder(tf.float32, shape = [1, 96, 96, 96, 2])
        net = MSNet(num_classes=2)
        predicty = net(x, is_training = True)
        print(x)
        print(predicty)
        print(y)

and run it like

    python util/MSNet.py

You will find that the resulting shapes are

    (1, 96, 96, 96, 1)
    (1, 88, 96, 96, 2)
    (1, 96, 96, 96, 2)

I hope this helps you solve the problem.

@leigaoyi

> Well, it is a property of the model itself! You could revise util/MSNet.py [...] I hope this helps you solve the problem.

I thought it over again: along the axial axis we only take 19 slices (out of 155), but along the coronal and sagittal directions we also take 19. Stacking the three cuboids together, doesn't that cover most of the volume?

@HowieMa

HowieMa commented Oct 11, 2018

> I thought it over again: along the axial axis we only take 19 slices (out of 155), but along the coronal and sagittal directions we also take 19. Stacking the three cuboids together, doesn't that cover most of the volume?

Yeah! As you know, the shape of the raw data is 155 × 240 × 240, but we only randomly select patches of shape 19 × 144 × 144 from it. With a large number of iterations (here 20000), we can cover the whole volume (155 × 240 × 240), probabilistically speaking.

I guess he did so because it helps save memory during training and testing!
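
A rough numpy sketch of that coverage argument (mine, not from the repo): count how many voxels of one 155 × 240 × 240 volume are touched by N uniformly sampled 19 × 144 × 144 patches.

    import numpy as np

    rng = np.random.default_rng(0)
    vol_shape = (155, 240, 240)
    sub_shape = (19, 144, 144)
    n_patches = 2000  # the actual training run uses many more iterations

    covered = np.zeros(vol_shape, dtype=bool)
    for _ in range(n_patches):
        # Uniform start index so each patch stays inside the volume.
        z, y, x = [rng.integers(0, d - s + 1)
                   for d, s in zip(vol_shape, sub_shape)]
        covered[z:z + sub_shape[0], y:y + sub_shape[1], x:x + sub_shape[2]] = True

    print("fraction of voxels covered:", covered.mean())

Interior voxels are hit almost surely after a few thousand patches; only voxels very close to the volume border may stay uncovered, and the additional coronal/sagittal patches mentioned above reduce that further.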

@taigw
Owner

taigw commented Dec 3, 2018

To save memory, the training and testing were based on image patches rather than the entire image. The convolution along the z-axis uses 'valid' mode, which is why the output size is reduced by 8 along the z-axis.
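
For the size arithmetic only (the number of layers and kernel sizes below are my assumption, not necessarily MSNet's exact architecture): with 'valid' padding, each convolution with kernel size 3 along z trims 2 slices, so four such convolutions account for the reduction of 8 (19 → 11 for the training labels, 96 → 88 in the test above).

    import tensorflow as tf  # TensorFlow 1.x, as in the snippet above

    x = tf.placeholder(tf.float32, shape=[1, 19, 144, 144, 4])
    net = x
    # Four inter-slice convolutions (kernel 3 along z, 1 in-plane) with
    # 'valid' padding: z shrinks 19 -> 17 -> 15 -> 13 -> 11, while the
    # in-plane size stays 144 x 144.
    for _ in range(4):
        net = tf.layers.conv3d(net, filters=8, kernel_size=(3, 1, 1),
                               padding='valid')
    print(net.shape)  # (1, 11, 144, 144, 8)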
