Extracting custom patch sizes for lung CT nodule segmentation demo

Hi,

I’m following the ai-notebooks tutorial and working through the updated notebook.

My question is not about a technical difficulty but rather about a design choice:

  • The paper that is referenced in the notebook talks about using patches of size 50x50x50 for the clinical prognosis pipeline. Why does the notebook use a patch size of 150?

  • So I tried to recreate what is in the paper and modified the code to select patches of size 50, and I get the following error: ValueError: Error when checking input: expected convolution3d_1_input to have shape (50, 50, 50, 1) but got array with shape (0, 0, 0, 1)
    It seems that plastimatch does not create output patches of size 50. When I open the COM+GTV image I can see the patch with the correct size; however, when I read the NRRD file using get_input_volume(input_ct_nrrd_path = ct_res_crop_path), I get the size of the volume as 0,0,0. Where could I be going wrong?

Thank you very much for creating these notebooks and for all your support.

@adwaykanhere welcome to the IDC forum! Good questions!

@denbonte can you help here please?

Hey Adway!

  • The paper that is referenced in the notebook talks about using patches of size 50x50x50 for the clinical prognosis pipeline. Why does the notebook use a patch size of 150?

The reason why the notebook, in one of the steps, exports a 150x150x150 cube is simply to make sure everything looks OK (e.g., that the center of mass was computed correctly, and therefore that the data were loaded correctly, and so on). For instance, the PNG snapshots the notebook exports are of that 150x150x150 cube.

If you check the get_input_volume function defined here (based on the original function from Hosny et al., available here), you will see that the cube that actually gets used for inference is 50x50x50. Everything should work out of the box, without having to implement anything extra (… at least it does for me)!
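
For reference, the cropping boils down to something like the sketch below (a minimal illustration using SimpleITK and numpy, not the actual repository code; the function name and the way the mask is handled are made up). Note, incidentally, that an out-of-bounds numpy slice silently returns an empty array, which is one way to end up with a (0, 0, 0) volume like the one in your error:

import numpy as np
import SimpleITK as sitk

def crop_patch_around_com(ct_nrrd_path, mask_nrrd_path, patch_size=50):
    # load the resampled CT volume and the GTV segmentation mask
    ct = sitk.GetArrayFromImage(sitk.ReadImage(ct_nrrd_path))
    mask = sitk.GetArrayFromImage(sitk.ReadImage(mask_nrrd_path))
    # center of mass of the mask, rounded to voxel indices
    com = np.round(np.argwhere(mask > 0).mean(axis=0)).astype(int)
    lo = com - patch_size // 2
    hi = lo + patch_size
    # caveat: if lo goes negative or hi exceeds the volume shape,
    # this slicing silently yields an empty (0, 0, 0) array
    return ct[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]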

I think the other problem might also be solved by following what I wrote above - but if that’s not the case and something is still not working (or you simply still have questions), please go ahead and I will try to help you with those :slight_smile:

Hey Dennis!

Everything works out of the box! I was just curious about experimenting with larger patch sizes to see what the results of the pre-trained model would be for larger patches. Do you think I will need to re-train the model for a larger/smaller patch size?
Thanks for all your support, and I will get back to you if I’m still unable to figure it out.

Adway,


Everything works out of the box!

Good to know, thanks for the feedback :slight_smile:


I was just curious about experimenting with larger patch sizes to see what the results of the pre-trained model would be for larger patches. Do you think I will need to re-train the model for a larger/smaller patch size?

Short answer

Yes!

Long answer

Given a fixed image resolution (e.g., 1x1x1 mm, as in this case), the feature extractors/kernels of the convolutional layers are in principle independent of the dimension of the input (they will simply “slide more” and produce a bigger activation map).

However, if you look at the model architecture, you will see that some of the layers depend on the dimension of the input data - crucially one, i.e., the ~7M-parameter dense_1 fully connected layer after the last 3D max pooling, maxpooling3d_2. Since the pooling kernel has a fixed dimension, if the input dimension changes, the number of deep features after flatten_1 changes as well, and the forward pass cannot be completed (you will have fewer or more features than the expected 13824).
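
To make this concrete, here is a toy sketch (the layer stack below is made up for illustration - it is not the actual model): the number of features coming out of the flatten changes with the input size, so a dense layer built for one size cannot accept the other.

from tensorflow.keras import layers, models

def flatten_size(side):
    # toy conv/pool stack, for illustration only (not the Hosny et al. model)
    toy = models.Sequential([
        layers.Conv3D(64, 5, activation="relu", input_shape=(side, side, side, 1)),
        layers.MaxPooling3D(2),
        layers.Conv3D(64, 5, activation="relu"),
        layers.MaxPooling3D(2),
        layers.Flatten(),
    ])
    return toy.output_shape[-1]

print(flatten_size(50))   # one feature count...
print(flatten_size(150))  # ...and a much larger one: dense_1 would reject it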

This could have been taken care of, e.g., by using a 3D adaptive average pooling layer instead of the max pooling (in that case, the kernel size gets computed on the fly based on the number of features going in and the required number of features going out). The choice of keeping the input size fixed is arbitrary - and, to be fair, you will see this in most published models anyway. In this case, the input size, as described in the paper, depended on the size of the bounding boxes drawn around the tumours of the NSCLC patients in the three datasets.
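
For instance, Keras’ GlobalAveragePooling3D (the closest built-in analogue to an adaptive pooling layer) makes the feature count independent of the input size - again, just a sketch on the same toy stack as above:

from tensorflow.keras import layers, models

def pooled_size(side):
    # same toy stack as above, but ending in a global (adaptive) pooling layer
    toy = models.Sequential([
        layers.Conv3D(64, 5, activation="relu", input_shape=(side, side, side, 1)),
        layers.MaxPooling3D(2),
        layers.Conv3D(64, 5, activation="relu"),
        layers.GlobalAveragePooling3D(),  # one value per channel, whatever `side` is
    ])
    return toy.output_shape[-1]

print(pooled_size(50))   # 64
print(pooled_size(150))  # still 64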

What Hosny et al. did, though, was test, for instance, how robust the model was to perturbations of the segmentation masks (and therefore perturbations of the center of mass around which the 50x50x50 patch is cropped).


Do you think I will need to re-train the model for a larger/smaller patch size?

Although I wouldn’t advise this, if one really wanted to do it I would strongly suggest keeping the first part of the model (the one extracting the deep features) intact (with some fiddling it’s definitely possible), and re-training only the fully connected layers. Keep in mind this is a hard task (not from the learning perspective, but simply because you will need a training, a validation, and a testing dataset to validate your results and make sure the prognostic power of the model is not lost after the operation)!
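
In Keras terms, the surgery would look roughly like the sketch below - purely illustrative: model stands for the loaded pre-trained network, the layer names are the ones mentioned above, and the new head (its width, the 150x150x150 input) is entirely hypothetical:

from tensorflow.keras import layers, models

new_input = layers.Input(shape=(150, 150, 150, 1))  # hypothetical new patch size

# reuse the pre-trained convolutional layers (and their weights) on the new input
x = new_input
for layer in model.layers:
    if layer.name == "flatten_1":   # stop before the original dense head
        break
    if isinstance(layer, layers.InputLayer):
        continue
    layer.trainable = False         # keep the feature extractor intact
    x = layer(x)

# new, re-trainable fully connected head (sizes are made up)
x = layers.Flatten()(x)
x = layers.Dense(512, activation="relu")(x)
output = layers.Dense(1, activation="sigmoid")(x)
new_model = models.Model(new_input, output)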

Let me know if I can help with something else!

This makes sense, thank you Dennis!

Hi @denbonte, sorry to bother you!

I’m trying to run the Colab notebook locally and I have TF v2 installed. I know the notebook uses TF v1, as the model weights and architecture were written in TF v1. Is there a way to still natively load the model architecture and weights in TF v2? When I try to load the model in TF v2, I get the error:

TypeError: __init__() missing 2 required positional arguments: 'filters' and 'kernel_size'

The notebook runs fine on Colab, but I’d like to make it run without using Colab.

Hey @adwaykanhere!

No worries :slight_smile:

I’m trying to run the Colab notebook locally and I have TF v2 installed. I know the notebook uses TF v1, as the model weights and architecture were written in TF v1. Is there a way to still natively load the model architecture and weights in TF v2?

There definitely is a way, yes! Since the two versions of TF are very different (e.g., with TF2 defaulting to eager execution), errors like the one you encountered are quite common (… even between different 1.x or 2.x versions you can get a lot of warnings, sometimes).

However, TF2 still includes TF1’s functions under the tf.compat.v1 module (so you should be able to load the model using something like tf.compat.v1.saved_model.load - or one of the functions they provide along those lines).
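
Something along these lines is what I have in mind (the file names are placeholders, and - as discussed below - this can still trip on version differences):

import tensorflow as tf

tf.compat.v1.disable_v2_behavior()  # run with TF1-style graphs and sessions

# placeholder file names - adapt them to wherever the model files live
with open("architecture.json") as f:
    model_json_str = f.read()  # model_from_json expects a string, not a dict

model = tf.compat.v1.keras.models.model_from_json(model_json_str)
model.load_weights("weights.h5")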

Let me know if you manage to solve it!

Thanks @denbonte,
using tf.compat.v1 still gives another error.
I’m using model = tf.compat.v1.keras.models.model_from_json(model_json) and I get TypeError: the JSON object must be str, bytes or bytearray, not dict

Is the code to build the model available? I can create the model myself and then try to load the weights.

Thanks for your help!

Hey @adwaykanhere!

TypeError: the JSON object must be str, bytes or bytearray, not dict

This error looks completely independent from the one you got before, though - and a much easier one to solve! Have you tried, for instance, converting the model_json dictionary to a string as suggested by the TypeError (e.g., model_json_str = json.dumps(model_json), and then tf.compat.v1.keras.models.model_from_json(model_json_str))?

Is the code to build the model available? I can create the model myself and then try to load the weights.

I’m pretty sure it is not - but if you really want to do it, the model JSON should contain all the information you need (i.e., the configuration of every layer); alternatively, you can load it with TF1 (e.g., in a Colab notebook) and recreate the model from the output of model.summary().
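
For instance, to list the layers stored in the JSON (the file name is a placeholder; note that, depending on the Keras version, config can be a list of layers directly rather than a dict with a layers key):

import json

with open("architecture.json") as f:  # placeholder file name
    cfg = json.load(f)

# a Keras model JSON stores one entry per layer: class name + configuration
for layer in cfg["config"]["layers"]:
    print(layer["class_name"], layer["config"].get("name"))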

I think this will be unnecessary, anyway - the TypeError you pasted above should be solvable by simply figuring out what the tf.compat.v1.keras function expects as an input :slight_smile:

Thanks for your quick response.

Yes, I did try converting the JSON dict to a string using model_json_2 = json.dumps(model_json)
But I’m getting the same error when loading the model from the new JSON:
TypeError: __init__() missing 2 required positional arguments: 'filters' and 'kernel_size'

Yes, I did try converting the JSON dict to a string using model_json_2 = json.dumps(model_json)
But I’m getting the same error when loading the model from the new JSON:
TypeError: __init__() missing 2 required positional arguments: 'filters' and 'kernel_size'

The other possible source of the problem could be what I was mentioning earlier:

(… even between different 1.x or 2.x versions you can get a lot of warnings, sometimes).

i.e., the model was likely generated using a version of TF that defined some layers in a slightly different way with respect to what the version under tf.compat.v1.keras expects.

The easiest way to go, in my opinion, would be to create a new conda environment with just the right packages (of the right versions) to run the model - and go from there! Trying to port models usually works, but it can also lead to results that range from slightly different to quite different!
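
For instance, something along these lines (the environment name is arbitrary; the TF/Keras versions are the ones the notebook was written for, and TF 1.15 needs Python 3.7 or older):

# create and activate a clean environment with pinned versions
conda create -n hosny-tf1 python=3.7
conda activate hosny-tf1
pip install tensorflow==1.15 keras==2.3.1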

Alright, thank you very much!

Hey @denbonte, good morning!

I just noticed that Colab has removed support for Tensorflow 1.x as of August 1st, 2022, which means we won’t be able to load the Hosny model as per the tutorial. Is there a workaround for it on Colab, or will we just not be able to run the model in the notebook at all anymore?

Hi @adwaykanhere !

I just noticed that Colab has removed support for Tensorflow 1.x as of August 1st, 2022, which means we won’t be able to load the Hosny model as per the tutorial.

Indeed. I was out of the office when this happened, so sorry for the late reply! Thank you for bringing this to our attention!

Is there a workaround for it on Colab, or will we just not be able to run the model in the notebook at all anymore?

The best way to run the notebook would be to install the required Tensorflow and Keras versions at the very start. I have just tried to load the model after adding these two lines at the start:

!pip install tensorflow==1.15
!pip install keras==2.3.1

And everything works as it should :slight_smile:

I also updated the tutorial notebook and ran it from start to finish again to make sure everything is working (it is!).

Great! Thank you very much @denbonte
