
Automatic segmentation of the femur and tibia bones from X-ray images based on pure dilated residual U-Net

  • Co-corresponding authors: Shoujun Zhou and Yuanquan Wang

These authors contributed equally to this paper.

The first author is supported by NSFC grant 61976241
  • X-ray imaging of the lower limb bones is the most commonly used modality in clinical practice, and segmentation of the femur and tibia in an X-ray image is helpful for many medical tasks such as diagnosis, surgery and treatment. In this paper, we propose a new approach based on a pure dilated residual U-Net (PDR U-Net) for segmenting the femur and tibia. The proposed network uses dilated convolutions throughout to enlarge the receptive field, so that their advantages can be exploited fully. We conducted experiments and evaluations on datasets provided by Tianjin Hospital. Compared with the classical U-Net and FusionNet, our method has fewer parameters, achieves higher accuracy, and converges more rapidly, demonstrating its strong performance.

    Mathematics Subject Classification: Primary: 68T07; Secondary: 68T20.

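To make the block structure described in the abstract and in Figures 1 and 2 concrete, the following is a minimal Keras-style sketch of a dilated residual block and of the encoding-path stack listed in Table 1. The framework choice, filter count, input size and exact layer ordering are illustrative assumptions, not the authors' implementation; only the 3×3 dilated convolutions, the residual shortcut, the dilation rates and the dropout keep rate of 0.7 follow the figure captions and Table 1.

```python
# Minimal sketch of a dilated residual block in the spirit of Figure 2.
# Filter count, input size and layer ordering are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers

def dilated_residual_block(x, f, d, keep_rate=0.7):
    """Two 3x3 dilated convolutions with an identity shortcut (assumed design)."""
    shortcut = x
    y = layers.Conv2D(f, 3, padding="same", dilation_rate=d, activation="relu")(x)
    y = layers.Dropout(1.0 - keep_rate)(y)    # keep rate 0.7 -> dropout rate 0.3
    y = layers.Conv2D(f, 3, padding="same", dilation_rate=d, activation="relu")(y)
    if shortcut.shape[-1] != f:               # match channels before the addition
        shortcut = layers.Conv2D(f, 1, padding="same")(shortcut)
    return layers.Add()([y, shortcut])

# Encoding-path stack following Table 1: one standard (non-residual) block with
# dilation rate 1, then residual blocks with dilation rates 2, 4, 8, 16, 32, 32, 32.
inputs = layers.Input(shape=(256, 256, 1))    # input size assumed
x = layers.Conv2D(16, 3, padding="same", dilation_rate=1, activation="relu")(inputs)
x = layers.Conv2D(16, 3, padding="same", dilation_rate=1, activation="relu")(x)
for d in (2, 4, 8, 16, 32, 32, 32):
    x = dilated_residual_block(x, f=16, d=d)
outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)   # binary segmentation map
model = tf.keras.Model(inputs, outputs)
```

The growth of the receptive field produced by this stack of dilation rates is tabulated in Table 1 below.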
  • Figure 1.  The architecture of PDR U-Net. f denotes the number of filters and d the dilation rate; the dropout keep rate is 0.7

    Figure 2.  Details of the standard block and the residual block. f denotes the number of filters and d the dilation rate

    Figure 3.  The illustration on the left shows an unfilled femur label, and the one on the right shows a filled tibia label

    Figure 4.  The whole process of data augmentation

    Figure 5.  The plot on the left shows the loss of PDRU-Net on the training set, and the plot on the right shows its loss on the validation set

    Figure 6.  Segmentation results of the first three training epochs

    Figure 7.  The first three columns show the segmentation results of U-Net, FusionNet and PDR U-Net, respectively. The fourth and fifth columns show the corresponding ground-truth images and input images

    Table 1.  The receptive field of each block in the encoding path of PDRU-Net

    Block (type and dilation rate)        Convolutional layer   Receptive field
    Standard block 1, dilation rate 1     conv1_1               1 - 1 + 1 × 2 + 1 = 3
                                          conv1_2               3 - 1 + 1 × 2 + 1 = 5
    Residual block 2, dilation rate 2     conv2_1               5 - 1 + 2 × 2 + 1 = 9
                                          conv2_2               9 - 1 + 2 × 2 + 1 = 13
    Residual block 3, dilation rate 4     conv3_1               13 - 1 + 4 × 2 + 1 = 21
                                          conv3_2               21 - 1 + 4 × 2 + 1 = 29
    Residual block 4, dilation rate 8     conv4_1               29 - 1 + 8 × 2 + 1 = 45
                                          conv4_2               45 - 1 + 8 × 2 + 1 = 61
    Residual block 5, dilation rate 16    conv5_1               61 - 1 + 16 × 2 + 1 = 93
                                          conv5_2               93 - 1 + 16 × 2 + 1 = 125
    Residual block 6, dilation rate 32    conv6_1               125 - 1 + 32 × 2 + 1 = 189
                                          conv6_2               189 - 1 + 32 × 2 + 1 = 253
    Residual block 7, dilation rate 32    conv7_1               253 - 1 + 32 × 2 + 1 = 317
                                          conv7_2               317 - 1 + 32 × 2 + 1 = 381
    Residual block 8, dilation rate 32    conv8_1               381 - 1 + 32 × 2 + 1 = 445
                                          conv8_2               445 - 1 + 32 × 2 + 1 = 509
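    The entries in Table 1 follow the standard receptive-field recurrence for stride-1 dilated convolutions. Assuming a 3×3 kernel throughout (consistent with the "× 2" factor in each row), a layer with dilation rate d enlarges the receptive field by (k - 1)d = 2d:

```latex
% Receptive-field recurrence for stride-1, 3x3 dilated convolutions (k = 3).
\begin{align*}
RF_{\ell} &= RF_{\ell-1} + (k-1)\,d_{\ell} = RF_{\ell-1} + 2\,d_{\ell}, \qquad RF_{0} = 1,\\
\text{e.g. } \mathrm{conv2\_1}&:\; RF = 5 + 2 \times 2 = 9, \qquad
\mathrm{conv8\_2}:\; RF = 445 + 2 \times 32 = 509.
\end{align*}
```

    By the last encoding block the receptive field therefore covers a 509 × 509 region of the input, as listed in the final row of Table 1.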

    Table 2.  Comparison of the PDRU-Net, U-Net and FusionNet

    Model       Parameters   Dice coefficient   Pixel accuracy   Recall   Precision   F1 score
    U-Net       ~33M         0.918              0.943            0.839    0.987       0.907
    FusionNet   ~78M         0.944              0.969            0.877    0.997       0.933
    PDRU-Net    ~0.36M       0.973              0.987            0.953    0.976       0.964
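    For reference, the metrics reported in Table 2 can be computed from a predicted binary mask and its ground truth using the standard definitions below. This is a generic sketch (with an assumed smoothing constant to avoid division by zero), not the authors' evaluation code.

```python
# Generic definitions of the Table 2 metrics for binary masks (assumed
# conventions; a small epsilon avoids division by zero).
import numpy as np

def segmentation_metrics(pred, gt, eps=1e-7):
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()      # true positives
    fp = np.logical_and(pred, ~gt).sum()     # false positives
    fn = np.logical_and(~pred, gt).sum()     # false negatives
    tn = np.logical_and(~pred, ~gt).sum()    # true negatives

    dice = 2 * tp / (2 * tp + fp + fn + eps)
    pixel_accuracy = (tp + tn) / (tp + tn + fp + fn + eps)
    recall = tp / (tp + fn + eps)
    precision = tp / (tp + fp + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return dict(dice=dice, pixel_accuracy=pixel_accuracy,
                recall=recall, precision=precision, f1=f1)
```

    Note that for a single binary mask the F1 score coincides with the Dice coefficient; the small differences between the two columns in Table 2 presumably reflect how the scores are averaged over the test images.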
Open Access Under a Creative Commons license
