Advanced Ultrasound in Diagnosis and Therapy ›› 2018, Vol. 2 ›› Issue (2): 82-93. doi: 10.37015/AUDT.2018.180804
• Original Research •
Jinlian Ma, PhDa, Dexing Kong, PhDb,*
Received: 2018-07-02
Online: 2018-08-18
Published: 2018-08-19
Contact: Dexing Kong, PhD. E-mail: dryanghua99@163.com
Jinlian Ma, PhD, Dexing Kong, PhD. Deep Learning Models for Segmentation of Lesion Based on Ultrasound Images. Advanced Ultrasound in Diagnosis and Therapy, 2018, 2(2): 82-93.
Table 1
Architecture of the convolutional neural network model used in this study.

Layer | Input channels | Filter | Padding | Stride | Output channels |
---|---|---|---|---|---|
Conv1 | 1 | 13 × 13, 96 | 6 × 6 | 2 × 2 | 96 |
Max-pooling1 | 96 | 3 × 3 | 1 × 1 | 2 × 2 | 96 |
Conv2a | 96 | 5 × 5, 256 | 2 × 2 | 2 × 2 | 256 |
Conv2b | 256 | 5 × 5, 256 | 2 × 2 | 1 × 1 | 256 |
Max-pooling2 | 256 | 3 × 3 | 0 × 0 | 2 × 2 | 256 |
Conv3 | 256 | 3 × 3, 384 | 1 × 1 | 1 × 1 | 384 |
Conv4 | 384 | 3 × 3, 384 | 1 × 1 | 1 × 1 | 384 |
Conv5a | 384 | 3 × 3, 384 | 1 × 1 | 1 × 1 | 384 |
Conv5b | 384 | 3 × 3, 384 | 1 × 1 | 1 × 1 | 384 |
Conv5c | 384 | 3 × 3, 384 | 1 × 1 | 1 × 1 | 384 |
Conv5d | 384 | 3 × 3, 384 | 1 × 1 | 1 × 1 | 384 |
Conv5e | 384 | 3 × 3, 384 | 1 × 1 | 1 × 1 | 384 |
Conv5f | 384 | 3 × 3, 384 | 1 × 1 | 1 × 1 | 384 |
Conv5g | 384 | 3 × 3, 384 | 1 × 1 | 1 × 1 | 384 |
Conv5h | 384 | 3 × 3, 384 | 1 × 1 | 1 × 1 | 384 |
Conv5i | 384 | 3 × 3, 384 | 1 × 1 | 1 × 1 | 256 |
Conv6 | 64 | 3 × 3, 1 | 1 × 1 | 1 × 1 | 1 |
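For readers who want to prototype the network, the sketch below reconstructs Table 1 in PyTorch. It is an illustrative reconstruction, not the authors' released code: the framework choice, class name, ReLU activations and final sigmoid are our assumptions, and where the table's channel counts appear inconsistent (Conv5i lists 384 filters but 256 output channels; Conv6 lists 64 input channels), the sketch reconciles them so the model runs end to end.

```python
# Hedged PyTorch reconstruction of the fully convolutional network in Table 1.
import torch
import torch.nn as nn


def conv_relu(in_ch, out_ch, kernel, stride, pad):
    """Convolution followed by an (assumed) ReLU non-linearity."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, kernel, stride, pad),
                         nn.ReLU(inplace=True))


class LesionFCN(nn.Module):          # hypothetical name, not from the paper
    def __init__(self):
        super().__init__()
        layers = [
            conv_relu(1, 96, 13, 2, 6),             # Conv1
            nn.MaxPool2d(3, stride=2, padding=1),   # Max-pooling1
            conv_relu(96, 256, 5, 2, 2),            # Conv2a
            conv_relu(256, 256, 5, 1, 2),           # Conv2b
            nn.MaxPool2d(3, stride=2, padding=0),   # Max-pooling2
            conv_relu(256, 384, 3, 1, 1),           # Conv3
            conv_relu(384, 384, 3, 1, 1),           # Conv4
        ]
        layers += [conv_relu(384, 384, 3, 1, 1) for _ in range(8)]  # Conv5a-Conv5h
        layers += [
            conv_relu(384, 256, 3, 1, 1),           # Conv5i (256 output channels per Table 1)
            nn.Conv2d(256, 1, 3, 1, 1),             # Conv6: single-channel score map
        ]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # The sigmoid turns the score map into a per-pixel lesion probability map.
        return torch.sigmoid(self.body(x))


if __name__ == "__main__":
    net = LesionFCN()
    probs = net(torch.randn(1, 1, 320, 320))   # dummy single-channel ultrasound patch
    print(probs.shape)                         # torch.Size([1, 1, 19, 19])
```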
Figure 5
Segmentation probability maps of thyroid images generated by the pre-trained convolutional neural network (PCNN). Typical cases of thyroid nodule and normal thyroid images are shown in the first row; the manually delineated ground-truth masks and the segmentation probability maps generated by the PCNN are shown in the second and third rows, respectively.
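A probability map like those in Figure 5 still has to be binarized before it can be compared with the ground-truth mask. The snippet below is a minimal sketch of that step; the 0.5 cutoff and the absence of further post-processing are assumptions, since the paper's post-processing is not reproduced in this excerpt.

```python
# Minimal sketch: binarize a per-pixel probability map (assumed 0.5 threshold).
import numpy as np


def probability_to_mask(prob_map, threshold=0.5):
    """prob_map: 2-D array of per-pixel lesion probabilities in [0, 1]."""
    return prob_map >= threshold


# Hypothetical probability map with a confident blob in the centre.
prob = np.zeros((64, 64))
prob[24:40, 24:40] = 0.9
mask = probability_to_mask(prob)
print(mask.sum(), "pixels labelled as lesion")
```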
Table 2
Performance of the PCNN, NCNN, ZeilerNet, VggNet and ResNet for thyroid nodule segmentation in this study.
Method | OM | DR | TP | FP |
---|---|---|---|---|
PCNN | 0.8943 ± 0.0030 | 0.9558 ± 0.0013 | 0.9694 ± 0.0043 | 0.0569 ± 0.0029 |
NCNN | 0.8782 ± 0.0029 | 0.9498 ± 0.0010 | 0.9722 ± 0.0027 | 0.0718 ± 0.0036 |
ZeilerNet | 0.7230 ± 0.0548 | 0.6782 ± 0.0965 | 0.7595 ± 0.1730 | 2.7587 ± 3.0885 |
VggNet | 0.6951 ± 0.0604 | 0.7198 ± 0.1207 | 0.7522 ± 0.1639 | 1.0561 ± 7.2501 |
ResNet | 0.7895 ± 0.4910 | 0.7940 ± 0.4370 | 0.9491 ± 0.0222 | 1.3207 ± 3.9200 |
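Table 2 reports four region-based measures (OM, DR, TP, FP) whose formal definitions are not restated in this excerpt. The sketch below uses one common set of definitions for such metrics (Jaccard overlap, Dice ratio, and true/false-positive fractions relative to the ground-truth region); treat the exact formulas as assumptions rather than the authors' definitions.

```python
# Hedged sketch of overlap-style segmentation metrics akin to Table 2.
import numpy as np


def segmentation_metrics(pred, truth):
    """pred, truth: boolean masks of identical shape (nodule = True)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    om = inter / union                                      # overlap metric (Jaccard), assumed
    dr = 2.0 * inter / (pred.sum() + truth.sum())           # Dice ratio, assumed
    tp = inter / truth.sum()                                # fraction of the true nodule recovered
    fp = np.logical_and(pred, ~truth).sum() / truth.sum()   # spurious area relative to the nodule
    return om, dr, tp, fp


# Toy example: a prediction that roughly covers a square ground-truth nodule.
truth = np.zeros((100, 100), dtype=bool); truth[20:60, 20:60] = True
pred = np.zeros((100, 100), dtype=bool); pred[25:65, 25:65] = True
print(segmentation_metrics(pred, truth))
```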
Figure 6
Segmentation probability maps of breast images generated by the pre-trained convolutional neural network (PCNN). Typical cases of breast nodule and normal breast images are shown in the first row; the manually delineated ground-truth masks and the segmentation probability maps generated by the PCNN are shown in the second and third rows, respectively.
Table 3
Statistical test results comparing the PCNN with the NCNN, ZeilerNet, VggNet and ResNet, respectively.
Method | OM | DR | TP | FP |
---|---|---|---|---|
PCNN vs. NCNN | 0.0913 | 0.0075 | 0.6947 | 0.0466 |
PCNN vs. ZeilerNet | 4.0785e-09 | 1.6400e-32 | 0.0000 | 2.6490e-10 |
PCNN vs. VggNet | 1.1538e-21 | 5.5443e-19 | 0.0000 | 3.0961e-04 |
PCNN vs. ResNet | 3.1224e-09 | 1.5021e-11 | 0.6411 | 7.1310e-07 |
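How the values in Table 3 were obtained (which statistical test, over which per-image samples) is not described in this excerpt. The sketch below assumes a paired t-test over per-image metric values, purely to illustrate how one entry of such a table could be generated; the sample values are synthetic.

```python
# Illustrative paired comparison of two methods on the same test images
# (the paper's actual test and data are not reproduced here).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-image overlap metric (OM) values for two methods.
om_pcnn = rng.normal(0.89, 0.03, size=200)
om_ncnn = rng.normal(0.88, 0.03, size=200)

result = stats.ttest_rel(om_pcnn, om_ncnn)   # paired, image-by-image comparison
print(f"PCNN vs. NCNN (OM): p = {result.pvalue:.4g}")
```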
Figure 7
Segmentation cases of thyroid nodule images by the pre-trained convolutional neural network (PCNN) and the convolutional neural network without pre-training (NCNN), respectively. The original images are shown in the first row; the ground truths are shown in green; the segmentation results of the PCNN and the NCNN are shown in red in the second and third rows, respectively.
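Figures 7-11 overlay the manual ground truth in green and the automatic result in red on the original image. The OpenCV sketch below shows one way to render such an overlay; the synthetic masks, contour thickness and output file name are illustrative assumptions, not the authors' plotting code.

```python
# Sketch of a green (ground truth) / red (automatic) contour overlay.
import cv2
import numpy as np


def overlay_contours(gray_image, truth_mask, pred_mask):
    """gray_image: uint8 H x W; masks: uint8 {0, 255} of the same size."""
    canvas = cv2.cvtColor(gray_image, cv2.COLOR_GRAY2BGR)
    # OpenCV >= 4 returns (contours, hierarchy).
    truth_cnts, _ = cv2.findContours(truth_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    pred_cnts, _ = cv2.findContours(pred_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(canvas, truth_cnts, -1, (0, 255, 0), 2)   # green: manual ground truth
    cv2.drawContours(canvas, pred_cnts, -1, (0, 0, 255), 2)    # red: automatic segmentation
    return canvas


# Synthetic stand-ins for an ultrasound frame and the two masks.
img = np.full((128, 128), 60, dtype=np.uint8)
truth = np.zeros_like(img); cv2.circle(truth, (64, 64), 30, 255, -1)
pred = np.zeros_like(img); cv2.circle(pred, (66, 62), 28, 255, -1)
cv2.imwrite("overlay_example.png", overlay_contours(img, truth, pred))
```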
Figure 8
Segmentation cases of breast nodule images by the pre-trained convolutional neural network (PCNN) and the convolutional neural network without pre-training (NCNN), respectively. The original images are shown in the first row; the ground truths are shown in green; the segmentation results of the PCNN and the NCNN are shown in red in the second and third rows, respectively.
Figure 9
Cases of segmentation for thyroid nodule images by our pre-trained convolutional neural network (PCNN) and the other convolutional neural network methods. The original images are shown in the first row. The ground truths are shown in green. The red contours in the second, third, fourth and fifth rows show the segmentation results of our PCNN, the residual convolutional network (ResNet), the very deep convolutional network (VggNet) and the Zeiler convolutional network (ZeilerNet), respectively.
Figure 10
Cases of incorrect segmentation for thyroid nodule images by the pre-trained convolutional neural network (PCNN) and the convolutional neural network without pre-training (NCNN), respectively. The original images are shown in the first row; the segmentation results of the PCNN and the NCNN are shown in the second and third rows, respectively, where green is the ground truth and red is the automatic segmentation.
Figure 11
Cases of incorrect segmentation for breast nodule images by the pre-trained convolutional neural network (PCNN) and the convolutional neural network without pre-training (NCNN). The original images are shown in the first row; the ground truths are shown in green; the segmentation results of the PCNN and the NCNN are shown in red in the second and third rows, respectively.