ADVANCED ULTRASOUND IN DIAGNOSIS AND THERAPY
Application of the Virtual Reality in the Teaching of Ultrasonography
Received date: 2023-04-06
Revised date: 2023-04-10
Accepted date: 2023-04-22
Online published: 2023-04-27
This article discusses the potential benefits of using virtual reality (VR) technology in the teaching of ultrasonography. VR technology can provide an immersive learning experience, enabling students to interact with simulated environments and practice various tasks. Ultrasonography is convenient, rapid, and dynamic, provides real-time feedback, and is indispensable in clinical disease diagnosis. Combining VR and ultrasound technology can therefore offer a unique and effective teaching method for medical students and medical professionals. This article focuses on the current state, advantages, and challenges of virtual reality technology in the teaching of ultrasonography, with the aim of ensuring its successful implementation in educational settings.
Key words: Ultrasonography; Education; Virtual reality technology
Zheng Zhang, MS; Li Liu, MD; Desheng Sun, MD; Dirong Zhang, MD; Fengbei Kong, MS; Yalin Wu, PhD; Yu Shi, MD. Application of the Virtual Reality in the Teaching of Ultrasonography[J]. ADVANCED ULTRASOUND IN DIAGNOSIS AND THERAPY, 2023; 7(2):193-196. DOI: 10.37015/AUDT.2023.230026