Publication Type : Conference Paper
Publisher : Springer Nature Switzerland
Source : Lecture Notes in Computer Science
Url : https://doi.org/10.1007/978-3-031-78195-7_17
Campus : Amaravati
School : School of Engineering
Department : Electronics and Communication
Year : 2024
Abstract : Ultrasound (US) technology has revolutionized prenatal care by offering noninvasive, real-time visualization of maternal-fetal anatomy. The accurate classification of maternal-fetal US planes is a critical component of effective prenatal diagnosis. However, the inherently low inter-class variance among different fetal US images presents a significant hurdle, making fetal anatomy detection a laborious and time-consuming task, even for experienced sonographers. This paper proposes a novel approach using a Hybrid Vision Transformer (H-ViT) for automated fetal anatomical plane classification to address these challenges. The proposed method utilizes hierarchical features extracted from DenseNet-121, which are then fed into the vision transformer to analyze complex spatial relationships and patterns within fetal US images. By incorporating both global and local features, the proposed method enhances feature discriminability, thus alleviating the problem of low inter-class variance. The effectiveness of the H-ViT is evaluated on the largest publicly available maternal-fetal US image dataset. The experimental results demonstrate the superiority of our approach, which achieves an accuracy of 96.60% and outperforms other state-of-the-art techniques.
Cite this Research Publication : Thunakala Bala Krishna, Ajay Kumar Reddy Poreddy, Kolla Gnapika Sindhu, Priyanka Kokil, Automated Maternal Fetal Ultrasound Image Identification Using a Hybrid Vision Transformer Model, Lecture Notes in Computer Science, Springer Nature Switzerland, 2024, https://doi.org/10.1007/978-3-031-78195-7_17
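The abstract describes a hybrid pipeline in which CNN feature maps from DenseNet-121 are converted into a token sequence for a vision transformer. The following is a minimal NumPy sketch of that general idea only, not the paper's implementation: the feature-map shape (7x7x1024, the standard DenseNet-121 output), the embedding dimension, the single attention layer, the random weights, and the 6-class output head are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    # Single-head scaled dot-product self-attention over a (n, d) token sequence.
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

# Stand-in for a DenseNet-121 final feature map: 7x7 spatial grid, 1024 channels.
# In the actual method these would come from the pretrained CNN backbone.
feat = rng.standard_normal((7, 7, 1024)).astype(np.float32)
d = 64  # transformer embedding dimension (illustrative choice)

# Project each spatial location to a patch token: (49, d).
W_embed = rng.standard_normal((1024, d)) * 0.02
tokens = feat.reshape(-1, 1024) @ W_embed

# Prepend a class token and add (randomly initialized) positional embeddings.
cls = np.zeros((1, d))
tokens = np.concatenate([cls, tokens], axis=0)            # (50, d)
tokens = tokens + rng.standard_normal(tokens.shape) * 0.02

# One residual attention block (a real ViT stacks several, with MLPs and norms).
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.02 for _ in range(3))
attended = tokens + self_attention(tokens, Wq, Wk, Wv)

# Classify from the attended class token; 6 classes matches the common
# fetal-plane dataset setup, but the exact class count here is an assumption.
W_head = rng.standard_normal((d, 6)) * 0.02
probs = softmax(attended[0] @ W_head)
```

Tokenizing CNN feature maps instead of raw image patches is what lets the transformer attend over locally discriminative features while still modeling global spatial relationships, which is the global/local fusion the abstract refers to.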