Comparative analysis of U-Net architectures and variants for hand gesture segmentation in Parkinson's patients
Abstract
U-Net is a well-known method for image segmentation and has proven effective for a variety of segmentation challenges. This paper explores deep learning architectures for segmenting hand gestures of Parkinson's disease patients. We prepared and compared four custom models: a simple U-Net, a three-layer U-Net, an autoencoder-decoder architecture, and a U-Net with dense skip pathways, using a custom dataset of 1,000 hand gesture images and their corresponding masks. Our primary goal was accurate segmentation of parkinsonian hand gestures, which is crucial for automated diagnosis and monitoring in healthcare. Using metrics including accuracy, precision, recall, intersection over union (IoU), and Dice score, we demonstrated that our architectures effectively delineate hand gestures under different conditions. We also compared the performance of our custom models against pretrained deep learning architectures such as ResNet and VGGNet. Our findings indicate that the custom models effectively address the segmentation task, showing promising potential for practical applications in medical diagnostics and healthcare. This work highlights the versatility of our architectures in tackling the unique segmentation challenges associated with Parkinson's disease research and clinical practice.
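For reference, the overlap metrics named in the abstract (IoU and Dice score) have standard definitions for binary segmentation masks. The sketch below is not taken from the paper; it is a minimal illustration, assuming predicted and ground-truth masks are NumPy arrays of 0/1 values.

```python
import numpy as np

def iou_and_dice(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7):
    """Compute IoU and Dice score for binary (0/1) segmentation masks.

    Hypothetical helper for illustration only; the paper's evaluation code
    is not shown here.
    """
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    iou = intersection / (union + eps)                      # |A ∩ B| / |A ∪ B|
    dice = 2 * intersection / (pred.sum() + true.sum() + eps)  # 2|A ∩ B| / (|A| + |B|)
    return iou, dice

# Toy 4x4 example: prediction covers two columns, ground truth covers one.
pred = np.array([[1, 1, 0, 0]] * 4)
true = np.array([[1, 0, 0, 0]] * 4)
print(iou_and_dice(pred, true))  # IoU = 4/8 = 0.5, Dice = 8/12 ≈ 0.667
```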
Keywords
Autoencoder-decoder; Deep learning; Parkinson’s disease patients; Segmentation of hand gestures; U-Net framework
Full Text: PDF
DOI: http://doi.org/10.11591/ijict.v14i3.pp972-982
Copyright (c) 2025 Avadhoot Ramgonda Telepatil, Jayashree S. Vaddin
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
The International Journal of Informatics and Communication Technology (IJ-ICT)
p-ISSN 2252-8776, e-ISSN 2722-2616
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).