Using Hybrid Models for Action Correction in Instrument Learning Based on AI

Bibliographic Details
Main Authors: Avirmed Enkhbat, Timothy K. Shih, Munkhjargal Gochoo, Pimpa Cheewaprakobkit, Wisnu Aditya, Thai Duy Quy, Hsinchih Lin, Yu-Ting Lin
Format: Journal article
Language: English
Published: 2024
Online Access: https://scholar.dlu.edu.vn/handle/123456789/3553
https://ieeexplore.ieee.org/document/10663716
Holding Library: Thư viện Trường Đại học Đà Lạt (Dalat University Library)
Description
Abstract: Human action recognition has recently attracted much attention in computer vision research. Its applications are widely found in video surveillance, human-computer interaction, entertainment, and autonomous driving. In this study, we developed a system for evaluating online music performances. The system conducts experiments to assess the playing of the erhu, the most popular traditional stringed instrument in East Asia. Mastering the erhu is challenging: players often struggle to improve because of incorrect technique and a lack of guidance, resulting in limited progress. To address this issue, we propose hybrid models based on graph convolutional networks (GCN) and temporal convolutional networks (TCN) for action recognition, capturing both the spatial relationships between the joints (keypoints) of a human skeleton and the interactions between these joints over time. This can assist players in identifying errors while playing the instrument. In our research, we use RGB video as input, segmenting it into individual frames. From each frame we extract keypoints, so that both image and keypoint information serve as input to our model. Leveraging this model architecture, we achieve an accuracy exceeding 97% across the various classes of the hand-error modules, providing valuable insights into the assessment of musical performances and demonstrating the potential of AI-based solutions to enhance the learning and correction of complex human actions in interactive learning environments.
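
The abstract gives no implementation details, but the pipeline it describes (per-frame keypoint extraction followed by a hybrid GCN+TCN over the skeleton sequence) can be sketched roughly as below. This is a minimal PyTorch illustration, not the authors' published architecture; the adjacency matrix, channel sizes, temporal kernel width, and the 17-keypoint layout are all assumptions.

import torch
import torch.nn as nn

class GCNTCNBlock(nn.Module):
    """One hybrid block: a spatial graph convolution over skeleton
    joints followed by a temporal convolution over frames."""
    def __init__(self, in_ch, out_ch, adjacency):
        super().__init__()
        self.register_buffer("A", adjacency)            # (J, J) joint graph
        self.gcn = nn.Conv2d(in_ch, out_ch, kernel_size=1)   # per-joint channel mixing
        self.tcn = nn.Conv2d(out_ch, out_ch, kernel_size=(9, 1),
                             padding=(4, 0))            # 9-frame temporal window
        self.relu = nn.ReLU()

    def forward(self, x):                               # x: (N, C, T, J)
        x = torch.einsum("nctj,jk->nctk", x, self.A)    # aggregate neighboring joints
        x = self.relu(self.gcn(x))                      # spatial (graph) convolution
        return self.relu(self.tcn(x))                   # temporal convolution

# Hypothetical setup: 17 COCO-style 2D keypoints over 64 frames.
# A real model would use the skeleton's true adjacency matrix;
# the identity matrix here is only a placeholder.
A = torch.eye(17)
model = nn.Sequential(GCNTCNBlock(2, 64, A), GCNTCNBlock(64, 128, A))
clip = torch.randn(1, 2, 64, 17)                        # (batch, xy, frames, joints)
print(model(clip).shape)                                # torch.Size([1, 128, 64, 17])

A classification head (for instance, global pooling over frames and joints followed by a linear layer over the hand-error classes) would sit on top of the stacked blocks.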