Abstract
The gradual increase in online fencing videos over the past decade has enabled novel data-intensive technical projects in fencing, such as artificial intelligence. To address the problem of autonomously refereeing fencing bouts, we developed a pose-estimation and audio-analysis approach, resulting in a state-of-the-art lightweight Temporal Convolutional Network that referees fencing bouts by classifying each action as a touch for either the fencer on the left or the fencer on the right. The model leverages advances in human pose estimation to extract the positions of both fencers, avoiding the high computational loads typically associated with CNNs. It also introduces a novel technique for handling blade contact, a key component of refereeing fencing that was generally unaddressed in previous works: rather than attempting to identify blade contact visually, our approach uses audio to 'listen' for its sound. Trained on a custom dataset of international-level fencing from the last 7 years comprising ~4000 unique clips, our model achieved an accuracy of 89.1%, a 20% improvement over previous state-of-the-art models.
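The abstract does not specify how the audio channel is processed, but the idea of 'listening' for blade contact can be sketched as a short-time energy spike detector. The function below, its name, and all thresholds are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def detect_blade_contacts(audio, sr, frame_len=512, threshold_ratio=8.0):
    """Hypothetical sketch: flag frames whose short-time energy rises
    well above the running median, a crude proxy for the sharp metallic
    sound of blade contact. Returns candidate contact times in seconds."""
    n_frames = len(audio) // frame_len
    frames = audio[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).mean(axis=1)          # per-frame mean energy
    baseline = np.median(energy) + 1e-12         # robust noise floor
    hits = np.where(energy > threshold_ratio * baseline)[0]
    return hits * frame_len / sr

# Synthetic demo: quiet background noise with a sharp 'clash' at t = 0.5 s
sr = 16000
t = np.arange(sr) / sr
audio = 0.01 * np.random.default_rng(0).standard_normal(sr)
click = slice(sr // 2, sr // 2 + 256)
audio[click] += 0.5 * np.sin(2 * np.pi * 4000 * t[click])
times = detect_blade_contacts(audio, sr)
print(times)  # candidate contact times, clustered near 0.5 s
```

A real system would likely add band-pass filtering and align detections with the video timeline, but the energy-spike principle above captures why an audio cue can be far cheaper than visually tracking thin, fast-moving blades.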