Abstract
Fall detection can improve the safety and security of older people by issuing an alert when a fall occurs. Fall detection systems are mainly based on wearable sensors, ambient sensors, and vision, and each approach has well-known advantages and limitations. Multimodal and data fusion approaches combine data sources to better characterize falls. Publicly available multimodal datasets are needed to allow comparison between systems, algorithms, and modality combinations. To address this need, we present a publicly available dataset for fall detection that includes Inertial Measurement Units (IMUs), ambient infrared presence/absence sensors, and an electroencephalography (EEG) helmet. It will allow human activity recognition researchers to run experiments with different combinations of sensors.