Details

Deep Learning for Multimodal Fall Detection

Published in
2019 IEEE International Conference on Systems, Man and Cybernetics (SMC)
Date Issued
2019
Author(s)
Pérez-Daniel, Karina Ruby
Type
Conference paper
DOI
10.1109/SMC.2019.8914429
URL
https://scripta.up.edu.mx/handle/20.500.12552/4133
Abstract
Fall detection systems can help provide quick assistance to a person, reducing the severity of a fall's consequences. Real-time fall detection is important to decrease both the fear and the time a person remains lying on the floor after falling. In recent years, multimodal fall detection approaches have been developed to gain precision and robustness. In this work, we propose a multimodal fall detection system based on wearable sensors, ambient sensors, and vision devices. We used long short-term memory (LSTM) networks and convolutional neural networks (CNNs) for our analysis, given that they are able to extract features from raw data and are well suited to real-time detection. To test our proposal, we built a public multimodal dataset for fall detection. After experimentation, our proposed method reached 96.4% accuracy and improved precision, recall, and F1-score over single LSTM or CNN networks for fall detection. © 2019 IEEE.
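The abstract credits CNNs with extracting features directly from raw sensor data. As a rough, self-contained sketch of that idea (NumPy only; the window size, step, kernels, and synthetic accelerometer stream below are illustrative assumptions, not taken from the paper), raw multichannel signals can be segmented into fixed windows and passed through small 1D convolutions with max pooling:

```python
import numpy as np

def sliding_windows(signal, win, step):
    """Segment a (T, C) multichannel signal into overlapping (win, C) windows."""
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])

def conv1d_features(windows, kernels):
    """Valid-mode 1D convolution per channel followed by global max pooling.

    windows: (N, win, C); kernels: (K, k_len), each applied to every channel.
    Returns an (N, K * C) feature matrix."""
    n, win, c = windows.shape
    k, _ = kernels.shape
    feats = np.empty((n, k * c))
    for i in range(n):
        col = 0
        for ch in range(c):
            x = windows[i, :, ch]
            for f in range(k):
                resp = np.convolve(x, kernels[f], mode="valid")
                feats[i, col] = resp.max()   # global max pooling
                col += 1
    return feats

# Synthetic tri-axial accelerometer stream: 2 s at 100 Hz (hypothetical data)
rng = np.random.default_rng(0)
stream = rng.normal(0.0, 0.1, size=(200, 3))
stream[120:130] += 3.0                      # simulated impact spike (fall-like event)

wins = sliding_windows(stream, win=50, step=25)   # -> (7, 50, 3)
kernels = np.array([[1.0, -1.0],                  # difference (jerk-like) filter
                    [0.5, 0.5]])                  # smoothing filter
features = conv1d_features(wins, kernels)         # -> (7, 6)
```

In the paper's actual system, learned convolutional kernels (and LSTM layers for temporal modeling) would replace these hand-picked filters, and a classifier would fuse features from the wearable, ambient, and vision modalities; this sketch only shows the window-then-convolve feature-extraction step.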
Subjects
Deep learning
Fall detection
Long short-term memory
Multimodal data
Real-time system
Brain
Convolution