Now showing 1 - 6 of 6
Publication

Comparative Analysis of Artificial Hydrocarbon Networks and Data-Driven Approaches for Human Activity Recognition

2015, Ponce, Hiram; Martinez-Villaseñor, Lourdes; Miralles-Pechuán, Luis

In recent years, advances in computing and sensing technologies have contributed to the development of effective human activity recognition systems. In context-aware and ambient assisted living applications, the classification of body postures and movements aids the development of health systems that improve the quality of life of the disabled and the elderly. In this paper we describe a comparative analysis of data-driven activity recognition techniques against a novel supervised learning technique called artificial hydrocarbon networks (AHN). We show that artificial hydrocarbon networks are suitable for efficient classification of body postures and movements, and we compare their performance against other well-known supervised learning methods.
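The comparative evaluation described above can be sketched as a generic loop that scores several classifiers on the same posture data. This is only an illustrative sketch: the AHN model itself is not reproduced, and the synthetic accelerometer features and the two baseline classifiers (nearest-centroid and 1-NN) are assumptions for demonstration, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tri-axis accelerometer features for two postures (illustrative only):
# standing (gravity on z) vs. lying (gravity on x).
X_stand = rng.normal(loc=[0.0, 0.0, 9.8], scale=0.2, size=(50, 3))
X_lie = rng.normal(loc=[9.8, 0.0, 0.0], scale=0.2, size=(50, 3))
X = np.vstack([X_stand, X_lie])
y = np.array([0] * 50 + [1] * 50)

def nearest_centroid(X_tr, y_tr, X_te):
    """Predict the class whose mean feature vector is closest."""
    centroids = np.array([X_tr[y_tr == c].mean(axis=0) for c in np.unique(y_tr)])
    d = np.linalg.norm(X_te[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def one_nn(X_tr, y_tr, X_te):
    """Predict the label of the single closest training sample."""
    d = np.linalg.norm(X_te[:, None, :] - X_tr[None, :, :], axis=2)
    return y_tr[d.argmin(axis=1)]

# Simple hold-out split; every candidate method is scored the same way.
idx = rng.permutation(len(X))
tr, te = idx[:70], idx[70:]
for name, clf in [("nearest-centroid", nearest_centroid), ("1-NN", one_nn)]:
    acc = (clf(X[tr], y[tr], X[te]) == y[te]).mean()
    print(f"{name}: {acc:.2f}")
```

In a real comparison, AHN and the other supervised methods would simply be added to the list of candidates and scored under the same split.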

Publication

Design and Analysis for Fall Detection System Simplification

2020, Martinez-Villaseñor, Lourdes; Ponce, Hiram

This paper presents a methodology based on multimodal sensors for configuring a simple, comfortable, and fast fall detection and human activity recognition system that can be easily implemented and adopted. The methodology is based on the configuration of specific types of sensors, machine learning methods, and procedures. The protocol is divided into four phases: (1) database creation, (2) data analysis, (3) system simplification, and (4) evaluation. Using this methodology, we created a multimodal database for fall detection and human activity recognition, namely UP-Fall Detection. It comprises data samples from 17 subjects who performed 5 types of falls and 6 different simple activities over 3 trials. All information was gathered using 5 wearable sensors (tri-axis accelerometer, gyroscope, and light intensity), 1 electroencephalograph helmet, 6 infrared sensors as ambient sensors, and 2 cameras at lateral and front viewpoints. The proposed methodology adds several important stages for a deep analysis of the following design issues, in order to simplify a fall detection system: (a) select which sensors or combinations of sensors are to be used in a simple fall detection system, (b) determine the best placement of the sources of information, and (c) select the most suitable machine learning classification method for fall and human activity detection and recognition. Even though some multimodal approaches reported in the literature focus on only one or two of these issues, our methodology solves all three design problems of a human fall and activity detection and recognition system simultaneously. © 2020 Journal of Visualized Experiments: NLM (Medline)
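The data analysis phase of a protocol like this typically segments continuous sensor streams into fixed-length windows and computes per-window statistics before classification. The sketch below assumes a sliding window over a tri-axis accelerometer stream with mean and standard deviation per axis; the window length, step, and statistics are illustrative choices, not the configuration from the paper.

```python
import numpy as np

def window_features(signal, win, step):
    """Slide a fixed-length window over a (T, 3) tri-axis signal and
    return per-window mean and standard deviation for each axis."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    return np.array(feats)

# Toy accelerometer stream: 100 samples of stillness, then 100 of shaking.
rng = np.random.default_rng(1)
still = rng.normal([0.0, 0.0, 9.8], 0.05, size=(100, 3))
shake = rng.normal([0.0, 0.0, 9.8], 3.0, size=(100, 3))
stream = np.vstack([still, shake])

F = window_features(stream, win=50, step=25)
print(F.shape)  # one 6-dimensional feature vector per window
```

The per-window standard deviation rises sharply in the high-motion windows, which is the kind of feature a fall/activity classifier would then consume.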

Publication

Sensor Location Analysis and Minimal Deployment for Fall Detection System

2020, Ponce, Hiram; Martinez-Villaseñor, Lourdes; Nuñez-Martinez, José

Human falls are considered an important health problem worldwide. Fall detection systems can raise an alert when a fall occurs, reducing the time it takes for a person to receive medical attention. There are different approaches to designing fall detection systems, such as wearable sensors, ambient sensors, vision devices, and, more recently, multimodal approaches. However, these systems depend on the types of devices selected for data acquisition, the locations in which these devices are placed, and how fall detection is performed. Previously, we created a multimodal dataset, namely UP-Fall Detection, and developed a fall detection system; however, the latter cannot be applied under realistic conditions because it lacks a proper selection of a minimal set of sensors. In this work, we propose a methodological analysis to determine the minimal number of sensors required to develop an accurate fall detection system, using the UP-Fall Detection dataset. Specifically, we analyze five wearable sensors and two camera viewpoints separately. We then combine them at the feature level to evaluate and select the most suitable single or combined sources of information. From this analysis, we found that a wearable sensor at the waist combined with a lateral camera viewpoint achieves 98.72% accuracy (intra-subject). Finally, we present a case study in which these analysis results are used to deploy a minimal-sensor fall detection system, which reports 87.56% accuracy (inter-subject).
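Feature-level fusion of the kind described here amounts to aligning the per-window feature matrices from each source and concatenating them column-wise before training a single classifier. The sketch below is a minimal illustration of that step; the feature dimensions for the waist IMU and the camera are hypothetical placeholders, not the sizes used in the paper.

```python
import numpy as np

def fuse_features(wearable_feats, camera_feats):
    """Feature-level fusion: align per-window feature matrices from two
    sources on their common window count, then concatenate column-wise."""
    n = min(len(wearable_feats), len(camera_feats))  # align window counts
    return np.hstack([wearable_feats[:n], camera_feats[:n]])

# Hypothetical per-window features: 6 from a waist IMU, 4 from a camera view.
imu = np.random.default_rng(2).normal(size=(20, 6))
cam = np.random.default_rng(3).normal(size=(22, 4))

fused = fuse_features(imu, cam)
print(fused.shape)  # 20 windows, 10 combined features each
```

Each candidate sensor/viewpoint combination produces one such fused matrix, and the combination whose classifier scores highest is the one selected for deployment.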

Publication

Open Source Implementation for Fall Classification and Fall Detection Systems

2020, Ponce, Hiram; Martinez-Villaseñor, Lourdes; Nuñez Martínez, José Pablo; Moya-Albor, Ernesto; Brieva, Jorge

Distributed social coding has created many benefits for software developers. Open source code and publicly available datasets can leverage the development of fall detection and fall classification systems, which can help reduce the time it takes for a person to receive help after a fall occurs. Many simulated-falls datasets consider different types of falls; however, very few fall detection systems actually identify and discriminate between each category of fall. In this chapter, we present an open source implementation of fall classification and detection systems using the public UP-Fall Detection dataset. This implementation comprises a set of open codes stored in a GitHub repository for full access, together with a tutorial for using the codes and a concise example of their application. © 2020, Springer Nature Switzerland AG.

Publication

Analysis of Contextual Sensors for Fall Detection

2019, Martinez-Villaseñor, Lourdes; Ponce, Hiram

Falls are a major problem among older people and often cause serious injuries. Efficient fall detection solutions are important to reduce the time it takes for a person who has suffered a fall to receive assistance. Given the recent availability of cameras, wearable sensors, and ambient sensors, more research in fall detection focuses on combining different data modalities. To determine the positive effect of each modality and combination on the effectiveness of fall detection, a detailed assessment has to be done. In this paper, we analyzed different combinations of wearable devices, namely IMUs and an EEG helmet, with a grid of active infrared sensors for fall detection, with the aim of determining the positive effect of contextual information on fall detection accuracy. We used long short-term memory (LSTM) networks to enable fall detection from raw sensor data. For some activities, certain combinations can help discriminate falls from other activities of daily living (ADL). © 2019 IEEE.
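An LSTM consumes a raw sensor sequence one time step at a time through its gating equations, and the final hidden state can then feed a fall/ADL classifier. The sketch below implements one standard LSTM step in NumPy; the gating equations are the textbook formulation, while the dimensions and random weights are illustrative assumptions, not the network trained in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,).
    Gates are stacked in the order: input, forget, output, candidate."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])            # input gate
    f = sigmoid(z[H:2 * H])        # forget gate
    o = sigmoid(z[2 * H:3 * H])    # output gate
    g = np.tanh(z[3 * H:4 * H])    # candidate cell state
    c = f * c_prev + i * g         # update cell memory
    h = o * np.tanh(c)             # emit new hidden state
    return h, c

# Run a toy raw sensor sequence (T steps of D-dim readings) through the cell.
rng = np.random.default_rng(4)
D, H, T = 3, 8, 10
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(T, D)):
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # final hidden state, e.g. fed to a fall/ADL classifier head
```

In practice one would use a deep learning framework's LSTM layer rather than hand-rolling the cell; the point here is only the per-step recurrence over raw multimodal readings.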

Publication

A vision-based approach for fall detection using multiple cameras and convolutional neural networks: A case study using the UP-Fall detection dataset

2019, Espinosa Loera, Ricardo Abel; Ponce, Hiram; Moya-Albor, Ernesto; Martinez-Villaseñor, Lourdes; Brieva, Jorge; Gutiérrez, Sebastián

The automatic recognition of human falls is currently an important research topic for the computer vision and artificial intelligence communities. In image analysis, vision-based approaches are commonly used for fall detection and classification systems due to the recent exponential increase in the use of cameras. Moreover, deep learning techniques have revolutionized vision-based approaches; they are considered robust and reliable solutions for detection and classification problems, mostly through convolutional neural networks (CNNs). Recently, our research group released a public multimodal dataset for fall detection, called the UP-Fall Detection dataset, and studies on modality approaches for fall detection and classification are required. Focusing only on a vision-based approach, in this paper we present a fall detection system based on a 2D CNN inference method and multiple cameras. This approach analyzes images in fixed time windows and extracts features using an optical flow method that obtains information on the relative motion between two consecutive images. We tested this approach on our public dataset; the results show that our proposed multi-vision-based approach detects human falls with an accuracy of 95.64%, comparable to state-of-the-art methods, while using a simple CNN architecture. © 2019 Elsevier Ltd
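The core idea of optical flow between two consecutive frames can be sketched with a global Lucas-Kanade least-squares estimate in NumPy. This is a deliberate simplification: the paper's actual flow method and CNN are not reproduced here, and the synthetic frames, single-translation assumption, and border cropping below are all illustrative choices.

```python
import numpy as np

def lucas_kanade_global(f1, f2):
    """Estimate a single (u, v) translation between two frames by solving
    the Lucas-Kanade least-squares system over the image interior."""
    Iy, Ix = np.gradient(f1)           # spatial gradients (rows=y, cols=x)
    It = f2 - f1                       # temporal gradient
    s = (slice(2, -2), slice(2, -2))   # crop borders to avoid wrap artifacts
    Ix, Iy, It = Ix[s], Iy[s], It[s]
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)       # (u, v) in pixels per frame pair

# Two consecutive synthetic frames: the pattern moves one pixel to the right.
yy, xx = np.mgrid[0:64, 0:64]
frame1 = np.sin(0.3 * xx) * np.cos(0.2 * yy)
frame2 = np.roll(frame1, 1, axis=1)

u, v = lucas_kanade_global(frame1, frame2)
print(f"u={u:.2f}, v={v:.2f}")
```

A dense flow field computed per pixel (rather than one global vector) is what would be stacked over a fixed time window and passed to the 2D CNN for classification.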