Publication

Approaching Fall Classification Using the UP-Fall Detection Dataset: Analysis and Results from an International Competition

2020 , Ponce, Hiram , Martinez-Villaseñor, Lourdes

This chapter presents the results of the Challenge UP – Multimodal Fall Detection competition, held during the 2019 International Joint Conference on Neural Networks (IJCNN 2019). The competition addresses the fall classification problem: it aims to classify eleven human activities (i.e. five types of falls and six simple daily activities) using the joint information from different wearables, ambient sensors and video recordings stored in a given dataset. After five months of competition, three winners and one honorable mention were awarded during the conference event. The machine learning model from the first place scored 82.47% in F1-score, outperforming the baseline of 70.44%. After analyzing the implementations from the participants, we summarized the insights and trends in fall classification. © 2020, Springer Nature Switzerland AG.
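
For reference, the ranking metric can be computed as in the minimal sketch below; the macro averaging and the placeholder labels are assumptions, since the abstract does not state which F1 variant the competition used.

```python
# Minimal sketch: computing a multi-class F1-score for an 11-class
# fall/activity classification task, as used to rank competition entries.
# The averaging mode ('macro') is an assumption; ground truth and
# predictions below are random placeholders.
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n_classes = 11                                    # 5 fall types + 6 daily activities
y_true = rng.integers(0, n_classes, size=1000)    # placeholder ground truth
y_pred = rng.integers(0, n_classes, size=1000)    # placeholder predictions

score = f1_score(y_true, y_pred, average="macro")
print(f"macro F1-score: {score:.4f}")
```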

Publication

A vision-based approach for fall detection using multiple cameras and convolutional neural networks: A case study using the UP-Fall detection dataset

2019 , Espinosa Loera, Ricardo Abel , Ponce, Hiram , Moya-Albor, Ernesto , Martinez-Villaseñor, Lourdes , Brieva, Jorge , Gutiérrez, Sebastián

The automatic recognition of human falls is currently an important topic of research for the computer vision and artificial intelligence communities. In image analysis, it is common to use a vision-based approach for fall detection and classification systems due to the recent exponential increase in the use of cameras. Moreover, deep learning techniques have revolutionized vision-based approaches. These techniques are considered robust and reliable solutions for detection and classification problems, mostly using convolutional neural networks (CNNs). Recently, our research group released a public multimodal dataset for fall detection, the UP-Fall Detection dataset, and studies on modality-specific approaches for fall detection and classification are required. Focusing only on a vision-based approach, in this paper we present a fall detection system based on a 2D CNN inference method and multiple cameras. This approach analyzes images in fixed time windows and extracts features using an optical flow method that captures the relative motion between two consecutive images. We tested this approach on our public dataset, and the results showed that our multi-vision-based approach detects human falls with an accuracy of 95.64%, comparable to state-of-the-art methods, while using a simple CNN architecture. © 2019 Elsevier Ltd
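
A rough sketch of this kind of pipeline is shown below: dense optical flow computed between consecutive frames in a fixed window, stacked and passed to a small 2D CNN. The window length, image size and network architecture are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: optical-flow features over a fixed time window fed to a small 2D CNN.
# Frames are assumed to be already resized to 64x64 BGR images; all sizes and
# layer choices here are assumptions for illustration.
import cv2
import numpy as np
import tensorflow as tf

def flow_window(frames):
    """Stack Farneback optical-flow magnitudes for consecutive frame pairs."""
    mags = []
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        mags.append(mag)
        prev = gray
    return np.stack(mags, axis=-1)        # H x W x (window - 1) channels

window = 16                               # assumed number of frames per window
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, window - 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # fall vs. no fall
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```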

Publication

Online Testing in Machine Learning Approach for Fall Detection

2020 , Martinez-Villaseñor, Lourdes , Ponce, Hiram , Nuñez-Martínez, José , Pacheco, Sofia

Robust fall detectors are needed to reduce the time in which a person receives medical assistance and to mitigate the negative effects when a fall occurs. Robustness in fall detection systems is difficult to achieve, given that there are still many challenges regarding performance in real conditions. Fall detection systems based on smartphones show good results when they follow the traditional methodology of collecting data and then training and evaluating classification models with the same sensors and subjects, yet they often fail when tested under different, more realistic conditions. In this paper, we propose a methodology for building a fall detection solution and testing it online with sensors and subjects different from those used during training, in order to provide a more flexible and portable fall detector. © 2020 IEEE.
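
A minimal sketch of the evaluation idea follows, assuming a leave-one-subject-out split as a stand-in for testing on subjects not seen during training; all data below are placeholders.

```python
# Sketch: evaluating a fall detector on held-out subjects (leave-one-subject-out),
# which approximates testing under conditions different from data collection.
# Features, labels and subject IDs are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 12))            # placeholder accelerometer features
y = rng.integers(0, 2, size=600)          # 1 = fall, 0 = no fall
subjects = rng.integers(0, 6, size=600)   # placeholder subject IDs

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print(f"mean accuracy on unseen subjects: {np.mean(scores):.3f}")
```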

Publication

Sensor Location Analysis and Minimal Deployment for Fall Detection System

2020 , Ponce, Hiram , Martinez-Villaseñor, Lourdes , Nuñez-Martinez, José

Human falls are considered an important health problem worldwide. Fall detection systems can raise an alert when a fall occurs, reducing the time in which a person obtains medical attention. In this regard, there are different approaches to designing fall detection systems, such as wearable sensors, ambient sensors, vision devices and, more recently, multimodal approaches. However, these systems depend on the types of devices selected for data acquisition, the locations in which these devices are placed, and how fall detection is performed. Previously, we created a multimodal dataset, namely UP-Fall Detection, and developed a fall detection system; however, the latter cannot be applied under realistic conditions because a minimal set of sensors was not properly selected. In this work, we propose a methodological analysis to determine the minimal number of sensors required to develop an accurate fall detection system, using the UP-Fall Detection dataset. Specifically, we analyze five wearable sensors and two camera viewpoints separately. After that, we combine them at the feature level to evaluate and select the most suitable single or combined sources of information. From this analysis, we found that a wearable sensor at the waist combined with a lateral camera viewpoint exhibits 98.72% accuracy (intra-subject). Finally, we present a case study that uses the analysis results to deploy a minimal-sensor fall detection system, which reports 87.56% accuracy (inter-subject).
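
The feature-level combination step can be illustrated as in the sketch below; the source names, feature dimensions and classifier are placeholder assumptions, not the actual UP-Fall Detection features.

```python
# Sketch: evaluate each source alone, then all pairwise feature-level
# combinations, to look for a minimal set of sensors. Data are placeholders.
import itertools
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 500
y = rng.integers(0, 2, size=n)                    # fall / no-fall labels
sources = {                                       # per-source feature blocks
    "wrist": rng.normal(size=(n, 9)),
    "waist": rng.normal(size=(n, 9)),
    "camera_lateral": rng.normal(size=(n, 20)),
}

def score(X):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, X, y, cv=5).mean()

for name, X in sources.items():                   # single sources
    print(f"{name}: {score(X):.3f}")
for a, b in itertools.combinations(sources, 2):   # feature-level combinations
    X = np.hstack([sources[a], sources[b]])
    print(f"{a}+{b}: {score(X):.3f}")
```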

Publication

Open Source Implementation for Fall Classification and Fall Detection Systems

2020 , Ponce, Hiram , Martinez-Villaseñor, Lourdes , Nuñez Martínez, José Pablo , Moya-Albor, Ernesto , Brieva, Jorge

Distributed social coding has created many benefits for software developers. Open source code and publicly available datasets can leverage the development of fall detection and fall classification systems. These systems can help reduce the time in which a person receives help after a fall occurs. Many simulated-fall datasets consider different types of falls; however, very few fall detection systems actually identify and discriminate between fall categories. In this chapter, we present an open source implementation of fall classification and detection systems using the public UP-Fall Detection dataset. The implementation comprises a set of open-source scripts stored in a GitHub repository with full access, along with a tutorial for using the code and a concise example of its application. © 2020, Springer Nature Switzerland AG.
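
As an illustration only, the sketch below shows the shape of an end-to-end fall-classification example of the kind such a tutorial covers; the data and column names are placeholders, and the authoritative code is the one published in the GitHub repository.

```python
# Sketch: train a classifier on a feature table and report per-class results
# for the 11 fall/activity classes. A placeholder DataFrame stands in for the
# feature files produced by the repository's scripts; column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
df = pd.DataFrame(rng.normal(size=(1100, 10)),
                  columns=[f"feat_{i}" for i in range(10)])
df["label"] = rng.integers(0, 11, size=len(df))    # 5 falls + 6 activities

X = df.drop(columns=["label"]).values
y = df["label"].values
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```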

Publication

Artificial hydrocarbon networks for freezing of gait detection in Parkinson’s disease

2020 , Martinez-Villaseñor, Lourdes , Ponce, Hiram , Nuñez Martínez, José Pablo

Freezing of gait (FoG) is one of the most impairing phenomena experienced by Parkinson's disease (PD) patients. This phenomenon is associated with falls and is an important factor that limits autonomy and impairs the quality of life of PD patients. Pharmacological treatment is difficult and does not always help to deal with this problem. Robust FoG detection systems can help monitor patients and identify when they need aid, providing external cueing to deal with FoG episodes. In this paper, we describe a comparative analysis of traditional machine learning techniques against Artificial Hydrocarbon Networks (AHN) for FoG detection. We compared four supervised machine learning classifiers and AHN for FoG event detection using a publicly available dataset, obtaining an 88% F-score with AHN. We show that AHN are suitable for FoG detection. © 2020 IEEE.
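
A minimal sketch of the comparative protocol is given below, with placeholder data; AHN itself is omitted because it has no standard scikit-learn implementation, but an AHN model would be scored with the same loop.

```python
# Sketch: several traditional supervised classifiers evaluated with the same
# F-score protocol on windowed sensor features. Feature matrix and labels are
# random placeholders, not the FoG dataset itself.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(800, 15))        # placeholder windowed accelerometer features
y = rng.integers(0, 2, size=800)      # 1 = FoG episode, 0 = normal gait

models = {
    "k-NN": KNeighborsClassifier(),
    "SVM": SVC(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, clf in models.items():
    f1 = cross_val_score(clf, X, y, cv=5, scoring="f1").mean()
    print(f"{name}: F1 = {f1:.3f}")
```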

Publication

Versatility of Artificial Hydrocarbon Networks for Supervised Learning

2018 , Ponce, Hiram , Martinez-Villaseñor, Lourdes

Surveys on supervised machine learning show that each technique has strengths and weaknesses that make it more suitable for a particular domain or learning task. No single technique can tackle every supervised learning task, and it is difficult to comply with all possible desirable features of each particular domain. However, it is important that a new technique satisfy the requirements and desirable features of as many domains and learning tasks as possible. In this paper, we presented artificial hydrocarbon networks (AHN) as a versatile and efficient supervised learning method. We determined the ability of AHN to solve problems in different domains, work with different data sources, and learn different tasks. The analysis considered six applications in which AHN was successfully applied. © Springer Nature Switzerland AG 2018.

Publication

Preface: Advances in Soft Computing: 22nd Mexican International Conference on Artificial Intelligence, MICAI 2023, Yucatán, Mexico, November 13–18, 2023, Proceedings, Part II

2024-01-01 , Calvo, Hiram , Martinez-Villaseñor, Lourdes , Ponce, Hiram

The Mexican International Conference on Artificial Intelligence (MICAI) is a yearly international conference series that has been organized by the Mexican Society for Artificial Intelligence (SMIA) since 2000. MICAI is a major international artificial intelligence (AI) forum and the main event in the academic life of the country’s growing AI community. This year, MICAI 2023 was graciously hosted by the Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas (IIMAS) and the Universidad Autónoma del Estado de Yucatán (UAEY). The conference presented a cornucopia of scientific endeavors. © Springer.

Publication

Design and Analysis for Fall Detection System Simplification

2020 , Martinez-Villaseñor, Lourdes , Ponce, Hiram

This paper presents a methodology based on multimodal sensors to configure a simple, comfortable and fast fall detection and human activity recognition system that can be easily implemented and adopted. The methodology is based on the configuration of specific types of sensors, machine-learning methods and procedures. The protocol is divided into four phases: (1) database creation, (2) data analysis, (3) system simplification and (4) evaluation. Using this methodology, we created a multimodal database for fall detection and human activity recognition, namely UP-Fall Detection. It comprises data samples from 17 subjects who performed 5 types of falls and 6 different simple activities during 3 trials. All information was gathered using 5 wearable sensors (tri-axis accelerometer, gyroscope and light intensity), 1 electroencephalograph helmet, 6 infrared sensors as ambient sensors, and 2 cameras in lateral and front viewpoints. The proposed methodology adds important stages for a deep analysis of the following design issues in order to simplify a fall detection system: (a) selecting which sensors or combination of sensors to use in a simple fall detection system, (b) determining the best placement of the sources of information, and (c) selecting the most suitable machine learning classification method for fall and human activity detection and recognition. Even though some multimodal approaches reported in the literature focus on only one or two of the above-mentioned issues, our methodology allows these three design problems of a human fall and activity detection and recognition system to be solved simultaneously. © 2020 Journal of Visualized Experiments: NLM (Medline)
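
The system simplification phase can be sketched as a small search over sensor placements and classifiers, keeping the simplest configuration whose accuracy stays within a tolerance of the best; the placements, features and tolerance below are assumptions for illustration.

```python
# Sketch of phase (3), system simplification: score each (placement, classifier)
# pair, then keep configurations whose accuracy is within a small tolerance of
# the best. All data and the tolerance are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
n = 400
y = rng.integers(0, 11, size=n)                       # 5 falls + 6 activities
placements = {p: rng.normal(size=(n, 9))              # placeholder features
              for p in ["ankle", "waist", "wrist", "neck", "pocket"]}
classifiers = {"k-NN": KNeighborsClassifier(),
               "Random forest": RandomForestClassifier(n_estimators=100,
                                                       random_state=0)}

results = {(p, c): cross_val_score(clf, X, y, cv=3).mean()
           for p, X in placements.items()
           for c, clf in classifiers.items()}
best = max(results.values())
tolerance = 0.02                                      # assumed acceptable drop
simple = [(p, c) for (p, c), acc in results.items() if acc >= best - tolerance]
print("configurations within tolerance of the best:", simple)
```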