Publication

Distributed evolutionary learning control for mobile robot navigation based on virtual and physical agents

2020 , Ponce, Hiram , Moya-Albor, Ernesto , Martinez-Villaseñor, Lourdes , Brieva, Jorge

This paper presents a distributed evolutionary learning control based on social wound treatment for mobile robot navigation, using an integrated multi-robot system composed of simulated and physical robots. To do so, this work extends the population-based metaheuristic wound treatment optimization (WTO) method into a distributed scheme. This distributed WTO method is implemented on the multi-robot system, allowing the robots to experience the environment on their own and communicate their findings, resulting in emergent intelligence. We implemented our proposal by combining five simulated robots with one physical robot to tune a navigation controller that moves freely in a workspace. Results showed that the controller found by this multi-robot system, when deployed on the physical robot, successfully navigates around a U-maze without applying any transfer learning approach. We consider this proposal useful in evolutionary robotics and of great importance for narrowing the simulation-to-reality gap in transferring knowledge in robotics. © 2019 Elsevier B.V.
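
The abstract does not spell out the distributed scheme in code, but its core loop can be illustrated. The following is a minimal sketch of a generic distributed population-based search, assuming each agent evaluates candidate controller parameters in its own environment and periodically shares its best result; it is not the authors' WTO method, and evaluate_controller is a hypothetical stand-in for a navigation trial.

```python
# A minimal sketch, NOT the authors' WTO method: each agent improves its own
# candidate controller and periodically adopts the globally best solution
# shared by the group. evaluate_controller() is a hypothetical stand-in for a
# simulated or physical navigation trial.
import random

def evaluate_controller(params):
    # Placeholder fitness; in practice this would run a navigation trial.
    return -sum((p - 0.5) ** 2 for p in params)

def perturb(candidate, sigma=0.1):
    # Generic Gaussian mutation of the controller parameters.
    return [p + random.gauss(0.0, sigma) for p in candidate]

def distributed_search(n_agents=6, dim=4, generations=50):
    agents = [[random.random() for _ in range(dim)] for _ in range(n_agents)]
    global_best = max(agents, key=evaluate_controller)
    for _ in range(generations):
        # Local phase: every agent tries to improve its own candidate.
        for i, candidate in enumerate(agents):
            trial = perturb(candidate)
            if evaluate_controller(trial) > evaluate_controller(candidate):
                agents[i] = trial
        # Communication phase: agents publish results and re-seed from the best one.
        best_now = max(agents, key=evaluate_controller)
        if evaluate_controller(best_now) > evaluate_controller(global_best):
            global_best = best_now
        agents = [perturb(global_best) for _ in agents]
    return global_best

print(distributed_search())
```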

Publication

Challenges and trends in multimodal fall detection for healthcare

2020 , Ponce, Hiram , Brieva, Jorge , Martinez-Villaseñor, Lourdes , Moya-Albor, Ernesto

This book focuses on novel implementations of sensor technologies, artificial intelligence, machine learning, computer vision and statistics for automated human fall recognition systems and related topics using data fusion. It includes theory and coding implementations to help readers quickly grasp the concepts and to highlight the applicability of this technology. For convenience, it is divided into two parts. The first part reviews the state of the art in human fall and activity recognition systems, while the second part describes a public dataset especially curated for multimodal fall detection. It also gathers contributions demonstrating the use of this dataset and showing examples. This book is useful for anyone who is interested in fall detection systems, as well as for those interested in solving challenging signal recognition, vision and machine learning problems. Potential applications include health care, robotics, sports and human–machine interaction, among others.

Publication

UP-Fall detection dataset: a multimodal approach

2019 , Martinez-Villaseñor, Lourdes , Ponce, Hiram , Brieva, Jorge , Moya-Albor, Ernesto , Nuñez-Martínez, José , Peñafort Asturiano, Carlos J.

Falls, especially in elderly persons, are an important health problem worldwide. Reliable fall detection systems can mitigate the negative consequences of falls. Among the important challenges and issues reported in the literature is the difficulty of making fair comparisons between fall detection systems and machine learning techniques for detection. In this paper, we present the UP-Fall Detection Dataset. The dataset comprises raw and feature sets retrieved from 17 healthy young individuals without any impairment who performed 11 activities and falls, with three attempts each. In total, the dataset gathers more than 850 GB of information from wearable sensors, ambient sensors and vision devices. Two experimental use cases are also presented. The aim of our dataset is to help the human activity recognition and machine learning research communities fairly compare their fall detection solutions. It also provides many experimental possibilities for the signal recognition, vision, and machine learning community. © 2019 NLM (Medline).
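
As an illustration of the fair comparison the abstract emphasizes, the sketch below uses a subject-wise split over a feature set like the one described, so that no subject appears in both training and test data. The file name and the Subject/Activity column names are hypothetical placeholders, not the dataset's documented schema.

```python
# Minimal sketch of a subject-wise benchmarking split over a tabular feature
# set. "up_fall_features.csv", "Subject" and "Activity" are hypothetical
# placeholders standing in for one exported feature set of the dataset.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("up_fall_features.csv")      # hypothetical feature export
test_subjects = {15, 16, 17}                  # hold out a few of the 17 subjects
train = df[~df["Subject"].isin(test_subjects)]
test = df[df["Subject"].isin(test_subjects)]

feature_cols = [c for c in df.columns if c not in ("Subject", "Activity")]
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(train[feature_cols], train["Activity"])
print("accuracy:", accuracy_score(test["Activity"], clf.predict(test[feature_cols])))
```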

Publication

A vision-based approach for fall detection using multiple cameras and convolutional neural networks: A case study using the UP-Fall detection dataset

2019 , Espinosa Loera, Ricardo Abel , Ponce, Hiram , Moya-Albor, Ernesto , Martinez-Villaseñor, Lourdes , Brieva, Jorge , Gutiérrez, Sebastián

The automatic recognition of human falls is currently an important topic of research for the computer vision and artificial intelligence communities. In image analysis, it is common to use a vision-based approach for fall detection and classification systems due to the recent exponential increase in the use of cameras. Moreover, deep learning techniques have revolutionized vision-based approaches. These techniques are considered robust and reliable solutions for detection and classification problems, mostly using convolutional neural networks (CNNs). Recently, our research group released a public multimodal dataset for fall detection called the UP-Fall Detection dataset, and studies on modality approaches for fall detection and classification are required. Focusing only on a vision-based approach, in this paper, we present a fall detection system based on a 2D CNN inference method and multiple cameras. This approach analyzes images in fixed time windows and extracts features using an optical flow method that obtains information on the relative motion between two consecutive images. We tested this approach on our public dataset, and the results showed that our proposed multi-vision-based approach detects human falls with an accuracy of 95.64%, competitive with state-of-the-art methods, while using a simple CNN architecture. © 2019 Elsevier Ltd
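
The pipeline described here (dense optical flow between consecutive frames, classified by a small 2D CNN over fixed time windows) can be sketched as follows; the frame size, window handling and network shape are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal sketch of an optical-flow + 2D CNN fall detector. Frame size and
# the network architecture are illustrative choices, not the paper's setup.
import cv2
import tensorflow as tf

def flow_image(prev_frame, next_frame, size=(160, 120)):
    # Farneback dense optical flow -> (H, W, 2) array of x/y displacements.
    prev_gray = cv2.resize(cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY), size)
    next_gray = cv2.resize(cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY), size)
    return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def build_cnn(input_shape=(120, 160, 2), n_classes=2):
    # Deliberately small CNN, echoing the "simple architecture" claim.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

model = build_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(flow_windows, labels, ...)  # flow_windows built from consecutive camera frames
```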

Publication

Open Source Implementation for Fall Classification and Fall Detection Systems

2020 , Ponce, Hiram , Martinez-Villaseñor, Lourdes , Nuñez Martínez, José Pablo , Moya-Albor, Ernesto , Brieva, Jorge

Distributed social coding has created many benefits for software developers. Open source code and publicly available datasets can leverage the development of fall detection and fall classification systems. These systems can help to reduce the time it takes for a person to receive help after a fall occurs. Many simulated fall datasets consider different types of falls; however, very few fall detection systems actually identify and discriminate between each category of fall. In this chapter, we present an open source implementation for fall classification and detection systems using the public UP-Fall Detection dataset. This implementation comprises a set of open codes stored in a GitHub repository for full access, and it provides a tutorial for using the codes and a concise example of their application. © 2020, Springer Nature Switzerland AG.
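
To illustrate the detection-versus-classification distinction the chapter highlights, the sketch below relabels the same feature windows either as binary fall/no-fall (detection) or keeps the fall subtypes (classification). The label names are hypothetical and the classifier is a generic stand-in, not necessarily what the repository's code uses.

```python
# Minimal sketch of the detection-vs-classification distinction: the same
# labelled windows can be collapsed to a binary task or kept fine-grained.
# FALL_TYPES values are hypothetical label names, not the dataset's encoding.
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

FALL_TYPES = {"fall_forward", "fall_backward", "fall_sideward"}  # hypothetical

def to_detection_labels(labels):
    # Collapse every fall subtype into a single "fall" class.
    return ["fall" if y in FALL_TYPES else "no_fall" for y in labels]

def fit_and_score(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

# detection_acc = fit_and_score(features, to_detection_labels(labels))  # binary task
# classification_acc = fit_and_score(features, labels)                  # fall-type task
```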

Publication

Challenges and trends in multimodal fall detection for healthcare: Preface

2020 , Ponce, Hiram , Martinez-Villaseñor, Lourdes , Moya-Albor, Ernesto , Brieva, Jorge

This book focuses on novel implementations of sensor technologies, artificial intelligence, machine learning, computer vision and statistics for automated human fall recognition systems and related topics using data fusion. It includes theory and coding implementations to help readers quickly grasp the concepts and to highlight the applicability of this technology. For convenience, it is divided into two parts. The first part reviews the state of the art in human fall and activity recognition systems, while the second part describes a public dataset especially curated for multimodal fall detection. It also gathers contributions demonstrating the use of this dataset and showing examples. This book is useful for anyone who is interested in fall detection systems, as well as for those interested in solving challenging signal recognition, vision and machine learning problems. Potential applications include health care, robotics, sports and human–machine interaction, among others. © 2020 Springer Nature Switzerland AG.

Publication

A non-contact heart rate estimation method using video magnification and neural networks

2020 , Moya-Albor, Ernesto , Brieva, Jorge , Ponce, Hiram , Martinez-Villaseñor, Lourdes

Heart rate (HR) monitoring is a significant task in many medical, sports and aged-care assisted-living applications, among other disciplines. In the literature, several works have reported effective HR measurement using contact sensors such as adhesive or dry electro-conductive electrodes. However, contact sensors present several issues, such as portability problems, skin irritation, discomfort and body movement constraints. In this regard, this paper presents a non-contact HR estimation method using vision-based methods and neural networks. This work uses a bio-inspired Eulerian motion magnification approach to highlight the blood irrigation process of the cardiac pulse, which is later fed to a feed-forward neural network trained to estimate the HR. For the experimental analysis, we compare two magnification procedures, based on Gaussian and Hermite decomposition, over video recordings collected from the wrists of five subjects. Results show that the Hermite-based magnification method is robust under noise analysis (4.24 bpm root-mean-squared error in the worst-case scenario). Furthermore, our results demonstrate that the Hermite-based method is competitive with the state of the art (1.86 bpm average root-mean-squared error) and can be implemented using a single camera for contactless HR estimation. © 2020 IEEE Instrumentation and Measurement Magazine, Institute of Electrical and Electronics Engineers Inc.
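
A heavily simplified version of the overall idea (not the paper's Hermite- or Gaussian-based magnification) might look like the sketch below: band-pass the per-frame mean intensity of a wrist region around plausible cardiac frequencies, then let a small feed-forward network map fixed-length windows of the filtered signal to an HR value. The sampling rate, window length and training data here are placeholders.

```python
# Minimal sketch, NOT the paper's magnification method: temporal band-pass of
# a wrist ROI intensity trace followed by a small feed-forward HR regressor.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.neural_network import MLPRegressor

FPS = 30.0  # assumed camera frame rate

def bandpass(signal, low=0.8, high=3.0, fs=FPS, order=4):
    # 0.8-3.0 Hz roughly corresponds to 48-180 bpm.
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def roi_trace(frames):
    # Mean intensity of each frame's ROI -> 1D temporal signal.
    return np.array([f.mean() for f in frames])

# Placeholder training data: filtered 10 s windows and reference HR labels (bpm).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 300))        # 200 windows of 10 s at 30 fps (placeholder)
y = rng.uniform(50, 120, size=200)     # placeholder reference heart rates
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, y)

# Inference: hr = model.predict(bandpass(roi_trace(frames))[None, :])
```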

Publication

Application of Convolutional Neural Networks for Fall Detection Using Multiple Cameras

2020 , Espinosa Loera, Ricardo Abel , Ponce, Hiram , Gutiérrez, Sebastián , Martinez-Villaseñor, Lourdes , Moya-Albor, Ernesto , Brieva, Jorge

Currently, one of the most important research issues for artificial intelligence and computer vision is the recognition of human falls. Due to the exponential increase in the use of cameras, it is common to use a vision-based approach for fall detection and classification systems. Moreover, deep learning algorithms have transformed the way we address vision-based problems; as a deep learning technique, the convolutional neural network (CNN) offers reliable and robust solutions to detection and classification problems. Focusing only on a vision-based approach, in this work we used images from a new public multimodal dataset for fall detection (the UP-Fall Detection dataset) published by our research team. In this chapter we present a fall detection system based on a 2D CNN that analyzes information from multiple cameras. The method analyzes images in fixed time windows and extracts features using an optical flow method that captures the relative motion between two consecutive images. For the experiments, we tested this approach on the UP-Fall Detection dataset. Results showed that our proposed multi-vision-based approach detects human falls with 95.64% accuracy using a simple CNN architecture, compared with other state-of-the-art methods.
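
Since this chapter emphasizes the use of multiple cameras, one simple way to combine their outputs is late fusion of per-camera softmax scores, sketched below; this particular fusion rule is an illustrative assumption, not necessarily the fusion the authors apply.

```python
# Minimal sketch of late fusion across cameras: average per-camera softmax
# scores for the same time window and take the most likely class. This rule
# is an illustrative assumption, not the chapter's documented fusion step.
import numpy as np

def fuse_camera_scores(per_camera_probs):
    # per_camera_probs: list of (n_classes,) softmax vectors, one per camera.
    mean_probs = np.mean(per_camera_probs, axis=0)
    return int(np.argmax(mean_probs)), mean_probs

# Example: two cameras scoring the same window as [no_fall, fall].
cam1 = np.array([0.2, 0.8])   # camera 1 says "fall"
cam2 = np.array([0.4, 0.6])   # camera 2 agrees, less confidently
label, probs = fuse_camera_scores([cam1, cam2])
print(label, probs)           # -> 1 [0.3 0.7]
```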

Publication

An Intelligent Human Fall Detection System Using a Vision-Based Strategy

2019 , Brieva, Jorge , Ponce, Hiram , Moya-Albor, Ernesto , Martinez-Villaseñor, Lourdes

The elderly population is increasing dramatically, and it is expected to reach 2.1 billion individuals by 2050. In this regard, new care strategies are required. Assisted living technologies have been proposed as alternatives to support professional caregivers and families in taking care of elderly people, for example those at risk of falls. Fall detection systems can alleviate this problem by reducing the time it takes for a person who has suffered a fall to receive assistance. Thus, this paper proposes a fall detection system based on an image processing strategy that extracts motion features through an optical flow method. For classification, we use these features as inputs to a convolutional neural network. We applied our approach to a dataset comprising video recordings of one subject performing different types of falls. In the experimental results, our approach achieved 92% accuracy on the dataset used. © 2019 IEEE.