Publication

A Methodology Based on Deep Q-Learning/Genetic Algorithms for Optimizing COVID-19 Pandemic Government Actions

2020 , Miralles-Pechuán, Luis , Jiménez, Fernando , Ponce, Hiram , Martinez-Villaseñor, Lourdes

Whenever countries are threatened by a pandemic, as is the case with the COVID-19 virus, governments need help to take the right actions to safeguard public health and to mitigate the negative effects on the economy. A restrictive approach can seriously damage the economy; conversely, a relaxed one may put a high percentage of the population at risk. Other investigations in this area focus on modelling the spread of the virus or estimating the impact of the different measures on its propagation. In this paper, however, we propose a new methodology for helping governments plan the phases of combating the pandemic based on their priorities. To this end, we implement the SEIR epidemiological model to represent the evolution of the COVID-19 virus in the population. To optimize the sequences of actions governments can take, we propose a methodology with two approaches, one based on Deep Q-Learning and the other on Genetic Algorithms. The sequences of actions (confinement, self-isolation, two-meter distancing, or no restrictions) are evaluated according to a reward system focused on meeting two objectives: first, keeping infections low enough that hospitals are not overwhelmed, and second, avoiding drastic measures that could cause serious damage to the economy. The conducted experiments evaluate our methodology based on the rewards accumulated over the established period. The experiments also show that it is a valid tool for governments to reduce the negative effects of a pandemic by optimizing the planning of the phases. According to our results, the approach based on Deep Q-Learning outperforms the one based on Genetic Algorithms. © 2020 ACM.
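The abstract's setup can be sketched as a small simulation: an SEIR model whose transmission rate depends on the government action taken each day, plus a reward that penalizes both hospital overflow and economic cost. This is a minimal illustration, not the paper's implementation; all parameter values, action costs, and reward weights below are invented for the example.

```python
# Minimal discrete-time SEIR simulation with action-dependent contact rate.
# The rates, costs, and weights are illustrative assumptions, not the
# values used in the paper.

ACTIONS = {  # action -> (transmission rate beta, economic cost per day)
    "no_restrictions": (0.60, 0.0),
    "two_meter_distance": (0.35, 0.2),
    "self_isolation": (0.20, 0.5),
    "confinement": (0.08, 1.0),
}

def seir_step(s, e, i, r, beta, sigma=0.2, gamma=0.1):
    """One day of SEIR dynamics on population fractions (s + e + i + r == 1)."""
    new_exposed = beta * s * i        # susceptible -> exposed
    new_infectious = sigma * e        # exposed -> infectious
    new_recovered = gamma * i         # infectious -> recovered
    return (s - new_exposed,
            e + new_exposed - new_infectious,
            i + new_infectious - new_recovered,
            r + new_recovered)

def evaluate_plan(plan, hospital_capacity=0.05, w_health=10.0, w_econ=1.0):
    """Accumulated reward of a sequence of daily actions (higher is better)."""
    s, e, i, r = 0.99, 0.0, 0.01, 0.0
    reward = 0.0
    for action in plan:
        beta, econ_cost = ACTIONS[action]
        s, e, i, r = seir_step(s, e, i, r, beta)
        overflow = max(0.0, i - hospital_capacity)  # infections above capacity
        reward -= w_health * overflow + w_econ * econ_cost
    return reward

strict = evaluate_plan(["confinement"] * 60)       # pure economic cost
loose = evaluate_plan(["no_restrictions"] * 60)    # pure health cost
```

A Deep Q-Learning agent or a Genetic Algorithm, as in the paper, would then search over such action sequences to maximize the accumulated reward.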

Publication

Application of Convolutional Neural Networks for Fall Detection Using Multiple Cameras

2020 , Espinosa Loera, Ricardo Abel , Ponce, Hiram , Gutiérrez, Sebastián , Martinez-Villaseñor, Lourdes , Moya-Albor, Ernesto , Brieva, Jorge

Currently, one of the most important research issues for artificial intelligence and computer vision is the recognition of human falls. Due to the exponential increase in the use of cameras, it is common to use vision-based approaches for fall detection and classification systems. At the same time, deep learning algorithms have transformed the way we approach vision-based problems. The Convolutional Neural Network (CNN), as a deep learning technique, offers more reliable and robust solutions to detection and classification problems. Focusing only on a vision-based approach, for this work we used images from a new public multimodal fall detection data set (the UP-Fall Detection dataset) published by our research team. In this chapter we present a fall detection system using a 2D CNN that analyzes information from multiple cameras. The method analyzes images in fixed time windows, extracting features with an optical flow method that obtains information on the relative motion between two consecutive images. For the experiments, we tested this approach on the UP-Fall Detection dataset. Results showed that our proposed multi-vision-based approach detects human falls with 95.64% accuracy using a simple CNN architecture, compared with other state-of-the-art methods.
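The windowing idea above can be illustrated with a toy example: frames are grouped into fixed time windows and a relative-motion measure between consecutive frames is extracted per window. A mean absolute frame difference stands in here for the optical-flow features the chapter actually uses; the window size and the synthetic "videos" are invented for the sketch.

```python
def frame_diff(a, b):
    """Mean absolute pixel difference between two equal-size grayscale frames."""
    h, w = len(a), len(a[0])
    total = sum(abs(a[y][x] - b[y][x]) for y in range(h) for x in range(w))
    return total / (h * w)

def window_motion_features(frames, window=4):
    """One motion feature per fixed-size window: the average of the
    consecutive-frame differences inside that window."""
    feats = []
    for start in range(0, len(frames) - window + 1, window):
        chunk = frames[start:start + window]
        diffs = [frame_diff(chunk[k], chunk[k + 1]) for k in range(window - 1)]
        feats.append(sum(diffs) / len(diffs))
    return feats

# Two synthetic 2x2 "videos": one static, one with abrupt motion.
static = [[[0, 0], [0, 0]]] * 8
moving = [[[0, 0], [0, 0]], [[9, 9], [9, 9]]] * 4

static_feats = window_motion_features(static)   # no motion in any window
moving_feats = window_motion_features(moving)   # large motion in every window
```

In the chapter's pipeline, per-window optical-flow images (rather than scalar differences) would be fed to the 2D CNN for classification.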

Publication

Analysis of Contextual Sensors for Fall Detection

2019 , Martinez-Villaseñor, Lourdes , Ponce, Hiram

Falls are a major problem among older people and often cause serious injuries. It is important to have efficient fall detection solutions to reduce the time it takes for a person who has suffered a fall to receive assistance. Given the recent availability of cameras and of wearable and ambient sensors, more research in fall detection is focused on combining different data modalities. In order to determine the positive effects of each modality and combination on the effectiveness of fall detection, a detailed assessment has to be done. In this paper, we analyzed different combinations of wearable devices, namely IMUs and an EEG helmet, with a grid of active infrared sensors for fall detection, with the aim of determining the positive effects of contextual information on fall detection accuracy. We used long short-term memory (LSTM) networks to enable fall detection from raw sensor data. For some activities, certain combinations can be helpful to discriminate other activities of daily living (ADL) from falls. © 2019 IEEE.
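Feeding raw sensor data to an LSTM, as described above, typically starts by segmenting the synchronized streams into overlapping fixed-length windows, each labeled (e.g. fall vs. ADL). The sketch below shows that preprocessing step only; the window size, step, channel layout, and majority-label rule are illustrative assumptions, not the paper's exact configuration.

```python
def sliding_windows(samples, labels, size=5, step=2):
    """Segment synchronized raw sensor samples (one multi-channel reading
    per time step) into overlapping fixed-length windows; each window is
    assigned the majority label of its samples."""
    windows = []
    for start in range(0, len(samples) - size + 1, step):
        seg = samples[start:start + size]
        seg_labels = labels[start:start + size]
        majority = max(set(seg_labels), key=seg_labels.count)
        windows.append((seg, majority))
    return windows

# Toy stream: each sample is [imu_accel, infrared_grid_activation].
samples = [[0.1, 9.8]] * 6 + [[3.0, 0.5]] * 4
labels = ["ADL"] * 6 + ["fall"] * 4
windows = sliding_windows(samples, labels, size=5, step=2)
```

Each resulting window is a fixed-length sequence suitable as one LSTM input, with its label as the training target.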

Publication

Deep Learning for Multimodal Fall Detection

2019 , Martinez-Villaseñor, Lourdes , Pérez-Daniel, Karina Ruby , Ponce, Hiram

Fall detection systems can help provide quick assistance to a person, diminishing the severity of the consequences of a fall. Real-time fall detection is important to decrease the fear, and the time, that a person remains lying on the floor after falling. In recent years, multimodal fall detection approaches have been developed in order to gain precision and robustness. In this work, we propose a multimodal fall detection system based on wearable sensors, ambient sensors and vision devices. We used long short-term memory networks (LSTM) and convolutional neural networks (CNN) for our analysis, given that they are able to extract features from raw data and are well suited for real-time detection. To test our proposal, we built a public multimodal dataset for fall detection. After experimentation, our proposed method reached 96.4% accuracy, representing an improvement in precision, recall and F1-score over using single LSTM or CNN networks for fall detection. © 2019 IEEE.
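One common way to combine modality-specific detectors like those above is late score fusion: each modality produces a fall probability and the system takes a weighted average. This is a generic illustration of multimodal combination, not the fusion scheme used in the paper; the modality names, weights, and threshold are invented.

```python
def fuse_scores(scores, weights, threshold=0.5):
    """Late fusion: weighted average of per-modality fall probabilities.
    Returns (fused_score, is_fall)."""
    total_w = sum(weights.values())
    fused = sum(scores[m] * weights[m] for m in scores) / total_w
    return fused, fused >= threshold

# Hypothetical per-modality outputs for one time window.
modal_scores = {"wearable_lstm": 0.9, "ambient_lstm": 0.4, "camera_cnn": 0.8}
weights = {"wearable_lstm": 0.5, "ambient_lstm": 0.2, "camera_cnn": 0.3}

fused, is_fall = fuse_scores(modal_scores, weights)
```

Weighting lets a more reliable modality (here, hypothetically, the wearable stream) dominate without discarding the others.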

Publication

An Explainable Tool to Support Age-related Macular Degeneration Diagnosis

2022 , Martinez-Villaseñor, Lourdes , Miralles-Pechuán, Luis , Ponce, Hiram , Martínez Velasco, Antonieta Teodora

Artificial intelligence and deep learning, in particular, have gained large attention in the ophthalmology community due to the possibility of processing large amounts of data and digitized ocular images. Intelligent systems are developed to support the diagnosis and treatment of a number of ophthalmic diseases such as age-related macular degeneration (AMD), glaucoma and retinopathy of prematurity. Hence, explainability is necessary to gain trust in, and therefore the adoption of, these critical decision support systems. Visual explanations have been proposed for AMD diagnosis only when optical coherence tomography (OCT) images are used, but interpretability using other inputs (i.e. data point-based features) for AMD diagnosis is rather limited. In this paper, we propose a practical tool to support AMD diagnosis based on Artificial Hydrocarbon Networks (AHN) with different kinds of input data, such as demographic characteristics, features known as risk factors for AMD, and genetic variants obtained from DNA genotyping. The proposed explainer, named eXplainable Artificial Hydrocarbon Networks (XAHN), is able to produce global and local interpretations of the AHN model. An explainability assessment of the XAHN explainer was conducted with clinicians to get feedback on the tool. We consider that the XAHN explainer tool will be beneficial for supporting expert clinicians in AMD diagnosis, especially where input data are not visual. © 2022 IEEE.
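A local interpretation of the kind the abstract describes can be sketched generically as perturbation-based attribution: replace one input feature at a time with a baseline value and measure how the model's prediction changes. This is not the XAHN mechanism itself (which is specific to the internal structure of AHNs); the stand-in risk model, its weights, and the feature names below are all invented for illustration.

```python
def local_attributions(predict, x, baseline):
    """Perturbation-based local explanation: the contribution of each input
    feature is the prediction change when that feature is replaced by its
    baseline value."""
    base_pred = predict(x)
    contribs = {}
    for name in x:
        perturbed = dict(x)
        perturbed[name] = baseline[name]
        contribs[name] = base_pred - predict(perturbed)
    return contribs

# Stand-in linear risk model over AMD-style inputs (weights are invented).
WEIGHTS = {"age": 0.02, "smoker": 0.30, "cfh_risk_allele": 0.25}

def risk(x):
    return sum(WEIGHTS[k] * x[k] for k in WEIGHTS)

patient = {"age": 70, "smoker": 1, "cfh_risk_allele": 1}
baseline = {"age": 0, "smoker": 0, "cfh_risk_allele": 0}
contribs = local_attributions(risk, patient, baseline)
```

For a clinician-facing tool, such per-feature contributions can be presented alongside the prediction so the reasoning behind an individual diagnosis is inspectable.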