Publication

An Explainable Tool to Support Age-related Macular Degeneration Diagnosis

2022, Martinez-Villaseñor, Lourdes; Miralles-Pechuán, Luis; Ponce, Hiram; Martínez Velasco, Antonieta Teodora

Artificial intelligence, and deep learning in particular, have gained considerable attention in the ophthalmology community due to the possibility of processing large amounts of data and digitized ocular images. Intelligent systems are being developed to support the diagnosis and treatment of a number of ophthalmic diseases such as age-related macular degeneration (AMD), glaucoma, and retinopathy of prematurity. Explainability is therefore necessary to gain trust in, and hence adoption of, these critical decision support systems. Visual explanations have been proposed for AMD diagnosis only when optical coherence tomography (OCT) images are used, whereas interpretability with other inputs (i.e., data point-based features) for AMD diagnosis is rather limited. In this paper, we propose a practical tool to support AMD diagnosis based on Artificial Hydrocarbon Networks (AHN) with different kinds of input data, such as demographic characteristics, features known as risk factors for AMD, and genetic variants obtained from DNA genotyping. The proposed explainer, namely eXplainable Artificial Hydrocarbon Networks (XAHN), is able to produce global and local interpretations of the AHN model. An explainability assessment of the XAHN explainer was conducted with clinicians to gather feedback on the tool. We consider that the XAHN explainer tool will be beneficial to support expert clinicians in AMD diagnosis, especially where input data are not visual. © 2022 IEEE.
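To illustrate the global/local interpretation idea on tabular inputs, here is a minimal sketch. It does not reproduce AHN or XAHN: a random forest stands in for the classifier, and the feature names are hypothetical examples of the demographic, risk-factor, and genetic inputs mentioned in the abstract.

```python
# Hypothetical sketch: global and local interpretation of a tabular AMD classifier.
# The stand-in model (a random forest) and feature names are illustrative only;
# the paper's actual model is an Artificial Hydrocarbon Network (AHN).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["age", "smoking", "BMI", "CFH_variant", "ARMS2_variant"]  # assumed inputs
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # synthetic labels

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global interpretation: how much each feature matters across the whole cohort.
global_imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, imp in zip(feature_names, global_imp.importances_mean):
    print(f"global importance of {name}: {imp:.3f}")

# Local interpretation: effect of each feature on one patient's predicted risk,
# estimated by replacing that feature with its cohort mean.
patient = X[0:1]
base_risk = model.predict_proba(patient)[0, 1]
for j, name in enumerate(feature_names):
    perturbed = patient.copy()
    perturbed[0, j] = X[:, j].mean()
    print(f"local contribution of {name}: {base_risk - model.predict_proba(perturbed)[0, 1]:+.3f}")
```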

Publication

Stair Climbing Robot Based on Convolutional Neural Networks for Visual Impaired

2019, Campos, Guillermo; Poza, David; Reyes, Moises; Zacate, Alma; Ponce, Hiram; Brieva, Jorge; Moya-Albor, Ernesto

When a person loses the sense of sight, it is generally suggested to use a white cane to perform daily activities. However, using a white cane limits the person's movement. Guide dogs can also assist people with this impairment, but the cost of acquiring and maintaining a guide dog is extremely high for people in developing countries. In this regard, this paper presents a proof of concept of a low-cost robotic system able to guide a visually impaired person, much like a guide dog. The robot is specially designed for climbing stairs indoors, and it uses convolutional neural networks (CNN) both for object detection and for recognizing hand gestures that convey special instructions from the user. Experimental results showed that our prototype robot can climb stairs with 86.7% efficiency on concrete stair surfaces. In addition, the CNN-based visual recognition achieved more than 98% accuracy. © 2019 IEEE.

Publication

Application of Convolutional Neural Networks for Fall Detection Using Multiple Cameras

2020, Espinosa Loera, Ricardo Abel; Ponce, Hiram; Gutiérrez, Sebastián; Martinez-Villaseñor, Lourdes; Moya-Albor, Ernesto; Brieva, Jorge

Currently, one of the most important research issues in artificial intelligence and computer vision is the recognition of human falls. Due to the current exponential increase in the use of cameras, it is common to use vision-based approaches for fall detection and classification systems. On the other hand, deep learning algorithms have transformed the way we approach vision-based problems. The convolutional neural network (CNN), as a deep learning technique, offers more reliable and robust solutions for detection and classification problems. Focusing only on a vision-based approach, in this work we used images from a new public multimodal dataset for fall detection (UP-Fall Detection dataset) published by our research team. In this chapter we present a fall detection system using a 2D CNN that analyzes information from multiple cameras. The method analyzes images in fixed time windows and extracts features using an optical flow method that captures the relative motion between two consecutive images. For the experimental results, we tested this approach on the UP-Fall Detection dataset. Results showed that our proposed multi-vision-based approach detects human falls with 95.64% accuracy using a simple CNN architecture, compared with other state-of-the-art methods.
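The following is a minimal sketch of the general idea described above, not the authors' exact pipeline: dense optical flow is computed between consecutive frames of a fixed window, and the stacked flow-magnitude maps are passed to a small 2D CNN. The frame size, window length, and network layers are assumptions.

```python
# Minimal sketch (not the authors' exact pipeline): optical-flow features from
# consecutive frames in a fixed window, then a small 2D CNN classifier.
import cv2
import numpy as np
import tensorflow as tf

def flow_magnitude(prev_frame, next_frame):
    """Dense optical-flow magnitude between two consecutive grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return mag

# Synthetic frames just to exercise the code path (real input: camera frames).
frames = [np.random.randint(0, 255, (64, 64), dtype=np.uint8) for _ in range(5)]
window = np.stack([flow_magnitude(a, b) for a, b in zip(frames, frames[1:])], axis=-1)

# Simple 2D CNN over the stacked flow channels; "fall" vs "no fall" output.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=window.shape),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
print(model.predict(window[np.newaxis, ...], verbose=0))  # untrained forward pass
```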

Publication

Automatic classification of coronary stenosis using convolutional neural networks and simulated annealing

2022, Rendon-Aguilar, Luis Diego; Cruz-Aceves, Ivan; Fernandez-Jaramillo, Arturo Alfonso; Moya-Albor, Ernesto; Brieva, Jorge; Ponce, Hiram

Automatic detection of coronary stenosis plays an essential role in systems that perform computer-aided diagnosis in cardiology. Coronary stenosis is a narrowing of the coronary arteries caused by plaque that reduces blood flow to the heart. Automatic classification of coronary stenosis images has recently been addressed using deep and machine learning techniques. Generally, machine learning methods form a bank of empirical and automatic features from the angiographic images. In the present work, a novel method for the automatic classification of coronary stenosis X-ray images is presented. The method is based on convolutional neural networks, where the neural architecture search is performed using the path-based metaheuristic of simulated annealing, with maximization of the F1-score as the fitness function. The automatically generated convolutional neural network was compared with three deep learning methods in terms of accuracy and F1-score on a testing set of images, obtaining 0.88 and 0.89, respectively. In addition, the proposed method was evaluated on different sets of coronary stenosis images obtained via data augmentation. The results over a number of different instances show that the proposed architecture is robust, preserving its efficiency across different datasets. © 2023 Şaban öztürk. All rights reserved.
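To illustrate the search strategy only, here is a hedged sketch of a simulated-annealing loop whose fitness is the validation F1-score. The paper searches over CNN architectures; to keep the example runnable in seconds, a tiny MLP on synthetic data stands in, and the move operator and cooling schedule are assumptions.

```python
# Illustrative sketch of the search idea: simulated annealing over an architecture,
# scored by validation F1. A small MLP on synthetic data replaces the paper's CNN.
import math, random
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

def fitness(arch):
    """Train a candidate architecture and return its validation F1-score."""
    clf = MLPClassifier(hidden_layer_sizes=arch, max_iter=300, random_state=0)
    clf.fit(X_tr, y_tr)
    return f1_score(y_va, clf.predict(X_va))

def neighbor(arch):
    """Perturb one layer width to obtain a nearby architecture."""
    arch = list(arch)
    i = random.randrange(len(arch))
    arch[i] = max(4, arch[i] + random.choice([-8, 8]))
    return tuple(arch)

random.seed(0)
current, current_f1, temp = (32, 32), fitness((32, 32)), 1.0
for step in range(20):
    cand = neighbor(current)
    cand_f1 = fitness(cand)
    # Always accept improvements; accept worse candidates with temperature-dependent probability.
    if cand_f1 > current_f1 or random.random() < math.exp((cand_f1 - current_f1) / temp):
        current, current_f1 = cand, cand_f1
    temp *= 0.9
print("best architecture found:", current, "F1 =", round(current_f1, 3))
```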

Publication

Vision-Based Analysis on Leaves of Tomato Crops for Classifying Nutrient Deficiency using Convolutional Neural Networks

2020, Cevallos Vega, Claudio Sebastián; Ponce, Hiram; Moya-Albor, Ernesto; Brieva, Jorge

Tomato crops are one of the most economically important agricultural products in the world. However, the quality of the tomato fruit depends strongly on growing conditions such as nutrient availability, and a common problem during tomato harvesting is nutrient deficiency. It is possible to manually anticipate the lack of primary nutrients (i.e., nitrogen, phosphorus, and potassium) by examining the appearance of the leaves of tomato plants. Thus, this paper presents a supervised vision-based monitoring system for detecting nutrient deficiencies in tomato crops from images of the leaves of the plants. It uses a convolutional neural network (CNN) to recognize and classify the type of nutrient that is deficient in the plants. First, we created a dataset of images of tomato plant leaves showing different symptoms of nutrient deficiency. Then, we trained a suitable CNN model with our images and additional augmented data. Experimental results showed that our CNN model achieves 86.57% accuracy. We anticipate applying our proposal to future precision agriculture applications such as automated nutrient level monitoring and control in tomato crops. © 2020 IEEE.
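A minimal sketch of this kind of leaf classifier follows. It assumes four classes (healthy plus nitrogen, phosphorus, and potassium deficiency), a hypothetical directory layout, and generic augmentation; none of these details come from the paper itself.

```python
# A minimal sketch, assuming four classes (healthy, N-, P-, K-deficient); the
# dataset path and class layout below are hypothetical, not the authors' release.
import tensorflow as tf

NUM_CLASSES = 4
augment = tf.keras.Sequential([               # generic augmentation, similar in spirit
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    augment,
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Hypothetical directory of labeled leaf photos, one subfolder per class:
# train_ds = tf.keras.utils.image_dataset_from_directory("tomato_leaves/", image_size=(128, 128))
# model.fit(train_ds, epochs=20)
model.summary()
```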

Publication

Analysis of Contextual Sensors for Fall Detection

2019, Martinez-Villaseñor, Lourdes; Ponce, Hiram

Falls are a major problem among older people and often cause serious injuries. It is important to have efficient fall detection solutions to reduce the time before a person who has suffered a fall receives assistance. Given the recent availability of cameras, wearable devices, and ambient sensors, more research in fall detection is focused on combining different data modalities. In order to determine the positive effects of each modality and combination on the effectiveness of fall detection, a detailed assessment has to be done. In this paper, we analyze different combinations of wearable devices, namely IMUs and an EEG helmet, with a grid of active infrared sensors for fall detection, with the aim of determining the positive effects of contextual information on fall detection accuracy. We used long short-term memory (LSTM) networks to enable fall detection from raw sensor data. For some activities, certain combinations can help discriminate other activities of daily living (ADL) from falls. © 2019 IEEE.
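A sketch of the LSTM-on-raw-data idea, under stated assumptions: fixed-length windows of concatenated sensor channels (IMU, EEG, infrared grid) are fed to an LSTM classifier. The window length, channel count, and network size are illustrative, not taken from the paper.

```python
# Sketch under assumed shapes: windows of raw, concatenated sensor channels to an LSTM.
import numpy as np
import tensorflow as tf

WINDOW, CHANNELS, NUM_CLASSES = 100, 30, 2   # 100 time steps, 30 channels, fall / no-fall
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Synthetic windows stand in for segmented sensor recordings.
X = np.random.randn(32, WINDOW, CHANNELS).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=32)
model.fit(X, y, epochs=1, verbose=0)
print(model.predict(X[:1], verbose=0))
```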

Publication

A Methodology Based on Deep Q-Learning/Genetic Algorithms for Optimizing COVID-19 Pandemic Government Actions

2020, Miralles-Pechuán, Luis; Jiménez, Fernando; Ponce, Hiram; Martinez-Villaseñor, Lourdes

Whenever countries are threatened by a pandemic, as is the case with the COVID-19 virus, governments need help to take the right actions to safeguard public health and to mitigate the negative effects on the economy. A restrictive approach can seriously damage the economy; conversely, a relaxed one may put a high percentage of the population at risk. Other investigations in this area focus on modelling the spread of the virus or estimating the impact of different measures on its propagation. In this paper, however, we propose a new methodology for helping governments plan the phases to combat the pandemic based on their priorities. To this end, we implement the SEIR epidemiological model to represent the evolution of the COVID-19 virus in the population. To optimize the best sequences of actions governments can take, we propose a methodology with two approaches, one based on Deep Q-Learning and another based on Genetic Algorithms. The sequences of actions (confinement, self-isolation, two-meter distancing, or no restrictions) are evaluated according to a reward system focused on meeting two objectives: first, keeping the number of infected people low so that hospitals are not overwhelmed, and second, avoiding drastic measures that could cause serious damage to the economy. The experiments evaluate our methodology based on the rewards accumulated over the established period, and they show that it is a valid tool for governments to reduce the negative effects of a pandemic by optimizing the planning of the phases. According to our results, the approach based on Deep Q-Learning outperforms the one based on Genetic Algorithms. © 2020 ACM.
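To make the evaluation idea concrete, here is a minimal sketch of a discrete SEIR simulation whose transmission rate depends on the chosen action, plus a reward that penalizes both hospital overload and economic cost. All parameter values, action costs, and the hospital-capacity threshold are illustrative assumptions, not the paper's calibration.

```python
# Minimal sketch: action-dependent SEIR dynamics and a reward balancing health and economy.
import numpy as np

BETA = {"no_restrictions": 0.50, "two_meter_distance": 0.30,
        "self_isolation": 0.15, "confinement": 0.05}          # transmission per action (assumed)
ECON_COST = {"no_restrictions": 0.0, "two_meter_distance": 0.2,
             "self_isolation": 0.6, "confinement": 1.0}       # relative economic damage (assumed)
SIGMA, GAMMA, N = 1 / 5.0, 1 / 10.0, 1_000_000                # incubation/recovery rates, population

def simulate(actions, hospital_capacity=0.01):
    s, e, i, r = N - 100, 0, 100, 0
    reward = 0.0
    for a in actions:                       # one action per simulated week
        for _ in range(7):                  # daily SEIR updates
            new_e = BETA[a] * s * i / N
            new_i = SIGMA * e
            new_r = GAMMA * i
            s, e, i, r = s - new_e, e + new_e - new_i, i + new_i - new_r, r + new_r
        overload = max(0.0, i / N - hospital_capacity)
        reward -= 100 * overload + ECON_COST[a]   # penalize hospital overflow and economic cost
    return reward

print(simulate(["confinement"] * 4 + ["two_meter_distance"] * 8))
print(simulate(["no_restrictions"] * 12))
```

A Deep Q-Learning agent or a Genetic Algorithm would then search over such action sequences to maximize this accumulated reward.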

Publication

Click-event sound detection in automotive industry using machine/deep learning

2021, Espinosa Loera, Ricardo Abel; Ponce, Hiram; Gutiérrez, Sebastián

In the automotive industry, despite the robotic systems on the production lines, factories continue to employ workers in several custom tasks for semi-automatic assembly operations. Specifically, the assembly of the electrical harnesses of engines comprises a set of connections between electrical components. Although the task is easy to perform, employees tend not to notice that a few components are not connected properly, due to the physical fatigue provoked by repetitive tasks. This lowers the quality of the assembly production line and creates possible hazards. In this work, we propose a sound detection system based on machine/deep learning (ML/DL) approaches to identify the click sounds produced when electrical harnesses are connected. The purpose of this system is to count the number of connections properly made and to give feedback to the employees. We collect and release a public dataset of 25,000 click sounds of 25 ms length at 22 kHz, recorded during three months of assembly operations in an automotive production line located in Mexico. We then design an ML/DL-based methodology for click sound detection of assembled harnesses under the real conditions of a noisy environment (noise level ranging from −16.67 dB to −12.87 dB) that includes other machinery sounds. Our best ML/DL model (i.e., a combination of five acoustic features and an optimized convolutional neural network) is able to detect click sounds in a real assembly production line with an accuracy of 94.55 ± 0.83%. To the best of our knowledge, this is the first time a click-sound detection system for assembling electrical harnesses of engines and giving feedback to workers has been proposed and implemented in a real-world automotive production line. We consider this work valuable to the automotive industry as a demonstration of how ML/DL approaches can improve the quality of semi-automatic assembly operations. © 2021 Elsevier B.V.
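Below is a hedged sketch of the feature-extraction step only: MFCC features from a 25 ms window at 22 kHz, followed by a simple classifier. The paper's exact five acoustic features and its optimized CNN are not reproduced, and the synthetic "click" and "noise" clips are placeholders.

```python
# Sketch of acoustic features for short 25 ms windows; not the paper's exact feature set.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

SR = 22_050
N = int(0.025 * SR)                                # 25 ms window, ~551 samples

def features(clip):
    """13 MFCC coefficients averaged over the short window."""
    mfcc = librosa.feature.mfcc(y=clip, sr=SR, n_mfcc=13, n_fft=512, hop_length=128)
    return mfcc.mean(axis=1)

rng = np.random.default_rng(0)
clicks = rng.normal(scale=1.0, size=(50, N)).astype(np.float32)   # stand-in "click" clips
noise = rng.normal(scale=0.2, size=(50, N)).astype(np.float32)    # stand-in background noise
X = np.array([features(c) for c in np.vstack([clicks, noise])])
y = np.array([1] * 50 + [0] * 50)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy on synthetic clips:", clf.score(X, y))
```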

Publication

Deep Learning for Multimodal Fall Detection

2019, Martinez-Villaseñor, Lourdes; Pérez-Daniel, Karina Ruby; Ponce, Hiram

Fall detection systems can help provide quick assistance to the person who has fallen, diminishing the severity of the consequences of a fall. Real-time fall detection is important to decrease both fear and the time that a person remains lying on the floor after falling. In recent years, multimodal fall detection approaches have been developed to gain more precision and robustness. In this work, we propose a multimodal fall detection system based on wearable sensors, ambient sensors, and vision devices. We used long short-term memory (LSTM) networks and convolutional neural networks (CNN) for our analysis, given that they are able to extract features from raw data and are well suited for real-time detection. To test our proposal, we built a public multimodal dataset for fall detection. After experimentation, our proposed method reached 96.4% accuracy and improved precision, recall, and F1-score over using single LSTM or CNN networks for fall detection. © 2019 IEEE.
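One way to picture the multimodal combination is the fusion sketch below: an LSTM branch for wearable/ambient time series and a CNN branch for camera input, merged before the fall/no-fall output. The input shapes and layer sizes are assumptions for illustration, not the architecture reported in the paper.

```python
# Sketch of multimodal fusion under assumed shapes: LSTM (sensors) + CNN (vision) branches.
import tensorflow as tf

sensor_in = tf.keras.Input(shape=(100, 12))            # assumed: 100 steps, 12 sensor channels
x1 = tf.keras.layers.LSTM(32)(sensor_in)

image_in = tf.keras.Input(shape=(64, 64, 1))           # assumed: one grayscale camera frame
x2 = tf.keras.layers.Conv2D(16, 3, activation="relu")(image_in)
x2 = tf.keras.layers.MaxPooling2D()(x2)
x2 = tf.keras.layers.GlobalAveragePooling2D()(x2)

merged = tf.keras.layers.Concatenate()([x1, x2])
out = tf.keras.layers.Dense(1, activation="sigmoid")(merged)

model = tf.keras.Model(inputs=[sensor_in, image_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```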

Publication

A 3D orthogonal vision-based band-gap prediction using deep learning: A proof of concept

2022, Espinosa Loera, Ricardo Abel; Ponce, Hiram; Ortiz-Medina, Josue

In this work, a vision-based system for the electronic band-gap prediction of organic molecules is proposed, using a multichannel 2D convolutional neural network (CNN) and a 3D CNN applied to the recognition and classification of 2D projected images from 3D molecular structure models. The generated images are fed into the CNN to estimate the energy gap associated with the molecular structure. The public dataset used in this research was the Organic Materials Database (OMDB-GAP1). The descriptive information contained in the dataset was transformed into three orthogonal 2D images per molecule. The training set was composed of 30,000 images and the testing set of 7,500 images, drawn from 12,500 different molecules. The multichannel 2D CNN architecture was optimized via Bayesian optimization. Experimental results showed that the proposed CNN model obtained an acceptable mean absolute error of 0.6780 eV and a root mean-squared error of 0.7673 eV, in contrast to two machine learning methods reported in the literature for band-gap prediction based on conventional density functional theory (DFT) methods. These results demonstrate the feasibility of applying CNN models to materials science routines using orthogonal image projections of molecules. © 2021 Elsevier B.V.
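The following proof-of-concept sketch shows one way to build the orthogonal projections described above and feed them to a regression CNN. The density-histogram preprocessing, image resolution, coordinate range, and network layers are assumptions, not the authors' exact pipeline.

```python
# Sketch: three orthogonal 2D projections of 3D atomic coordinates as CNN channels,
# with a linear output head for band-gap regression (all details assumed).
import numpy as np
import tensorflow as tf

def orthogonal_projections(coords, bins=32):
    """Project 3D points onto the xy, xz and yz planes as 2D density images."""
    planes = [(0, 1), (0, 2), (1, 2)]
    imgs = [np.histogram2d(coords[:, a], coords[:, b], bins=bins,
                           range=[[-5, 5], [-5, 5]])[0] for a, b in planes]
    return np.stack(imgs, axis=-1)          # shape (bins, bins, 3)

rng = np.random.default_rng(0)
coords = rng.uniform(-5, 5, size=(40, 3))   # stand-in molecule of 40 atoms
x = orthogonal_projections(coords)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=x.shape),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1),                # regression output: predicted band gap in eV
])
model.compile(optimizer="adam", loss="mae")
print(model.predict(x[np.newaxis, ...], verbose=0))
```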