Espinosa Loera, Ricardo Abel
Main Affiliation
Preferred name
Espinosa Loera, Ricardo Abel
Official Name
Espinosa Loera, Ricardo Abel
ORCID
0000-0003-2573-7853
Scopus Author ID
57211517018
20 results
Now showing 1 - 10 of 20
Item type: Publication, Prompt Assisted Enhancement for Correcting Illumination Artifacts in Endoscopic Images (Springer Nature Switzerland, 2025-10-24); Eluney Hernández; Javier Cerriteño Magaña; Gilberto Ochoa; Christian Daul
Item type: Publication, Deploying Real-Time Speech Recognition on ESP32 Using TinyML and Edge Impulse (Springer Nature Switzerland, 2025); Gutiérrez, Sebastián; The emergence of Tiny Machine Learning (TinyML) has enabled real-time on-device inference on ultra-low-power microcontrollers, eliminating reliance on cloud computing while significantly reducing latency, power consumption, and bandwidth requirements. This study explores the deployment of a TinyML-based speech recognition system on an ESP32 microcontroller, leveraging Edge Impulse for model development, Mel-Frequency Cepstral Coefficients (MFCCs) for feature extraction, and TensorFlow Lite for Microcontrollers (TFLM) for efficient inference. The model was trained on a curated subset of the Google Speech Commands Dataset, incorporating background noise augmentation to enhance robustness in real-world environments. Using Edge Impulse's EON Compiler, the model was fully quantized and optimized, achieving a 37% reduction in RAM usage and a 27% reduction in ROM. The final model attained 87.14% accuracy on testing data and 97.1% average classification confidence during real-time inference, with excellent noise rejection (99.6%) and a latency of 266 ms. Compared to state-of-the-art systems deployed on more powerful platforms, the proposed approach achieves competitive accuracy while maintaining real-time inference and minimal resource consumption on ultra-low-power hardware. This makes it particularly suitable for battery-powered IoT, robotics, and embedded automation applications where connectivity and energy efficiency are critical. By balancing performance and efficiency, this research highlights the viability of deploying speech recognition systems on constrained microcontrollers. Future work will explore advanced architectures and enhanced feature extraction strategies to further improve recognition accuracy, especially for short or phonetically similar commands. © The authors © Springer.
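The MFCC feature-extraction step named in the abstract above can be sketched in plain NumPy. This is an illustrative, self-contained version of the standard pipeline (framing, windowing, power spectrum, mel filterbank, log, DCT-II); the function name, frame sizes, and filterbank parameters are assumptions for the sketch, not the Edge Impulse configuration used in the paper:

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    """Compute MFCC features from a mono audio signal (illustrative parameters)."""
    # 1. Slice the signal into overlapping frames and apply a Hann window
    n_frames = 1 + (len(signal) - n_fft) // hop
    idx = np.arange(n_fft)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hanning(n_fft)

    # 2. Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2

    # 3. Mel filterbank: triangular filters spaced evenly on the mel scale
    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge

    # 4. Log mel energies, then DCT-II to decorrelate -> cepstral coefficients
    log_mel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return log_mel @ dct.T  # shape: (n_frames, n_ceps)
```

On a microcontroller the same steps run in fixed-point via TFLM's signal-processing kernels; the sketch only shows the math behind the feature block.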
Item type: Publication, Color-aware Exposure Correction for Endoscopic Imaging using a Lightweight Vision Transformer (2024); Eluney Hernández; Gilberto Ochoa-Ruiz; Christian Daul
Item type: Publication, A deep learning-based image pre-processing pipeline for enhanced 3D colon surface reconstruction robust to endoscopic illumination artifacts (2024); Javier Cerriteño; Saul Gonzalez-Dominguez; Gilberto Ochoa-Ruiz; Christian Daul
Item type: Publication, Estimation of Low Nutrients in Tomato Crops Through the Analysis of Leaf Images Using Machine Learning (2021); Cevallos, Claudio; Gutiérrez, Sebastián; Tomato crops are considered among the most important agricultural products worldwide. However, the quality of tomatoes depends mainly on their nutrient levels. Farmers perform visual inspection to anticipate nutrient deficiencies in the plants. Recently, precision agriculture has explored opportunities to automate nutrient level monitoring. Previous work demonstrated that a convolutional neural network (CNN) can estimate low nutrients in tomato plants using images of their leaves; however, the performance of that CNN was not adequate. Thus, this work proposes a novel CNN-based classifier, namely CNN+AHN, for estimating low nutrients in tomato crops using an image of the tomato leaves. The CNN+AHN incorporates a set of convolutional layers as the feature extraction part, and a supervised learning method called artificial hydrocarbon network (AHN) as the dense layer. Different combinations of the CNN+AHN architecture were examined. Experimental results showed that our best CNN+AHN classifier is able to estimate low nutrients in tomato plants with an accuracy of 95.57% and an F1-score of 95.75%, outperforming the literature. Scopus © Citations: 15
Item type: Publication, Renewable Energy Prediction through Machine Learning Algorithms (2020); Luisa Fernanda Jimenez Alvarez; Sebastian Ramos Gonzalez; Antonio Delgado Lopez; Diego Alonso Hernandez Delgado. Scopus © Citations: 10
Item type: Publication, Design and Implementation of a Node Geolocation System for Fire Monitoring through LoRaWAN (2020); Gutiérrez, Sebastián. Scopus © Citations: 4
Item type: Publication, Application of Convolutional Neural Networks for Fall Detection Using Multiple Cameras (2020); Gutiérrez, Sebastián; Currently, one of the most important research issues for artificial intelligence and computer vision is the recognition of human falls. Due to the exponential increase in the use of cameras, it is common to use vision-based approaches for fall detection and classification systems. On the other hand, deep learning algorithms have transformed the way we approach vision-based problems. The Convolutional Neural Network (CNN), as a deep learning technique, offers more reliable and robust solutions to detection and classification problems. Focusing only on a vision-based approach, for this work we used images from a new public multimodal dataset for fall detection (the UP-Fall Detection dataset) published by our research team. In this chapter we present a fall detection system using a 2D CNN that analyzes information from multiple cameras. This method analyzes images in fixed time windows, extracting features with an optical flow method that obtains information on the relative motion between two consecutive images. For experimental results, we tested this approach on the UP-Fall Detection dataset. Results showed that our proposed multi-vision-based approach detects human falls with an accuracy of 95.64% using a simple CNN architecture, compared with other state-of-the-art methods. Scopus © Citations: 21
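The optical-flow step described in the abstract above, which estimates the relative motion between two consecutive frames, can be illustrated with a minimal Lucas-Kanade least-squares sketch. This toy single-window version (the function name and parameters are hypothetical) is only a stand-in for the dense optical flow method used in the chapter:

```python
import numpy as np

def lucas_kanade_flow(prev, curr):
    """Estimate a single global (dx, dy) motion vector between two grayscale
    frames using the Lucas-Kanade least-squares formulation over one window."""
    # Spatial gradients (np.gradient returns d/d-rows first, then d/d-cols)
    Iy, Ix = np.gradient(prev.astype(float))
    # Temporal gradient between the two consecutive frames
    It = curr.astype(float) - prev.astype(float)
    # Brightness constancy Ix*vx + Iy*vy + It = 0, solved in least squares:
    # [sum(Ix^2)  sum(IxIy); sum(IxIy)  sum(Iy^2)] v = -[sum(IxIt); sum(IyIt)]
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)  # (dx, dy) in pixels
```

A dense method solves the same system per pixel neighborhood, yielding a motion field whose magnitude and direction serve as input features for the CNN.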
Item type: Publication, A Novel Hybrid Endoscopic Dataset for Evaluating Machine Learning-Based Photometric Image Enhancement Models (2022); Axel García-Vega; Gilberto Ochoa-Ruiz; Thomas Bazin; Luis Falcón-Morales; Endoscopy is the most widely used medical technique for cancer and polyp detection inside hollow organs. However, images acquired by an endoscope are frequently affected by illumination artefacts due to the orientation of the light source. Two major issues arise when the endoscope's light source pose suddenly changes: overexposed and underexposed tissue areas are produced. These two scenarios can result in misdiagnosis due to the lack of information in the affected zones, or hamper the performance of various computer vision methods (e.g., SLAM, structure from motion, optical flow) used during the non-invasive examination. The aim of this work is two-fold: i) to introduce a new synthetically generated dataset produced by generative adversarial techniques, and ii) to explore both shallow and deep learning-based image-enhancement methods under overexposed and underexposed lighting conditions. The best quantitative (i.e., metric-based) results were obtained by the deep learning-based LMSPEC method, with a running time of around 7.6 fps. Scopus © Citations: 6
