Publication

An Implementation of a Monocular 360-Degree Vision System for Mobile Robot Navigation

2018, Acevedo Medina, Eduardo; Beltrán, Arturo; Castellanos Canales, Mauricio; Chaverra, Luis; González Mora, José Guillermo; Sarmiento Oseguera, Manuel Antonio; Ponce, Hiram

Two of the problems facing autonomous navigation are obstacle sensing and coping with dynamic surroundings. Multi-sensor systems and omnidirectional vision systems have been implemented to increase the observability of robots. However, these approaches suffer several drawbacks: the cost of processing multiple signals, synchronization of data collection, cost of materials, and energy consumption, among others. Thus, in this paper, we propose a new 360-degree vision system for mobile robot navigation using a static monocular camera. We demonstrate that, using our system, it is possible to monitor the complete surroundings of the robot with a single sensor, i.e. the camera. Moreover, it can detect the position and orientation of an object from an egocentric point of view. We also present a low-cost prototype of our proposal to validate it. © 2018 IEEE.
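The abstract does not detail the optics, but a common way to obtain a 360-degree view from one static camera is a catadioptric rig (a camera aimed at a convex mirror), whose donut-shaped image is unwarped into a panorama; an object's egocentric bearing then follows directly from its pixel coordinates. The NumPy sketch below illustrates both steps under that assumption; the function names, the nearest-neighbor sampling, and the radius/center parameters are all hypothetical, not taken from the paper:

```python
import numpy as np

def unwrap_panorama(omni, center, r_min, r_max, out_w=360, out_h=64):
    """Unwarp a donut-shaped omnidirectional image into a panorama.

    Hypothetical sketch: assumes a catadioptric rig (camera + convex
    mirror); the paper's actual optics and calibration are not given
    here. Uses nearest-neighbor sampling from polar to rectangular
    coordinates.
    """
    cy, cx = center
    thetas = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radii = np.linspace(r_min, r_max, out_h)
    # Each panorama column is one bearing; each row is one radius.
    ys = (cy + radii[:, None] * np.sin(thetas[None, :])).round().astype(int)
    xs = (cx + radii[:, None] * np.cos(thetas[None, :])).round().astype(int)
    ys = np.clip(ys, 0, omni.shape[0] - 1)
    xs = np.clip(xs, 0, omni.shape[1] - 1)
    return omni[ys, xs]

def pixel_bearing(pt, center):
    """Egocentric bearing (radians, CCW from +x axis) of an image point
    (row, col) relative to the mirror center."""
    return np.arctan2(pt[0] - center[0], pt[1] - center[1])
```

Each panorama column corresponds to one bearing around the robot, so detecting an object in a given column immediately yields its direction from the robot's egocentric viewpoint.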

Publication

A vision-based approach for fall detection using multiple cameras and convolutional neural networks: A case study using the UP-Fall detection dataset

2019, Espinosa Loera, Ricardo Abel; Ponce, Hiram; Moya-Albor, Ernesto; Martinez-Villaseñor, Lourdes; Brieva, Jorge; Gutiérrez, Sebastián

The automatic recognition of human falls is currently an important topic of research for the computer vision and artificial intelligence communities. In image analysis, it is common to use a vision-based approach for fall detection and classification systems due to the recent exponential increase in the use of cameras. Moreover, deep learning techniques have revolutionized vision-based approaches. These techniques are considered robust and reliable solutions for detection and classification problems, mostly using convolutional neural networks (CNNs). Recently, our research group released a public multimodal dataset for fall detection, the UP-Fall Detection dataset, and studies of modality-specific approaches to fall detection and classification are required. Focusing only on a vision-based approach, in this paper we present a fall detection system based on a 2D CNN inference method and multiple cameras. This approach analyzes images in fixed time windows and extracts features using an optical flow method that captures the relative motion between two consecutive images. We tested this approach on our public dataset; the results show that our multi-vision-based approach detects human falls with an accuracy of 95.64%, competitive with state-of-the-art methods, using a simple CNN architecture. © 2019 Elsevier Ltd