CRIS

Permanent URI for this community: https://scripta.up.edu.mx/handle/20.500.12552/1


Search Results

Now showing 1 - 2 of 2
  • Item type: Publication
    Electrodermal Response Patterns and Emotional Engagement Under Continuous Algorithmic Video Stimulation: A Multimodal Biometric Analysis
    (MDPI AG, 2026-01-18)
    David Contreras-Tiscareno; Diego Sebastian Montoya-Rodriguez
    Excessive use of short-form video platforms such as TikTok has raised growing concerns about digital addiction and its impact on young users’ emotional well-being. This study examines the relationship between continuous TikTok exposure and emotional engagement in young adults aged 20–23 through a multimodal experimental design. The purpose of this research is to determine whether emotional engagement increases, remains stable, or declines during prolonged exposure and to assess the degree of correspondence between facially inferred engagement and physiological arousal. To achieve this, multimodal biometric data were collected using the iMotions platform, integrating galvanic skin response (GSR) sensors and facial expression analysis via Affectiva’s AFFDEX SDK 5.1. Engagement levels were binarized using a logistic transformation, and a binomial test was conducted. GSR analysis, merged with a 50 ms tolerance, revealed no significant differences in skin conductance between engaged and non-engaged states. Findings indicate that although TikTok elicits strong initial emotional engagement, engagement levels significantly decline over time, suggesting habituation and emotional fatigue. The results refine our understanding of how algorithm-driven, short-form content affects users’ affective responses and highlight the limitations of facial metrics as sole indicators of physiological arousal. Implications for theory include advancing multimodal models of emotional engagement that account for divergences between expressivity and autonomic activation. Implications for practice emphasize the need for ethical platform design and improved digital well-being interventions. The originality and value of this study lie in its controlled experimental approach that synchronizes facial and physiological signals, offering objective evidence of the temporal decay of emotional engagement during continuous TikTok use and underscoring the complexity of measuring affect in highly stimulating digital environments.
  • Item type: Publication
    Forehead and In-Ear EEG Acquisition and Processing: Biomarker Analysis and Memory-Efficient Deep Learning Algorithm for Sleep Staging with Optimized Feature Dimensionality
    (MDPI AG, 2025-10-01)
    Roberto De Fazio; Şule Esma Yalçınkaya; Ilaria Cascella; Massimo De Vittorio
    Advancements in electroencephalography (EEG) technology and feature extraction methods have paved the way for wearable, non-invasive systems that enable continuous sleep monitoring outside clinical environments. This study presents the development and evaluation of an EEG-based acquisition system for sleep staging, which can be adapted for wearable applications. The system utilizes a custom experimental setup with the ADS1299EEG-FE-PDK evaluation board to acquire EEG signals from the forehead and in-ear regions under various conditions, including visual and auditory stimuli. Afterward, the acquired signals were processed to extract a wide range of features in time, frequency, and non-linear domains, selected based on their physiological relevance to sleep stages and disorders. The feature set was reduced using the Minimum Redundancy Maximum Relevance (mRMR) algorithm and Principal Component Analysis (PCA), resulting in a compact and informative subset of principal components. Experiments were conducted on the Bitbrain Open Access Sleep (BOAS) dataset to validate the selected features and assess their robustness across subjects. The feature set extracted from a single EEG frontal derivation (F4-F3) was then used to train and test a two-step deep learning model that combines Long Short-Term Memory (LSTM) and dense layers for 5-class sleep stage classification, utilizing attention and augmentation mechanisms to mitigate the natural imbalance of the feature set. The results, with overall accuracies of 93.5% and 94.7% using the reduced feature sets (94% and 98% cumulative explained variance, respectively) and 97.9% using the complete feature set, demonstrate the feasibility of obtaining a reliable classification using a single EEG derivation, particularly for unobtrusive, home-based sleep monitoring systems.
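The analysis pipeline described in the first abstract (aligning GSR samples with facial metrics under a 50 ms tolerance, binarizing engagement via a logistic transformation, then running a binomial test) can be sketched as below. This is an illustrative reconstruction with synthetic data, not the authors' code: the column names, sampling rates, and the 0.5 cutoff are assumptions.

```python
import numpy as np
import pandas as pd
from scipy.stats import binomtest

# Hypothetical synchronized streams: facial engagement scores (~30 Hz) and
# GSR samples (128 Hz), each timestamped. All values here are synthetic.
rng = np.random.default_rng(0)
face = pd.DataFrame({"t": pd.to_timedelta(np.arange(0, 10_000, 33), unit="ms")})
face["engagement_raw"] = rng.normal(0.0, 1.0, len(face))
gsr = pd.DataFrame({"t": pd.to_timedelta(np.arange(0, 10_000, 8), unit="ms")})
gsr["conductance_uS"] = 2.0 + 0.1 * rng.standard_normal(len(gsr))

# Align the two modalities, tolerating at most 50 ms of timestamp mismatch,
# as described in the abstract; unmatched facial frames get NaN conductance.
merged = pd.merge_asof(face, gsr, on="t", tolerance=pd.Timedelta("50ms"))

# Binarize engagement with a logistic (sigmoid) transformation; the 0.5
# threshold is an assumption for illustration.
merged["engaged"] = 1 / (1 + np.exp(-merged["engagement_raw"])) >= 0.5

# Binomial test: does the proportion of engaged frames differ from chance?
k = int(merged["engaged"].sum())
n = len(merged)
result = binomtest(k, n, p=0.5)
print(f"{k}/{n} engaged frames, p = {result.pvalue:.3f}")
```

`merge_asof` performs a left join on the nearest timestamp within the tolerance, which is a common way to fuse sensor streams sampled at different rates without resampling either one.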
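The dimensionality-reduction step in the second abstract, keeping just enough principal components to reach a target cumulative explained variance (94% or 98%), can be sketched with scikit-learn's PCA, which accepts a variance fraction directly. The feature matrix below is synthetic; the real study extracts time, frequency, and non-linear features from EEG epochs and also applies mRMR selection, which is not shown here.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative stand-in for the per-epoch EEG feature matrix: 500 sleep
# epochs x 60 features, mixed so the columns are correlated and compressible.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 60))
X = X @ rng.normal(size=(60, 60)) / 8

# Passing a float in (0, 1) as n_components tells PCA to keep the smallest
# number of components whose cumulative explained variance reaches that
# fraction, mirroring the 94% reduced feature set reported in the abstract.
pca = PCA(n_components=0.94, svd_solver="full")
X_reduced = pca.fit_transform(X)

print(f"{X.shape[1]} features -> {X_reduced.shape[1]} components "
      f"({pca.explained_variance_ratio_.sum():.1%} variance retained)")
```

Raising the threshold to 0.98 would reproduce the larger of the two reduced sets; the trade-off is a more compact input for the downstream LSTM classifier versus some loss of variance.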