OAI-PMH response 2024-03-29T09:39:49Z from https://minerva.usc.es/oai/request
Record: oai:minerva.usc.es:10347/23575 (datestamp 2023-07-10T06:21:36Z)
00925njm 22002777a 4500
dc
Authors: Rey Blanco, Sergio; Blanco Heras, Dora; Argüello Pedreira, Francisco Santiago
2020
Texture information allows characterizing the regions of interest in a scene. It refers to the spatial organization of the fundamental microstructures in natural images. Texture extraction has been a challenging problem in the field of image processing for decades. In this paper, different techniques based on the classic Bag of Words (BoW) approach are proposed for solving the texture extraction problem in the case of hyperspectral images of the Earth's surface. In all cases, texture extraction is performed inside regions of the scene called superpixels, and the algorithms profit from the information available in all the bands of the image. The main contribution is the use of superpixel segmentation to obtain irregular patches from the images prior to texture extraction. Texture descriptors are extracted from each superpixel. Three schemes for texture extraction are proposed: codebook-based, descriptor-based, and spectral-enhanced descriptor-based. The first one is based on a codebook generator algorithm, while the other two include additional stages of keypoint detection and description. The evaluation is performed by analyzing the results of a supervised classification using Support Vector Machines (SVM), Random Forest (RF), and Extreme Learning Machines (ELM) after the texture extraction. The results show that the extraction of textures inside superpixels increases the accuracy of the obtained classification map. The proposed techniques are analyzed over different multi- and hyperspectral datasets focusing on vegetation species identification. The best classification results for each image in terms of Overall Accuracy (OA) range from 81.07% to 93.77% for images taken at a river area in Galicia (Spain), and from 79.63% to 95.79% for a vast rural region in China, with reasonable computation times.
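The codebook-based scheme above can be illustrated with a toy Bag-of-Words pipeline: descriptors gathered inside each superpixel are clustered into a codebook, and every superpixel is then represented by its histogram of codeword assignments. The superpixel names, descriptors, and tiny k-means below are hypothetical stand-ins for the paper's hyperspectral descriptors, not its actual implementation.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    # minimal k-means codebook generator over descriptor vectors
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[j].append(p)
        for i, c in enumerate(clusters):
            if c:
                centers[i] = tuple(sum(v) / len(c) for v in zip(*c))
    return centers

def assign(p, centers):
    # nearest codeword for one descriptor
    return min(range(len(centers)), key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))

def bow_histogram(descriptors, centers):
    # normalized histogram of codeword assignments for one superpixel
    hist = [0] * len(centers)
    for d in descriptors:
        hist[assign(d, centers)] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

# hypothetical per-superpixel descriptors (e.g. band statistics per keypoint)
superpixels = {
    "sp0": [(0.0, 0.1), (0.05, 0.0)],
    "sp1": [(5.0, 5.1), (4.9, 5.0)],
}
codebook = kmeans([d for ds in superpixels.values() for d in ds], 2)
histograms = {name: bow_histogram(ds, codebook) for name, ds in superpixels.items()}
```

The resulting histograms are the fixed-length texture features that a classifier such as an SVM would then consume.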
Blanco, S.R.; Heras, D.B.; Argüello, F. Texture Extraction Techniques for the Classification of Vegetation Species in Hyperspectral Imagery: Bag of Words Approach Based on Superpixels. Remote Sens. 2020, 12, 2633
http://hdl.handle.net/10347/23575
10.3390/rs12162633
2072-4292
Hyperspectral
Texture extraction
Superpixel
BoW
SVM
Vegetation
Texture Extraction Techniques for the Classification of Vegetation Species in Hyperspectral Imagery: Bag of Words Approach Based on Superpixels
Record: oai:minerva.usc.es:10347/27616 (datestamp 2022-02-26T03:02:31Z)
Authors: Fernández Fuentes, Xosé; Fernández Pena, Anselmo Tomás; Cabaleiro Domínguez, José Carlos
2022
The web browser has become one of the basic tools of everyday life, a tool increasingly used to manage personal information. This has led browsers to introduce new privacy options, including private mode. In this paper, a methodology to explore the effectiveness of the private mode included in most browsers is proposed. A browsing session was designed and conducted in Mozilla Firefox and Google Chrome running on four different Linux environments. After analyzing the information written to disk and the information available in memory, it can be observed that Firefox and Chrome did not store any browsing-related information on the hard disk. However, memory analysis reveals that a large amount of information could be retrieved in some of the environments tested. For example, when the browsers were executed in a VMware virtual machine, it was possible to retrieve most of the actions performed, from the keywords entered in a search field to the username and password entered to log in to a website, even after restarting the computer. In contrast, when Firefox was run on a slightly hardened non-virtualized Linux, it was not possible to retrieve any browsing-related artifacts after the browser was closed.
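The memory-analysis step described above boils down to searching a captured memory image for known strings and patterns. A minimal sketch, with a hypothetical dump and made-up search terms (real analyses work on full browser memory dumps, not a short byte string):

```python
import re

def find_artifacts(dump: bytes, keywords):
    # which of the known keywords survive in the memory image?
    return {k for k in keywords if k.encode() in dump}

def search_terms(dump: bytes):
    # recover query-string search terms such as "?q=..." from the dump
    return [m.decode() for m in re.findall(rb"[?&]q=([^ &]+)", dump)]

# hypothetical fragment of a browser memory dump
dump = b"\x00\x00GET /search?q=holiday+plans HTTP/1.1\x00user=alice&pass=s3cret\x00"
```

Applied to the fragment above, `find_artifacts` confirms which session strings persisted and `search_terms` recovers the entered query.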
Computers & Security 115 (2022) 102626
0167-4048
http://hdl.handle.net/10347/27616
10.1016/j.cose.2022.102626
Digital forensics
Browsing artefacts
Private browsing
Internet privacy
Virtualization
Digital forensic analysis methodology for private browsing: Firefox and Chrome on Linux as a case study
Record: oai:minerva.usc.es:10347/29177 (datestamp 2023-07-10T06:11:15Z)
Authors: Fernández Criado, Marcos; Estévez Casado, Fernando; Iglesias Rodríguez, Roberto; Vázquez Regueiro, Carlos; Barro Ameneiro, Senén
2022
Federated Learning is a novel framework that allows multiple devices or institutions to train a machine learning model collaboratively while keeping their data private. This decentralized approach is prone to suffer the consequences of data statistical heterogeneity, both across the different entities and over time, which may lead to a lack of convergence. To avoid such issues, different methods have been proposed in the past few years. However, data may be heterogeneous in many different ways, and current proposals do not always specify the kind of heterogeneity they are considering. In this work, we formally classify data statistical heterogeneity and review the most remarkable Federated Learning strategies that are able to face it. At the same time, we introduce approaches from other machine learning frameworks. In particular, Continual Learning strategies are worthy of special attention, since they are able to handle habitual kinds of data heterogeneity. Throughout this paper, we present many methods that could be easily adapted to the Federated Learning setting to improve its performance. Apart from theoretically discussing the negative impact of data heterogeneity, we examine it and show some empirical results using different types of non-IID data.
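As a point of reference for the strategies surveyed above, the baseline Federated Averaging (FedAvg) aggregation step can be sketched in a few lines: each client fits a local model on its own (possibly non-IID) data, and the server averages the parameters weighted by client dataset size. The scalar "model" fitted here is a deliberately simplified stand-in for real local training.

```python
def local_update(data):
    # each client's "training" is just fitting the mean of its local data
    return sum(data) / len(data)

def fedavg(client_datasets):
    # server-side aggregation: average of local models, weighted by data size
    total = sum(len(d) for d in client_datasets)
    return sum(len(d) * local_update(d) for d in client_datasets) / total

# two clients with skewed (non-IID) local distributions
clients = [[1.0, 1.0, 1.0], [5.0]]
global_model = fedavg(clients)
```

Even in this toy form, the example shows why heterogeneity matters: each local optimum (1.0 and 5.0) is far from the aggregated global model.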
Information Fusion 88 (2022) 263-280
http://hdl.handle.net/10347/29177
10.1016/j.inffus.2022.07.024
1566-2535
Federated learning
Data heterogeneity
Non-IID data
Concept drift
Distributed learning
Continual learning
Non-IID data and Continual Learning processes in Federated Learning: a long road ahead
Record: oai:minerva.usc.es:10347/29991 (datestamp 2023-07-10T06:11:26Z)
Authors: Matabuena Rodríguez, Marcos; Félix Lamas, Paulo; Hammouri, Ziad Akram Ali; Mota, Jorge; Pozo Cruz, Borja del
2022
Physical activity is deemed critical to successful ageing. Despite evidence and progress, there is still a need to determine more precisely the direction, magnitude, intensity, and volume of physical activity that should be performed on a daily basis to effectively promote the health of individuals. This study aimed to assess the clinical validity of new physical activity phenotypes derived from a novel distributional functional analysis of accelerometer data in older adults. A random sample of participants aged between 65 and 80 years with valid accelerometer data from the National Health and Nutrition Examination Survey (NHANES) 2011–2014 was used. Five major clinical phenotypes were identified, which provided greater sensitivity for predicting 5-year mortality and survival outcomes than age alone, and our results confirm the importance of moderate-to-vigorous physical activity. The new clinical physical activity phenotypes are a promising tool for improving patient prognosis and for guiding more targeted intervention planning, according to the principles of precision medicine. The use of distributional representations shows clear advantages over more traditional metrics when exploring the effects of the full spectrum of the physical activity continuum on human health.
Matabuena, M., Félix, P., Hammouri, Z.A.A. et al. Physical activity phenotypes and mortality in older adults: a novel distributional data analysis of accelerometry in the NHANES. Aging Clin Exp Res 34, 3107–3114 (2022). https://doi.org/10.1007/s40520-022-02260-3
0009-398X
http://hdl.handle.net/10347/29991
10.1007/s40520-022-02260-3
1573-3327
Physical activity
Precision medicine
Accelerometry
Distributional representation
Longevity
Physical activity phenotypes and mortality in older adults: a novel distributional data analysis of accelerometry in the NHANES
Record: oai:minerva.usc.es:10347/26402 (datestamp 2023-07-10T06:16:30Z)
Authors: Gago Domínguez, Manuela; Redondo, Carmen M.; Calaza Cabanas, Manuel; Matabuena Rodríguez, Marcos; Álvarez Bermúdez, María José; Pérez Fernández, Román; Torres Español, María; Carracedo Álvarez, Ángel María; Castelao, Jose Esteban
2021
Experimental data showed that endothelial lipase (LIPG) is a crucial player in breast cancer. However, very limited data exist on the role of LIPG in breast cancer risk in humans. We examined the LIPG-breast cancer association within our population-based case–control study from Galicia, Spain, BREOGAN (BREast Oncology GAlicia Network). A total of 114 breast cancer cases and 82 controls from our case–control study had plasma LIPG and/or OxLDL measurements and were included in the present study. The risk of breast cancer increased with increasing levels of LIPG (multivariable OR for the highest category (95% CI) 2.52 (1.11–5.81), P-trend = 0.037). The LIPG-breast cancer association was restricted to premenopausal breast cancer (multivariable OR for the highest LIPG category (95% CI) 4.76 (0.94–28.77), P-trend = 0.06, and 1.79 (0.61–5.29), P-trend = 0.372, for premenopausal and postmenopausal breast cancer, respectively). The LIPG-breast cancer association was restricted to Luminal A breast cancers (multivariable OR for the highest LIPG category (95% CI) 3.70 (1.42–10.16), P-trend = 0.015, and 2.05 (0.63–7.22), P-trend = 0.311, for Luminal A and non-Luminal A breast cancers, respectively). Subset analysis based only on the HER2 receptor indicated that the LIPG-breast cancer relationship was restricted to HER2-negative breast cancers (multivariable OR for the highest LIPG category (95% CI) 4.39 (1.70–12.03), P-trend = 0.012, and 1.10 (0.28–4.32), P-trend = 0.745, for HER2-negative and HER2-positive tumors, respectively). The LIPG-breast cancer association was restricted to women with high total cholesterol levels (multivariable OR for the highest LIPG category (95% CI) 6.30 (2.13–20.05), P-trend = 0.018, and 0.65 (0.11–3.28), P-trend = 0.786, among women with high and low cholesterol levels, respectively). The LIPG-breast cancer association was also restricted to non-postpartum breast cancer (multivariable OR for the highest LIPG category (95% CI) 3.83 (1.37–11.39), P-trend = 0.003, and 2.35 (0.16–63.65), P-trend = 0.396, for non-postpartum and postpartum breast cancer, respectively), although we lacked precision. The LIPG-breast cancer association was more pronounced among grade II and III than grade I breast cancers (multivariable ORs for the highest category of LIPG (95% CI) 2.73 (1.02–7.69), P-trend = 0.057, and 1.90 (0.61–6.21), P-trend = 0.170, for grade II and III, and grade I breast cancers, respectively). No association was detected between OxLDL levels and breast cancer (multivariable OR for the highest versus the lowest category (95% CI) 1.56 (0.56–4.32), P-trend = 0.457).
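For readers unfamiliar with the reported statistics, each odds ratio and its Wald confidence interval come from a 2×2 exposure table; the counts below are invented for illustration and are not the BREOGAN data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI from a 2x2 table:
    a, b = exposed / unexposed cases; c, d = exposed / unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: 20 exposed cases, 10 unexposed cases,
# 30 exposed controls, 40 unexposed controls
or_, lo, hi = odds_ratio_ci(20, 10, 30, 40)
```

An OR above 1 with a CI excluding 1 indicates a positive exposure-disease association, which is the pattern the abstract reports for high LIPG levels.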
Gago-Dominguez, M., Redondo, C.M., Calaza, M. et al. LIPG endothelial lipase and breast cancer risk by subtypes. Sci Rep 11, 10436 (2021). https://doi.org/10.1038/s41598-021-89669-4
http://hdl.handle.net/10347/26402
10.1038/s41598-021-89669-4
2045-2322
Breast cancer
Predictive markers
LIPG endothelial lipase and breast cancer risk by subtypes
Record: oai:minerva.usc.es:10347/11900 (datestamp 2020-01-31T11:37:50Z)
Authors: Pereira Fariña, Martín; Díaz Hermida, Félix; Bugarín Diz, Alberto José
2013-03
Syllogism is a type of deductive reasoning involving quantified statements. The syllogistic reasoning scheme in the classical Aristotelian framework involves three crisp term sets and four linguistic quantifiers, for which the main support is the linguistic properties of the quantifiers. A number of fuzzy approaches for defining an approximate syllogism have been proposed for which the main support is cardinality calculus. In this paper we analyze fuzzy syllogistic models previously described by Zadeh and Dubois et al. and compare their behavior with that of the classical Aristotelian framework to check which of the 24 classical valid syllogistic reasoning patterns or moods are particular crisp cases of these fuzzy approaches. This allows us to assess to what extent these approaches can be considered as either plausible extensions of the classical crisp syllogism or a basis for a general approach to the problem of approximate syllogism.
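A crisp special case of the quantified patterns discussed above is easy to state with ordinary sets. The sketch below encodes one classical mood (Barbara: all M are P, all S are M, therefore all S are P), i.e. the crisp limit against which the fuzzy models are compared; it is an illustration, not the paper's cardinality calculus.

```python
def all_are(a, b):
    # the crisp universal quantifier: "All A are B" means A is a subset of B
    return a <= b

def barbara(s, m, p):
    # mood Barbara: if both premises hold, check that the conclusion follows
    if all_are(m, p) and all_are(s, m):
        return all_are(s, p)
    return None  # premises not satisfied, the mood says nothing

# crisp term sets: S = Greeks, M = men, P = mortals (as element ids)
S, M, P = {1, 2}, {1, 2, 3}, {1, 2, 3, 4}
```

Whenever the premises hold, `barbara` returns `True`: in the crisp case the conclusion is guaranteed, which is exactly the property checked for the 24 valid moods.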
Pereira-Fariña, M., Díaz-Hermida, F., Bugarín, A. (2013). On the analysis of set-based fuzzy quantified reasoning using classical syllogistics. "Fuzzy Sets and Systems: An International Journal in Information Science and Engineering", vol. 214(1), 83-94
0165-0114
http://hdl.handle.net/10347/11900
10.1016/j.fss.2012.03.015
syllogistic reasoning
fuzzy quantifiers
On the analysis of set-based fuzzy quantified reasoning using classical syllogistics
Record: oai:minerva.usc.es:10347/21155 (datestamp 2023-07-10T06:12:35Z)
Authors: Matabuena Rodríguez, Marcos; Vidal Aguiar, Juan Carlos; Hayes, Philip R.; Saavedra García, Miguel; Huelín Trigo, Fernando
2019
Maximum heart rate (MHR) is widely used in the prescription and monitoring of exercise intensity, and also as a criterion for the termination of sub-maximal aerobic fitness tests in clinical populations. Traditionally, MHR is predicted from an age-based formula, usually 220−age. These formulae, however, are prone to high predictive errors that could potentially lead to inaccurately prescribed or quantified training, or to inappropriate fitness test termination. In this paper, we used functional data analysis (FDA) to create a new method to predict MHR. It uses heart rate data gathered every 5 seconds during a low-intensity, sub-maximal exercise test. FDA allows the use of all the information recorded by monitoring devices in the form of a function, reducing the amount of information needed to generalize a model, besides minimizing the curse of dimensionality. The functional data model created reduced the predictive error by more than 50% compared to current models in the literature. This new approach has important benefits for clinicians and practitioners when using MHR to test fitness or prescribe exercise.
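The contrast between the age-based baseline and the functional view can be sketched as follows: the first is a one-line formula, the second summarizes the whole sub-maximal heart-rate curve (sampled every 5 seconds). The two curve features below, and any regression coefficients one would fit on them, are illustrative simplifications, not the paper's fitted functional model.

```python
def mhr_age_formula(age):
    # the traditional age-based prediction
    return 220 - age

def curve_features(hr_samples, dt=5.0):
    # reduce a sub-maximal heart-rate curve (one sample every dt seconds)
    # to two functional features: mean level and least-squares slope
    n = len(hr_samples)
    t = [i * dt for i in range(n)]
    mean_hr = sum(hr_samples) / n
    mean_t = sum(t) / n
    slope = (sum((ti - mean_t) * (hi - mean_hr) for ti, hi in zip(t, hr_samples))
             / sum((ti - mean_t) ** 2 for ti in t))
    return mean_hr, slope  # beats/min and beats/min per second
```

A functional model would feed such curve summaries (or a richer basis expansion of the whole curve) into a regression for MHR instead of using age alone.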
M. Matabuena, J. C. Vidal, P. R. Hayes, M. Saavedra-García and F. H. Trillo, "Application of Functional Data Analysis for the Prediction of Maximum Heart Rate," in IEEE Access, vol. 7, pp. 121841-121852, 2019
http://hdl.handle.net/10347/21155
10.1109/ACCESS.2019.2938466
2169-3536
Maximum heart rate prediction
Functional data analysis
Machine learning
Low intensity sub-maximal test
Application of Functional Data Analysis for the Prediction of Maximum Heart Rate
Record: oai:minerva.usc.es:10347/30368 (datestamp 2023-03-21T03:03:28Z)
Authors: Fernández Pichel, Marcos; Prada Corral, Manuel de; Losada Carril, David Enrique; Pichel Campos, Juan Carlos; Gamallo Otero, Pablo
2023
The availability of large web-based corpora has led to significant advances in a wide range of technologies, including massive retrieval systems and deep neural networks. However, leveraging this data is challenging, since web content is plagued by so-called boilerplate: ads, incomplete or noisy text, and remnants of the navigation structure, such as menus or navigation bars. In this work, we present a novel and efficient approach to extract useful and well-formed content from web-scraped data. Our approach takes advantage of Language Models and their implicit knowledge about correctly formed text, and we demonstrate here that perplexity is a valuable artefact that can contribute in terms of effectiveness and efficiency. As a matter of fact, the removal of noisy parts leads to lighter AI or search solutions that are effective and entail important reductions in resources spent. We exemplify the usefulness of our method with two downstream tasks, search and classification, and a cleaning task. We also provide a Python package with pre-trained models and a web demo demonstrating the capabilities of our approach.
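The idea of perplexity-based filtering can be demonstrated with a toy character-bigram model standing in for the large pretrained language models used in the paper: lines whose perplexity under the model exceeds a threshold are treated as boilerplate. The training text and threshold here are illustrative choices, not the paper's configuration.

```python
import math
from collections import Counter

def train_bigram(text):
    # character-bigram counts as a tiny stand-in for a pretrained LM
    return Counter(zip(text, text[1:])), Counter(text)

def perplexity(line, model, alpha=1.0, vocab=128):
    # add-alpha smoothed per-character perplexity of a line
    pairs, unigrams = model
    logp = 0.0
    for a, b in zip(line, line[1:]):
        p = (pairs[(a, b)] + alpha) / (unigrams[a] + alpha * vocab)
        logp += math.log(p)
    n = max(len(line) - 1, 1)
    return math.exp(-logp / n)

def strip_boilerplate(lines, model, threshold):
    # keep only lines that look like well-formed text to the model
    return [l for l in lines if perplexity(l, model) <= threshold]

model = train_bigram("the cat sat on the mat and the dog sat on the log " * 20)
lines = ["the cat sat on the log", "@@ ## >> || {} ^^"]
```

Natural-looking lines score a far lower perplexity than navigation-bar-style noise, which is the signal the filter exploits.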
Fernández-Pichel, M., Prada-Corral, M., Losada, D., Pichel, J., & Gamallo, P. (2023). An unsupervised perplexity-based method for boilerplate removal. Natural Language Engineering, 1-18. doi:10.1017/S1351324923000049
1351-3249
http://hdl.handle.net/10347/30368
10.1017/S1351324923000049
1469-8110
Perplexity
Boilerplate removal
Information retrieval
Text classification
Text pre-processing
An unsupervised perplexity-based method for boilerplate removal
Record: oai:minerva.usc.es:10347/33246 (datestamp 2024-03-21T09:21:41Z)
Authors: Shahandashti, Peyman Fayyaz; López Martínez, Paula; Brea Sánchez, Víctor Manuel; García Lesta, Daniel; Heredia Conde, Miguel
2024
Indirect Time-of-Flight (iToF) sensors measure the received signal's phase shift or time delay to calculate depth. In realistic conditions, however, recovering depth is challenging, as reflections from secondary scattering areas or translucent objects may interfere with the direct reflection, resulting in inaccurate 3D estimates. We propose a new measurement concept including a single-shot on-chip multifrequency demodulation method with periodically repeated ultrashort-pulsed illumination using a novel pixel array architecture to address a main limitation of conventional iToF, the Multi-Path Interference (MPI). Due to the careful hardware/software codesign, the proposed single-shot architecture provides close-to-optimal Fourier measurements to a spectral estimation algorithm that retrieves the unknown parameters of the interfering return paths in a closed form. Electrical simulations of the on-chip multifrequency demodulation circuit demonstrate the feasibility of distance retrieval in double and triple bounce conditions in a single shot with high accuracy. Furthermore, we propose a set of methods for processing the resulting sensor measurements that exploit valuable a priori information and structural constraints of the data and observe that they yield a substantial increase in accuracy.
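For context, the basic single-path iToF relation recovers depth from the measured phase shift at modulation frequency f as d = c·φ/(4πf); multi-path interference arises precisely when several return paths contribute to one measured phase. A minimal numeric sketch of the single-path case:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_depth(phase_rad, freq_hz):
    # single-path depth from measured phase shift at one modulation frequency
    return C * phase_rad / (4 * math.pi * freq_hz)

def unambiguous_range(freq_hz):
    # phase wraps at 2*pi, limiting the unambiguous range to c / (2 f)
    return C / (2 * freq_hz)
```

Measuring at multiple frequencies, as the paper's on-chip demodulation does, both extends the unambiguous range and provides the Fourier samples needed to separate interfering paths.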
P. F. Shahandashti, P. López, V. M. Brea, D. García-Lesta and M. H. Conde, "Simultaneous Multifrequency Demodulation for Single-Shot Multiple-Path ToF Imaging," in IEEE Transactions on Computational Imaging, vol. 10, pp. 54-68, 2024, doi: 10.1109/TCI.2023.3348758
2573-0436
http://hdl.handle.net/10347/33246
10.1109/TCI.2023.3348758
2333-9403
2334-0118
Sensors
Demodulation
Imaging
Frequency modulation
Image sensors
Lighting
Computer architecture
Simultaneous Multifrequency Demodulation for Single-Shot Multiple-Path ToF Imaging
Record: oai:minerva.usc.es:10347/17714 (datestamp 2022-11-15T12:18:43Z)
Authors: Arias Carpintero, Pablo; Robles García, Verónica; Sanmartín Díaz, Gabriel; Flores González, Julián Carlos; Cudeiro, Javier
2012
This work presents an immersive Virtual Reality (VR) system to evaluate, and potentially treat, the alterations in rhythmic hand movements seen in Parkinson's disease (PD) and the elderly (EC), by comparison with healthy young controls (YC). The system integrates the subjects into a VR environment by means of a Head Mounted Display, such that subjects perceive themselves in a virtual world consisting of a table within a room. In this experiment, subjects are presented in first-person perspective, so that the avatar reproduces finger tapping movements performed by the subjects. The task, known as the finger tapping test (FT), was performed by all three subject groups, PD, EC and YC. FT was carried out by each subject on two different days (sessions), one week apart. In each FT session all subjects performed FT in the real world (FTREAL) and in VR (FTVR); each mode was repeated three times in randomized order. During FT both the tapping frequency and the coefficient of variation of the inter-tap interval were registered. FTVR was a valid test to detect differences in rhythm formation between the three groups. Intra-class correlation coefficients (ICC) and mean differences between days for FTVR (for each group) showed reliable results. Finally, the analysis of ICC and mean differences between FTVR and FTREAL, for each variable and group, also showed high reliability. This shows that FT evaluation in VR environments is valid as a real-world alternative, as VR evaluation did not distort movement execution and detected alterations in rhythm formation. These results support the use of VR as a promising tool to study alterations and the control of movement in different subject groups in unusual environments, such as during fMRI or other imaging studies.
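The two outcome measures registered during FT, tapping frequency and the coefficient of variation of the inter-tap interval, can be computed directly from tap timestamps; the timestamps below are invented for illustration.

```python
def tapping_metrics(tap_times):
    # tap_times: timestamps of successive taps, in seconds
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    mean = sum(intervals) / len(intervals)
    var = sum((x - mean) ** 2 for x in intervals) / len(intervals)
    cv = (var ** 0.5) / mean        # coefficient of variation of inter-tap interval
    frequency = 1.0 / mean          # taps per second
    return frequency, cv
```

A perfectly regular tapper has CV = 0; increased CV is the kind of rhythm-formation alteration the study detects in the PD and EC groups.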
Arias P, Robles-García V, Sanmartín G, Flores J, Cudeiro J (2012) Virtual Reality as a Tool for Evaluation of Repetitive Rhythmic Movements in the Elderly and Parkinson's Disease Patients. PLoS ONE 7(1): e30021. https://doi.org/10.1371/journal.pone.0030021
http://hdl.handle.net/10347/17714
10.1371/journal.pone.0030021
1932-6203
Virtual Reality as a Tool for Evaluation of Repetitive Rhythmic Movements in the Elderly and Parkinson's Disease Patients
Record: oai:minerva.usc.es:10347/17721 (datestamp 2020-09-10T07:59:53Z)
Authors: Zablah Ávila, José Isaac; García Loureiro, Antonio Jesús; Gómez Folgar, Fernando
2014
This paper presents the results of a set of benchmarks run on hosts and on virtual machines managed by the Xen and KVM hypervisors, leveraging the host computer's hardware support for virtualization. The aim of this study was to determine which hypervisor makes more efficient use of resources under different conditions. In the results obtained, the virtual machines on Xen showed better computational performance, while KVM exhibited better performance in disk-access and network tests. Data were taken from comparisons of variables such as time and data-transfer volume, after running the performance tests under the same conditions in the different scenarios. The results of this study provide a roadmap for designing and optimizing a high-performance infrastructure based on IaaS (Infrastructure as a Service) for data management and processing in scientific research.
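The timing comparisons described above reduce to repeatedly measuring elapsed wall-clock time for a workload under each configuration. This is a generic micro-benchmark harness, not the benchmark suite actually used in the study.

```python
import time

def benchmark(fn, repeats=5):
    # run fn several times and keep the best time; the minimum is the
    # least noisy summary for micro-benchmarks on a shared machine
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return min(times)

# a toy CPU-bound workload standing in for a compute benchmark
t_compute = benchmark(lambda: sum(range(100_000)))
```

Comparing such timings for the same workload on the host, on Xen, and on KVM is how the relative hypervisor overheads were quantified.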
Zablah, J., Loureiro, A., & Gómez Folgar, F. (2015). Evaluación del rendimiento de hipervisores usados en infraestructuras cloud que aprovechan la virtualización por hardware. Portal De La Ciencia, 6, 107-120. doi:http://dx.doi.org/10.5377/pc.v6i0.1846
2223-3059
http://hdl.handle.net/10347/17721
10.5377/pc.v6i0.1846
Benchmarks
Hypervisors
Hardware virtualization
Xen
KVM
Cloud
Evaluación del rendimiento de hipervisores usados en infraestructuras cloud que aprovechan la virtualización por hardware
Record: oai:minerva.usc.es:10347/6130 (datestamp 2020-01-31T09:59:58Z)
Authors: García González, Marcos; Gayo, Iria; González López, Isaac
2012
The automatic identification and semantic classification of named entities are tasks of particular relevance for a variety of natural language processing applications, such as machine translation, information extraction, and question-answering systems. This paper describes the adaptation and implementation of several open-source tools for the identification and classification of the following types of entities in Galician: (i) dates, (ii) numerals, (iii) quantities, and (iv) proper names. The analysis of the first three types of entities is performed with the FreeLing software by means of finite-state machines. For the identification of proper names, two strategies are compared: (i) the use of finite-state machines and (ii) machine learning methods. Finally, the semantic classification of proper names is carried out with a system based on rules and automatically obtained resources. The paper presents a set of evaluations for each of the modules described, which are released under free licenses.
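The finite-state recognition of dates can be approximated with a regular expression over Galician month names; the pattern below is a simplified illustration, not FreeLing's actual finite-state machinery.

```python
import re

# Galician month names, lowercase
MONTHS = ("xaneiro|febreiro|marzo|abril|maio|xuño|xullo|agosto|"
          "setembro|outubro|novembro|decembro")

DATE_RE = re.compile(rf"\b\d{{1,2}} de (?:{MONTHS}) de \d{{4}}\b")

def find_dates(text):
    # recognize simple "<day> de <month> de <year>" date expressions
    return DATE_RE.findall(text)
```

A full finite-state recognizer would also cover abbreviated, numeric, and partial date forms, but the principle, matching a closed pattern over a closed lexicon, is the same.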
GARCÍA, Marcos; GAYO, Iria; GONZÁLEZ LÓPEZ, Isaac: «Identificação e classificação de entidades mencionadas em galego», Estudos de Lingüística Galega, vol. 4 (2012). ISSN 1889-2566, pp. 13-25
1889-2566
http://hdl.handle.net/10347/6130
Processamento da língua natural
Reconhecimento de entidades mencionadas
Galego
Identificação e classificação de entidades mencionadas em galego
Record: oai:minerva.usc.es:10347/17694 (datestamp 2023-07-10T06:16:10Z)
Authors: García González, Marcos; Gamallo Otero, Pablo
2011
Information extraction systems need a prior processing step in order to recognize coreferential elements, such as personal name variants. This paper has two aims: first, it describes the main types of personal name coreference found in encyclopedic and journalistic texts in Spanish; second, it introduces an algorithm that successfully resolves most coreferential links between personal name variants. The system, which does not need a training corpus, unifies the personal name variants found in a text, thereby improving tasks such as biographical information extraction.
García, M., & Gamallo, P. (2011). Resolución de Correferencia de Nombres de Persona para Extracción de Información Biográfica. Procesamiento Del Lenguaje Natural, 47, 47-55. Recuperado de http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/965
1135-5948
http://hdl.handle.net/10347/17694
1989-7553
Correferencia de nombres de persona
Extracción de información
Personal name coreference
Information extraction
Resolución de correferencia de nombres de persona para extracción de información biográfica
Record: oai:minerva.usc.es:10347/17695 (datestamp 2020-05-20T18:18:22Z)
Authors: García González, Marcos; Gamallo Otero, Pablo; Gayo, Iria; Pousada Cruz, Miguel Ángel
2014
The great amount of text produced every day on the Web has turned it into one of the main sources for obtaining linguistic corpora, which are then analyzed with Natural Language Processing techniques. On a global scale, languages such as Portuguese (official in 9 countries) appear on the Web in several varieties, with lexical, morphological, and syntactic differences, among others. Besides, a unified spelling system for Portuguese has recently been approved, and its implementation process has already started in some countries. However, it will take several years, so different varieties and spelling systems coexist. Since PoS-taggers for Portuguese are built specifically for a particular variety, this work analyzes different combinations of training corpora and lexica aimed at building a model with high-precision annotation across several varieties and spelling systems of this language. Moreover, this paper presents different dictionaries for the new orthography (the 1990 Acordo Ortográfico) as well as a new freely available testing corpus containing different varieties and textual typologies.
Garcia, M., Gamallo, P., Gayo, I., & Pousada Cruz, M. (2014). PoS-tagging the Web in Portuguese. National varieties, text typologies and spelling systems. Procesamiento Del Lenguaje Natural, 53, 95-101. Recuperado de http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/5045/2933
1135-5948
http://hdl.handle.net/10347/17695
1989-7553
PoS-tagging
Portuguese
Web as Corpus
Spelling Agreement
Anotación morfosintáctica
Portugués
Acordo ortográfico
PoS-tagging the Web in Portuguese. National varieties, text typologies and spelling systems
oai:minerva.usc.es:10347/32931 (2024-03-21T09:21:42Z)
00925njm 22002777a 4500
dc
Casas-Ramos, Jacobo
author
Mucientes, Manuel
author
Lama Penín, Manuel
author
2024
Conformance checking techniques compare how a process is supposed to be executed according to a model with how it is executed in reality according to an event log. Alignment-based approaches are the most successful solutions for conformance checking. Optimal alignments are a way of finding the best match between the real and the modeled behavior and identifying the differences. However, finding these optimal alignments is a challenging task, especially for complex cases where the log and the model have many events and paths. The difficulty lies in the computational complexity required to find these alignments. To address this problem, we propose an efficient algorithm named REACH based on the A* search algorithm. The core components of the proposal are the use of a partial reachability graph for faster execution of process models for alignment computation, and a set of optimization techniques for reducing the number of states explored by the A* algorithm. These improve performance by reducing both the required computation time per state and the number of states to process. To evaluate the performance and scalability, we conducted tests using 227 pairs of logs and models, comparing the results obtained with those from 10 state-of-the-art approaches. Results show that REACH outperforms the other proposals in runtimes, and even aligns logs and models that no other algorithm is able to align.
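The A* search core that alignment-based conformance checkers build on can be sketched over an explicit graph. The graph, costs, and zero heuristic below are invented stand-ins; REACH's partial reachability graph and its pruning optimizations are well beyond this sketch.

```python
# Minimal A* over an explicit graph. With a zero heuristic it degenerates
# to Dijkstra; an admissible heuristic prunes the frontier, which is the
# lever REACH-style optimizations work on.
import heapq

def a_star(graph, start, goal, h):
    """graph: node -> list of (neighbor, cost); h: admissible heuristic."""
    frontier = [(h(start), 0, start, [start])]
    best = {start: 0}                       # cheapest known cost per node
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, cost in graph.get(node, []):
            ng = g + cost
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None

graph = {"s": [("a", 1), ("b", 4)], "a": [("b", 1), ("t", 5)], "b": [("t", 1)]}
print(a_star(graph, "s", "t", lambda n: 0))  # (3, ['s', 'a', 'b', 't'])
```

In an alignment setting the nodes would be states of a synchronous product of log trace and model, and edge costs would penalize moves that occur only in the log or only in the model.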
Expert Systems with Applications Volume 241, 2024, 122467
0957-4174
http://hdl.handle.net/10347/32931
10.1016/j.eswa.2023.122467
Process mining
Conformance checking
Alignments
REACH: Researching Efficient Alignment-based Conformance Checking
oai:minerva.usc.es:10347/17725 (2020-01-31T13:51:06Z)
El-Sappagh, Shaker
author
Alonso Moral, José María
author
Ali, Farman
author
Ali, Amjad
author
Jang, Jun-Hyeog
author
Kwak, Kyung-Sup
author
2018
Diabetes is a serious chronic disease. The importance of clinical decision support systems (CDSSs) to diagnose diabetes has led to extensive research efforts to improve the accuracy, applicability, interpretability, and interoperability of these systems. However, this problem continues to require optimization. Fuzzy rule-based systems (FRBSs) are suitable for the medical domain, where interpretability is a main concern. The medical domain is data-intensive, and using electronic health record data to build the FRBS knowledge base and fuzzy sets is critical. Multiple variables are frequently required to determine a correct and personalized diagnosis, which usually makes it difficult to arrive at accurate and timely decisions. In this paper, we propose and implement a new semantically interpretable FRBS framework for diabetes diagnosis. The framework combines multiple types of knowledge processing (fuzzy inference, ontology reasoning, and a fuzzy analytical hierarchy process (FAHP)) to provide a more intuitive and accurate design. First, we build a two-layered hierarchical and interpretable FRBS; then, we improve this by integrating an ontology reasoning process based on the SNOMED CT standard ontology. We incorporate FAHP to determine the relative medical importance of each sub-FRBS. The proposed system offers numerous unique and critical improvements regarding the implementation of an accurate, dynamic, semantically intelligent, and interpretable CDSS. The designed system considers the ontology semantic similarity of diabetes complications and symptoms concepts in the fuzzy rules' evaluation process. The framework was tested using a real data set, and the results indicate how the proposed system helps physicians and patients to accurately diagnose diabetes mellitus.
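The fuzzy-inference ingredient of such a framework can be sketched with a toy Mamdani-style rule evaluation: triangular memberships, min for AND, max for aggregation. The variable names, ranges, and rules below are invented for illustration; the paper's two-layer FRBS with ontology reasoning and FAHP weighting is far richer.

```python
# Toy fuzzy rule evaluation. Membership ranges and rules are invented,
# not taken from the paper's knowledge base.
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def diabetes_risk(glucose, bmi):
    high_glucose = tri(glucose, 110, 180, 250)
    high_bmi = tri(bmi, 25, 35, 45)
    # Rule 1: high glucose AND high BMI -> high risk (strength = min)
    # Rule 2: high glucose alone -> moderate risk (half weight)
    r1 = min(high_glucose, high_bmi)
    r2 = high_glucose
    return max(r1, 0.5 * r2)   # aggregate rule strengths with max

print(round(diabetes_risk(180, 35), 2))  # 1.0
```

The appeal of this style, which the abstract stresses, is interpretability: each rule reads as a clinical statement, and its firing strength is directly inspectable.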
El-Sappagh, S., Alonso, J., Ali, F., Ali, A., Jang, J., & Kwak, K. (2018). An Ontology-Based Interpretable Fuzzy Decision Support System for Diabetes Diagnosis. IEEE Access, 6, 37371-37394. doi: 10.1109/access.2018.2852004
http://hdl.handle.net/10347/17725
10.1109/ACCESS.2018.2852004
2169-3536
Diabetes
Cognition
Ontologies
Medical diagnostic imaging
Diseases
Semantics
An Ontology-Based Interpretable Fuzzy Decision Support System for Diabetes Diagnosis
oai:minerva.usc.es:10347/24653 (2023-07-10T06:18:02Z)
Fernández, Eduardo F.
author
Seoane Iglesias, Natalia
author
Almonacid, Florencia
author
García Loureiro, Antonio Jesús
author
2019
A novel architecture of cell structure tailored to ultra-high (>2000 suns) concentration ratios is proposed. The basic solar cell consists of two p-n junctions connected in series by a highly doped tunnel diode, with the metallic contacts located laterally. The tunneling connection allows using direct band-gap semiconductor compounds aiming to optimize the absorption of the spectrum. The performance of the novel architecture is investigated up to ultra-high concentration using TCAD software. Simulations show its viability for developing a new generation of solar cells to increase the potential in terms of efficiency and cost reduction of ultra-high concentrator systems. The solar cell does not show any degradation with concentration, and an efficiency as high as 28.4% at 15,000 suns has been obtained for a preliminary design.
Eduardo F. Fernández, Natalia Seoane, Florencia Almonacid and Antonio J. García-Loureiro (2019) Vertical-Tunnel-Junction (VTJ) Solar Cell for Ultra-High Light Concentrations (>2000 Suns). IEEE Electron Device Letters, 40 (1), 44-47. Doi: 10.1109/LED.2018.2880240
0741-3106
http://hdl.handle.net/10347/24653
10.1109/LED.2018.2880240
1558-0563
Vertical solar cells
Concentrator photovoltaics
Gallium arsenide (GaAs)
Tunnel diode
Series resistance
Vertical-Tunnel-Junction (VTJ) Solar Cell for Ultra-High Light Concentrations (>2000 Suns)
oai:minerva.usc.es:10347/22406 (2020-05-28T11:35:58Z)
Indalecio Fernández, Guillermo
author
García Loureiro, Antonio Jesús
author
Elmessary, Muhammad A.
author
Kalna, Karol
author
Seoane Iglesias, Natalia
author
2018
Standard analysis of variability sources in nanodevices lacks information about the spatial influence of the variability. However, this spatial information is paramount for industry and academia to improve the design of variability-resistant architectures. A recently developed technique, the fluctuation sensitivity map (FSM), is used to analyze the spatial effect of line edge roughness (LER) variability on key figures-of-merit (FoM) in silicon gate-all-around (GAA) nanowire (NW) FETs. This technique gives insight into the local sensitivity, identifying the regions inducing the strongest variability in the FoM. We analyze both 22 and 10 nm gate length GAA NW FETs affected by LER with different amplitudes (0.6, 0.7, and 0.85 nm) and correlation lengths (10 and 20 nm) using an in-house 3-D quantum-corrected drift-diffusion simulation tool calibrated against experimental or Monte Carlo data. The FSM finds that the gate is the most sensitive region to LER deformations. We demonstrate that the specific location of the deformation inside the gate plays an important role in the performance and that the effect of the location also depends on the FoM analyzed. Moreover, there is a negligible impact on device performance if the LER deformation occurs in the source or drain region.
Indalecio, G., García-Loureiro, A. J., Elmessary, M. A., Kalna, K., and Seoane, N. (2018). Spatial sensitivity of Silicon GAA nanowire FETs under line edge roughness variations. IEEE Journal of the Electron Devices Society, 6, 601-610. https://dx.doi.org/10.1109/JEDS.2018.2828504
2168-6734
http://hdl.handle.net/10347/22406
10.1109/JEDS.2018.2828504
Si GAA nanowire
Variability sources
Line-edge roughness (LER)
Spatial sensitivity
Density gradient (DG) quantum corrections
Spatial Sensitivity of Silicon GAA Nanowire FETs Under Line Edge Roughness Variations
oai:minerva.usc.es:10347/29393 (2023-07-10T06:11:18Z)
Piñeiro Pomar, César Alfredo
author
Pichel Campos, Juan Carlos
author
2022
One of the most important issues in the path to the convergence of HPC and Big Data is caused by the differences in their software stacks. Despite some research efforts, the interoperability between their programming models and languages is still limited. To deal with this problem we introduce a new computing framework called IgnisHPC, whose main objective is to unify the execution of Big Data and HPC workloads in the same framework. IgnisHPC has native support for multi-language applications using JVM and non-JVM-based languages. Since MPI was used as its backbone technology, IgnisHPC takes advantage of many communication models and network architectures. Moreover, MPI applications can be directly executed in an efficient way in the framework. The main consequence is that users could combine in the same multi-language code HPC tasks (using MPI) with Big Data tasks (using MapReduce operations). The experimental evaluation demonstrates the benefits of our proposal in terms of performance and productivity with respect to other frameworks. IgnisHPC is publicly available for the Big Data and HPC research community
Future Generation Computer Systems 134 (2022) 123-139. https://doi.org/10.1016/j.future.2022.04.002
http://hdl.handle.net/10347/29393
10.1016/j.future.2022.04.002
0167-739X
Big Data
HPC
MPI
Multi-language
Programming models
A unified framework to improve the interoperability between HPC and Big Data languages and programming models
oai:minerva.usc.es:10347/29326 (2023-07-10T06:11:04Z)
Ordóñez Iglesias, Álvaro
author
Acción Montes, Álvaro
author
Argüello Pedreira, Francisco Santiago
author
Blanco Heras, Dora
author
2021
Image alignment is an essential task in many applications of hyperspectral remote sensing images. Before any processing, the images must be registered. Maximally Stable Extremal Regions (MSER) is a feature detection algorithm which extracts regions by thresholding the image at different grey levels. These extremal regions are invariant to image transformations, making them ideal for registration. The Scale-Invariant Feature Transform (SIFT) is a well-known keypoint detector and descriptor based on the construction of a Gaussian scale-space. This article presents a hyperspectral remote sensing image registration method based on MSER for feature detection and SIFT for feature description. It efficiently exploits the information contained in the different spectral bands to improve the image alignment. The experimental results over nine hyperspectral images show that the proposed method achieves a higher number of correct registration cases using fewer computational resources than other hyperspectral registration methods. Results are evaluated in terms of registration accuracy and also in terms of execution time.
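Once features are detected and described, registration pipelines typically pair descriptors with a nearest-neighbor search plus Lowe's ratio test. The sketch below uses tiny hand-made 2-D vectors instead of real MSER/SIFT output, and is not the paper's matching scheme.

```python
# Descriptor matching with the ratio test: accept a match only when the
# nearest neighbor is clearly closer than the second nearest. Descriptors
# here are toy 2-D points, not 128-D SIFT vectors.
import math

def ratio_match(desc_a, desc_b, ratio=0.8):
    """Return (i, j) pairs where a's nearest neighbor in b passes the test."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

a = [(0.0, 1.0), (5.0, 5.0)]
b = [(0.1, 1.0), (5.0, 5.1), (9.0, 9.0)]
print(ratio_match(a, b))  # [(0, 0), (1, 1)]
```

The ratio test discards ambiguous correspondences, which is what makes the subsequent transform estimation between bands or images robust.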
Ordóñez, A., Acción, A., Argüello, F., & Heras, D. B. (2021). HSI-MSER: Hyperspectral image registration algorithm based on MSER and SIFT. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 14, 12061-12072. doi:10.1109/JSTARS.2021.3129099
1939-1404
http://hdl.handle.net/10347/29326
10.1109/JSTARS.2021.3129099
2151-1535
Image registration
Hyperspectral image
Image Feature Extraction
HSI-MSER: Hyperspectral Image Registration Algorithm based on MSER and SIFT
oai:minerva.usc.es:10347/30843 (2023-07-07T00:02:45Z)
Vila Blanco, Nicolás
author
Varas Quintana, Paulina
author
Tomás, Inmaculada
author
Carreira, María J.
author
2023
Dental radiographs have been used for many decades to estimate chronological age, with a view to forensic identification, migration flow control, or assessment of dental development, among others. This study aims to analyse the current application of chronological age estimation methods from dental X-ray images in the last 6 years, involving a search for works in the Scopus and PubMed databases. Exclusion criteria were applied to discard off-topic studies and experiments not compliant with a minimum quality standard. The studies were grouped according to the applied methodology, the estimation target, and the age cohort used to evaluate the estimation performance. A set of performance metrics was used to ensure good comparability between the different proposed methodologies. A total of 613 unique studies were retrieved, of which 286 were selected according to the inclusion criteria. Notable tendencies to overestimation and underestimation were observed in some manual approaches for numeric age estimation, being especially notable in the case of Demirjian (overestimation) and Cameriere (underestimation). On the other hand, automatic approaches based on deep learning techniques are scarcer, with only 17 studies published in this regard, but they showed a more balanced behaviour, with no tendency to overestimation or underestimation. From the analysis of the results, it can be concluded that traditional methods have been evaluated in a wide variety of population samples, ensuring good applicability in different ethnicities. On the other hand, fully automated methods were a turning point in terms of performance, cost, and adaptability to new populations.
Vila-Blanco, N., Varas-Quintana, P., Tomás, I. et al. A systematic overview of dental methods for age assessment in living individuals: from traditional to artificial intelligence-based approaches. Int J Legal Med 137, 1117–1146 (2023). https://doi.org/10.1007/s00414-023-02960-z
http://hdl.handle.net/10347/30843
10.1007/s00414-023-02960-z
1117–1146
Dental radiology
Chronological age estimation
Forensic dentistry
Deep learning
A systematic overview of dental methods for age assessment in living individuals: from traditional to artificial intelligence-based approaches
oai:minerva.usc.es:10347/27098 (2023-07-10T06:11:25Z)
Cores Costa, Daniel
author
Brea Sánchez, Víctor Manuel
author
Mucientes Molina, Manuel Felipe
author
2021
We present a new network architecture able to take advantage of spatio-temporal information available in videos to boost object detection precision. First, box features are associated and aggregated by linking proposals that come from the same anchor box in the nearby frames. Then, we design a new attention module that aggregates short-term enhanced box features to exploit long-term spatio-temporal information. This module takes advantage of geometrical features in the long-term for the first time in the video object detection domain. Finally, a spatio-temporal double head is fed with both spatial information from the reference frame and the aggregated information that takes into account the short- and long-term temporal context. We have tested our proposal in five video object detection datasets with very different characteristics, in order to prove its robustness in a wide number of scenarios. Non-parametric statistical tests show that our approach outperforms the state-of-the-art. Our code is available at https://github.com/daniel-cores/SLTnet
Image and Vision Computing. Volume 110, June 2021, 104179
http://hdl.handle.net/10347/27098
10.1016/j.imavis.2021.104179
0262-8856
Video object detection
Spatio-temporal features
Convolutional neural networks
Short-term anchor linking and long-term self-guided attention for video object detection
oai:minerva.usc.es:10347/19965 (2023-07-10T06:17:36Z)
Rodríguez García, Germán
author
Estévez Casado, Fernando
author
Vázquez Regueiro, Carlos
author
Nieto, Adrián
author
2018
Mobile phones are increasingly used for purposes that have nothing to do with phone calls or simple data transfers, and one such use is indoor inertial navigation. Nevertheless, the development of a standalone application able to detect the displacement of the user starting only from the data provided by the most common inertial sensors in mobile phones (accelerometer, gyroscope and magnetometer) is a complex task. This complexity lies in the hardware disparity, the noise in the data, and above all the many movements that the mobile phone can experience which have nothing to do with the physical displacement of the owner. In our case, we describe a proposal which, after using quaternions and a Kalman filter to project the sensor readings into an Earth-centered inertial reference system, combines a classic peak-valley detector with an ensemble of SVMs (Support Vector Machines) and a standard-deviation-based classifier. Our proposal is able to identify and filter out those segments of the signal that do not correspond to the behavior of "walking", and thus achieves a robust detection of the physical displacement and counting of steps. We have performed an extensive experimental validation of our proposal using a dataset with 140 records obtained from 75 different people who were not connected to this research.
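The peak-valley detection step can be sketched in a few lines: count a step whenever a local peak in the acceleration magnitude rises more than a threshold above the preceding valley. The threshold and the synthetic trace below are invented; the paper additionally filters non-walking segments with an SVM ensemble before counting.

```python
# Bare-bones peak-valley step detector on an accelerometer-magnitude
# trace. A step = a local peak exceeding the last local valley by more
# than a threshold; small bumps are treated as noise.
def count_steps(signal, threshold=1.0):
    steps, last_valley = 0, signal[0]
    for prev, cur, nxt in zip(signal, signal[1:], signal[2:]):
        if cur < prev and cur < nxt:          # local valley
            last_valley = cur
        elif cur > prev and cur > nxt:        # local peak
            if cur - last_valley > threshold:
                steps += 1
    return steps

# Synthetic magnitude trace: three clear peaks and one tiny (noise) bump.
trace = [0, 2, 0, 2.5, 0, 0.4, 0, 3, 0]
print(count_steps(trace))  # 3
```

On real phone data the trace would first be smoothed and re-oriented (the quaternion/Kalman projection the abstract mentions) so that peaks correspond to actual heel strikes rather than device rotations.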
Rodríguez, G., Casado, F., Iglesias, R., Regueiro, C., & Nieto, A. (2018). Robust step counting for inertial navigation with mobile phones. Sensors, 18(9), 3157.
1424-8220
http://hdl.handle.net/10347/19965
10.3390/s18093157
Indoor-positioning
Pedestrian dead reckoning
Sensor fusion
Step counting
Robust Step Counting for Inertial Navigation with Mobile Phones
oai:minerva.usc.es:10347/17788 (2020-01-31T14:29:26Z)
Souto Bayarri, José Miguel
author
Suárez Cuenca, Jorge Juan
author
García Tahoces, Pablo
author
Revel, Marie-Pierre
author
Delhaye, Damien
author
Carreira Villamor, José Martín
author
Remy-Jardin, Martin
author
Remy, Jacque
author
2012
The purpose of this study was to evaluate the diagnostic performance of a computer-aided diagnosis (CAD) system on the detection of pulmonary nodules in multidetector row computed tomography (MDCT) images, using two different MDCT scanners. The computerized scheme was based on the iris filter. We collected CT cases of patients with pulmonary nodules and included in the study one hundred and thirty-two calcified and noncalcified nodules, measuring 4-30 mm in diameter. CT examinations were performed using two different systems: a CT scanner (SOMATOM Emotion 6) and a dual-source computed tomography system (SOMATOM Definition) (Siemens Medical System, Forchheim, Germany), with the following parameters: collimation, 6x1.0 mm (Emotion 6) and 64×0.6 mm (Definition); 100-130 kV; 70-110 mAs. Data were reconstructed with a slice thickness of 1.25 mm (Emotion 6) and 1 mm (Definition). True positive cases were determined by an independent interpretation of the study by three experienced chest radiologists, with the panel decision used as the reference standard. Free-response Receiver Operating Characteristic curves, sensitivity, and the number of false positives per scan were calculated. Our CAD scheme, for the test set of the study, yielded a sensitivity of 80% with an average of 5.2 FPs per examination. At an average false-positive rate of 9 per scan, our CAD scheme achieved sensitivities of 94% for all nodules, 94.5% for solid, 80% for non-solid, 84% for spiculated, and 97% for non-spiculated nodules. These encouraging results suggest that our CAD system, advocated as a second reader, may help radiologists in the detection of lung nodules in MDCT.
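The two quantities a free-response ROC (FROC) analysis plots can be computed directly from detection counts: per-nodule sensitivity and mean false positives per scan. The counts below are hypothetical illustrations chosen to echo the abstract's figures (132 nodules, ~80% sensitivity, 5.2 FPs); the number of scans is an assumption.

```python
# One operating point of a FROC curve from raw detection counts.
def froc_point(true_pos, total_nodules, false_pos, n_scans):
    """Return (sensitivity, mean false positives per scan)."""
    sensitivity = true_pos / total_nodules
    fps_per_scan = false_pos / n_scans
    return sensitivity, fps_per_scan

# Hypothetical counts, not the paper's actual data:
sens, fps = froc_point(true_pos=106, total_nodules=132,
                       false_pos=156, n_scans=30)
print(round(sens, 3), round(fps, 1))  # 0.803 5.2
```

Sweeping the detector's confidence threshold and recomputing this pair at each setting traces out the full FROC curve.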
Souto-Bayarri, M., Suárez-Cuenca, J., Tahoces, P., Revel, M., Delhaye, D., & Carreira, J. et al. (2012). Automatic detection of pulmonary nodules: Evaluation of performance using two different MDCT scanners. Journal of Biomedical Graphics and Computing, 2(2). doi: 10.5430/jbgc.v2n2p55
1925-4008
http://hdl.handle.net/10347/17788
10.5430/jbgc.v2n2p55
1925-4016
Computer-aided diagnosis
Multidetector row computed tomography
Pulmonary nodule
Automatic detection of pulmonary nodules
Automatic detection of pulmonary nodules: Evaluation of performance using two different MDCT scanners
oai:minerva.usc.es:10347/27591 (2023-07-10T06:11:27Z)
Almobydeen, Shahed
author
Ríos Viqueira, José Ramón
author
Lama Penín, Manuel
author
2022
This paper presents the design of a GeoSPARQL query processing solution for scientific raster array data, called GeoLD. The solution enables the implementation of SPARQL endpoints on top of OGC standard Web Coverage Processing Services (WCPS). Thus, the semantic querying of scientific raster data is supported without the need for specific raster array functions in the language. To achieve this, Coverage-to-RDF mapping solutions were first defined, based on the well-known W3C standard mappings for relational data. Next, the SPARQL algebra is extended with a new operator that delegates part of the GeoSPARQL query to WCPS services. Query optimization replaces those parts of the SPARQL query plan that may be delegated to a WCPS service by instances of this new WCPS operator. A first prototype has been implemented by extending the ARQ SPARQL query engine of Apache Jena. Petascope was used as the WCPS implementation on top of the Rasdaman raster array database. An initial evaluation with real meteorological data shows, as initially expected, that the approach outperforms an existing reference relational-database-based GeoSPARQL implementation.
Computers & Geosciences 159 (2022) 105023
http://hdl.handle.net/10347/27591
10.1016/j.cageo.2021.105023
0098-3004
Geospatial linked data
Scientific linked data
Array linked data
Raster linked data
GeoSPARQL
Spatial query processing
GeoSPARQL query support for scientific raster array data
oai:minerva.usc.es:10347/24655 (2023-07-10T06:17:47Z)
Pichel Campos, José Ramón
author
Gamallo Otero, Pablo
author
Alegría, Iñaki
author
Neves, Marco
author
2020
The aim of this paper is to apply a corpus-based methodology, based on the measure of perplexity, to automatically calculate the cross-lingual language distance between historical periods of three languages. The three historical corpora have been constructed and collected with the closest spelling to the original, on a balanced basis of fiction and non-fiction. This methodology has been applied to measure the historical distance of Galician with respect to Portuguese and Spanish, from the Middle Ages to the end of the 20th century, both in the original spelling and in automatically transcribed spelling. The quantitative results are contrasted with hypotheses extracted from experts in historical linguistics. Results show that Galician and Portuguese are varieties of the same language in the Middle Ages and that Galician converges and diverges with Portuguese and Spanish since the late 19th century. In this process, orthography plays a relevant role. It should be pointed out that the method is unsupervised and can be applied to other languages.
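The perplexity-based distance can be sketched with a character bigram model: train on one corpus, then measure how surprised the model is by another. Add-one smoothing and the toy strings below are heavy simplifications of the paper's setup.

```python
# Character-bigram perplexity: lower perplexity = the test text looks
# more like the training text. Training/test strings are toy examples.
import math
from collections import Counter

def train_bigrams(text):
    return Counter(zip(text, text[1:])), Counter(text), set(text)

def perplexity(text, model):
    bigrams, unigrams, vocab = model
    log_sum, n = 0.0, 0
    for a, b in zip(text, text[1:]):
        p = (bigrams[(a, b)] + 1) / (unigrams[a] + len(vocab))  # add-one
        log_sum += math.log(p)
        n += 1
    return math.exp(-log_sum / n)

model = train_bigrams("a terra e o mar")
close = perplexity("a terra e o ar", model)   # similar spelling
far = perplexity("zqxwvk", model)             # alien character sequence
print(close < far)  # True
```

Computing such perplexities in both directions between period corpora, and averaging, yields the symmetric distance used to track how the languages converge and diverge over time.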
José Ramom Pichel, Pablo Gamallo, Iñaki Alegria & Marco Neves (2020) A Methodology to Measure the Diachronic Language Distance between Three Languages Based on Perplexity, Journal of Quantitative Linguistics, DOI: 10.1080/09296174.2020.1732177
0929-6174
http://hdl.handle.net/10347/24655
10.1080/09296174.2020.1732177
1744-5035
A Methodology to Measure the Diachronic Language Distance between Three Languages Based on Perplexity
oai:minerva.usc.es:10347/23823 (2023-07-10T06:11:31Z)
Martín Rodilla, Patricia
author
Hattori, Marcia L.
author
González Pérez, César
author
2019
Anthropological, archaeological, and forensic studies situate enforced disappearance as a strategy associated with the Brazilian military dictatorship (1964–1985), which left hundreds of persons with neither their identity nor their cause of death established. Their forensic reports are the only existing clue for identifying people and detecting possible crimes associated with them. The exchange of information among institutions about the identities of disappeared people was not a common practice. Thus, their analysis requires unsupervised techniques, mainly because their contextual annotation is extremely time-consuming, difficult to obtain, and highly dependent on the annotator. The use of these techniques allows researchers to assist identification and analysis in four areas: common causes of death, relevant body locations, personal-belongings terminology, and correlations between actors such as doctors and police officers involved in the disappearances. This paper analyzes almost 3000 textual reports of missing persons in São Paulo city during the Brazilian dictatorship through unsupervised information extraction algorithms for Portuguese, identifying named entities and relevant terminology associated with these four criteria. The analysis allowed us to observe terminological patterns relevant to people identification (e.g., the presence of rings or similar personal belongings) and to automate the study of correlations between actors. The proposed system acts as a first classification and indexing middleware for the reports and represents a feasible system that can assist researchers working in pattern search among autopsy reports.
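A first pass in the spirit of such a pipeline can be sketched as gazetteer matching over report text: count mentions of belongings terms and of actor names. The terms, patterns, and reports below are invented English stand-ins for the Portuguese originals; the paper's system uses proper NER and terminology extraction.

```python
# Toy extraction pass: match a small gazetteer of belongings terms and a
# simple actor pattern, then count occurrences across reports.
import re
from collections import Counter

BELONGINGS = {"ring", "watch", "necklace"}   # invented gazetteer

def extract(reports):
    term_counts, actor_counts = Counter(), Counter()
    for text in reports:
        for tok in re.findall(r"[a-z]+", text.lower()):
            if tok in BELONGINGS:
                term_counts[tok] += 1
        for actor in re.findall(r"Dr\. [A-Z][a-z]+", text):
            actor_counts[actor] += 1
    return term_counts, actor_counts

reports = ["Body found with a silver ring. Examined by Dr. Souza.",
           "A watch and a ring were recovered. Signed: Dr. Souza."]
terms, actors = extract(reports)
print(terms["ring"], actors["Dr. Souza"])  # 2 2
```

Co-occurrence counts like these are the raw material for the correlation analysis between actors that the abstract describes.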
Martin-Rodilla, P.; Hattori, M.L.; Gonzalez-Perez, C. Assisting Forensic Identification through Unsupervised Information Extraction of Free Text Autopsy Reports: The Disappearances Cases during the Brazilian Military Dictatorship. Information 2019, 10, 231
http://hdl.handle.net/10347/23823
10.3390/info10070231
2078-2489
Information extraction
Named entity recognition
Terminology extraction
Autopsy reports
Assisting Forensic Identification through Unsupervised Information Extraction of Free Text Autopsy Reports: The Disappearances Cases during the Brazilian Military Dictatorship
oai:minerva.usc.es:10347/21248 (2023-07-10T06:16:53Z)
Seoane Iglesias, Natalia
author
Nagy, Daniel
author
Indalecio Fernández, Guillermo
author
Espiñeira Deus, Gabriel
author
Kalna, Karol
author
García Loureiro, Antonio Jesús
author
2019
An in-house-built three-dimensional multi-method semi-classical/classical toolbox has been developed to characterise the performance, scalability, and variability of state-of-the-art semiconductor devices. To demonstrate the capabilities of the toolbox, a 10 nm gate length Si gate-all-around field-effect transistor is selected as a benchmark device. The device exhibits an off-current (IOFF) of 0.03 μA/μm and an on-current (ION) of 1770 μA/μm, with an ION/IOFF ratio of 6.63×10^4, a value 27% larger than that of a 10.7 nm gate length Si FinFET. The device subthreshold swing (SS) is 71 mV/dec, not far from the ideal limit of 60 mV/dec. The threshold voltage standard deviation due to the statistical combination of four sources of variability (line- and gate-edge roughness, metal grain granularity, and random dopants) is 55.5 mV, a value noticeably larger than that of the equivalent FinFET (30 mV). Finally, using a fluctuation sensitivity map, we establish which regions of the device are the most sensitive to line-edge roughness and metal grain granularity variability effects. The on-current of the device is strongly affected by any line-edge roughness taking place near the source-gate junction or by metal grains localised between the middle of the gate and the proximity of the gate-source junction.
Seoane, N.; Nagy, D.; Indalecio, G.; Espiñeira, G.; Kalna, K.; García-Loureiro, A. A Multi-Method Simulation Toolbox to Study Performance and Variability of Nanowire FETs. Materials 2019, 12, 2391
http://hdl.handle.net/10347/21248
10.3390/ma12152391
1996-1944
Nanowire field-effect transistors
Variability effects
Monte Carlo
Schrödinger based quantum corrections
Drift-diffusion
A Multi-Method Simulation Toolbox to Study Performance and Variability of Nanowire FETs
oai:minerva.usc.es:10347/26638 (2023-07-10T06:11:26Z)
Estévez Casado, Fernando
author
Lema Pais, Dylan
author
Fernández Criado, Marcos
author
Iglesias Rodríguez, Roberto
author
Vázquez Regueiro, Carlos
author
Barro Ameneiro, Senén
author
2021
Smart devices, such as smartphones, wearables, robots, and others, can collect vast amounts of data from their environment. This data is suitable for training machine learning models, which can significantly improve their behavior and, therefore, the user experience. Federated learning is a young and popular framework that allows multiple distributed devices to train deep learning models collaboratively while preserving data privacy. Nevertheless, this approach may not be optimal for scenarios where the data distribution is non-identical among the participants or changes over time, causing what is known as concept drift. Little research has yet been done in this field, but this kind of situation is quite frequent in real life and poses new challenges to both continual and federated learning. Therefore, in this work, we present a new method, called Concept-Drift-Aware Federated Averaging (CDA-FedAvg). Our proposal is an extension of the most popular federated algorithm, Federated Averaging (FedAvg), enhancing it for continual adaptation under concept drift. We empirically demonstrate the weaknesses of regular FedAvg and prove that CDA-FedAvg outperforms it in this type of scenario.
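The FedAvg aggregation step that CDA-FedAvg extends can be sketched in a few lines: the server averages client model weights in proportion to their local sample counts. Plain lists stand in for network parameters here; drift detection and adaptation sit on top of this step and are not shown.

```python
# FedAvg server-side aggregation: weighted average of client parameter
# vectors, weighted by each client's number of training samples.
def fed_avg(client_weights, client_sizes):
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

clients = [[1.0, 2.0], [3.0, 4.0]]
sizes = [1, 3]                    # second client holds 3x the data
print(fed_avg(clients, sizes))    # [2.5, 3.5]
```

Under concept drift the weakness is visible even at this level: a client whose local distribution has shifted keeps pulling the average toward stale behavior, which is what a drift-aware variant must detect and compensate for.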
Casado, F.E., Lema, D., Criado, M.F. et al. Concept drift detection and adaptation for federated and continual learning. Multimed Tools Appl (2021). https://doi.org/10.1007/s11042-021-11219-x
1380-7501
http://hdl.handle.net/10347/26638
10.1007/s11042-021-11219-x
1573-7721
Federated learning
Continual learning
Nonstationarity
Concept drift
Federated Averaging
Catastrophic forgetting
Rehearsal
Concept drift detection and adaptation for federated and continual learning
oai:minerva.usc.es:10347/17702 (2023-07-10T06:16:41Z)
Quintía Vidal, Pablo
author
Iglesias Rodríguez, Roberto
author
Rodríguez González, Miguel Ángel
author
Vázquez Regueiro, Carlos
author
Valdés Villarrubia, Fernando
author
2012
This article describes a proposal to achieve fast robot learning from the robot's interaction with its environment. Our proposal is suitable for continual learning procedures, as it tries to limit the instability that appears every time the robot encounters a new situation it has not seen before. Moreover, the user does not have to establish a degree of exploration (as is usual in reinforcement learning), which would otherwise hinder continual learning. Our proposal uses an ensemble of learners able to combine dynamic programming and reinforcement learning to predict when the robot will make a mistake. This information is used to dynamically evolve a set of control policies that determine the robot's actions.
Quintía Vidal, P., Iglesias Rodríguez, R., Rodríguez González, M., Vázquez Regueiro, C., & Valdés Villarrubia, F. (2012). Learning in real robots from environment interaction. Journal of Physical Agents, 6(1), 43-51. doi:https://doi.org/10.14198/JoPha.2012.6.1.06
1888-0258
http://hdl.handle.net/10347/17702
10.14198/JoPha.2012.6.1.06
Continuous robot learning
Robot adaptation
Learning from environment interaction
Reinforcement learning
Learning in real robots from environment interaction
oai:minerva.usc.es:10347/24654 2023-07-10T06:18:09Z
dc
González Bascoy, Pedro
author
Quesada Barriuso, Pablo
author
Blanco Heras, Dora
author
Argüello Pedreira, Francisco Santiago
author
2019
The high resolution of the hyperspectral remote sensing images available allows the detailed analysis of even small spatial structures. As a consequence, the study of techniques to efficiently extract spatial information is a very active research area. In this paper, we propose a novel denoising wavelet-based profile for the extraction of spatial information that does not require parameters fixed by the user. Over each band obtained by a wavelet-based feature extraction technique, a denoising profile (DP) is built through the recursive application of discrete wavelet transforms followed by a thresholding process. Each component of the DP consists of features reconstructed by recursively applying inverse wavelet transforms to the thresholded coefficients. Several thresholding methods are explored. In order to show the effectiveness of the extended DP (EDP), we propose a classification scheme based on the computation of the EDP and supervised classification by extreme learning machine. The obtained results are compared to other state-of-the-art methods based on profiles in the literature. An additional study of behavior in the presence of added noise is also performed, showing the high reliability of the proposed EDP
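The wavelet-plus-thresholding building block behind such a denoising profile can be illustrated with a toy sketch: one level of a Haar discrete wavelet transform, soft thresholding of the detail coefficients, and reconstruction. This is a simplified 1D illustration with invented data, not the paper's EDP pipeline:

```python
import math

def haar_forward(x):
    """One level of the (orthonormal) Haar DWT on an even-length signal."""
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def soft_threshold(coeffs, t):
    """Shrink each coefficient toward zero by t (classic denoising step)."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def haar_inverse(approx, detail):
    """Invert one Haar DWT level."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / s)
        out.append((a - d) / s)
    return out

signal = [4.0, 4.0, 4.0, 4.0, 8.0, 8.0, 8.0, 8.0]
a, d = haar_forward(signal)
denoised = haar_inverse(a, soft_threshold(d, 0.5))
```

A full profile would stack several such decomposition levels, with one reconstructed component per level.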
Pedro G. Bascoy, Pablo Quesada-Barriuso, Dora B. Heras and Francisco Argüello (2019) Wavelet-Based Multicomponent Denoising Profile for the Classification of Hyperspectral Images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12 (2), 722 - 733. Doi: 10.1109/JSTARS.2019.2892990
1939-1404
http://hdl.handle.net/10347/24654
10.1109/JSTARS.2019.2892990
2151-1535
Classification
Denoising
Profile
Remote sensing
Wavelet transform
Wavelet-Based Multicomponent Denoising Profile for the Classification of Hyperspectral Images
oai:minerva.usc.es:10347/17700 2023-07-10T06:16:45Z
dc
Comesaña Figueroa, Enrique
author
Aldegunde Villar, Manuel Alejo
author
García Loureiro, Antonio Jesús
author
2011
Simulations of the tunneling current as a function of voltage and temperature for a Zener diode where both sides are ferromagnetic have been performed. The current is evaluated as a function of the applied bias, the magnetization, and the temperature of the diode. The tunneling magnetoresistance is also analyzed. Mn-doped GaAs parameters were used to simulate a highly asymmetric doped diode, which leads to a large difference in the magnetization values between the p and n sides
Comesaña, E., Aldegunde, M., & Garcia-Loureiro, A. (2011). Spin-polarized transport in a full magnetic pn tunnel junction. Applied Physics Letters, 98(19), 192507. doi: 10.1063/1.3586770
0003-6951
http://hdl.handle.net/10347/17700
10.1063/1.3586770
1077-3118
Spin-polarized transport in a full magnetic pn tunnel junction
oai:minerva.usc.es:10347/17710 2023-07-10T06:16:15Z
dc
Ordóñez Iglesias, Álvaro
author
Argüello Pedreira, Francisco Santiago
author
Blanco Heras, Dora
author
2018
Image registration is a common operation in any type of image processing, especially in remote sensing images. Since the publication of the scale-invariant feature transform (SIFT) method, several algorithms based on feature detection have been proposed. In particular, KAZE builds the scale space using a nonlinear diffusion filter instead of Gaussian filters. Nonlinear diffusion filtering allows applying a controlled blur while the important structures of the image are preserved. Hyperspectral images contain a large amount of spatial and spectral information that can be used to perform a more accurate registration. This article presents HSI-KAZE, a method to register hyperspectral remote sensing images based on KAZE but considering the spectral information. The proposed method combines the information of a set of preselected bands, and it adapts the keypoint descriptor and the matching stage to take the spectral information into account. The method is adequate to register images in extreme situations in which the scale between them is very different. The effectiveness of the proposed algorithm has been tested on real images taken on different dates and presenting different types of changes. The experimental results show that the method is robust, achieving image registrations with scales of up to 24.0×
Ordóñez, Á.; Argüello, F.; Heras, D.B. Alignment of Hyperspectral Images Using KAZE Features. Remote Sens. 2018, 10, 756
http://hdl.handle.net/10347/17710
10.3390/rs10050756
2072-4292
Hyperspectral data
Image registration
KAZE features
Remote sensing
Alignment of Hyperspectral Images Using KAZE Features
oai:minerva.usc.es:10347/17723 2020-11-11T12:10:10Z
dc
Pérez Montes, Diego
author
Añel Cabanelas, Juan Antonio
author
Fernández Pena, Anselmo Tomás
author
Uhe, Peter
author
Wallom, David C. H.
author
2017
Volunteer or crowd computing is becoming increasingly popular for solving complex research problems from an increasingly diverse range of areas. The majority of these have been built using the Berkeley Open Infrastructure for Network Computing (BOINC) platform, which provides a range of different services to manage all computation aspects of a project. The BOINC system is ideal in those cases where not only does the research community involved need low-cost access to massive computing resources but also where there is a significant public interest in the research being done.
We discuss the way in which cloud services can help BOINC-based projects to deliver results in a fast, on-demand manner. This is difficult to achieve using volunteers, and at the same time, using scalable cloud resources for short, on-demand projects can optimize the use of the available resources. We show how this design can be used as an efficient distributed computing platform within the cloud, and outline new approaches that could open up new possibilities in this field, using Climateprediction.net (http://www.climateprediction.net/) as a case study
Montes, D., Añel, J. A., Pena, T. F., Uhe, P., and Wallom, D. C. H.: Enabling BOINC in infrastructure as a service cloud system, Geosci. Model Dev., 10, 811-826, https://doi.org/10.5194/gmd-10-811-2017, 2017
1991-959X
http://hdl.handle.net/10347/17723
10.5194/gmd-10-811-2017
1991-9603
Enabling BOINC in infrastructure as a service cloud system
oai:minerva.usc.es:10347/30794 2023-07-10T06:11:51Z
dc
Esmorís Pena, Alberto Manuel
author
López Vilariño, David
author
Arango, David F.
author
Varela García, Francisco Alberto
author
Cabaleiro Domínguez, José Carlos
author
Fernández Rivera, Francisco Manuel
author
2023
Light detection and ranging (LiDAR) scanning in urban environments leads to accurate and dense three-dimensional point clouds where the different elements in the scene can be precisely characterized. In this paper, two LiDAR-based algorithms that complement each other are proposed. The first one is a novel profiling method robust to noise and obstacles. It accurately characterizes the curvature, the slope, the height of the sidewalks, obstacles, and defects such as potholes. It was effective for 48 of 49 detected zebra crossings, even in the presence of pedestrians or vehicles in the crossing zone. The second one is a detailed quantitative summary of the state of the zebra crossing. It contains information about the location, the geometry, and the road marking. Coarse grain statistics are more prone to obstacle-related errors and are only fully reliable for 18 zebra crossings free from significant obstacles. However, all the anomalous statistics can be analyzed by looking at the associated profiles. The results can help in the maintenance of urban roads. More specifically, they can be used to improve the quality and safety of pedestrian routes
Esmorís, A. M., Vilariño, D. L., Arango, D. F., Varela-García, F.-A., Cabaleiro, J. C., & Rivera, F. F. (2023). Characterizing zebra crossing zones using LiDAR data. Computer-Aided Civil and Infrastructure Engineering, 1–22. https://doi.org/10.1111/mice.12968
1093-9687
http://hdl.handle.net/10347/30794
10.1111/mice.12968
1467-8667
Zebra crossing zones
Light detection and ranging
LiDAR
Safety of pedestrians
Characterizing zebra crossing zones using LiDAR data
oai:minerva.usc.es:10347/17717 2022-11-15T12:16:53Z
dc
Méndez Fernández, Roi
author
Otero, Antonio
author
Jarque, Samuel
author
Flores González, Julián Carlos
author
2012
En este artículo se presenta el desarrollo de un sistema para la visualización interactiva de la reconstrucción virtual de los instrumentos del Pórtico de la Gloria. Se explica el proceso seguido para la creación de un conjunto de hardware y software específicos y centrados en el usuario que, a través de una interfaz tangible, permiten la interacción con reproducciones 3D altamente realistas de los instrumentos del Pórtico. El sistema, utilizando técnicas de visión por computador para controlar la interacción persona-ordenador, permite a un usuario interactuar con modelos 3D de una forma intuitiva y sencilla. De este modo se consigue hacer accesibles estos modelos a usuarios no expertos haciendo del sistema una opción ideal para su exposición en museos
This article presents the development of a system to perform the interactive visualization of the virtual reconstruction of the instruments of the Portico de la Gloria. We describe the process followed for creating a specific set of hardware and software centered on the user that, through a tangible interface, allows interaction with highly realistic 3D views of the instruments of the Portico. The system, using computer vision techniques to control human-computer interaction, allows a user to interact with 3D models in an intuitive and easy way. This will make these models accessible to non-experts, making the system an ideal choice for its exhibition in museums
Méndez, R., Otero, A., Jarque, S., & Flores, J. (2012). Exploración en tiempo real de la reconstrucción virtual de los instrumentos del Pórtico de la Gloria. Virtual Archaeology Review, 3(6), 49. doi: 10.4995/var.2012.4440
http://hdl.handle.net/10347/17717
10.4995/var.2012.4440
1989-9947
IPO
Tiempo real
Pórtico de la Gloria
HCI
Real time
Exploración en tiempo real de la reconstrucción virtual de los instrumentos del Pórtico de la Gloria
oai:minerva.usc.es:10347/24645 2023-07-10T06:17:19Z
dc
Ruiz, Ana
author
Seoane Iglesias, Natalia
author
Claramunt, Sergi
author
García Loureiro, Antonio Jesús
author
Porti, Marc
author
Nafria, Montserrat
author
2019
In this work, a more realistic approximation based on 2D nanoscale experimental data obtained on a metal layer is presented to investigate the impact of the metal gate polycrystallinity on MOSFET variability. The nanoscale data (obtained with a Kelvin Probe Force Microscope, KPFM) were introduced into a device simulator to analyze the effect of TiN metal gate work function (WF) fluctuations on the MOSFET electrical characteristics. The results demonstrate that the device characteristics are affected not only by the WF fluctuations, but also by their spatial distribution, which is especially relevant in very small devices. The effect on these characteristics of the spatial distribution of such fluctuations over the gate area is also evaluated
Microelectronic Engineering, Volume 216, 15 August 2019, 111048
0167-9317
http://hdl.handle.net/10347/24645
10.1016/j.mee.2019.111048
Combined nanoscale KPFM characterization and device simulation for the evaluation of the MOSFET variability related to metal gate workfunction fluctuations
oai:minerva.usc.es:10347/24651 2023-07-10T06:17:23Z
dc
Acción Montes, Álvaro
author
Argüello Pedreira, Francisco Santiago
author
Blanco Heras, Dora
author
2019
Morphological profiles are a common approach for extracting spatial information from remote sensing hyperspectral images by extracting structural features. Other profiles can be built based on different approaches, such as differential morphological profiles or attribute profiles. Another technique used for characterizing spatial information in the images at different scales is based on computing profiles relying on edge-preserving filters such as anisotropic diffusion filters. Their main advantage is the preservation of the distinctive morphological features of the images, at the cost of an iterative calculation. In this article, the high computational cost associated with the construction of anisotropic diffusion profiles (ADPs) is greatly reduced. In particular, we propose a low-cost computational approach for computing ADPs on Nvidia GPUs as well as a detailed characterization of the method, comparing it in terms of accuracy and structural similarity to other existing alternatives
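The edge-preserving diffusion that such a profile stacks at increasing iteration counts can be sketched with a minimal 1D Perona-Malik iteration. This is a toy illustration on an invented signal (the real ADPs operate on full hyperspectral images, on GPU), with assumed values for the conduction parameter and time step:

```python
import math

def diffuse(signal, iterations, kappa=2.0, dt=0.2):
    """Explicit 1D Perona-Malik diffusion with fixed boundary values."""
    u = list(signal)
    for _ in range(iterations):
        new = u[:]
        for i in range(1, len(u) - 1):
            east, west = u[i + 1] - u[i], u[i - 1] - u[i]
            # conduction shrinks near strong gradients, so edges survive
            c_e = math.exp(-(east / kappa) ** 2)
            c_w = math.exp(-(west / kappa) ** 2)
            new[i] = u[i] + dt * (c_e * east + c_w * west)
        u = new
    return u

# A profile is simply the signal observed at several diffusion scales:
step_edge = [0.0, 0.1, 0.0, 10.0, 10.1, 10.0]
profile = [diffuse(step_edge, t) for t in (0, 5, 20)]
```

The small 0.1 wiggles are smoothed away with increasing iterations while the large step between the two halves is preserved, which is the property that makes the profile useful for classification.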
Álvaro Acción, Francisco Argüello and Dora B. Heras (2019) Extended Anisotropic Diffusion Profiles in GPU for Hyperspectral Imagery. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12 (12), 4964-4976. Doi: 10.1109/JSTARS.2019.2939857
1939-1404
http://hdl.handle.net/10347/24651
10.1109/JSTARS.2019.2939857
2151-1535
Anisotropic diffusion profile
CUDA
Hyperspectral
Nonlinear diffusion
Remote sensing
Extended Anisotropic Diffusion Profiles in GPU for Hyperspectral Imagery
oai:minerva.usc.es:10347/24652 2023-07-10T06:17:54Z
dc
Outes Castro, Celia
author
Fernández, Eduardo F.
author
Seoane Iglesias, Natalia
author
Almonacid, Florencia
author
García Loureiro, Antonio Jesús
author
2020
Ultra-high concentrator photovoltaic (UHCPV) systems, usually defined as CPV systems exceeding 1000 suns, are signalled as one of the most promising research avenues to produce a new generation of high-efficiency and low-cost CPV systems. However, the structure of current concentrator solar cells prevents their development due to the unavoidable series resistance losses at such elevated concentration ratios. In this work, we investigate the performance of the so-called vertical-tunnel-junction (VTJ) cell, recently introduced by the authors, by using advanced TCAD simulations. In particular, we carry out an optimisation procedure of the key parameters that affect its performance and conduct a deep investigation of the impact of the main recombination mechanisms and of sun concentration up to 10,000 suns. The results indicate that the performance of the novel structure is not significantly affected by these two factors. A record efficiency of 32.2% at 10,000 suns has been found. This represents a promising way to obtain state-of-the-art efficiencies above 30% for single-band-gap cells, and offers a new route towards the development of competitive CPV systems operating at ultra-high concentration fluxes
Solar Energy, Volume 203, June 2020, Pages 136-144
0038-092X
http://hdl.handle.net/10347/24652
10.1016/j.solener.2020.04.029
Vertical solar cells
Series resistance
Gallium arsenide (GaAs)
Tunnel diode
Concentrator photovoltaics
Numerical optimisation and recombination effects on the vertical-tunnel-junction (VTJ) GaAs solar cell up to 10,000 suns
oai:minerva.usc.es:10347/24656 2023-07-10T06:17:51Z
dc
Piñeiro Pomar, César Alfredo
author
Martínez Castaño, Rodrigo
author
Pichel Campos, Juan Carlos
author
2020
Most of the relevant Big Data processing frameworks (e.g., Apache Hadoop, Apache Spark) only support JVM (Java Virtual Machine) languages by default. In order to support non-JVM languages, subprocesses are created and connected to the framework using system pipes. With this technique, it becomes impossible to manage the data at thread level, and an important loss in performance arises. To address this problem we introduce Ignis, a new Big Data framework that benefits from an elegant way to create multi-language executors managed through an RPC system. As a consequence, the new system is able to natively execute applications implemented in non-JVM languages. In addition, Ignis allows users to combine in the same application the benefits of implementing each computational task in the best-suited programming language without additional overhead. The system runs completely inside Docker containers, isolating the execution environment from the physical machine. A comparison with Apache Spark shows the advantages of our proposal in terms of performance and scalability
Future Generation Computer Systems, Volume 105, April 2020, Pages 705-716
0167-739X
http://hdl.handle.net/10347/24656
10.1016/j.future.2019.12.052
Big data
Multi-language
Performance
Scalability
Container
Ignis: An efficient and scalable multi-language Big Data framework
oai:minerva.usc.es:10347/17712 2020-01-31T13:38:39Z
dc
Souto Bayarri, José Miguel
author
Masip, Lambert Raúl
author
Couto, Miguel
author
Suárez Cuenca, Jorge Juan
author
Martínez, Amparo
author
García Tahoces, Pablo
author
Carreira Villamor, José Martín
author
Croisille, Pierre
author
2013
The purpose of this study was to evaluate the performance of a semiautomatic segmentation method for the anatomical and functional assessment of both ventricles from cardiac cine magnetic resonance (MR) examinations, reducing user interaction to a “mouse-click”. Fifty-two patients with cardiovascular diseases were examined using a 1.5-T MR imaging unit. Several parameters of both ventricles, such as end-diastolic volume (EDV), end-systolic volume (ESV) and ejection fraction (EF), were quantified by an experienced operator using the conventional method based on manually-defined contours, as the standard of reference; and a novel semiautomatic segmentation method based on edge detection, iterative thresholding and region growing techniques, for evaluation purposes. No statistically significant differences were found between the two measurement values obtained for each parameter (p > 0.05). Correlation to estimate right ventricular function was good (r > 0.8) and turned out to be excellent (r > 0.9) for the left ventricle (LV). Bland-Altman plots revealed acceptable limits of agreement between the two methods (95%). Our study findings indicate that the proposed technique allows a fast and accurate assessment of both ventricles. However, further improvements are needed to equal results achieved for the right ventricle (RV) using the conventional methodology
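The region-growing step mentioned in the segmentation pipeline can be illustrated with a minimal sketch: starting from a single seed (the "mouse-click"), the region grows over neighbouring pixels whose intensity stays within a tolerance. The image, seed, and tolerance below are invented, and this is a generic textbook version, not the paper's algorithm:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a 4-connected region from seed over pixels within tol of the seed value."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                if abs(image[nr][nc] - seed_val) <= tol:
                    region.add((nr, nc))
                    queue.append((nr, nc))
    return region

# Toy "ventricle" of bright pixels in a dark background:
image = [[9, 9, 1],
         [9, 8, 1],
         [1, 1, 1]]
bright = region_grow(image, (0, 0), tol=2)   # grows over the 9/8 block only
```

In the paper's pipeline this step follows edge detection and iterative thresholding, which supply the contour constraints and the intensity range.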
Souto, M.; Masip, L.R.; Couto, M.; Suárez-Cuenca, J.J.; Martínez, A.; Tahoces, P.G.; Carreira, J.M.; Croisille, P. Quantification of Right and Left Ventricular Function in Cardiac MR Imaging: Comparison of Semiautomatic and Manual Segmentation Algorithms. Diagnostics 2013, 3, 271-282.
http://hdl.handle.net/10347/17712
10.3390/diagnostics3020271
2075-4418
Cardiac cine magnetic resonance imaging (MRI)
Segmentation
Ejection fraction (EF)
Right ventricular function
Left ventricular function
Quantification of Right and Left Ventricular Function in Cardiac MR Imaging: Comparison of Semiautomatic and Manual Segmentation Algorithms
oai:minerva.usc.es:10347/17727 2020-11-11T12:15:12Z
dc
Atamuratov, Atabek E.
author
Abdikarimov, A.
author
Khalilloev, Mahkam M.
author
Atamuratova, Z. A.
author
Rahmanov, R.
author
García Loureiro, Antonio Jesús
author
Yusupov, Ahmed
author
2017
Short channel effects, such as DIBL, are compared for SOI-FinFETs with different silicon body geometries. The original device considered was straight, without narrowing at the top, and a set of devices exhibit the mentioned narrowing, up to the extreme case where the top of the gate has no surface and so the body cross-section is essentially a triangle. We have studied five different variations from the original geometry of a 25 nm gate length SOI-FinFET device with a 1.5 nm thick oxide layer. The p-type channel had a doping concentration of 10¹⁵ cm⁻³ and the n-type S/D areas are doped at concentrations of 10²⁰ cm⁻³. The silicon body of the device had a height of 30 nm and a width of 12 nm. Simulation results show the source-drain barrier decreasing with increasing upper body thickness. The DIBL effect of the considered FinFETs depends on the upper body thickness, tending to increase with thicker upper body widths. A comparison of two devices with different shapes but the same cross-sectional area shows that the relationship depends mainly on the shape rather than on the cross-sectional area of the device body
Atamuratov, A., Abdikarimov, A., Khalilloev, M., Atamuratova, Z.A., Rahmanov, R., García Loureiro, A. & Yusupov, A. (2017). Simulation of DIBL effect in 25 nm SOI-FinFET with the different body shapes. Nanosystems: Physics, Chemistry, Mathematics, 8 (1), P. 71–74
2220-8054
http://hdl.handle.net/10347/17727
10.17586/2220-8054-2017-8-1-71-74
2305-7971
FinFET
DIBL
Potential barrier
Simulation of DIBL effect in 25 nm SOI-FinFET with the different body shapes
oai:minerva.usc.es:10347/17713 2020-01-31T13:26:19Z
dc
Quintas González, Víctor
author
Prada López, Isabel
author
Carreira, María J.
author
Suárez Quintanilla, David
author
Balsa Castro, Carlos
author
Tomás Carmona, Inmaculada
author
2017
Currently, there is little evidence on the in situ antibacterial activity of essential oils (EO) without alcohol. This study aimed to evaluate in situ the substantivity and antiplaque effect on the plaque-like biofilm (PL-biofilm) of two solutions, a traditional formulation that contains EO with alcohol (T-EO) and an alcohol-free formulation of EO (Af-EO). Eighteen healthy adults performed a single mouthwash of T-EO, Af-EO, and sterile water (WATER) after wearing an individualized disk-holding splint for 2 days. The bacterial viability (BV) and thickness of the PL-biofilm were quantified at baseline, 30 s, and 1, 3, 5, and 7 h post-rinsing (Test 1). Subsequently, each volunteer wore the splint for 4 days, applying two daily mouthwashes of T-EO, Af-EO, and WATER. The BV, thickness, and covering grade (CG) of the PL-biofilm were quantified (Test 2). Samples were analyzed by confocal laser scanning microscopy after staining with the LIVE/DEAD® BacLight™ solution. To conduct the computations of the BV automatically, a Matlab toolbox called Dentius Biofilm was developed. In test 1, both EO antiseptics had a similar antibacterial effect, reducing BV after a single rinse compared to the WATER, and keeping it below baseline levels up to 7 h post-rinse (P < 0.001). The mean thickness of the PL-biofilm after rinsing was not affected by any of the EO formulations and ranged from 18.58 to 20.19 μm. After 4 days, the T-EO and Af-EO solutions were significantly more effective than the WATER, reducing the BV, thickness, and CG of the PL-biofilm (P < 0.001). Although both EO antiseptics presented a similar bactericidal activity, the Af-EO rinses led to more significant reductions in the thickness and CG of the PL-biofilm than the T-EO rinses (thickness = 7.90 vs. 9.92 μm, P = 0.012; CG = 33.36 vs. 46.61%, P = 0.001).
In conclusion, both essential oils antiseptics had very high immediate antibacterial activity and substantivity in situ on the 2-day PL-biofilm after a single mouthwash. In the 4-day PL-biofilm, both essential oils formulations demonstrated a very good antiplaque effect in situ, although the alcohol-free formula performed better at reducing the biofilm thickness and covering grade
Quintas V, Prada-López I, Carreira MJ, Suárez-Quintanilla D, Balsa-Castro C and Tomás I (2017) In Situ Antibacterial Activity of Essential Oils with and without Alcohol on Oral Biofilm: A Randomized Clinical Trial. Front. Microbiol. 8:2162. doi: 10.3389/fmicb.2017.02162
http://hdl.handle.net/10347/17713
10.3389/fmicb.2017.02162
1664-302X
Anti-infective agents
Local
Biofilm
Dental plaque
Essential oils
Microscopy
Fluorescence
In Situ Antibacterial Activity of Essential Oils with and without Alcohol on Oral Biofilm: A Randomized Clinical Trial
oai:minerva.usc.es:10347/21839 2023-07-10T06:17:49Z
dc
Kobus, Robin
author
Abuín Mosquera, José Manuel
author
Müller, André
author
Hellmann, Sören Lukas
author
Pichel Campos, Juan Carlos
author
Fernández Pena, Anselmo Tomás
author
Hildebrandt, Andreas
author
Hankeln, Thomas
author
Schmidt, Bertil
author
2020
Background
All-Food-Sequencing (AFS) is an untargeted metagenomic sequencing method that allows for the detection and quantification of food ingredients including animals, plants, and microbiota. While this approach avoids some of the shortcomings of targeted PCR-based methods, it requires the comparison of sequence reads to large collections of reference genomes. The steadily increasing amount of available reference genomes establishes the need for efficient big data approaches.
Results
We introduce an alignment-free k-mer based method for detection and quantification of species composition in food and other complex biological matters. It is orders-of-magnitude faster than our previous alignment-based AFS pipeline. In comparison to the established tools CLARK, Kraken2, and Kraken2+Bracken it is superior in terms of false-positive rate and quantification accuracy. Furthermore, the usage of an efficient database partitioning scheme allows for the processing of massive collections of reference genomes with reduced memory requirements on a workstation (AFS-MetaCache) or on a Spark-based compute cluster (MetaCacheSpark).
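The alignment-free k-mer idea can be illustrated with a toy sketch: each reference genome is indexed by its set of k-mers, and a read is assigned to the reference sharing the most k-mers with it. The sequences, names, and the exhaustive set-based index below are invented for illustration; they are not MetaCache's hashed data structures:

```python
def kmers(seq, k):
    """Return the set of all length-k substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def classify(read, references, k=4):
    """Assign a read to the reference with the highest k-mer overlap."""
    index = {name: kmers(seq, k) for name, seq in references.items()}
    read_kmers = kmers(read, k)
    return max(index, key=lambda name: len(index[name] & read_kmers))

# Hypothetical mini 'reference genomes' for two food ingredients:
refs = {"cow": "ACGTACGTTTGA", "maize": "GGGCCCAATTGG"}
hit = classify("ACGTACGT", refs)   # shares k-mers only with "cow"
```

Counting hits per reference over many reads then yields the quantification of the species composition; the database partitioning described above spreads such an index over memory-limited workers.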
Conclusions
We present a fast yet accurate screening method for whole genome shotgun sequencing-based biosurveillance applications such as food testing. By relying on a big data approach it can scale efficiently towards large-scale collections of complex eukaryotic and bacterial reference genomes. AFS-MetaCache and MetaCacheSpark are suitable tools for broad-scale metagenomic screening applications. They are available at https://muellan.github.io/metacache/afs.html (C++ version for a workstation) and https://github.com/jmabuin/MetaCacheSpark (Spark version for big data clusters).
Kobus, R., Abuín, J.M., Müller, A. et al. A big data approach to metagenomics for all-food-sequencing. BMC Bioinformatics 21, 102 (2020)
http://hdl.handle.net/10347/21839
10.1186/s12859-020-3429-6
1471-2105
Next-generation sequencing
Metagenomics
Species identification
Eukaryotic genomes
Locality sensitive hashing
Bigdata
A big data approach to metagenomics for all-food-sequencing
oai:minerva.usc.es:10347/29485 2023-07-10T06:11:04Z
dc
García Fernández, Julián
author
Seoane Iglesias, Natalia
author
Comesaña Figueroa, Enrique
author
García Loureiro, Antonio
author
2022
We present a novel Pelgrom-based predictive (PBP) model to estimate the impact of variability on the on-current of different state-of-the-art semiconductor devices. In this work, we focus on two of the most problematic sources of variability: metal grain granularity (MGG) and line edge roughness (LER). This model allows us to make an accurate prediction of the on-current standard deviation, with the relative error of the predicted data being lower than 8% in 92% of the studied cases. The PBP model entails an immense reduction in the computational cost since, once it is calibrated for an architecture, the prediction of the impact of a variability source on devices of any given dimension can be made without any further simulations. This model could be useful for predicting the effect of variability on future technology nodes
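The classic Pelgrom scaling that such a predictive model builds on says that mismatch decreases with the square root of the device area, sigma = A / sqrt(W * L). A minimal sketch with invented numbers (this is the textbook law, not the paper's calibrated PBP model):

```python
import math

def calibrate(width_nm, length_nm, sigma_measured):
    """Fit the Pelgrom coefficient A from one simulated/measured device."""
    return sigma_measured * math.sqrt(width_nm * length_nm)

def predict_sigma(A, width_nm, length_nm):
    """Predict the variability of any other geometry without new simulations."""
    return A / math.sqrt(width_nm * length_nm)

# Hypothetical: calibrate on a 20 nm x 20 nm device with sigma = 2.0,
# then predict a device with four times the area (sigma should halve).
A = calibrate(20.0, 20.0, 2.0)
predicted = predict_sigma(A, 40.0, 40.0)
```

This one-shot calibration is what removes the need for further statistical simulations once an architecture has been characterized.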
Solid-State Electronics 199 (2023), 108492
http://hdl.handle.net/10347/29485
10.1016/j.sse.2022.108492
0038-1101
TCAD
FinFET
Nanowire FET
Nanosheet FET
Pelgrom
Prediction model
Monte Carlo
A comprehensive Pelgrom-based on-current variability model for FinFET, NWFET and NSFET
oai:minerva.usc.es:10347/23451 2023-07-10T06:16:51Z
dc
Martínez Sánchez, Jorge
author
Fernández Rivera, Francisco
author
Cabaleiro Domínguez, José Carlos
author
López Vilariño, David
author
Fernández Pena, Anselmo Tomás
author
2020
Road extraction from Light Detection and Ranging (LiDAR) has become a hot topic over recent years. Nevertheless, it is still challenging to perform this task in a fully automatic way. Experiments are often carried out over small datasets with a focus on urban areas and it is unclear how these methods perform in less urbanized sites. Furthermore, some methods require the manual input of critical parameters, such as an intensity threshold. Aiming to address these issues, this paper proposes a method for the automatic extraction of road points suitable for different landscapes. Road points are identified using pipeline filtering based on a set of constraints defined on the intensity, curvature, local density, and area. We focus especially on the intensity constraint, as it is the key factor to distinguish between road and ground points. The optimal intensity threshold is established automatically by an improved version of the skewness balancing algorithm. Evaluation was conducted on ten study sites with different degrees of urbanization. Road points were successfully extracted in all of them with an overall completeness of 93%, a correctness of 83%, and a quality of 78%. These results are competitive with the state-of-the-art
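The skewness balancing idea for finding the intensity threshold automatically can be sketched as follows. This is a simplified, one-directional toy version with made-up intensities, not the paper's improved bidirectional algorithm: outliers are removed from the high end until the sample skewness is no longer positive, and the largest surviving intensity acts as the threshold.

```python
def skewness(values):
    """Population skewness: third central moment over sigma cubed."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    if sd == 0:
        return 0.0
    return sum((v - mean) ** 3 for v in values) / (n * sd ** 3)

def skewness_balance(intensities):
    """Drop maxima until skewness <= 0; return the resulting threshold."""
    data = sorted(intensities)
    while len(data) > 2 and skewness(data) > 0:
        data.pop()              # remove the current maximum
    return data[-1]             # largest surviving intensity

threshold = skewness_balance([1, 1, 2, 2, 2, 3, 3, 40, 90])
```

In the road-extraction pipeline the threshold obtained this way separates low-intensity road points from the rest of the ground points, with the curvature, density, and area constraints filtering the remainder.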
Martínez Sánchez, J.; Fernández Rivera, F.; Cabaleiro Domínguez, J.C.; López Vilariño, D.; Fernández Pena, T. Automatic Extraction of Road Points from Airborne LiDAR Based on Bidirectional Skewness Balancing. Remote Sens. 2020, 12, 2025
http://hdl.handle.net/10347/23451
10.3390/rs12122025
2072-4292
Airborne LiDAR point clouds
Road point extraction
Bidirectional skewness balancing
Automatic Extraction of Road Points from Airborne LiDAR Based on Bidirectional Skewness Balancing
oai:minerva.usc.es:10347/176902020-07-16T11:09:49Zcom_10347_2990com_10347_2889com_10347_227com_10347_2968com_10347_2894com_10347_2888col_10347_11719col_10347_10041
00925njm 22002777a 4500
dc
Gamallo Otero, Pablo
author
2014
Este artigo propõe um método para a construção de novos dicionários bilíngues a partir de dicionários já existentes e da exploração de corpora comparáveis. Mais concretamente, um novo dicionário para um par de línguas é gerado em duas etapas: primeiro, cruzam-se dicionários bilíngues entre essas línguas e uma terceira intermediária e, segundo, o resultado do cruzamento, que contém um número elevado de traduções espúrias causadas pela ambiguidade das palavras da língua intermediária, filtra-se com apoio em textos de temática comparável nas duas línguas alvo. A qualidade do dicionário derivado é muito alta, próxima dos dicionários construídos manualmente. Descreveremos um caso de estudo onde criaremos um novo dicionário Inglês-Português com mais de 7.000 entradas bilíngues geradas pelo nosso método
This article proposes a method for building new bilingual dictionaries from existing ones and the use of comparable corpora. More precisely, a new bilingual dictionary with pairs in two target languages is built in two steps. First, a noisy dictionary is generated by transitivity by crossing two existing dictionaries, each containing translation pairs between one of the two target languages and an intermediary language. The result of crossing the two existing dictionaries is a noisy resource because of the ambiguity of words in the intermediary language. Second, odd translation pairs are filtered out by making use of a set of bilingual lexicons automatically extracted from comparable corpora. The quality of the filtered dictionary is very high, close to that of dictionaries built by lexicographers. We also report a case study where a new, non-noisy, English-Portuguese dictionary with more than 7,000 bilingual entries was automatically generated
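The transitivity step can be sketched as a simple composition of two pivot dictionaries (toy data structures for illustration; the corpus-based filtering stage that removes the spurious pairs is omitted):

```python
def cross_dictionaries(src_to_pivot, pivot_to_tgt):
    """Compose two bilingual dictionaries through a pivot language.
    Ambiguous pivot words produce spurious pairs, which the method
    later filters out with comparable corpora."""
    noisy = {}
    for src, pivots in src_to_pivot.items():
        targets = set()
        for p in pivots:
            targets.update(pivot_to_tgt.get(p, ()))
        if targets:
            noisy[src] = sorted(targets)
    return noisy
```

For instance, an ambiguous pivot entry such as "banco" → {bank, bench} yields several target translations, only some of which are valid — exactly the noise the filtering stage targets.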
GAMALLO, Pablo. (2014). Uso de corpora comparáveis para filtrar dicionários bilíngues gerados por transitividade. DELTA: Documentação de Estudos em Lingüística Teórica e Aplicada, 30(2), 213-235. https://dx.doi.org/10.1590/0102-445034728151307539
0102-4450
http://hdl.handle.net/10347/17690
10.1590/0102-445034728151307539
1678-460X
Processamento da língua natural
Extração de informação
Corpora comparáveis
Dicionários bilíngues
Natural language processing
Information extraction
Comparable corpora
Bilingual dictionaries
Uso de corpora comparáveis para filtrar dicionários bilíngues gerados por transitividade
oai:minerva.usc.es:10347/236602023-07-10T06:12:48Zcom_10347_2990com_10347_2889com_10347_227com_10347_2953com_10347_2893com_10347_2888com_10347_2912com_10347_2890col_10347_11719col_10347_15488col_10347_13405
00925njm 22002777a 4500
dc
González Castro, Víctor
author
Cernadas García, Eva
author
Huelga Zapico, Emilio
author
Fernández Delgado, Manuel
author
Antúnez López, José Ramón
author
Souto Bayarri, José Miguel
author
2020
In this work, by using descriptive techniques, the characteristics of the texture of the CT (computed tomography) image of patients with colorectal cancer were extracted and, subsequently, classified as KRAS+ or KRAS-. This was accomplished by using different classifiers, such as Support Vector Machine (SVM), Gradient Boosting Machine (GBM), Neural Networks (NNET), and Random Forest (RF). Texture analysis can provide a quantitative assessment of tumour heterogeneity by analysing both the distribution and relationship between the pixels in the image. The objective of this research is to demonstrate that CT-based Radiomics can predict the presence of mutation in the KRAS gene in colorectal cancer. This is a retrospective study, with 47 patients from the University Hospital, with a confirmatory pathological analysis of KRAS mutation. The highest accuracy and kappa achieved were 83% and 64.7%, respectively, with a sensitivity of 88.9% and a specificity of 75.0%, achieved by the NNET classifier using the texture feature vectors combining wavelet transform and Haralick coefficients. The fact of being able to identify the genetic expression of a tumour without having to perform either a biopsy or a genetic test is a great advantage, because it prevents invasive procedures that involve complications and may present biases in the sample. It also leads towards a more personalized and effective treatment
González-Castro, V.; Cernadas, E.; Huelga, E.; Fernández-Delgado, M.; Porto, J.; Antunez, J.R.; Souto-Bayarri, M. CT Radiomics in Colorectal Cancer: Detection of KRAS Mutation Using Texture Analysis and Machine Learning. Appl. Sci. 2020, 10, 6214
http://hdl.handle.net/10347/23660
10.3390/app10186214
2076-3417
KRAS mutation
Colorectal cancer
Texture analysis
Wavelets
Haralick texture descriptors
Support Vector Machine
Gradient Boosting Machine
Neural Network
Random Forest
CT Radiomics in Colorectal Cancer: Detection of KRAS Mutation Using Texture Analysis and Machine Learning
oai:minerva.usc.es:10347/212442023-07-10T06:12:10Zcom_10347_2990com_10347_2889com_10347_227com_10347_2953com_10347_2893com_10347_2888col_10347_11719col_10347_15488
00925njm 22002777a 4500
dc
Márquez, David G.
author
Félix Lamas, Paulo
author
García, Constantino A.
author
Tejedor, Javier
author
Fred, Ana L. N.
author
Otero, Abraham
author
2019
In this work, a new clustering algorithm especially geared towards merging data arising from multiple sensors is presented. The algorithm, called PN-EAC, is based on the ensemble clustering paradigm and it introduces the novel concept of negative evidence. PN-EAC combines both positive evidence, to gather information about the elements that should be grouped together in the final partition, and negative evidence, which has information about the elements that should not be grouped together. The algorithm has been validated in the electrocardiographic domain for heartbeat clustering, extracting positive evidence from the heartbeat morphology and negative evidence from the distances between heartbeats. The best result obtained on the MIT-BIH Arrhythmia database yielded an error of 1.44%. In the St. Petersburg Institute of Cardiological Technics 12-Lead Arrhythmia Database (INCARTDB), an error of 0.601% was obtained when using two electrocardiogram (ECG) leads. When increasing the number of leads to 4, 6, 8, 10 and 12, the algorithm obtains better results (statistically significant) than with the previous number of leads, reaching an error of 0.338%. To the best of our knowledge, this is the first clustering algorithm that is able to process simultaneously any number of ECG leads. Our results support the use of PN-EAC to combine different sources of information and the value of the negative evidence
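The evidence-accumulation idea can be sketched with a toy co-association tally — a simplified reading of the PN-EAC concept, not the authors' implementation:

```python
from itertools import combinations

def evidence_matrix(partitions):
    """Co-association counts: how often each pair of elements
    falls in the same cluster across an ensemble of partitions."""
    votes = {}
    for labels in partitions:
        for i, j in combinations(range(len(labels)), 2):
            if labels[i] == labels[j]:
                votes[(i, j)] = votes.get((i, j), 0) + 1
    return votes

def combine_evidence(positive_parts, negative_parts):
    """PN-EAC-style combination (simplified): positive evidence
    votes for merging a pair, negative evidence votes against it.
    A final partition would be cut from this combined matrix."""
    votes = evidence_matrix(positive_parts)
    for pair, v in evidence_matrix(negative_parts).items():
        votes[pair] = votes.get(pair, 0) - v
    return votes
```

In the ECG setting, the positive partitions would come from heartbeat morphology and the negative ones from inter-beat distances.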
Márquez, D.G.; Félix, P.; García, C.A.; Tejedor, J.; Fred, A.L.; Otero, A. Positive and Negative Evidence Accumulation Clustering for Sensor Fusion: An Application to Heartbeat Clustering. Sensors 2019, 19, 4635
http://hdl.handle.net/10347/21244
10.3390/s19214635
1424-8220
Sensor fusion
Clustering
Evidence accumulation
Fusion techniques
Machine learning
ECG
Multilead clustering
Heartbeat clustering
Multimodal clustering
Positive and Negative Evidence Accumulation Clustering for Sensor Fusion: An Application to Heartbeat Clustering
oai:minerva.usc.es:10347/238262023-07-10T06:17:07Zcom_10347_2990com_10347_2889com_10347_227com_10347_2968com_10347_2894com_10347_2888col_10347_11719col_10347_10041
00925njm 22002777a 4500
dc
Gamallo Otero, Pablo
author
Pichel Campos, José Ramón
author
Alegría, Iñaki
author
2020
Phylogenetics is a sub-field of historical linguistics whose aim is to classify a group of languages by considering their distances within a rooted tree that stands for their historical evolution. A few European languages do not belong to the Indo-European family or are otherwise isolated in the European rooted tree. Although it is not possible to establish phylogenetic links using basic strategies, it is possible to calculate the distances between these isolated languages and the rest using simple corpus-based techniques and natural language processing methods. The objective of this article is to select some isolated languages and measure the distance between them and the other European languages, so as to shed light on the linguistic distances and proximities of these controversial languages without considering phylogenetic issues. The experiments were carried out with 40 European languages including six languages that are isolated in their corresponding families: Albanian, Armenian, Basque, Georgian, Greek, and Hungarian
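A corpus-based language distance of this kind is commonly derived from the perplexity of a character n-gram model trained on one language and evaluated on another (lower perplexity, closer languages). The toy model below — add-one smoothing, illustrative only, not the paper's exact setup — shows the principle:

```python
import math
from collections import Counter

def char_ngrams(text, n=3):
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def perplexity(train_text, test_text, n=3):
    """Perplexity of a character n-gram model trained on one
    corpus and evaluated on another, with add-one smoothing
    over the observed vocabulary (a deliberately tiny model)."""
    counts = Counter(char_ngrams(train_text, n))
    total = sum(counts.values())
    vocab = len(counts) + 1
    log_sum, m = 0.0, 0
    for g in char_ngrams(test_text, n):
        p = (counts.get(g, 0) + 1) / (total + vocab)
        log_sum += math.log(p)
        m += 1
    return math.exp(-log_sum / m)
```

A distance matrix over 40 languages built from such pairwise perplexities can then feed the clustering step mentioned in the keywords.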
Gamallo, P.; Pichel, J.R.; Alegria, I. Measuring Language Distance of Isolated European Languages. Information 2020, 11, 181
http://hdl.handle.net/10347/23826
10.3390/info11040181
2078-2489
Language distance
Phylogenetics
Perplexity
Clustering
Kullback-Leibler divergence
Measuring Language Distance of Isolated European Languages
oai:minerva.usc.es:10347/294532022-11-23T03:02:51Zcom_10347_2990com_10347_2889com_10347_227com_10347_2903com_10347_2890com_10347_2888col_10347_11719col_10347_12018
00925njm 22002777a 4500
dc
Vila Blanco, Nicolás
author
Varas Quintana, Paulina
author
Aneiros Ardao, Ángela
author
Tomás Carmona, Inmaculada
author
Carreira Nouche, María José
author
2022
Chronological age and biological sex estimation are two key tasks in a variety of procedures, including human identification and migration control. Issues such as these have led to the development of both semiautomatic and automatic prediction models, but the former are expensive in terms of time and human resources, while the latter lack the interpretability required to be applicable in real-life scenarios. This paper therefore proposes a new, fully automatic methodology for the estimation of age and sex. It first applies tooth detection by means of a modified CNN to extract the oriented bounding boxes of each tooth. Then, it feeds the image features inside the tooth boxes into a second CNN module designed to produce per-tooth age and sex probability distributions. The method then adopts an uncertainty-aware policy to aggregate these estimated distributions. Our approach yielded a lower mean absolute error than any other previously described, at 0.97 years. The accuracy of the sex classification was 91.82%, confirming the suitability of the teeth for this purpose. The proposed model also allows analyses of age and sex estimations on every tooth, enabling experts to identify the teeth most relevant for each task or population cohort or to detect potential developmental problems. In conclusion, the performance of the method in both age and sex predictions is excellent and has a high degree of interpretability, making it suitable for use in a wide range of application scenarios
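An uncertainty-aware aggregation of per-tooth probability distributions could look roughly like the sketch below, where each tooth's vote is down-weighted by its entropy — an assumption made for illustration, not the paper's exact policy:

```python
import math

def entropy(dist):
    """Shannon entropy of a discrete probability distribution."""
    return -sum(p * math.log(p) for p in dist if p > 0)

def aggregate(per_tooth_dists):
    """Uncertainty-aware aggregation (simplified): a confident
    tooth (low-entropy distribution) gets a larger weight, and
    the weighted sum is renormalised into a distribution."""
    weights = [1.0 / (1.0 + entropy(d)) for d in per_tooth_dists]
    k = len(per_tooth_dists[0])
    agg = [sum(w * d[i] for w, d in zip(weights, per_tooth_dists))
           for i in range(k)]
    s = sum(agg)
    return [a / s for a in agg]
```

With one confident tooth and one uncertain tooth, the aggregate leans toward the confident prediction, which is the behaviour an uncertainty-aware policy is meant to produce.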
Computers in Biology and Medicine 149 (2022) 106072
http://hdl.handle.net/10347/29453
10.1016/j.compbiomed.2022.106072
0010-4825
Deep learning
Dental panoramic radiographs
Tooth detection
Chronological age prediction
Sex classification
XAS: Automatic yet eXplainable Age and Sex determination by combining imprecise per-tooth predictions
oai:minerva.usc.es:10347/117332020-01-31T08:33:24Zcom_10347_2990com_10347_2889com_10347_227col_10347_11719
00925njm 22002777a 4500
dc
Castro, Daniel
author
Félix Lamas, Paulo
author
Rodríguez Presedo, Jesús María
author
2014-10-08
Continuous follow-up of heart condition through long-term electrocardiogram monitoring is an invaluable tool for diagnosing some cardiac arrhythmias. In such context, providing tools for fast locating alterations of normal conduction patterns is mandatory and still remains an open issue. This work presents a real-time method for adaptive clustering QRS complexes from multilead ECG signals that provides the set of QRS morphologies that appear during an ECG recording. The method processes the QRS complexes sequentially, grouping them into a dynamic set of clusters based on the information content of the temporal context. The clusters are represented by templates which evolve over time and adapt to the QRS morphology changes. Rules to create, merge and remove clusters are defined along with techniques for noise detection in order to avoid their proliferation. To cope with beat misalignment, Derivative Dynamic Time Warping is used. The proposed method has been validated against the MIT-BIH Arrhythmia Database and the AHA ECG Database showing a global purity of 98.56% and 99.56%, respectively. Results show that our proposal not only provides better results than previous offline solutions but also fulfills real-time requirements.
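Derivative Dynamic Time Warping, used above to cope with beat misalignment, can be sketched as plain DTW applied to local derivative estimates (Keogh-Pazzani style; a minimal illustration, not the paper's optimised implementation):

```python
def derivative(x):
    """DDTW derivative estimate: average of the slope to the left
    neighbour and the slope across both neighbours."""
    return [((x[i] - x[i - 1]) + (x[i + 1] - x[i - 1]) / 2) / 2
            for i in range(1, len(x) - 1)]

def ddtw_distance(a, b):
    """Dynamic Time Warping on derivative estimates, so sequences
    are aligned by shape rather than by absolute amplitude."""
    da, db = derivative(a), derivative(b)
    n, m = len(da), len(db)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (da[i - 1] - db[j - 1]) ** 2
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Because only derivatives are compared, two QRS complexes with the same shape but a baseline offset are still matched perfectly.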
Castro, D., Félix, P., Presedo, J. (2014). A method for context-based adaptive QRS clustering in real-time. "IEEE Journal of Biomedical and Health Informatics" , [Documento en línea] doi: 10.1109/JBHI.2014.2361659
2168-2194
http://hdl.handle.net/10347/11733
QRS
Clustering
ECG
Electrocardiogram
Adaptive
Real-time
Context-based
EKG
Beat
Heartbeat
A method for context-based adaptive QRS clustering in real-time
oai:minerva.usc.es:10347/246462023-07-10T06:17:50Zcom_10347_2990com_10347_2889com_10347_227com_10347_2953com_10347_2893com_10347_2888col_10347_11719col_10347_15488
00925njm 22002777a 4500
dc
Espiñeira Deus, Gabriel
author
Nagy, Daniel
author
García Loureiro, Antonio Jesús
author
Seoane Iglesias, Natalia
author
Indalecio Fernández, Guillermo
author
2019
This paper presents a study of the impact that several widely used threshold voltage (VT) extraction methods have on semiconductor device variability studies. The second derivative (SD), linear extrapolation (LE) and third derivative (TD) extraction techniques have been compared to the standard method used in variability, the constant current criterion (CC). To estimate the influence of these methods on the results, an ensemble of 10.7 nm gate length Si FinFETs affected by random dopant (RD) variability has been simulated. We have shown that variability estimators such as the mean VT, the VT standard deviation and the VT shift are heavily affected by the selected extraction methodology, with up to 30% differences in the standard deviation. We have demonstrated that being aware of which VT extraction technique has been used in a variability analysis is crucial to properly interpret the results, as they may be heavily method-dependent
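The linear extrapolation (LE) method compared above can be sketched as extrapolating the ID-VG tangent at the point of maximum transconductance down to zero current (an illustrative finite-difference version on an idealised transfer curve):

```python
def vt_linear_extrapolation(vg, id_):
    """Linear extrapolation (LE) threshold voltage: find the bias
    point of maximum transconductance gm = dID/dVG and extrapolate
    the tangent line there down to ID = 0."""
    # central finite-difference transconductance at interior points
    gm = [(id_[i + 1] - id_[i - 1]) / (vg[i + 1] - vg[i - 1])
          for i in range(1, len(vg) - 1)]
    i = max(range(len(gm)), key=gm.__getitem__) + 1  # index into vg
    return vg[i] - id_[i] / gm[i - 1]
```

On a perfectly linear above-threshold curve ID = k(VG - VT), the extrapolation recovers VT exactly; on real device data the SD, TD and CC criteria generally give different values, which is the paper's point.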
Solid-State Electronics, Volume 159, September 2019, Pages 165-170
0038-1101
http://hdl.handle.net/10347/24646
10.1016/j.sse.2019.03.055
Threshold voltage
FinFETs
Random dopants
Variability
Statistical analysis
Impact of threshold voltage extraction methods on semiconductor device variability
oai:minerva.usc.es:10347/177242023-07-10T06:16:12Zcom_10347_2990com_10347_2889com_10347_227com_10347_2953com_10347_2893com_10347_2888col_10347_11719col_10347_15488
00925njm 22002777a 4500
dc
Soto Hidalgo, José Manuel
author
Alonso Moral, José María
author
Acampora, Giovanni
author
Alcalá Fernández, Jesús
author
2018
Fuzzy logic systems are useful for solving problems in many application fields. However, these systems are usually stored in specific formats and researchers need to rewrite them to use in new problems. Recently, the IEEE Computational Intelligence Society has sponsored the publication of the IEEE Standard 1855-2016 to provide a unified and well-defined representation of fuzzy systems for problems of classification, regression, and control. The main aim of this standard is to facilitate the exchange of fuzzy systems across different programming systems in order to avoid the need to rewrite available pieces of code or to develop new software tools to replicate functionalities that are already provided by other software. In order to make the standard operative and useful for the research community, this paper presents JFML, an open source Java library that offers a complete implementation of the new IEEE standard and capability to import/export fuzzy systems in accordance with other standards and software. Moreover, the new library has associated a Website with complementary material, documentation, and examples in order to facilitate its use. In this paper, we present three case studies that illustrate the potential of JFML and the advantages of exchanging fuzzy systems among available software
Soto-Hidalgo, J., Alonso, J., Acampora, G., & Alcala-Fdez, J. (2018). JFML: A Java Library to Design Fuzzy Logic Systems According to the IEEE Std 1855-2016. IEEE Access, 1-1. doi: 10.1109/access.2018.2872777
http://hdl.handle.net/10347/17724
10.1109/ACCESS.2018.2872777
2169-3536
Fuzzy logic
Fuzzy logic systems
IEEE Standards
IEEE std 1855-2016
Libraries
Fuzzy systems
Fuzzy markup language
Software
Open source software
Java
IEC61131-7
JFML: A Java Library to Design Fuzzy Logic Systems According to the IEEE Std 1855-2016
oai:minerva.usc.es:10347/266592021-07-31T02:02:30Zcom_10347_2990com_10347_2889com_10347_227col_10347_11719
00925njm 22002777a 4500
dc
El Rashidy, Nora
author
Abdelrazek, Samir
author
Abuhmed, Tamer
author
Amer, Eslam
author
Ali, Farman
author
Hu, Jon Wan
author
El-Sappagh, Shaker
author
2021
Since December 2019, the world's population has faced the rapid spread of coronavirus disease (COVID-19). With the accelerating number of infected cases, the World Health Organization (WHO) has reported COVID-19 as an epidemic that puts a heavy burden on healthcare sectors in almost every country. The potential of artificial intelligence (AI) in this context is difficult to ignore. AI companies have been racing to develop innovative tools that help arm the world against this pandemic and minimize the disruption it may cause. The main objective of this study is to survey the decisive role of AI as a technology used to fight against the COVID-19 pandemic. Five significant applications of AI for COVID-19 were found, including (1) COVID-19 diagnosis using various data types (e.g., images, sound, and text); (2) estimation of the possible future spread of the disease based on the current confirmed cases; (3) association between COVID-19 infection and patient characteristics; (4) vaccine development and drug interaction; and (5) development of supporting applications. This study also introduces a comparison between current COVID-19 datasets. Based on the limitations of the current literature, this review highlights the open research challenges that could inspire the future application of AI in COVID-19
Diagnostics 2021, 11(7), 1155; https://doi.org/10.3390/diagnostics11071155
http://hdl.handle.net/10347/26659
10.3390/diagnostics11071155
2075-4418
Artificial intelligence
Deep learning
COVID-19
Comprehensive Survey of Using Machine Learning in the COVID-19 Pandemic
oai:minerva.usc.es:10347/246472021-07-29T08:06:08Zcom_10347_2990com_10347_2889com_10347_227com_10347_2953com_10347_2893com_10347_2888col_10347_11719col_10347_15488
00925njm 22002777a 4500
dc
Dabbabi, Samar
author
Souli, Mehdi
author
Ben Nasr, Tarek
author
García Loureiro, Antonio Jesús
author
Kamoun, Najoua
author
2019
Tin doped zinc oxide/fluorine doped tin dioxide bilayer films (ZnO:Sn/SnO2:F) were deposited on glass substrates using the spray pyrolysis technique. The effect of vacuum annealing at different temperatures was investigated. Both structural and morphological analyses have shown that there is a significant modification in the bilayer film structure and surface following the vacuum annealing process at 450 °C. Electrical properties have been investigated using Hall effect measurements as well as impedance spectroscopy at room temperature. The circuit parameters were determined using an equivalent circuit model fitted from the impedance spectra, suggesting the presence of grain and grain boundary conductions in the bilayer structure. It was found that the film annealed in vacuum for 1 h at 350 °C is optimal in all respects, as it possesses all the desirable characteristics including the lowest resistivity, high porosity and better grain boundary conductivity
Vacuum, Volume 167, September 2019, Pages 416-420
0042-207X
http://hdl.handle.net/10347/24647
10.1016/j.vacuum.2019.06.008
Spray pyrolysis
ZnO:Sn/SnO2:F film
Electrical properties
Impedance spectroscopy
Vacuum annealing effect on physical properties and electrical circuit model of ZnO:Sn/SnO2:F bilayer structure
oai:minerva.usc.es:10347/246582023-07-10T06:17:59Zcom_10347_2990com_10347_2889com_10347_227com_10347_2953com_10347_2893com_10347_2888col_10347_11719col_10347_15488
00925njm 22002777a 4500
dc
Cabello Ferrer, Diego
author
Ferro Santiago, Esteban
author
Pereira Rial, Óscar
author
Martínez Vázquez, Beatriz
author
Brea Sánchez, Víctor Manuel
author
Carrillo, Juan M.
author
2020
This paper presents experimental results from a system that comprises a fully autonomous energy harvester with a solar cell of 1 mm² as energy transducer and a Power Management Unit (PMU) on the same silicon substrate, and an output voltage regulator. Both chips are implemented in standard 0.18 μm CMOS technology with total layout areas of 1.575 mm² and 0.0126 mm², respectively. The system also contains an off-the-shelf 3.2 mm × 2.5 mm × 0.9 mm supercapacitor working as an off-chip battery or energy reservoir between the PMU and the voltage regulator. Experimental results show that the fast energy recovery of the on-chip solar cell and PMU permits the system to replenish the supercapacitor with enough charge as to sustain Bluetooth Low Energy (BLE) communications even with input light powers of 510 nW. The whole system is able to self-start-up without external mechanisms at 340 nW. This work is the first step towards a self-supplied sensor node with processing and communication capabilities. The small form factor and ultra-low power consumption of the system components comply with the requirements of biomedical applications
1549-8328
http://hdl.handle.net/10347/24658
10.1109/TCSI.2019.2944252
1558-0806
Implantable devices
LDO
MPPT
On-chip energy harvesting
PMU
Voltage reference generator
On-Chip Solar Energy Harvester and PMU With Cold Start-Up and Regulated Output Voltage for Biomedical Applications
oai:minerva.usc.es:10347/290002022-08-04T02:03:00Zcom_10347_2990com_10347_2889com_10347_227com_10347_2953com_10347_2893com_10347_2888col_10347_11719col_10347_15488
00925njm 22002777a 4500
dc
Espiñeira Deus, Gabriel
author
García Loureiro, Antonio Jesús
author
Seoane Iglesias, Natalia
author
2022
In the current technology node, purely classical numerical simulators lack the precision needed to obtain valid results. At the same time, the simulation of fully quantum models can be a cumbersome task in certain studies such as device variability analysis, since a single simulation can take up to weeks to compute and hundreds of device configurations need to be analyzed to obtain statistically significant results. A good compromise between fast and accurate results is to add corrections to the classical simulation that are able to reproduce the quantum nature of matter. In this context, we present a new approach of Schrödinger equation-based quantum corrections. We have implemented it using Message Passing Interface in our in-house built semiconductor simulation framework called VENDES, capable of running in distributed systems that allow for more accurate results in a reasonable time frame. Using a 12-nm-gate-length gate-all-around nanowire FET (GAA NW FET) as a benchmark device, the new implementation shows an almost perfect agreement in the output data with less than a 2% difference between the cases using 1 and 16 processes. Also, a reduction of up to 98% in the computational time has been found comparing the sequential and the 16 process simulation. For a reasonably dense mesh of 150k nodes, a variability study of 300 individual simulations can be now performed with VENDES in approximately 2.5 days instead of an estimated sequential execution of 137 days
Journal of Computational Electronics 21, 10–20 (2022). https://doi.org/10.1007/s10825-021-01823-3
http://hdl.handle.net/10347/29000
10.1007/s10825-021-01823-3
1572-8137
Drift-diffusion
Schrödinger quantum corrections
Gate-all-around nanowire FET
Finite element method
Message passing interface
Parallel approach of Schrödinger-based quantum corrections for ultrascaled semiconductor devices
oai:minerva.usc.es:10347/183012023-07-10T06:16:06Zcom_10347_2990com_10347_2889com_10347_227com_10347_2953com_10347_2893com_10347_2888col_10347_11719col_10347_15488
00925njm 22002777a 4500
dc
García Lesta, Daniel
author
Cabello Ferrer, Diego
author
Ferro Santiago, Esteban
author
López Martínez, Paula
author
Brea Sánchez, Víctor Manuel
author
2017-08-01
Wireless sensor networks (WSNs) are increasingly adopted in agriculture to monitor environmental variables to predict the presence of pests. In contrast to these approaches, this paper introduces a WSN to detect the presence of snails in the field. The network can be used both to trigger an alarm of early pest presence and to further elaborate statistical models with the addition of environmental data such as temperature or humidity to predict snail presence. In this paper we also design our own WSN simulator to account for real-life conditions such as an uneven spacing of motes in the field or different currents generated by solar cells at the motes. This allows a more realistic network deployment in the field. Experimental tests are included in this paper, showing that our motes are perpetual in terms of energy consumption
D. García-Lesta, D. Cabello, E. Ferro, P. López, and V.M. Brea (2017). Wireless Sensor Network With Perpetual Motes for Terrestrial Snail Activity Monitoring. IEEE Sensors Journal, 17(15), 5008-5015. Doi: 10.1109/jsen.2017.2718107
1530-437X
http://hdl.handle.net/10347/18301
10.1109/jsen.2017.2718107
Wireless sensor networks
Agricultural pests
Capacitive sensors
ZigBee
Sensor applications
Wireless Sensor Network With Perpetual Motes for Terrestrial Snail Activity Monitoring
oai:minerva.usc.es:10347/266782023-07-10T06:11:03Zcom_10347_2990com_10347_2889com_10347_227com_10347_2968com_10347_2894com_10347_2888col_10347_11719col_10347_10041
00925njm 22002777a 4500
dc
Gamallo Otero, Pablo
author
2021
This article describes a compositional model based on syntactic dependencies which has been designed to build contextualized word vectors, by following linguistic principles related to the concept of selectional preferences. The compositional strategy proposed in the current work has been evaluated on a syntactically controlled and multilingual dataset, and compared with Transformer BERT-like models, such as Sentence BERT, the state-of-the-art in sentence similarity. For this purpose, we created two new test datasets for Portuguese and Spanish on the basis of that defined for the English language, containing expressions with noun-verb-noun transitive constructions. The results we have obtained show that the linguistic-based compositional approach turns out to be competitive with Transformer models
Appl. Sci. 2021, 11(12), 5743; https://doi.org/10.3390/app11125743
http://hdl.handle.net/10347/26678
10.3390/app11125743
2076-3417
Compositionality
Dependency parsing
Meaning construction
Compositional distributional semantics
Transformer architecture
Contextualized word embeddings
Sentence BERT
Compositional Distributional Semantics with Syntactic Dependencies and Selectional Preferences
oai:minerva.usc.es:10347/177082020-01-31T13:29:38Zcom_10347_2990com_10347_2889com_10347_227com_10347_2953com_10347_2893com_10347_2888col_10347_11719col_10347_15488
00925njm 22002777a 4500
dc
Illade Quinteiro, Julio
author
Brea Sánchez, Víctor Manuel
author
López Martínez, Paula
author
Cabello Ferrer, Diego
author
Doménech Asensi, Ginés
author
2015
Unlike other noise sources, which can be reduced or eliminated by different signal processing techniques, shot noise is an ever-present noise component in any imaging system. In this paper, we present an in-depth study of the impact of shot noise on time-of-flight sensors in terms of the error introduced in the distance estimation. The paper addresses the effect of parameters, such as the size of the photosensor, the background and signal power or the integration time, and the resulting design trade-offs. The study is demonstrated with different numerical examples, which show that, in general, the phase-shift determination technique with two background measurements is the most suitable for large-resolution pixel arrays
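The phase-shift principle underlying such distance estimates can be sketched with generic four-phase demodulation — not the specific two-background-measurement variant analysed in the paper — where shot noise in the four correlation samples propagates into the phase and hence into the distance:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(a0, a1, a2, a3, f_mod):
    """Four-phase demodulation for a continuous-wave ToF sensor:
    a0..a3 are correlation samples taken at 0, 90, 180 and 270
    degrees of the modulation signal with frequency f_mod (Hz)."""
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)
```

Perturbing a0..a3 with Poisson-distributed photon counts is the standard way to study how shot noise translates into distance error.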
Illade-Quinteiro, J.; Brea, V.M.; López, P.; Cabello, D.; Doménech-Asensi, G. Distance Measurement Error in Time-of-Flight Sensors Due to Shot Noise. Sensors 2015, 15, 4624-4642
1424-8220
http://hdl.handle.net/10347/17708
10.3390/s150304624
Time-of-flight sensors
Shot noise
Standard CMOS technologies
Distance measurement
Distance Measurement Error in Time-of-Flight Sensors Due to Shot Noise
oai:minerva.usc.es:10347/212472023-07-10T06:18:25Zcom_10347_2990com_10347_2889com_10347_227com_10347_2953com_10347_2893com_10347_2888col_10347_11719col_10347_15488
00925njm 22002777a 4500
dc
Santos Saavedra, David
author
López López, Eric
author
Pardo López, Xosé Manuel
author
Iglesias Rodríguez, Roberto
author
Barro Ameneiro, Senén
author
Fernández Vidal, Xosé Ramón
author
2019
Scene recognition is still a very important topic in many fields, and that is definitely the case in robotics. Nevertheless, this task is view-dependent, which implies the existence of preferable directions when recognizing a particular scene. Both in human and computer vision-based classification, this actually often turns out to be biased. In our case, instead of trying to improve the generalization capability for different view directions, we have opted for the development of a system capable of filtering out noisy or meaningless images while, on the contrary, retaining those views from which correct identification of the scene is likely to be feasible. Our proposal works with a heuristic metric based on the detection of key points in 3D meshes (Harris 3D). This metric is later used to build a model that combines a Minimum Spanning Tree and a Support Vector Machine (SVM). We have performed an extensive number of experiments through which we have addressed (a) the search for efficient visual descriptors, (b) the analysis of the extent to which our heuristic metric resembles the human criteria for relevance and, finally, (c) the experimental validation of our complete proposal. In the experiments, we have used both a public image database and images collected at our research center
Santos, D.; Lopez-Lopez, E.; Pardo, X.M.; Iglesias, R.; Barro, S.; Fdez-Vidal, X.R. Robust and Fast Scene Recognition in Robotics Through the Automatic Identification of Meaningful Images. Sensors 2019, 19, 4024
http://hdl.handle.net/10347/21247
10.3390/s19184024
1424-8220
Scene recognition
Image collection summarization
Meaningful images
Robust and Fast Scene Recognition in Robotics Through the Automatic Identification of Meaningful Images
oai:minerva.usc.es:10347/177092023-07-10T06:16:15Zcom_10347_2990com_10347_2889com_10347_227com_10347_2953com_10347_2893com_10347_2888col_10347_11719col_10347_15488
00925njm 22002777a 4500
dc
Canedo Rodríguez, Adrián
author
Iglesias Rodríguez, Roberto
author
Vázquez Regueiro, Carlos
author
Álvarez Santos, Víctor
author
Pardo López, Xosé Manuel
author
2013
To bring cutting-edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: the costs must be reduced and the quality and usefulness of the robot services must be enhanced. Unfortunately, nowadays the deployment of robots and the adaptation of their services to new environments are tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots, which is easy and fast to deploy in different environments. The cameras will enhance the robot perceptions and allow them to react to situations that require their services. Additionally, the cameras will support the movement of the robots. This will enable our robots to navigate even when no maps are available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed and based on self-organization processes. Our system is scalable, robust, and flexible to the environment. We carried out several real world experiments, which show the good performance of our proposal
Canedo-Rodriguez, A.; Iglesias, R.; Regueiro, C.V.; Alvarez-Santos, V.; Pardo, X.M. Self-Organized Multi-Camera Network for a Fast and Easy Deployment of Ubiquitous Robots in Unknown Environments. Sensors 2013, 13, 426-454
1424-8220
http://hdl.handle.net/10347/17709
10.3390/s130100426
Robot deployment
Robot detection and tracking
Multi-camera networks
Ambient intelligence
Ubiquitous robots
Self-Organized Multi-Camera Network for a Fast and Easy Deployment of Ubiquitous Robots in Unknown Environments
oai:minerva.usc.es:10347/21259 (2020-04-09T02:00:56Z)
García Díaz, Antón
author
Leborán Álvarez, Víctor
author
Fernández Vidal, Xosé Ramón
author
Pardo López, Xosé Manuel
author
2012
A hierarchical definition of optical variability is proposed that links physical magnitudes to visual saliency and yields a more reductionist interpretation than previous approaches. This definition is shown to be grounded in the classical efficient coding hypothesis. Moreover, we propose that a major goal of contextual adaptation mechanisms is to ensure the invariance of the behavior that the contribution of an image point to optical variability elicits in the visual system. This hypothesis and the necessary assumptions are tested through comparison with human fixations and state-of-the-art approaches to saliency in three open-access eye-tracking datasets, including one devoted to images with faces, as well as in a novel experiment using hyperspectral representations of surface reflectance. The results on faces yield a significant reduction in the potential strength of semantic influences compared to previous works. The results on hyperspectral images support the assumptions made to estimate optical variability. The proposed approach also explains quantitative results related to a visual illusion observed for images of corners, which does not involve eye movements
Garcia-Diaz, A., Leborán, V., Fdez-Vidal, X. R., & Pardo, X. M. (2012). On the relationship between optical variability, visual saliency, and eye fixations: A computational approach. Journal of Vision, 12(6):17, 1–22, http://www.journalofvision.org/content/12/6/17, doi:10.1167/12.6.17
http://hdl.handle.net/10347/21259
10.1167/12.6.17
1534-7362
Optical variability
Contextual adaptation
Saliency
Efficient coding
Eye fixations
Face saliency
Hyperspectral
On the relationship between optical variability, visual saliency, and eye fixations: a computational approach
oai:minerva.usc.es:10347/17720 (2020-01-31T13:47:08Z)
García Martínez, Constantino Antonio
author
Otero, Abraham
author
Félix Lamas, Paulo
author
Rodríguez Presedo, Jesús María
author
Márquez, David G.
author
2017
The application of stochastic differential equations (SDEs) to the analysis of temporal data has attracted increasing attention, due to their ability to describe complex dynamics with physically interpretable equations. In this paper, we introduce a nonparametric method for estimating the drift and diffusion terms of SDEs from a densely observed discrete time series. The use of Gaussian processes as priors permits working directly in a function-space view, so the inference takes place directly in this space. To cope with the computational complexity that the use of Gaussian processes entails, a sparse Gaussian process approximation is provided. This approximation permits the efficient computation of predictions for the drift and diffusion terms by using a distribution over a small subset of pseudosamples. The proposed method has been validated using both simulated data and real data from economics and paleoclimatology. The application of the method to real data demonstrates its ability to capture the behavior of complex systems
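The quantities being estimated here are the drift f(x) and diffusion g(x) of dX = f(X)dt + g(X)dW. As a much simpler stand-in for the paper's sparse Gaussian-process method, the sketch below illustrates the same underlying conditional-moment idea with a plain binned (Kramers-Moyal) estimator on a simulated Ornstein-Uhlenbeck process; all function names and parameter values are illustrative, not taken from the paper.

```python
import math
import random

def simulate_ou(theta=1.0, sigma=0.5, dt=0.01, n=200_000, seed=42):
    """Euler-Maruyama simulation of the SDE dX = -theta*X dt + sigma dW."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(n):
        xs.append(x)
        x += -theta * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return xs

def binned_drift_diffusion(xs, dt, bins=20, lo=-1.0, hi=1.0):
    """Conditional-moment estimates on a densely observed series:
    drift f(x) ~ E[dX | X=x] / dt, squared diffusion g2(x) ~ E[dX^2 | X=x] / dt."""
    width = (hi - lo) / bins
    acc = [[0.0, 0.0, 0] for _ in range(bins)]  # sum dX, sum dX^2, count per bin
    for x0, x1 in zip(xs, xs[1:]):
        k = math.floor((x0 - lo) / width)
        if 0 <= k < bins:
            d = x1 - x0
            acc[k][0] += d
            acc[k][1] += d * d
            acc[k][2] += 1
    centers, drift, diff2 = [], [], []
    for k, (s1, s2, c) in enumerate(acc):
        if c > 50:  # skip sparsely populated bins
            centers.append(lo + (k + 0.5) * width)
            drift.append(s1 / c / dt)
            diff2.append(s2 / c / dt)
    return centers, drift, diff2
```

For the simulated process the recovered drift is approximately linear with slope -theta and the squared diffusion approximately constant at sigma²; the GP prior in the paper replaces the hard binning with a smooth function-space posterior.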
García, C., Otero, A., Félix, P., Presedo, J., & Márquez, D. (2017). Nonparametric estimation of stochastic differential equations with sparse Gaussian processes. Physical Review E, 96(2). doi: 10.1103/physreve.96.022104
2470-0045
http://hdl.handle.net/10347/17720
10.1103/PhysRevE.96.022104
2470-0053
Nonparametric estimation of stochastic differential equations with sparse Gaussian processes
oai:minerva.usc.es:10347/28995 (2022-08-04T02:03:02Z)
Vila Blanco, Nicolás
author
Varas Quintana, Paulina
author
Aneiros Ardao, Ángela
author
Tomás Carmona, Inmaculada
author
Carreira Nouche, María José
author
2021
Purpose: The shape of the mandible has been analyzed in a variety of fields, whether to diagnose conditions like osteoporosis or osteomyelitis, to estimate biological information such as age, gender, and race in forensics, or in orthognathic surgery. Although the methods employed produce encouraging results, most rely on dry bone analyses or complex imaging techniques that, ultimately, hamper sample collection and, as a consequence, the development of large-scale studies. Thus, we proposed an objective, repeatable, and fully automatic approach to provide a quantitative description of the mandible in orthopantomographies (OPGs).
Methods: We proposed the use of a deep convolutional neural network (CNN) to localize a set of landmarks of the mandible contour automatically from OPGs. Furthermore, we detailed four different descriptors of the mandible shape to be used for a variety of purposes. These include a set of linear distances and angles calculated from eight anatomical landmarks of the mandible, the centroid size, the shape variations from the mean shape, and a group of shape parameters extracted with a point distribution model.
Results: The fully automatic digitization of the mandible contour was very accurate, with a mean point-to-curve error of 0.21 mm and a standard deviation comparable to that of a trained expert. The combination of the CNN and the four shape descriptors was validated in the well-known problems of forensic sex and age estimation, obtaining 87.8% accuracy and a mean absolute error of 1.57 years, respectively
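Two of the shape descriptors mentioned above, the centroid size and the angles between anatomical landmarks, have simple closed-form definitions over the landmark coordinates. A minimal sketch of both (the landmark coordinates in the test are hypothetical geometry, not mandible data):

```python
import math

def centroid_size(landmarks):
    """Centroid size: sqrt of the summed squared distances of the
    landmarks to their centroid (a standard morphometric size measure)."""
    n = len(landmarks)
    cx = sum(x for x, _ in landmarks) / n
    cy = sum(y for _, y in landmarks) / n
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in landmarks))

def angle_at(a, b, c):
    """Angle in degrees at landmark b, formed by the segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))
```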
International Journal of Computer Assisted Radiology and Surgery 16, 2215–2224 (2021). https://doi.org/10.1007/s11548-021-02474-2
http://hdl.handle.net/10347/28995
10.1007/s11548-021-02474-2
1861-6429
Convolutional neural networks
Shape modeling
Mandible morphometrics
Deep learning
Automated description of the mandible shape by deep learning
oai:minerva.usc.es:10347/29480 (2023-07-10T06:11:10Z)
Vaquero Otal, Lorenzo
author
Brea Sánchez, Víctor Manuel
author
Mucientes Molina, Manuel Felipe
author
2023
Maintaining the identity of multiple objects in real-time video is a challenging task, as it is not always feasible to run a detector on every frame. Thus, motion estimation systems are often employed, which either do not scale well with the number of targets or produce features with limited semantic information. To solve the aforementioned problems and allow the tracking of dozens of arbitrary objects in real-time, we propose SiamMOTION. SiamMOTION includes a novel proposal engine that produces quality features through an attention mechanism and a region-of-interest extractor fed by an inertia module and powered by a feature pyramid network. Finally, the extracted tensors enter a comparison head that efficiently matches pairs of exemplars and search areas, generating quality predictions via a pairwise depthwise region proposal network and a multi-object penalization module. SiamMOTION has been validated on five public benchmarks, achieving leading performance against current state-of-the-art trackers. Code available at: https://www.github.com/lorenzovaquero/SiamMOTION
Pattern Recognition 135 (2023) 109141
http://hdl.handle.net/10347/29480
10.1016/j.patcog.2022.109141
0031-3203
Multiple visual object tracking
Siamese CNN
Motion estimation
Real-time siamese multiple object tracker with enhanced proposals
oai:minerva.usc.es:10347/18340 (2023-07-10T06:16:49Z)
Blanco Filgueira, Beatriz
author
López Martínez, Paula
author
Roldán Aranda, Juan Bautista
author
2016
The CMOS photodiode is the primary photosensing device used in solid-state image sensors. A review of significant CMOS photodiode models that can be found in the literature in recent years is presented here. We have focused on photocurrent models in one, two, and three dimensions, paying special attention to lateral current components. Lateral collection, particularly for small devices fabricated in deep submicrometer technologies, has been shown to be of utmost importance. Finally, several models to account for crosstalk effects are also described
Beatriz Blanco-Filgueira, Paula López Martínez, and Juan Bautista Roldán Aranda (2016) A review on CMOS photodiodes modeling, the role of the lateral photoresponse. IEEE Transactions on Electron Devices, 63(1), 16-25. doi: 10.1109/TED.2015.2446204
0018-9383
http://hdl.handle.net/10347/18340
10.1109/TED.2015.2446204
CMOS photodiode
Crosstalk
Lateral current
Modeling
Simulation
A Review of CMOS Photodiode Modeling and the Role of the Lateral Photoresponse
oai:minerva.usc.es:10347/29451 (2023-07-10T06:11:10Z)
Bosquet Mera, Brais
author
Cores Costa, Daniel
author
Brea Sánchez, Víctor Manuel
author
Mucientes Molina, Manuel Felipe
author
Bimbo, Alberto del
author
2023
Object detection accuracy on small objects, i.e., objects under 32×32 pixels, lags behind that of large ones. To address this issue, innovative architectures have been designed and new datasets have been released. Still, the number of small objects in many datasets does not suffice for training. The advent of generative adversarial networks (GANs) opens up a new data augmentation possibility for training architectures without the costly task of annotating huge datasets for small objects. In this paper, we propose a full data augmentation pipeline for small object detection which combines a GAN-based object generator with techniques of object segmentation, image inpainting, and image blending to achieve high-quality synthetic data. The main component of our pipeline is DS-GAN, a novel GAN-based architecture that generates realistic small objects from larger ones. Experimental results show that our overall data augmentation method improves the performance of state-of-the-art models by up to 11.9% AP on UAVDT and by 4.7% AP on iSAID, both for the small objects subset and for a scenario where the number of training instances is limited.
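The final stage of such a pipeline pastes the generated object into a background frame. A minimal sketch of the blending step only (plain alpha blending with a soft mask; the paper's pipeline additionally uses segmentation and inpainting, and this function name and its shapes are illustrative):

```python
import numpy as np

def blend_object(background, obj, mask, top, left):
    """Alpha-blend a synthetic object patch into a background image.
    background: (H, W, 3) float array; obj: (h, w, 3) patch; mask: (h, w)
    per-pixel opacity in [0, 1] (soft mask edges help hide blending seams)."""
    out = background.copy()
    h, w = mask.shape
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = (mask[..., None] * obj
                                       + (1.0 - mask[..., None]) * region)
    return out
```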
Pattern Recognition 133 2023 (108998)
http://hdl.handle.net/10347/29451
10.1016/j.patcog.2022.108998
0031-3203
Small object detection
Data augmentation
Generative adversarial network
A full data augmentation pipeline for small object detection based on generative adversarial networks
oai:minerva.usc.es:10347/17693 (2020-07-16T10:59:35Z)
García González, Marcos
author
Gamallo Otero, Pablo
author
2014
This work presents a coreference resolution system for person entities based on a multi-pass architecture which sequentially applies a set of independent modules, using an entity-centric approach. Several evaluations show that the system obtains promising results in different scenarios (71% and 81% CoNLL F1). Furthermore, the impact of coreference resolution on information extraction was analyzed by applying an open information extraction system after the coreference resolution tool. The results of this test indicate that information extraction improves in both recall and precision. The evaluations were carried out in Spanish, Portuguese and Galician, and all the resources and tools are freely distributed
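The multi-pass, entity-centric idea can be sketched in a heavily simplified form: linking rules are ordered from most to least precise, and mentions are linked to entity clusters rather than to individual mentions. The sketch below uses only two toy string-matching rules (the real system applies several independent linguistic modules in sequence; the function and rule names are hypothetical):

```python
def resolve_person_mentions(mentions):
    """Entity-centric sieve: for each mention, try the most precise linking
    rule first, then a more lenient one; unlinked mentions start new entities.
    mentions: person-name strings in document order; returns mention clusters."""
    def exact(m, entity):      # rule 1: identical full string
        return m in entity
    def token(m, entity):      # rule 2: mention equals a token (e.g. a surname)
        return any(m != e and m in e.split() for e in entity)

    entities = []              # each entity is a set of mention strings
    for m in mentions:
        for rule in (exact, token):
            ent = next((c for c in entities if rule(m, c)), None)
            if ent is not None:
                ent.add(m)
                break
        else:
            entities.append({m})
    return entities
```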
Garcia, M., & Gamallo, P. (2014). Entity-Centric Coreference Resolution of Person Entities for Open Information Extraction. Procesamiento Del Lenguaje Natural, 53, 25-32. Recuperado de http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/5049/2937
1135-5948
http://hdl.handle.net/10347/17693
1989-7553
Coreference
Anaphora
Open information extraction
Correferencia
Anáfora
Extracción de información abierta
Entity-Centric Coreference Resolution of Person Entities for Open Information Extraction
oai:minerva.usc.es:10347/17566 (2023-07-10T06:12:37Z)
Gamallo Otero, Pablo
author
García González, Marcos
author
2017
This paper presents LinguaKit, a multilingual suite of tools for linguistic analysis, extraction, annotation and correction. LinguaKit allows the user to perform different tasks such as lemmatization, PoS-tagging or syntactic parsing (among others), including applications for sentiment analysis (or opinion mining), extraction of multiword expressions, and conceptual annotation and entity linking to DBpedia. Most of the developed modules work in four linguistic varieties: Portuguese, Spanish, English, and Galician. The system is programmed in Perl, and it is freely available under a GPLv3 license
Gamallo, P., & Garcia, M. (2017). LinguaKit: uma ferramenta multilingue para a análise linguística e a extração de informação. Linguamática, 9(1), 19-28. https://doi.org/10.21814/lm.9.1.243
1647-0818
http://hdl.handle.net/10347/17566
10.21814/lm.9.1.243
Extração de informação
Tecnologia linguística
Information extraction
Linguistic technology
LinguaKit: uma ferramenta multilingue para a análise linguística e a extração de informação
oai:minerva.usc.es:10347/17791 (2023-07-10T06:17:30Z)
Piñeiro Guillén, Ángel
author
Sánchez Botana, Antía
author
Pardo Castro, Víctor
author
Pereiro López, Manuel
author
Baldomir Fernández, Daniel
author
Arias Rodríguez, Juan Enrique
author
2010
The series of V spinels A2+V2O4 (A = Cd, Mn, Zn, Mg) provides an opportunity to tune the V-V distance continuously in the frustrated pyrochlore lattice of the spinel. This system has been shown to approach the metallic state as the V-V distance is reduced. The proximity to the transition leads to a dimerized structure in ZnV2O4 caused by lattice instabilities. Another way to tune the V-V distance of this structure is to fix the A2+ cation (in our case, Zn) and apply pressure. We have analyzed the evolution of the electronic structure of the system in the dimerized state. Such a structure prevents the system from presenting a metallic phase at moderate pressures. We have also calculated the transport properties in a semiclassical approach based on Boltzmann transport theory. Our results support the validity of this structural distortion by providing a good fit to experimental measurements
Piñeiro, A., Botana, A., Pardo, V., Botana, J., Pereiro, M., Baldomir, D., & Arias, J. (2011). Effects of applied pressure in ZnV2O4 and evidences for a dimerized structure. Journal of Applied Physics, 109(7), 07E158. doi: 10.1063/1.3565410
0021-8979
http://hdl.handle.net/10347/17791
10.1063/1.3565410
1089-7550
Nonequilibrium statistical mechanics
Oxides
Band gap
Resistivity measurements
Pyrochlore
Dimerization
Thermoelectric effects
Transport properties
Bond length
Phase transitions
Effects of applied pressure in ZnV2O4 and evidences for a dimerized structure
oai:minerva.usc.es:10347/17705 (2023-07-10T06:16:47Z)
Canedo Rodríguez, Adrián
author
Álvarez Santos, Víctor
author
Vázquez Regueiro, Carlos
author
Pardo López, Xosé Manuel
author
Iglesias Rodríguez, Roberto
author
2012
Nowadays, deploying service robots and adapting their services to a new environment is a task that might require several days. This is an important problem for robotics in general, but especially when the goal is to bring robots into our everyday life. In this paper we present a multi-agent intelligent space, which consists of intelligent cameras and autonomous guide robots. The deployment of the system does not require expertise and can be done in a short period of time. The cameras detect situations requiring the robots' guiding services, inform the robots accordingly, and support the robots' navigation towards the goal areas, without the need for a map of the environment. An example of a situation requiring the robot guide service could be a group of people entering a museum. In this sense, we also present an adaptive person-following behaviour intended to be the basis of a route-learning process, necessary to offer the guide service
Canedo Rodríguez, A., Álvarez Santos, V., Vázquez Regueiro, C., Pardo López, X., & Iglesias Rodríguez, R. (2012). Multi-agent system for fast deployment of a guide robot in unknown environments. Journal of Physical Agents, 6(1), 31-41. doi:https://doi.org/10.14198/JoPha.2012.6.1.05
1888-0258
http://hdl.handle.net/10347/17705
10.14198/JoPha.2012.6.1.05
Guide robot
Multi-camera networks
Intelligent space
Person following
Feature weighting
Multi-agent system for fast deployment of a guide robot in unknown environments
oai:minerva.usc.es:10347/18372 (2023-07-10T06:16:48Z)
Alcalá Fernández, Jesús
author
Alonso Moral, José María
author
2015
Fuzzy systems have been used widely thanks to their ability to successfully solve a wide range of problems in different application fields. However, their replication and application require a high level of knowledge and experience. Furthermore, few researchers publish the software and/or source code associated with their proposals, which is a major obstacle to scientific progress in other disciplines and in industry. In recent years, a great deal of fuzzy system software has been developed to facilitate the use of fuzzy systems. Some software is distributed commercially, but most is available as free and open-source software, reducing such obstacles and providing many advantages: quicker detection of errors, innovative applications, faster adoption of fuzzy systems, etc. In this paper, we present an overview of freely available and open-source fuzzy systems software in order to provide a well-established framework that helps researchers to find existing proposals easily and to develop well-founded future work. To accomplish this, we propose a two-level taxonomy, and we describe the main contributions related to each field. Moreover, we provide a snapshot of the status of the publications in this field according to the ISI Web of Knowledge. Finally, some considerations regarding recent trends and potential research directions are presented
Jesús Alcalá-Fdez and José M. Alonso (2016) A Survey of Fuzzy Systems Software: Taxonomy, Current Research Trends and Prospects. IEEE Transactions on Fuzzy Systems, 24 (1), 40-56. Doi: 10.1109/TFUZZ.2015.2426212
1063-6706
http://hdl.handle.net/10347/18372
10.1109/TFUZZ.2015.2426212
Fuzzy logic
Fuzzy systems
Fuzzy systems software
Software for applications
Software engineering
Educational software
Open source software
A Survey of Fuzzy Systems Software: Taxonomy, Current Research Trends, and Prospects
oai:minerva.usc.es:10347/17696 (2022-11-28T13:48:46Z)
Gamallo Otero, Pablo
author
Pichel Campos, Juan Carlos
author
García González, Marcos
author
Abuín Mosquera, José Manuel
author
Fernández Pena, Anselmo Tomás
author
2014
This article describes a suite of linguistic modules for the Spanish language based on a pipeline architecture, which contains tasks for PoS tagging and Named Entity Recognition and Classification (NERC). We have applied run-time parallelization techniques in a Big Data environment in order to make the suite of modules more efficient and scalable, and thereby to reduce computation time in a significant way, so that we can address problems at Web scale. The linguistic modules have been developed using basic NLP techniques in order to easily integrate them in distributed computing environments. The qualitative performance of the modules is close to the state of the art
Gamallo, P., Pichel, J., García, M., Abuín, J., & Fernández Pena, T. (2014). Análisis morfosintáctico y clasificación de entidades nombradas en un entorno Big Data. Procesamiento Del Lenguaje Natural, 53, 17-24. Recuperado de http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/5046/2934
1135-5948
http://hdl.handle.net/10347/17696
1989-7553
Análisis morfosintáctico
Reconocimiento y clasificación de entidades nombradas
Big Data
Computación paralela
PoS tagging
Named Entity Recognition
Parallel computing
Análisis morfosintáctico y clasificación de entidades nombradas en un entorno Big Data
oai:minerva.usc.es:10347/26777 (2022-11-23T12:09:38Z)
Vaquero Otal, Lorenzo
author
Brea Sánchez, Víctor Manuel
author
Mucientes Molina, Manuel Felipe
author
2022
Most video analytics applications rely on object detectors to localize objects in frames. However, when real-time is a requirement, running the detector at all the frames is usually not possible. This is somewhat circumvented by instantiating visual object trackers between detector calls, but this does not scale with the number of objects. To tackle this problem, we present SiamMT, a new deep learning multiple visual object tracking solution that applies single-object tracking principles to multiple arbitrary objects in real-time. To achieve this, SiamMT reuses feature computations, implements a novel crop-and-resize operator, and defines a new and efficient pairwise similarity operator. SiamMT naturally scales up to several dozens of targets, reaching 25 fps with 122 simultaneous objects for VGA videos, or up to 100 simultaneous objects in HD720 video. SiamMT has been validated on five large real-time benchmarks, achieving leading performance against current state-of-the-art trackers
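The "efficient pairwise similarity operator" of the abstract correlates each exemplar only with its own search area, channel by channel. A naive NumPy sketch of that idea (the real operator is a batched GPU kernel inside the network; the shapes and function names here are illustrative):

```python
import numpy as np

def depthwise_xcorr(search, exemplar):
    """Depthwise cross-correlation: each exemplar channel is slid over the
    matching channel of the search-area features.
    search: (C, H, W), exemplar: (C, h, w) -> (C, H-h+1, W-w+1) response."""
    C, H, W = search.shape
    _, h, w = exemplar.shape
    resp = np.empty((C, H - h + 1, W - w + 1))
    for c in range(C):
        for i in range(H - h + 1):
            for j in range(W - w + 1):
                resp[c, i, j] = np.sum(search[c, i:i + h, j:j + w] * exemplar[c])
    return resp

def pairwise_similarity(searches, exemplars):
    """Pairwise operator: exemplar k is matched only against search area k;
    channel responses are then collapsed into one map per target."""
    return [depthwise_xcorr(s, e).sum(axis=0) for s, e in zip(searches, exemplars)]
```

The response map peaks where the search features best match the exemplar, which is what lets a Siamese tracker localize each target without running a detector every frame.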
Pattern Recognition 2022, 121: 108205. https://doi.org/10.1016/j.patcog.2021.108205
0031-3203
http://hdl.handle.net/10347/26777
10.1016/j.patcog.2021.108205
Multiple visual object tracking
Motion estimation
Deep learning
Siamese networks
Tracking more than 100 arbitrary objects at 25 FPS through deep learning
oai:minerva.usc.es:10347/21245 (2023-07-10T06:17:11Z)
Suárez Garea, Jorge Alberto
author
Blanco Heras, Dora
author
Argüello Pedreira, Francisco Santiago
author
2019
The use of Convolutional Neural Networks (CNNs) to solve Domain Adaptation (DA) image classification problems in the context of remote sensing has proven to provide good results, but at a high computational cost. To avoid this problem, a deep learning network for DA in remote sensing hyperspectral images called TCANet is proposed. Like a standard CNN, TCANet consists of several stages built from convolutional filters that operate on patches of the hyperspectral image. Unlike in a standard CNN, the filter coefficients are obtained through Transfer Component Analysis (TCA). This approach has two advantages: first, TCANet does not require training based on backpropagation, since TCA is itself a learning method that obtains the filter coefficients directly from the input data. Second, DA is performed on the fly since TCA, in addition to performing dimensionality reduction, obtains components that minimize the difference in the distributions of the data in the domains corresponding to the source and target images. To build an operating scheme, TCANet includes an initial stage that exploits the spatial information by providing patches around each sample as input data to the network. An output stage performing feature extraction, which introduces sufficient invariance and robustness in the final features, is also included. Since TCA is sensitive to normalization, an unsupervised domain-shift minimization algorithm based on conditional correlation alignment (CCA) is optionally applied beforehand to reduce the difference between the source and target domains. The results of a classification scheme based on CCA and TCANet show that the proposed DA technique outperforms other more complex DA techniques
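The correlation-alignment preprocessing step matches the second-order statistics of the two domains. As an illustration of that family of methods, here is plain (unconditional) correlation alignment: whiten the source features with their own covariance, then re-colour them with the target covariance (the paper's CCA variant conditions the alignment; the mean shift below is an extra illustrative touch, and all names are assumptions):

```python
import numpy as np

def coral(source, target, eps=1e-6):
    """Correlation alignment: whiten the source features with their own
    covariance, then re-colour them with the target covariance so that the
    second-order statistics of both domains match. Means are also aligned
    here for illustration. source: (n_s, d), target: (n_t, d)."""
    d = source.shape[1]
    cs = np.cov(source, rowvar=False) + eps * np.eye(d)
    ct = np.cov(target, rowvar=False) + eps * np.eye(d)

    def mat_pow(m, p):  # symmetric matrix power via eigendecomposition
        vals, vecs = np.linalg.eigh(m)
        return (vecs * np.clip(vals, eps, None) ** p) @ vecs.T

    centred = source - source.mean(axis=0)
    aligned = centred @ mat_pow(cs, -0.5) @ mat_pow(ct, 0.5)
    return aligned + target.mean(axis=0)
```

After this step a classifier trained on the aligned source features sees inputs whose covariance structure matches the target image, which is the shift-reduction effect the abstract describes.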
S. Garea, A.; Heras, D.B.; Argüello, F. TCANet for Domain Adaptation of Hyperspectral Images. Remote Sens. 2019, 11, 2289
http://hdl.handle.net/10347/21245
10.3390/rs11192289
2072-4292
Domain adaptation
TCA
Hyperspectral
Correlation alignment
Classification
TCANet for Domain Adaptation of Hyperspectral Images
oai:minerva.usc.es:10347/27240 (2021-12-18T03:02:57Z)
Alkharabsheh, Khalid
author
Alawadi, Sadi
author
Crespo González Carvajal, Yania
author
Manso-Martínez, Mª Esperanza
author
Taboada González, José Ángel
author
2021
The automatic detection of Design Smells has evolved in parallel with the evolution of automatic refactoring tools. There has been a huge rise in research activity on Design Smell detection from 2010 to the present. However, the adoption of Design Smell detection in real software development practice is not comparable to the adoption of automatic refactoring tools. On the assumption that what makes the difference is the objectiveness of a refactoring operation as opposed to the subjectivity in the definition and identification of Design Smells, this paper empirically studies the lack of agreement between different evaluators when detecting Design Smells. To do so, a series of experiments and studies were designed and conducted to analyse the concordance in Design Smell detection of different persons and tools, including a comparison between them. This work focuses on two well-known Design Smells: God Class and Feature Envy. Concordance analysis is based on the Kappa statistic for inter-rater agreement (particularly Kappa-Fleiss). The results obtained show that there is no agreement in detection in general and that, in the cases where a certain agreement appears, it is only a fair or poor degree of agreement according to a Kappa-Fleiss interpretation scale. This seems to confirm that there is a subjective component which makes raters evaluate the presence of Design Smells differently. The study also raises the question of a lack of training and experience regarding Design Smells
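The Fleiss kappa statistic used for the concordance analysis has a compact closed form over a table of per-item category counts. A minimal sketch (the input table here is toy data, not the study's God Class / Feature Envy ratings):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for m raters assigning n items to k categories.
    ratings: n x k table of counts; every row must sum to the number of
    raters m. Returns 1 for perfect agreement, <= 0 for chance level or worse."""
    n, k = len(ratings), len(ratings[0])
    m = sum(ratings[0])
    # observed agreement, averaged over items
    p_bar = sum(sum(c * c for c in row) - m for row in ratings) / (n * m * (m - 1))
    # chance agreement from the marginal category proportions
    p_e = sum((sum(row[j] for row in ratings) / (n * m)) ** 2 for j in range(k))
    return (p_bar - p_e) / (1.0 - p_e)
```

On an interpretation scale such as the one the study cites, values around 0.2 to 0.4 count as "fair" agreement, which is the range the paper reports for smell detection.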
Alkharabsheh, K., Alawadi, S., Crespo, Y., Manso, M. E., & González, J. A. T. (2021). Analysing agreement among different evaluators in god class and feature envy detection. IEEE Access, 9, 145191-145211. doi:10.1109/ACCESS.2021.3123123
2169-3536
http://hdl.handle.net/10347/27240
Tools
Software
Codes
Feature extraction
Maintenance engineering
Licenses
Formal concept analysis
Design smell
Survey
Empirical study
Experiment
Inter-rater agreement
Kappa-Fleiss
Analysing agreement among different evaluators in god class and feature envy detection
oai:minerva.usc.es:10347/15959 (2020-11-11T12:10:39Z)
Abuín Mosquera, José Manuel
author
Pichel Campos, Juan Carlos
author
Fernández Pena, Anselmo Tomás
author
Amigo Lechuga, Jorge
author
2016-05-16
Next-generation sequencing (NGS) technologies have led to a huge amount of genomic data that need to be analyzed and interpreted. This fact has a huge impact on the DNA sequence alignment process, which nowadays requires the mapping of billions of small DNA sequences onto a reference genome. In this way, sequence alignment remains the most time-consuming stage in the sequence analysis workflow. To deal with this issue, state-of-the-art aligners take advantage of parallelization strategies. However, the existing solutions show limited scalability and have a complex implementation. In this work we introduce SparkBWA, a new tool that exploits the capabilities of a big data technology such as Spark to boost the performance of one of the most widely adopted aligners, the Burrows-Wheeler Aligner (BWA). The design of SparkBWA uses two independent software layers in such a way that no modifications to the original BWA source code are required, which ensures its compatibility with any BWA version (future or legacy). SparkBWA is evaluated in different scenarios, showing notable results in terms of performance and scalability. A comparison to other parallel BWA-based aligners validates the benefits of our approach. Finally, an intuitive and flexible API is provided to NGS professionals in order to facilitate the acceptance and adoption of the new tool. The source code of the software described in this paper is publicly available at https://github.com/citiususc/SparkBWA, with a GPL3 license
Abuín JM, Pichel JC, Pena TF, Amigo J (2016) SparkBWA: Speeding Up the Alignment of High-Throughput DNA Sequencing Data. PLOS ONE 11(5): e0155461
1932-6203
http://hdl.handle.net/10347/15959
10.1371/journal.pone.0155461
SparkBWA: Speeding Up the Alignment of High-Throughput DNA Sequencing Data
oai:minerva.usc.es:10347/29999 2023-07-10T06:11:04Z
Suárez Garea, Jorge Alberto
author
Blanco Heras, Dora
author
Argüello Pedreira, Francisco Santiago
author
Demir, Begüm
author
2022
Domain Adaptation (DA) is a technique that aims at extracting information from a labeled remote sensing image to allow classifying a different image obtained by the same sensor but at a different geographical location. This is a very complex problem from the computational point of view, especially due to the very high resolution of multispectral images. TCANet is a deep learning neural network for DA classification problems that has proven very accurate at solving them. TCANet consists of several stages based on the application of convolutional filters obtained through Transfer Component Analysis (TCA) computed over the input images. It does not require backpropagation training, in contrast to the usual CNN-based networks, as the convolutional filters are directly computed from the TCA transform applied over the training samples. In this paper, a hybrid parallel TCA-based domain adaptation technique for the classification of very high-resolution multispectral images is presented. It is designed for efficient execution on a multi-node computer by using Message Passing Interface (MPI), exploiting the available Graphics Processing Units (GPUs), and making efficient use of each multicore node by using Open Multi-Processing (OpenMP). As a result, a DA technique that is accurate from the point of view of classification and achieves high speedup values over the sequential version is obtained, increasing the applicability of the technique to real problems
Garea, A.S., Heras, D.B., Argüello, F. et al. A hybrid CUDA, OpenMP, and MPI parallel TCA-based domain adaptation for classification of very high-resolution remote sensing images. J Supercomput (2022). https://doi.org/10.1007/s11227-022-04961-y
0920-8542
http://hdl.handle.net/10347/29999
10.1007/s11227-022-04961-y
1573-0484
CUDA
OpenMP
MPI
GPU
Multicore
Domain adaptation
Feature extraction
Remote sensing
Multispectral
A hybrid CUDA, OpenMP, and MPI parallel TCA-based domain adaptation for classification of very high-resolution remote sensing images
oai:minerva.usc.es:10347/17877 2023-07-10T06:17:25Z
Fernández Fabeiro, Jorge
author
Ordóñez Iglesias, Álvaro
author
González Escribano, Arturo
author
Blanco Heras, Dora
author
2018
Hyperspectral image registration is a relevant task for real-time applications like environmental disaster management or search and rescue scenarios. Traditional algorithms were not really devoted to real-time performance, even when ported to GPUs or other parallel devices. Thus, the HYFMGPU algorithm arose to fill this gap. Nevertheless, as sensors are expected to evolve and generate images with finer resolutions and wider wavelength ranges, a multi-GPU implementation of this algorithm seems necessary in the near future. This work presents a multi-device MPI + CUDA implementation of the HYFMGPU algorithm that distributes all its stages among several GPUs. This version has been validated on 5 different real hyperspectral images, with sizes from about 80 MB to nearly 2 GB, achieving speedups for the whole execution of the algorithm from 1.18× to 1.59× on 2 GPUs and from 1.26× to 2.58× on 4 GPUs. The parallelization efficiencies obtained are stable at around 86% and 78% for 2 and 4 GPUs, respectively, which proves the scalability of this multi-device version
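Parallel efficiency relates speedup to device count as efficiency = speedup / n_devices. As illustrative arithmetic only, the formula can be applied to the whole-execution speedups quoted above; note that the stable 86% and 78% efficiency figures are the authors' own measurements and need not coincide with these end-to-end ratios.

```python
# Illustrative arithmetic: parallel efficiency = speedup / n_devices,
# applied to the whole-execution speedups quoted in the abstract.

def parallel_efficiency(speedup, n_devices):
    return speedup / n_devices

eff_2gpu = [parallel_efficiency(s, 2) for s in (1.18, 1.59)]  # end-to-end, 2 GPUs
eff_4gpu = [parallel_efficiency(s, 4) for s in (1.26, 2.58)]  # end-to-end, 4 GPUs
```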
Fernández-Fabeiro, J., Ordóñez, Á., Gonzalez-Escribano, A., & Heras, D. (2018). A multi-device version of the HYFMGPU algorithm for hyperspectral scenes registration. The Journal Of Supercomputing. doi: 10.1007/s11227-018-2689-7
0920-8542
http://hdl.handle.net/10347/17877
10.1007/s11227-018-2689-7
1573-0484
Hyperspectral imaging
Image registration
Fourier transforms
Multi-GPU
CUDA
OpenMP
MPI
Remote sensing
A multi-device version of the HYFMGPU algorithm for hyperspectral scenes registration
oai:minerva.usc.es:10347/17718 2023-07-10T06:16:47Z
Lamas Rodríguez, Julián
author
Argüello Pedreira, Francisco Santiago
author
Blanco Heras, Dora
author
2013
The problem of visualizing large volumetric datasets is appealing for computation on the GPU. Nevertheless, the design of GPU volume rendering solutions must deal with the limited memory available on a graphics card. In this work, we present a system for multiresolution volume rendering that preprocesses the dataset by dividing it into bricks and generating a compressed version, applying different levels of wavelet-based compression. The compressed volume is then stored in GPU memory. During the subsequent texture-mapping visualization, each brick of the volume is decompressed and rendered at a resolution level that depends on its distance to the camera. This approach computes most of the tasks on the GPU, thus minimizing data transfers between CPU and GPU. We obtain competitive results for volumes of size in the range between 64 and 256
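The distance-based level-of-detail rule described here can be sketched simply: each brick gets a resolution level that grows (coarsens) as its distance to the camera grows. The thresholds below are invented for illustration; the actual system derives levels from its wavelet compression hierarchy.

```python
# Hypothetical sketch of distance-based LOD selection per brick:
# level 0 is full resolution, higher levels are coarser.
import math

def resolution_level(brick_center, camera, thresholds=(10.0, 25.0, 50.0)):
    """Return 0 (full resolution) .. len(thresholds) (coarsest)."""
    d = math.dist(brick_center, camera)
    level = 0
    for t in thresholds:
        if d > t:
            level += 1
    return level

camera = (0.0, 0.0, 0.0)
near = resolution_level((3.0, 4.0, 0.0), camera)    # distance 5  -> level 0
far = resolution_level((30.0, 40.0, 0.0), camera)   # distance 50 -> level 2
```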
Lamas-Rodríguez, J., Argüello, F., & Heras, D. (2014). MULTIRESOLUTION RENDERING BASED ON GPGPU COMPUTING. International Journal of Computing, 12(4), 298-307. Retrieved from http://computingonline.net/computing/article/view/609
1727-6209
http://hdl.handle.net/10347/17718
2312-5381
Compressed volume rendering
Texture mapping
Multiresolution rendering
Wavelet transform
Quantization
CUDA
OpenGL
Multiresolution rendering based on GPGPU computing
oai:minerva.usc.es:10347/17688 2023-07-10T06:16:06Z
Gamallo Otero, Pablo
author
2017-01-24
This article provides a preliminary semantic framework for Dependency Grammar in which lexical words are semantically defined as contextual distributions (sets of contexts) while syntactic dependencies are compositional operations on word distributions. More precisely, any syntactic dependency uses the contextual distribution of the dependent word to restrict the distribution of the head, and makes use of the contextual distribution of the head to restrict that of the dependent word. The interpretation of composite expressions and sentences, which are analyzed as a tree of binary dependencies, is performed by restricting the contexts of words dependency by dependency in a left-to-right incremental way. Consequently, the meaning of the whole composite expression or sentence is not a single representation, but a list of contextualized senses, namely the restricted distributions of its constituent (lexical) words. We report the results of two large-scale corpus-based experiments on two different natural language processing applications: paraphrasing and compositional translation
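The core operation of this framework lends itself to a toy illustration: word meanings as sets of contexts, with a dependency restricting each word's distribution by the other's. Set intersection below is a crude stand-in for the paper's restriction operation, and the vocabulary is invented.

```python
# Toy sketch: contextual distributions as sets of contexts; a
# dependency contextualizes a word by restricting its distribution
# to the contexts shared with the word it combines with.

distributions = {
    "bank": {"money", "loan", "water", "fishing"},
    "river": {"water", "fishing", "flow"},
    "deposit": {"money", "loan"},
}

def restrict(word, other):
    """Contextualized sense of `word` in a dependency with `other`."""
    return distributions[word] & distributions[other]

sense_river_bank = restrict("bank", "river")    # riverside reading
sense_money_bank = restrict("bank", "deposit")  # financial reading
```

The point mirrored here is that composition yields a list of contextualized senses (one restricted distribution per constituent word), not a single merged representation.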
Gamallo, P. (2017). The role of syntactic dependencies in compositional distributional semantics. Corpus Linguistics And Linguistic Theory, 13(2), 261-289. doi: 10.1515/cllt-2016-0038
1613-7027
http://hdl.handle.net/10347/17688
10.1515/cllt-2016-0038
1613-7035
Distributional similarity
Compositional semantics
Syntactic analysis
Dependencies
The role of syntactic dependencies in compositional distributional semantics
oai:minerva.usc.es:10347/23659 2020-11-12T03:00:26Z
Montes, Diego
author
Añel, Juan A.
author
Wallom, David C. H.
author
Uhe, Peter
author
Caderno, Pablo V.
author
Fernández Pena, Anselmo Tomás
author
2020
Cloud computing is a mature technology that has already shown benefits for a wide range of academic research domains that, in turn, utilize a wide range of application design models. In this paper, we discuss the use of cloud computing as a tool to improve the range of resources available for climate science, presenting the evaluation of two different climate models. Each was customized in a different way to run in public cloud computing environments (hereafter cloud computing) provided by three different public vendors: Amazon, Google and Microsoft. The adaptations and procedures necessary to run the models in these environments are described. The computational performance and cost of each model within this new type of environment are discussed, and an assessment is given in qualitative terms. Finally, we discuss how cloud computing can be used for geoscientific modelling, including issues related to the allocation of resources by funding bodies. We also discuss problems related to computing security, reliability and scientific reproducibility
Montes , D.; Añel , J.A.; Wallom , D.C.H.; Uhe , P.; Caderno, P.V.; Pena, T.F. Cloud Computing for Climate Modelling: Evaluation, Challenges and Benefits. Computers 2020, 9, 52
http://hdl.handle.net/10347/23659
10.3390/computers9020052
2073-431X
Climate model
Cloud computing
Supercomputer
Cloud Computing for Climate Modelling: Evaluation, Challenges and Benefits
oai:minerva.usc.es:10347/28783 2023-07-10T06:11:25Z
Bosquet Mera, Brais
author
Mucientes Molina, Manuel Felipe
author
Brea Sánchez, Víctor Manuel
author
2021
Object detection through convolutional neural networks is reaching unprecedented levels of precision. However, a detailed analysis of the results shows that the accuracy in the detection of small objects is still far from being satisfactory. A recent trend that will likely improve overall object detection success is to use spatial information operating alongside temporal video information. This paper introduces STDnet-ST, an end-to-end spatio-temporal convolutional neural network for small object detection in video. We define small as those objects under px, where the features become less distinctive. STDnet-ST is an architecture that detects small objects over time and correlates pairs of the top-ranked regions with the highest likelihood of containing those small objects. This permits linking the small objects across time as tubelets. Furthermore, we propose a procedure to dismiss unprofitable object links in order to provide high-quality tubelets, increasing the accuracy. STDnet-ST is evaluated on the publicly accessible USC-GRAD-STDdb, UAVDT and VisDrone2019-VID video datasets, where it achieves state-of-the-art results for small objects
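The tubelet idea (linking per-frame detections over time) can be sketched with a simple greedy matcher. This is not STDnet-ST's learned correlation: intersection-over-union matching and the 0.3 threshold are invented stand-ins for its region-pair scoring.

```python
# Sketch of linking per-frame boxes into tubelets by greedy IoU
# matching across consecutive frames (thresholds are illustrative).

def iou(a, b):
    """Intersection-over-union of boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def link_tubelets(frames, thr=0.3):
    """frames: list of per-frame box lists. Returns tubelets."""
    tubelets = [[box] for box in frames[0]]
    for boxes in frames[1:]:
        unused = list(boxes)
        for tube in tubelets:
            best = max(unused, key=lambda b: iou(tube[-1], b), default=None)
            if best is not None and iou(tube[-1], best) >= thr:
                tube.append(best)      # extend the tubelet in time
                unused.remove(best)
        tubelets.extend([b] for b in unused)  # unmatched boxes start new tubelets
    return tubelets

frames = [
    [(0, 0, 10, 10)],
    [(1, 1, 11, 11), (50, 50, 60, 60)],
]
tubes = link_tubelets(frames)
```

Dropping tubelets whose links score poorly corresponds to the paper's dismissal of unprofitable object links.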
Pattern Recognition 116 (2021) 107929
0031-3203
http://hdl.handle.net/10347/28783
10.1016/j.patcog.2021.107929
Small object detection
Spatio-temporal convolutional network
Object linking
STDnet-ST: Spatio-temporal ConvNet for small object detection
oai:minerva.usc.es:10347/23825 2023-07-10T06:17:10Z
Al-Matarneh Mohammad Ata, Sattam
author
Gamallo Otero, Pablo
author
2019
In this paper, we examine the performance of several classifiers in the process of searching for very negative opinions. More precisely, we conduct an empirical study that analyzes the influence of three types of linguistic features (n-grams, word embeddings, and polarity lexicons) and their combinations when they are used to feed different supervised machine learning classifiers: Naive Bayes (NB), Decision Tree (DT), and Support Vector Machine (SVM). The experiments we have carried out show that SVM clearly outperforms NB and DT on all datasets, taking into account all features individually as well as their combinations
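Of the three feature types compared, word n-grams are the simplest to illustrate. A minimal sketch of bag-of-n-gram extraction in plain Python (the review text is invented; the paper's actual pipeline and classifiers are not reproduced here):

```python
# Word uni- and bigram counts of the kind used to feed NB/DT/SVM.

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_features(text, max_n=2):
    """Bag of word n-grams up to length max_n, as a count dict."""
    tokens = text.lower().split()
    feats = {}
    for n in range(1, max_n + 1):
        for g in ngrams(tokens, n):
            key = " ".join(g)
            feats[key] = feats.get(key, 0) + 1
    return feats

feats = ngram_features("absolutely terrible service")
```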
Almatarneh, S.; Gamallo, P. Comparing Supervised Machine Learning Strategies and Linguistic Features to Search for Very Negative Opinions. Information 2019, 10, 16
http://hdl.handle.net/10347/23825
10.3390/info10010016
2078-2489
Sentiment analysis
Opinion mining
Linguistic features
Classification
Very negative opinions
Comparing Supervised Machine Learning Strategies and Linguistic Features to Search for Very Negative Opinions
oai:minerva.usc.es:10347/33258 2024-03-22T01:02:51Z
Jaklin, Marko
author
García Lesta, Daniel
author
López Martínez, Paula
author
Brea Sánchez, Víctor Manuel
author
2024
The on-chip extraction of dynamic information from a scene can be addressed with either frame-based CMOS vision, also called smart image sensors, or dynamic vision sensors, also known as event cameras. When implemented with a pinned photodiode (PPD) as a 4-transistor active pixel sensor (4T-APS), the former brings the benefits of low temporal noise and dark current, but lacks high dynamic range (HDR). The latter comes with the benefits of HDR and a fast event detection rate with low power consumption. The drawback is background activity noise, which requires additional hardware or algorithms to keep it low. This paper analyses the mismatch and noise of a global shutter 4T-APS implementation with local HDR through an overflow capacitor and correlated double sampling (CDS) to provide low-noise events through frame differencing. The aim is to narrow the gap with dynamic vision sensors in terms of event rate and dynamic range. We show that our solution would be competitive with event cameras in scenarios with slow-moving objects and a relatively wide dynamic range (85 dB).
Marko Jaklin, Daniel García-Lesta, Paula López, Victor M. Brea (2024). International Journal of Circuit Theory and Applications
0098-9886
http://hdl.handle.net/10347/33258
10.1002/cta.3925
1097-007X
CMOS vision sensors
Dynamic vision sensors
Event cameras
Frame difference
High dynamic range
Smart image sensors
Global shutter CMOS vision sensors and event cameras for on‐chip dynamic information
oai:minerva.usc.es:10347/17699 2023-07-10T06:16:12Z
García Lorenzo, Óscar
author
Fernández Pena, Anselmo Tomás
author
Cabaleiro Domínguez, José Carlos
author
Pichel Campos, Juan Carlos
author
Fernández Rivera, Francisco
author
2014
Today’s microprocessors include multicores that feature a diverse set of compute cores and onboard memory subsystems connected by complex communication networks and protocols. The analysis of the factors that affect performance in such complex systems is far from an easy task. In any case, it is clear that increasing data locality and affinity is one of the main challenges in reducing data access latency. As the number of cores increases, the influence of this issue on the performance of parallel codes becomes more and more important. Therefore, models to characterize the performance of such systems are broadly demanded. This paper shows the use of an extension of the well-known Roofline Model adapted to the main features of the memory hierarchy present in most current multicore systems. The Roofline Model was also extended to show the dynamic evolution of the execution of a given code. In order to reduce the overhead of gathering the information needed to obtain this dynamic Roofline Model, hardware counters present in most current microprocessors are used. To illustrate its use, two simple parallel vector operations, SAXPY and SDOT, were considered. Different access strides and initial locations of the vectors in memory modules were used to show the influence of different scenarios in terms of locality and affinity. The effect of thread migration was also considered. We conclude that the proposed Roofline Model is a useful tool to understand and characterise the behaviour of the execution of parallel codes in multicore systems
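The basic Roofline bound underlying the extended model is easy to state: attainable performance is the minimum of the machine's peak compute rate and its memory bandwidth times the code's operational intensity. A sketch with invented machine parameters, using SAXPY's intensity of 2 flops per 12 bytes moved (single precision: read x[i] and y[i], write y[i]):

```python
# Roofline bound: min(peak compute, bandwidth * operational intensity).

def roofline(peak_gflops, bw_gbs, oi):
    """Attainable GFLOP/s at operational intensity oi (flops/byte)."""
    return min(peak_gflops, bw_gbs * oi)

PEAK, BW = 100.0, 20.0   # hypothetical GFLOP/s and GB/s
saxpy_oi = 2 / 12        # y[i] = a*x[i] + y[i]: 2 flops, 12 bytes
bound = roofline(PEAK, BW, saxpy_oi)   # memory-bound, well below PEAK
```

Low-intensity kernels like SAXPY and SDOT sit on the sloped (bandwidth) part of the roof, which is why locality, affinity, and thread placement dominate their measured performance.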
García Lorenzo, O., Pena, T. F., Cabaleiro, J.C., Pichel, J.C. and Fernández Rivera, F. (2014). Using an extended Roofline Model to understand data and thread affinities on NUMA systems. Annals of Multicore and GPU Programming, v. 1, n. 1, pp. 37-48
2341-3158
http://hdl.handle.net/10347/17699
Using an extended Roofline Model to understand data and thread affinities on NUMA systems
oai:minerva.usc.es:10347/29162 2022-08-27T02:02:55Z
Villarroya Fernández, Sebastián
author
Baumann, Peter
author
2022
This paper provides an in-depth survey on the integration of machine learning and array databases. First, machine learning support in modern database management systems is introduced. From straightforward implementations of linear algebra operations in SQL to the machine learning capabilities of specialized database managers designed to process specific types of data, a number of different approaches are overviewed. Then, the paper covers the database features already implemented in current machine learning systems. Features such as rewriting, compression, and caching allow users to implement more efficient machine learning applications. The underlying linear algebra computations in some of the most used machine learning algorithms are studied in order to determine which linear algebra operations should be efficiently implemented by array databases. An exhaustive overview of array data and relevant array database managers is also provided. Those database features that have proven of special importance for the efficient execution of machine learning algorithms are analyzed in detail for each relevant array database management system. Finally, the current state of array database capabilities for machine learning implementation is shown through two example implementations, in Rasdaman and SciDB
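The "straightforward implementations of linear algebra operations in SQL" mentioned above can be made concrete with a small example: a sparse matrix product expressed as a join-group-aggregate over (row, col, value) triples. The tables and values are illustrative, run here in SQLite.

```python
# Matrix multiplication in SQL over coordinate-form (i, j, v) triples.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE A (i INT, j INT, v REAL);
    CREATE TABLE B (i INT, j INT, v REAL);
""")
# A = [[1, 2], [0, 3]], B = [[4, 0], [5, 6]] in coordinate form
con.executemany("INSERT INTO A VALUES (?,?,?)",
                [(0, 0, 1), (0, 1, 2), (1, 1, 3)])
con.executemany("INSERT INTO B VALUES (?,?,?)",
                [(0, 0, 4), (1, 0, 5), (1, 1, 6)])

# C = A @ B: join on the inner index, sum the products per (i, j) cell.
product = con.execute("""
    SELECT A.i, B.j, SUM(A.v * B.v) AS v
    FROM A JOIN B ON A.j = B.i
    GROUP BY A.i, B.j
    ORDER BY A.i, B.j
""").fetchall()
```

This works, but the survey's point is precisely that array databases can execute such operations far more efficiently than a generic relational join.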
Applied Intelligence (2022). https://doi.org/10.1007/s10489-022-03979-2
0924-669X
http://hdl.handle.net/10347/29162
10.1007/s10489-022-03979-2
1573-7497
Array data
Array database managers
Machine learning
Efficient array machine learning
A survey on machine learning in array databases
oai:minerva.usc.es:10347/21967 2023-07-10T06:12:25Z
Nagy, Daniel
author
Indalecio Fernández, Guillermo
author
García Loureiro, Antonio Jesús
author
Espiñeira Deus, Gabriel
author
Elmessary, Muhammad A.
author
Kalna, Karol
author
Seoane Iglesias, Natalia
author
2019
Variability of semiconductor devices is seriously limiting their performance at the nanoscale. The impact of variability can be accurately and effectively predicted by computer-aided simulations in order to aid future device designs. Quantum-corrected (QC) drift-diffusion (DD) simulations are usually employed to estimate the variability of state-of-the-art non-planar devices but require meticulous calibration. More accurate simulation methods, such as QC Monte Carlo (MC), are considered time consuming and elaborate. Therefore, we predict TiN metal gate work-function granularity (MGG) and line edge roughness (LER) induced variability on a 10-nm gate length gate-all-around Si nanowire FET and perform a rigorous comparison of the QC DD and MC results. In the case of MGG, we found that the QC DD predicted variability can differ by up to 20% from the QC MC predicted one. In the case of LER, we demonstrate that the QC DD can overestimate the variability produced by QC MC simulation by a significant error of up to 56%. This error between the simulation methods varies with the root mean square (RMS) height and the maximum source/drain n-type doping. Our results indicate that the aforementioned QC DD simulation technique yields inaccurate results for ON-current variability
http://hdl.handle.net/10347/21967
10.1109/ACCESS.2019.2892592
2169-3536
Drift-diffusion
Line edge roughness
Metal gate granularity
Monte Carlo
Quantum corrections
Nanowire FET
Drift-Diffusion Versus Monte Carlo Simulated ON-Current Variability in Nanowire FETs
oai:minerva.usc.es:10347/24650 2023-07-10T06:17:55Z
Losada Carril, David Enrique
author
Parapar, Javier
author
Barreiro, Álvaro
author
2019
In information retrieval evaluation, pooling is a well-known technique to extract a sample of documents to be assessed for relevance. Given the pooled documents, a number of studies have proposed different prioritization methods to adjudicate documents for judgment. These methods follow different strategies to reduce the assessment effort. However, there is no clear guidance on how many relevance judgments are required to create a reliable test collection. In this article we investigate and further develop methods to determine when to stop making relevance judgments. We propose a highly diversified set of stopping methods and provide a comprehensive analysis of the usefulness of the resulting test collections. Some of the stopping methods introduced here combine innovative estimates of recall with time series models used in financial trading. Experimental results on several representative collections show that some stopping methods can reduce the assessment effort by up to 95% and still produce a robust test collection. We demonstrate that the reduced set of judgments can be reliably employed to compare search systems using disparate effectiveness metrics such as Average Precision, NDCG, P@100, and Rank-Biased Precision. With all these measures, the correlations found between full-pool rankings and reduced-pool rankings are very high
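The general shape of a recall-based stopping method can be sketched simply: judge pooled documents in adjudication order and stop once the relevant items found reach a target fraction of an estimated total. This hypothetical rule is far cruder than the methods studied in the article (which estimate recall and use time series models), but it shows the mechanics.

```python
# Hypothetical recall-based stopping rule for pooled assessment.

def stop_index(judgments, est_total_relevant, target_recall=0.8):
    """judgments: booleans in adjudication order. Returns how many
    judgments are made before stopping (all of them if never reached)."""
    found = 0
    for k, is_relevant in enumerate(judgments, start=1):
        found += is_relevant
        if found >= target_recall * est_total_relevant:
            return k
    return len(judgments)

# 4 relevant docs among 8 pooled; 5 relevant estimated in total
pool = [True, False, True, True, False, True, False, False]
n_judged = stop_index(pool, est_total_relevant=5)   # stops after 6 of 8
```

The saving here (6 of 8 judgments) is modest; the article reports that well-designed rules cut up to 95% of the effort on real collections.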
Losada, D.E., Parapar, J. and Barreiro, A. (2019), When to stop making relevance judgments? A study of stopping methods for building information retrieval test collections. Journal of the Association for Information Science and Technology, 70: 49-60. https://doi.org/10.1002/asi.24077
http://hdl.handle.net/10347/24650
10.1002/asi.24077
2330-1643
When to stop making relevance judgments? A study of stopping methods for building information retrieval test collections
oai:minerva.usc.es:10347/18342 2023-07-10T06:16:53Z
Ferro Santiago, Esteban
author
Brea Sánchez, Víctor Manuel
author
López Martínez, Paula
author
Cabello Ferrer, Diego
author
2018-10-19
This paper presents a Power Management Unit (PMU) powered by a 1 mm² solar cell on the same substrate to raise the harvested voltage above 1.1 V. The on-chip solar cell and the PMU are fabricated in standard 0.18 μm CMOS technology, achieving a form factor of 1.575 mm². The PMU is able to start up from a harvested power of 2.38 nW without any external kick-off or control signal. The PMU features continuous, two-dimensional Maximum Power Point Tracking (MPPT) working in open-loop mode to handle a harvested power range from nW to μW, by modifying both the charge pump topology and the switching frequency. The MPPT is based on four voltage level detectors that define five working regions depending on the illumination, and on a self-tuning reference current for a fine adjustment of the switching frequency. The chip also includes an auxiliary charge pump to generate the voltage level necessary for the control circuit, implemented as an 8-stage Pelliconi charge pump with NMOS transistors in Pwell as diodes. A Dickson charge pump with transmission gates as switches and with variable gain and capacitance per stage is also designed as the main charge pump. Finally, two relaxation oscillators are implemented to drive both charge pumps. This paper is accompanied by a video file demonstrating the PMU operation by powering an off-chip NAND gate
Esteban Ferro, Víctor Manuel Brea, Paula López and Diego Cabello (2018) Micro-Energy Harvesting System including a PMU and a Solar Cell on the same Substrate with Cold Start-Up from 2.38 nW and Input Power Range up to 10 μW using Continuous MPPT. IEEE Transactions on Power Electronics, 63(1), 1-12. Doi: 10.1109/TPEL.2018.2877105
0885-8993
http://hdl.handle.net/10347/18342
10.1109/TPEL.2018.2877105
Energy harvesting
DC-DC power conversion
PMU
On-Chip Solar Cell
Analog MPPT
Micro-Energy Harvesting System including a PMU and a Solar Cell on the same Substrate with Cold Start-Up from 2.38 nW and Input Power Range up to 10μW using Continuous MPPT
oai:minerva.usc.es:10347/17716 2020-01-31T13:38:38Z
Barro Ameneiro, Senén
author
Fernández López, Sara
author
2016
Universities play a crucial role in the systems of innovation by transferring the results of R&D activities to society and industry. This contribution is even more important in the Ibero-American countries given that the other critical ‘player’ (i.e., the industry) exercises a less active role in the development of innovation compared to the OECD countries. The aim of this paper is to analyze the knowledge transfer activities of the Ibero-American Higher Education Systems over the period 2000-2010. Using the database by Barro (2015), this study provides an accurate diagnosis of the Ibero-American universities’ performance in knowledge transfer, suggesting a number of practical implications for university decision-makers
Barro, S. and Fernández, S. (2016). Universities’ Performance in Knowledge Transfer: An Analysis of the Ibero-American Region Over the Golden Decade. Journal of Innovation Management, 4, 2, pp. 16-29
2183-0606
http://hdl.handle.net/10347/17716
Ibero-America
Technology transfer
University
Patenting
R&D activities
R&D resources
Universities’ Performance in Knowledge Transfer: An Analysis of the Ibero-American Region Over the Golden Decade
oai:minerva.usc.es:10347/31235 2024-02-28T13:24:05Z
Velasco Benito, Gael
author
Sobrino Cerdeiriña, Alejandro
author
Bugarín-Diz, Alberto
author
2023-07-28
The aim of this paper is to propose a new approach for the automatic treatment of linguistic vagueness. Our motivation is the feeling that most existing approaches dealing with linguistic information are based on converting vague meaning into crisp meaning through conversions to precise measurements. As a result, existing approaches are adequate and easy to implement, but do not closely model the human thought process. To help alleviate this deficiency, we propose the use of linguistic relations to provide a natural language interface to an end user. We show a possible linguistic Prolog model based on an extension of the syntactic unification algorithm using synonymy and antonymy, as well as an extension of the resolution principle. Our approach does not aim to provide a well-founded formal semantics for such a linguistic Prolog, but a simple model supported by two experiments focused on the use of vague language, both executed in Spanish (an analysis of the data from the first experiment is also available in that language at [1]). Thus, the purpose of this paper is to contribute to the mechanization of approximate reasoning while being respectful of the semantics of the vague terms involved in it; i.e., by paying attention to how they are evaluated by linguistic users under experimentation
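The extension of syntactic unification by synonymy can be illustrated with a toy atom-level unifier: constants unify when equal or listed as synonyms, while variables bind as usual. The lexicon and the Python formulation are invented; the paper's linguistic Prolog model is considerably richer (antonymy, resolution).

```python
# Toy unification with synonymy: vague terms can match without
# exact equality. Variables are capitalized atoms (Prolog style).

SYNONYMS = {frozenset({"big", "large"}), frozenset({"fast", "quick"})}

def unify(a, b):
    """Unify two atoms. Returns a substitution dict on success
    ({} means success with no bindings), or None on failure."""
    if a[0].isupper():                # variable on the left
        return {a: b}
    if b[0].isupper():                # variable on the right
        return {b: a}
    if a == b or frozenset({a, b}) in SYNONYMS:
        return {}                     # equal, or synonyms
    return None

r1 = unify("big", "large")   # succeeds via synonymy
r2 = unify("X", "fast")      # variable binding
r3 = unify("big", "small")   # fails: neither equal nor synonyms
```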
International Journal of Approximate Reasoning 161 (2023) 108995
0888-613X
http://hdl.handle.net/10347/31235
10.1016/j.ijar.2023.108995
Linguistic vagueness
Computing with words
Approximate reasoning
An empirically supported approach to the treatment of imprecision in vague reasoning
oai:minerva.usc.es:10347/21144 2020-04-04T02:01:05Z
Méndez Fernández, Roi
author
Castelló Mayo, Enrique
author
Ríos Viqueira, José Ramón
author
Flores González, Julián
author
2019
A virtual TV set combines actors and objects with computer-generated virtual environments in real time. Nowadays, this technology is widely used in television broadcasts and cinema productions. A virtual TV set consists of three main elements: the stage, the computer system and the chroma-keyer. The stage is composed of a monochrome cyclorama (the background) in front of which actors and objects are located (the foreground). The computer system generates the virtual elements that will form the virtual environment. The chroma-keyer combines the elements in the foreground with the computer-generated environments by erasing the monochrome background and insetting the synthetic elements using the chroma-keying technique. In order to ease the background removal, the cyclorama illumination must be diffuse and homogeneous, avoiding the hue differences introduced by shadows, shines and over-lighted areas. The analysis of this illumination is usually performed manually by an expert using a photometer, which makes the process slow, tedious and dependent on the experience of the operator. In this paper, a new calibration process is presented that allows non-experts to check and improve the homogeneity of a cyclorama’s illumination using custom software that provides both visual information and statistical data. This calibration process segments a cyclorama image into regions of similar luminance and calculates the centroid of each of them. Statistical studies of the variation in the size of the regions and the position of the centroids are the key tools used to determine the homogeneity of the cyclorama lighting.
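The two measurements the calibration relies on (luminance regions and their centroids) can be sketched on a tiny invented image. Binning by fixed-width luminance ranges below is a stand-in for the paper's actual segmentation.

```python
# Group pixels into luminance bins and compute each region's size
# and centroid (row, col); values and bin width are illustrative.

def segment_and_centroids(image, bin_size=50):
    """image: 2D list of luminance values. Returns
    {bin: (n_pixels, centroid_row, centroid_col)}."""
    regions = {}
    for r, row in enumerate(image):
        for c, lum in enumerate(row):
            b = lum // bin_size
            n, sr, sc = regions.get(b, (0, 0, 0))
            regions[b] = (n + 1, sr + r, sc + c)
    return {b: (n, sr / n, sc / n) for b, (n, sr, sc) in regions.items()}

image = [[100, 100, 180],
         [100, 100, 180]]
stats = segment_and_centroids(image)   # two regions: bins 2 and 3
```

In the calibration, a homogeneously lit cyclorama shows one dominant region with a stable centroid; drifting centroids or shrinking regions flag shadows or over-lighted areas.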
Méndez, R.; Castelló, E.; Ríos Viqueira, J.R.; Flores, J. A New Calibration Process for a Homogeneous Cyclorama Illumination in Virtual TV Sets. Appl. Sci. 2019, 9, 2020
http://hdl.handle.net/10347/21144
10.3390/app9102020
2076-3417
Chroma-keying
Cyclorama
Illumination
Virtual TV set
Mixed reality
A New Calibration Process for a Homogeneous Cyclorama Illumination in Virtual TV Sets
oai:minerva.usc.es:10347/32932 2024-03-21T09:21:43Z
Vicente García, Laura
author
Pereira Rial, Óscar
author
López Martínez, Paula
author
2023
Maximizing the power transferred to the load is a key feature of any energy harvesting system. Contrary to traditional approaches, this paper mathematically demonstrates that performing maximum power point tracking on the power delivered to the load, instead of on the photogenerated power, allows harvesting up to 25% more power because it imposes less demanding operating conditions. A circuit implementation of a system that successfully maximizes its output by exclusively taking measurements of the output voltage is designed and demonstrated using a 180 nm commercial CMOS process. The system operates in the μW range and achieves a peak power conversion efficiency of 79.04% at 30.74 μW output
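The idea of tracking the output side can be sketched with the textbook perturb-and-observe loop applied to output power: step the operating point, keep the direction if output power rises, reverse otherwise. The concave power curve below is an invented stand-in for the real converter, not the paper's circuit or mathematical model.

```python
# Perturb-and-observe MPPT on an invented output power curve with
# its maximum (0.25, made up) at v = 0.6 V.

def output_power(v):
    return -(v - 0.6) ** 2 + 0.25

def perturb_and_observe(v=0.2, step=0.02, iters=100):
    p_prev, direction = output_power(v), 1
    for _ in range(iters):
        v += direction * step
        p = output_power(v)
        if p < p_prev:              # power dropped: reverse perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()       # settles oscillating near 0.6 V
```

The paper's contribution is *where* the power is measured, not the tracking loop itself: observing only the output voltage lets the system maximize delivered rather than photogenerated power.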
AEU - International Journal of Electronics and Communications, Volume 172, 2023, 154956
1434-8411
http://hdl.handle.net/10347/32932
10.1016/j.aeue.2023.154956
Energy harvesting
MPPT
DC–DC converters
Maximum output power point tracking for photovoltaic energy harvesting systems: Mathematical model and circuit implementation
oai:minerva.usc.es:10347/17792 2023-07-10T06:16:07Z
Fernández Delgado, Manuel
author
Cernadas García, Eva
author
Barro Ameneiro, Senén
author
Amorim, Dinani Gomes
author
2014
We evaluate 179 classifiers arising from 17 families (discriminant analysis, Bayesian, neural networks, support vector machines, decision trees, rule-based classifiers, boosting, bagging, stacking, random forests and other ensembles, generalized linear models, nearest-neighbors, partial least squares and principal component regression, logistic and multinomial regression, multiple adaptive regression splines and other methods), implemented in Weka, R (with and without the caret package), C and Matlab, including all the relevant classifiers available today. We use 121 data sets, which represent the whole UCI database (excluding the large-scale problems) and other real problems of our own, in order to achieve significant conclusions about classifier behavior that do not depend on the data set collection. The classifiers most likely to be the best are the random forest (RF) versions, the best of which (implemented in R and accessed via caret) achieves 94.1% of the maximum accuracy, exceeding 90% in 84.3% of the data sets. However, the difference is not statistically significant with respect to the second best, the SVM with Gaussian kernel implemented in C using LibSVM, which achieves 92.3% of the maximum accuracy. A few models are clearly better than the remaining ones: random forest, SVM with Gaussian and polynomial kernels, extreme learning machine with Gaussian kernel, C5.0 and avNNet (a committee of multi-layer perceptrons implemented in R with the caret package). The random forest is clearly the best family of classifiers (3 of the 5 best classifiers are RF), followed by SVM (4 classifiers in the top 10), neural networks and boosting ensembles (5 and 3 members in the top 20, respectively)
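As a rough illustration of this kind of comparison (not a reproduction of the study, which spans 121 data sets, several platforms, and tuned hyperparameters), the two leading families can be cross-validated on a toy data set with scikit-learn; the specific settings below are our own choices:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
scores = {
    # The two families the paper ranks highest, with near-default settings
    "random_forest": cross_val_score(
        RandomForestClassifier(n_estimators=100, random_state=0), X, y, cv=5
    ).mean(),
    "svm_gaussian": cross_val_score(
        SVC(kernel="rbf", gamma="scale"), X, y, cv=5
    ).mean(),
}
```

On any single easy data set both families score similarly high; the paper's point is that their ranking only becomes meaningful when averaged over a large, diverse collection of problems.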
Fernández-Delgado, M., Cernadas, E., Barro, S. & Amorim, D. (2014). Do we Need Hundreds of Classifiers to Solve Real World Classification Problems?, JMLR, 15, 3133−3181
1532-4435
http://hdl.handle.net/10347/17792
10.1117/1.JRS.11.015020
1533-7928
Classification
UCI database
Random forest
Support vector machine
Neural networks
Decision trees
Ensembles
Rule-based classifiers
Discriminant analysis
Bayesian classifiers
Generalized linear models
Partial least squares and principal component regression
Multiple adaptive regression splines
Nearest-neighbors
Logistic and multinomial regression
Do we need hundreds of classifiers to solve real world classification problems?
oai:minerva.usc.es:10347/26717 2023-07-10T06:11:04Z
Argüello Pedreira, Francisco Santiago
author
Blanco Heras, Dora
author
Suárez Garea, Jorge Alberto
author
Quesada Barriuso, Pablo
author
2021
Watershed management is the study of the relevant characteristics of a watershed aimed at the sustainable use and management of forests, land, and water. Watersheds can be threatened by deforestation, uncontrolled logging, changes in farming systems, overgrazing, road and track construction, pollution, and invasion of exotic plants. This article describes a procedure to automatically monitor the river basins of Galicia, Spain, using five-band multispectral images taken by an unmanned aerial vehicle and several image processing algorithms. The objective is to determine the state of the vegetation, especially the identification of areas occupied by invasive species, as well as the detection of man-made structures that occupy the river basin. Since the territory to be studied occupies extensive areas and the resulting images are large, techniques and algorithms have been selected for fast execution and efficient use of computational resources. These techniques include superpixel segmentation and the use of advanced texture methods. For each of the stages of the method (segmentation, texture codebook generation, feature extraction, and classification), different algorithms have been evaluated in terms of speed and accuracy for the identification of vegetation and of natural and artificial structures on the Galician riversides. The experimental results show that the proposed approach can achieve this goal with speed and precision
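The per-superpixel feature-extraction stage can be sketched as follows. This is a minimal illustration under our own assumptions (function name, mean-per-band features); the article's actual pipeline uses texture codebooks, and the superpixel label map would come from a segmentation algorithm such as SLIC rather than being hand-built:

```python
import numpy as np

def superpixel_band_means(image, labels):
    """Mean of every spectral band inside every superpixel.

    image:  (H, W, B) multispectral cube.
    labels: (H, W) integer superpixel map (e.g. from a SLIC-style
            segmentation).  Returns an (S, B) feature matrix."""
    n_sp = int(labels.max()) + 1
    feats = np.zeros((n_sp, image.shape[2]))
    for s in range(n_sp):
        feats[s] = image[labels == s].mean(axis=0)
    return feats

# Two synthetic superpixels with distinct 5-band signatures
img = np.zeros((4, 4, 5))
img[:, :2] = 0.2            # left half
img[:, 2:] = 0.8            # right half
labs = np.zeros((4, 4), dtype=int)
labs[:, 2:] = 1
features = superpixel_band_means(img, labs)
```

The resulting (superpixels × bands) matrix is the kind of per-region descriptor that is then fed to a classifier such as SVM or random forest.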
Remote Sens. 2021, 13(14), 2687; https://doi.org/10.3390/rs13142687
http://hdl.handle.net/10347/26717
10.3390/rs13142687
2072-4292
River basin
Watershed management
Habitat assessment
Invasive species
Galicia
Texture analysis
Vegetation classification
Watershed Monitoring in Galicia from UAV Multispectral Imagery Using Advanced Texture Methods
oai:minerva.usc.es:10347/26958 2023-07-10T06:11:38Z
Ordóñez Iglesias, Álvaro
author
Blanco Heras, Dora
author
Argüello Pedreira, Francisco Santiago
author
Demir, Begüm
author
2020
Image registration is a common task in remote sensing that consists of aligning different images of the same scene. It is a computationally expensive process, especially if high precision is required, the resolution is high, or the images consist of a large number of bands, as is the case for hyperspectral images. HSI-KAZE is a registration method specially adapted to hyperspectral images that is based on feature detection and exploits both the spatial and the spectral information available in those images. In this paper, an implementation of the HSI-KAZE registration algorithm on GPUs using CUDA is proposed. It detects keypoints based on non-linear diffusion filtering and is suitable for on-board processing of high-resolution hyperspectral images. The algorithm includes a band selection method based on entropy, the construction of a scale space through non-linear filtering, keypoint detection with position refinement, and keypoint descriptors with spatial and spectral parts. Several techniques have been applied to obtain optimum performance on the GPU
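The entropy-based band selection mentioned in the abstract can be sketched on the CPU side; the function names and the histogram bin count are our own assumptions, not HSI-KAZE's actual implementation:

```python
import numpy as np

def band_entropy(band, bins=64):
    """Shannon entropy of one band's intensity histogram."""
    hist, _ = np.histogram(band, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def select_bands(cube, k=3):
    """Rank the bands of an (H, W, B) cube by histogram entropy and
    keep the k most informative ones (more structure to match)."""
    scores = [band_entropy(cube[:, :, b]) for b in range(cube.shape[2])]
    return sorted(np.argsort(scores)[-k:].tolist())

rng = np.random.default_rng(0)
cube = np.zeros((32, 32, 4))
cube[:, :, 0] = 0.5                      # flat band: zero entropy
cube[:, :, 1:] = rng.random((32, 32, 3))
chosen = select_bands(cube)              # the flat band is discarded
```

Low-entropy bands carry little texture for keypoint detection, so discarding them before the expensive non-linear scale-space construction reduces the work shipped to the GPU.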
The Journal of Supercomputing (2020)76:9478–9492. DOI 10.1007/s11227-020-03214-0
0920-8542
http://hdl.handle.net/10347/26958
10.1007/s11227-020-03214-0
1573-0484
Hyperspectral data
Image registration
KAZE features
Remote sensing
CUDA
GPU
GPU Accelerated Registration of Hyperspectral Images Using KAZE Features
oai:minerva.usc.es:10347/17790 2020-11-06T11:49:20Z
Cabezas Sáinz, Pablo
author
Guerra Varela, Jorge
author
Carreira Nouche, María José
author
Mariscal Ávila, Javier
author
Roel Sánchez, María
author
Rubiolo Gaytán, Juan Andrés
author
Sciara, Andrés A.
author
Abal Posada, Miguel
author
Botana López, Luis Miguel
author
López López, Rafael
author
Sánchez Piñón, Laura Elena
author
2017
Background
Zebrafish (Danio rerio) is a model organism that has emerged as a tool for cancer research, cancer being the second most common cause of death in humans in the developed world after cardiovascular disease. Zebrafish is a useful model for xenotransplantation of human cancer cells and for in vivo toxicity studies of different chemotherapeutic compounds. Compared to the murine model, the zebrafish model is faster, can be screened using high-throughput methods, and has a lower maintenance cost, making it possible and affordable to create personalized therapies. While several methods for cell proliferation determination based on image acquisition and quantification have been developed, some drawbacks still remain. In the xenotransplantation technique, quantification of cellular proliferation in vivo is critical to standardize the process for future preclinical applications of the model.
Methods
This study improved the conditions of the xenotransplantation technique – quantification of cellular proliferation in vivo was performed through image processing with our ZFtool software, together with optimization of the incubation temperature, in order to standardize the process for future preclinical applications. ZFtool was developed to establish a base threshold that eliminates embryo auto-fluorescence and measures the area of marked cells (GFP) and the intensity of those cells to define a ‘proliferation index’.
Results
The analysis of tumor cell proliferation at different temperatures (34 °C and 36 °C), in comparison to in vitro cell proliferation, shows that the best proliferation rate is achieved, as expected, at 36 °C, a maintenance temperature not demonstrated until now. The mortality of the embryos remained between 5% and 15%. 5-Fluorouracil, dissolved in the incubation medium, was tested for 2 days in order to quantify the reduction of the injected tumor mass. In almost all of the embryos incubated at 36 °C with 5-Fluorouracil, there was a significant tumor cell reduction compared with the control group. This was not the case at 34 °C.
Conclusions
Our results demonstrate that the proliferation of the injected cells is better at 36 °C and that this temperature is the most suitable for testing chemotherapeutic drugs such as 5-Fluorouracil
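The threshold-and-measure idea behind the proliferation index can be sketched as follows. This is a toy version under our own assumptions (the mean-plus-two-sigma threshold rule and the area-times-excess-intensity weighting are ours), not ZFtool's actual algorithm:

```python
import numpy as np

def proliferation_index(gfp, control):
    """Threshold away embryo auto-fluorescence (estimated from a
    control image) and combine the GFP-marked area with its mean
    excess intensity into a single scalar index."""
    thr = control.mean() + 2 * control.std()   # assumed threshold rule
    mask = gfp > thr
    area = int(mask.sum())
    if area == 0:
        return 0.0
    return area * float(gfp[mask].mean() - thr)

bg = np.full((16, 16), 10.0)      # flat auto-fluorescence control
tumor = bg.copy()
tumor[4:8, 4:8] = 100.0           # bright GFP-marked cell patch
index = proliferation_index(tumor, bg)
```

Comparing the index at injection time and two days later, across treated and control embryos, gives the kind of relative proliferation measurement the study uses to compare the 34 °C and 36 °C conditions.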
Cabezas-Sainz, P., Guerra-Varela, J., Carreira, M., Mariscal, J., Roel, M., & Rubiolo, J. et al. (2018). Improving zebrafish embryo xenotransplantation conditions by increasing incubation temperature and establishing a proliferation index with ZFtool. BMC Cancer, 18(1). doi: 10.1186/s12885-017-3919-8
1471-2407
http://hdl.handle.net/10347/17790
10.1186/s12885-017-3919-8
Zebrafish
Xenograft
Cancer
5-fu
Proliferation
Temperature
ZFtool
Improving zebrafish embryo xenotransplantation conditions by increasing incubation temperature and establishing a proliferation index with ZFtool