Search Results (1,359)

Search Parameters:
Journal = Computation

25 pages, 1443 KiB  
Article
Predicting Urban Traffic Congestion with VANET Data
by Wilson Chango, Pamela Buñay, Juan Erazo, Pedro Aguilar, Jaime Sayago, Angel Flores and Geovanny Silva
Computation 2025, 13(4), 92; https://doi.org/10.3390/computation13040092 - 7 Apr 2025
Abstract
This study develops a comparison of neural network-based models for vehicular congestion prediction, with the aim of improving urban mobility and mitigating the negative effects associated with traffic, such as accidents and congestion. This research focuses on evaluating the effectiveness of different neural network architectures, specifically Transformer and LSTM, in order to achieve accurate and reliable predictions of vehicular congestion. To carry out this research, a rigorous methodology was employed that included a systematic literature review based on the PRISMA methodology, which allowed for the identification and synthesis of the most relevant advances in the field. Likewise, the Design Science Research (DSR) methodology was applied to guide the development and validation of the models, and the CRISP-DM (Cross-Industry Standard Process for Data Mining) methodology was used to structure the process, from understanding the problem to implementing the solutions. The dataset used in this study included key variables related to traffic, such as vehicle speed, vehicular flow, and weather conditions. These variables were processed and normalized to train and evaluate various neural network architectures, highlighting LSTM and Transformer networks. The results obtained demonstrated that the LSTM-based model outperformed the Transformer model in the task of congestion prediction. Specifically, the LSTM model achieved an accuracy of 0.9463, with additional metrics such as a loss of 0.21, an accuracy of 0.93, a precision of 0.29, a recall of 0.71, an F1-score of 0.42, an MSE of 0.07, and an RMSE of 0.26. In conclusion, this study demonstrates that the LSTM-based model is highly effective for predicting vehicular congestion, surpassing other architectures such as Transformer. The integration of this model into a simulation environment showed that real-time traffic information can significantly improve urban mobility management. These findings support the utility of neural network architectures in sustainable urban planning and intelligent traffic management, opening new perspectives for future research in this field. Full article
(This article belongs to the Section Computational Engineering)
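For readers who want a concrete starting point, the sketch below is a minimal illustration of the kind of LSTM classifier compared in this study; the window length, feature set (speed, flow, a weather index) and hyperparameters are assumptions, not the authors' configuration.

```python
# Minimal sketch, not the authors' code: an LSTM that maps a short window of
# traffic features to a congestion probability. Shapes and settings are assumed.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

timesteps, n_features = 12, 3   # e.g. 12 time steps of [speed, flow, weather index]
X = np.random.rand(256, timesteps, n_features).astype("float32")  # placeholder data
y = np.random.randint(0, 2, size=(256, 1))                        # 1 = congested

model = tf.keras.Sequential([
    layers.LSTM(64, input_shape=(timesteps, n_features)),
    layers.Dense(1, activation="sigmoid"),   # probability of congestion
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
```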
22 pages, 10018 KiB  
Article
Eye Care: Predicting Eye Diseases Using Deep Learning Based on Retinal Images
by Araek Tashkandi
Computation 2025, 13(4), 91; https://doi.org/10.3390/computation13040091 - 3 Apr 2025
Viewed by 52
Abstract
Eye illness detection is important, yet it can be difficult and error-prone. In order to effectively and promptly diagnose eye problems, doctors must use cutting-edge technologies. The goal of this research paper is to develop a sophisticated model that will help physicians detect different eye conditions early on. These conditions include age-related macular degeneration (AMD), diabetic retinopathy, cataracts, myopia, and glaucoma. Common eye conditions include cataracts, which cloud the lens and cause blurred vision, and glaucoma, which can cause vision loss due to damage to the optic nerve. The two conditions that could cause blindness if treatment is not received are age-related macular degeneration (AMD) and diabetic retinopathy, a side effect of diabetes that destroys the blood vessels in the retina. Problems such as myopic macular degeneration, glaucoma, and retinal detachment are also more likely to occur in people with high myopia, a severe form of nearsightedness typically defined as a refractive error of –5 diopters or higher. We intend to apply a user-friendly approach that will allow for faster and more efficient examinations. Our research attempts to streamline the eye examination procedure, making it simpler and more accessible than traditional hospital approaches. Our goal is to use deep learning and machine learning to develop an extremely accurate model that can assess medical images, such as eye retinal scans. This was accomplished by using a huge dataset to train the machine learning and deep learning model, as well as sophisticated image processing techniques to assist the algorithm in identifying patterns of various eye illnesses. Following training, we discovered that the CNN, VggNet, MobileNet, and hybrid Deep Learning models outperformed the SVM and Random Forest machine learning models in terms of accuracy, achieving above 98%. Therefore, our model could assist physicians in enhancing patient outcomes, raising survival rates, and creating more effective treatment plans for patients with these illnesses. Full article
(This article belongs to the Special Issue Computational Medical Image Analysis—2nd Edition)
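The sketch below illustrates one plausible way to set up the kind of transfer-learning classifier the abstract mentions (a MobileNet backbone over retinal images); the input size, class list and training details are assumptions rather than the paper's setup.

```python
# Hedged sketch: MobileNet transfer learning for multi-class retinal-image
# classification. The five classes and 224x224 input are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

num_classes = 5  # e.g. AMD, diabetic retinopathy, cataract, myopia, glaucoma
base = tf.keras.applications.MobileNet(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained features first

model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed to exist
```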
17 pages, 6320 KiB  
Article
Oscillation Flow of Viscous Electron Fluids in Conductors of Rectangular Cross-Section
by Andriy A. Avramenko, Igor V. Shevchuk, Nataliia P. Dmitrenko, Andriy I. Tyrinov, Yiliia Y. Kovetska and Andriy S. Kobzar
Computation 2025, 13(4), 90; https://doi.org/10.3390/computation13040090 - 1 Apr 2025
Viewed by 65
Abstract
The article presents results of an analytical and numerical modeling of electron fluid motion and heat generation in a rectangular conductor at an alternating electric potential. The analytical solution is based on the series expansion solution (Fourier method) and double series solution (method of eigenfunction decomposition). The numerical solution is based on the lattice Boltzmann method (LBM). An analytical solution for the electric current was obtained. This enables estimating the heat generation in the conductor and determining the influence of the parameters characterizing the conductor dimensions, the parameter M (phenomenological transport time describing momentum-nonconserving collisions), the Knudsen number (mean free path for momentum-nonconserving collisions) and the Sh number (frequency) on the heat generation rate as an electron flow passes through a conductor. Full article
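As a loose illustration of the double-series (eigenfunction) machinery the abstract refers to, the sketch below evaluates a classical double sine-series solution for steady pressure-driven flow in a rectangular cross-section; it does not reproduce the paper's electron-hydrodynamic equations, oscillation terms, or collision parameters.

```python
# Illustrative double sine-series solution of  -laplacian(u) = G/mu  with u = 0
# on the boundary of a rectangle, showing the eigenfunction-expansion technique.
import numpy as np

def duct_velocity(x, y, a=1.0, b=0.5, G_over_mu=1.0, nmax=50):
    """Series solution over the rectangle [0, a] x [0, b]; odd modes only."""
    u = np.zeros_like(x)
    for m in range(1, nmax + 1, 2):
        for n in range(1, nmax + 1, 2):
            lam = (m * np.pi / a) ** 2 + (n * np.pi / b) ** 2
            coef = 16.0 * G_over_mu / (np.pi ** 2 * m * n * lam)
            u += coef * np.sin(m * np.pi * x / a) * np.sin(n * np.pi * y / b)
    return u

xx, yy = np.meshgrid(np.linspace(0, 1.0, 41), np.linspace(0, 0.5, 21))
print(duct_velocity(xx, yy).max())   # peak velocity near the centre of the cross-section
```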
9 pages, 224 KiB  
Article
Invariance of Stationary Distributions of Exponential Networks with Prohibitions and Determination of Maximum Prohibitions
by Gurami Tsitsiashvili and Marina Osipova
Computation 2025, 13(4), 89; https://doi.org/10.3390/computation13040089 - 1 Apr 2025
Viewed by 12
Abstract
The paper considers queuing networks with prohibitions on transitions between network nodes that determine the protocol of their operation. In the graph of transient network intensities, a set of base vertices is allocated (proportional to the number of edges), and we raise the question of whether some subset of it can be deleted such that the stationary distribution of the Markov process describing the functioning of the network is preserved. In order for this condition to be fulfilled, it is sufficient that the set of vertices of the graph of transient intensities, after the removal of a subset of the base vertices, coincide with the set of states of the Markov process and that this graph be connected. It is proved that the ratio of the number of remaining base vertices to their total number n converges to one-half as n tends to infinity. In this paper, we are looking for graphs of transient intensities with a minimum (in some sense) set of edges for open and closed service networks. Full article
(This article belongs to the Section Computational Engineering)
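A hedged sketch of the sufficiency check described in the abstract, phrased with networkx: after deleting a candidate subset of base vertices, verify that every state of the Markov process is still present and that the reduced graph remains connected. The toy graph and labels are illustrative, not taken from the paper.

```python
# Sufficient condition from the abstract: after removal, the graph still covers
# all Markov-process states and stays connected. Graph construction is assumed.
import networkx as nx

def removal_preserves_distribution(G, states, base_subset):
    """Check the reduced graph against the abstract's sufficient condition."""
    H = G.copy()
    H.remove_nodes_from(base_subset)
    return set(states) <= set(H.nodes) and nx.is_connected(H)

# toy example: a cycle of four states with two auxiliary base vertices attached
G = nx.cycle_graph(["s1", "s2", "s3", "s4"])
G.add_edges_from([("s1", "b1"), ("s3", "b2")])
print(removal_preserves_distribution(G, ["s1", "s2", "s3", "s4"], ["b1", "b2"]))  # True
```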
15 pages, 766 KiB  
Article
MedMAE: A Self-Supervised Backbone for Medical Imaging Tasks
by Anubhav Gupta, Islam Osman, Mohamed S. Shehata, W. John Braun and Rebecca E. Feldman
Computation 2025, 13(4), 88; https://doi.org/10.3390/computation13040088 - 1 Apr 2025
Viewed by 61
Abstract
Medical imaging tasks are very challenging due to the lack of publicly available labeled datasets. Hence, it is difficult to achieve high performance with existing deep learning models as they require a massive labeled dataset to be trained effectively. An alternative solution is to use pre-trained models and fine-tune them using a medical imaging dataset. However, all existing models are pre-trained using natural images, which represent a different domain from that of medical imaging; this leads to poor performance due to domain shift. To overcome these problems, we propose a pre-trained backbone using a collected medical imaging dataset with a self-supervised learning tool called a masked autoencoder. This backbone can be used as a pre-trained model for any medical imaging task, as it is trained to learn a visual representation of different types of medical images. To evaluate the performance of the proposed backbone, we use four different medical imaging tasks. The results are compared with existing pre-trained models. These experiments show the superiority of our proposed backbone in medical imaging tasks. Full article
(This article belongs to the Special Issue Computational Medical Image Analysis—2nd Edition)
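The sketch below illustrates the masked-autoencoder idea behind the proposed backbone (an assumption-laden illustration, not the authors' MedMAE code): split an image into patches, hide a random 75% of them, and let an encoder/decoder reconstruct the hidden patches from the visible ones.

```python
# Sketch of MAE-style random patch masking; patch size and mask ratio are assumed.
import numpy as np

def mask_patches(image, patch=16, mask_ratio=0.75, rng=np.random.default_rng(0)):
    """Return visible patches, their indices, and the indices of masked patches."""
    h, w, c = image.shape
    patches = image.reshape(h // patch, patch, w // patch, patch, c)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * c)
    order = rng.permutation(patches.shape[0])
    n_visible = int(patches.shape[0] * (1.0 - mask_ratio))
    visible_idx, masked_idx = order[:n_visible], order[n_visible:]
    return patches[visible_idx], visible_idx, masked_idx

img = np.random.rand(224, 224, 3).astype("float32")   # stand-in for a medical image
visible, vis_idx, msk_idx = mask_patches(img)
print(visible.shape, len(msk_idx))                     # (49, 768) visible, 147 masked
# An encoder sees only `visible`; a light decoder reconstructs the masked patches,
# and the reconstruction loss is computed on the masked positions only.
```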
21 pages, 329 KiB  
Article
Subsequential Continuity in Neutrosophic Metric Space with Applications
by Vishal Gupta, Nitika Garg and Rahul Shukla
Computation 2025, 13(4), 87; https://doi.org/10.3390/computation13040087 - 25 Mar 2025
Viewed by 109
Abstract
This paper introduces two concepts, subcompatibility and subsequential continuity, which are, respectively, weaker than the existing concepts of occasionally weak compatibility and reciprocal continuity. These concepts are studied within the framework of neutrosophic metric spaces. Using these ideas, a common fixed point theorem is developed for a system involving four maps. Furthermore, the results are applied to solve the Volterra integral equation, demonstrating the practical use of these findings in neutrosophic metric spaces. Full article
(This article belongs to the Special Issue Nonlinear System Modelling and Control)
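For the Volterra application mentioned at the end of the abstract, the sketch below runs a plain successive-approximation (fixed-point) iteration for an equation of the form x(t) = f(t) + ∫_0^t K(t, s) x(s) ds; the kernel and forcing term are made-up test choices, not the paper's example.

```python
# Successive approximations for a Volterra integral equation; kernel and f are assumed.
import numpy as np

t = np.linspace(0.0, 1.0, 201)
dt = t[1] - t[0]
f = np.ones_like(t)                        # assumed forcing term f(t) = 1
K = lambda ti, s: 0.5 * np.ones_like(s)    # assumed kernel K(t, s) = 0.5

x = f.copy()
for _ in range(60):                        # Picard iteration x_{k+1} = T x_k
    x_new = f.copy()
    for i in range(1, len(t)):
        w = np.full(i + 1, dt)             # trapezoid-rule weights on [0, t_i]
        w[0] = w[-1] = dt / 2.0
        x_new[i] = f[i] + np.sum(w * K(t[i], t[:i + 1]) * x[:i + 1])
    if np.max(np.abs(x_new - x)) < 1e-12:
        break
    x = x_new

print(x[-1], np.exp(0.5))   # numerical vs. exact value exp(0.5*t) at t = 1
```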
22 pages, 1039 KiB  
Article
A Machine Learning-Based Computational Methodology for Predicting Acute Respiratory Infections Using Social Media Data
by Jose Manuel Ramos-Varela, Juan C. Cuevas-Tello and Daniel E. Noyola
Computation 2025, 13(4), 86; https://doi.org/10.3390/computation13040086 - 25 Mar 2025
Viewed by 129
Abstract
We study the relationship between tweets referencing Acute Respiratory Infections (ARI) or COVID-19 symptoms and confirmed cases of these diseases. Additionally, we propose a computational methodology for selecting and applying Machine Learning (ML) algorithms to predict public health indicators using social media data. To achieve this, a novel pipeline was developed, integrating three distinct models to predict confirmed cases of ARI and COVID-19. The dataset contains tweets related to respiratory diseases, published between 2020 and 2022 in the state of San Luis Potosí, Mexico, obtained via the Twitter API (now X). The methodology is composed of three stages, and it involves tools such as Dataiku and Python with ML libraries. The first two stages focus on identifying the best-performing predictive models, while the third stage includes Natural Language Processing (NLP) algorithms for tweet selection. One of our key findings is that tweets contributed to improved predictions of ARI confirmed cases but did not enhance COVID-19 time series predictions. The best-performing NLP approach is the combination of the Word2Vec algorithm with the KMeans model for tweet selection. Furthermore, predictions for both time series improved by 3% in the second half of 2020 when tweets were included as a feature, where the best prediction algorithm is DeepAR. Full article
(This article belongs to the Special Issue Feature Papers in Computational Biology)
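The tweet-selection step described in the abstract (Word2Vec embeddings clustered with KMeans) can be sketched as follows; the toy tweets, tokenisation and cluster count are illustrative assumptions, not the study's pipeline.

```python
# Word2Vec + KMeans tweet selection, loosely sketched on toy examples.
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

tweets = [
    "tengo tos y fiebre desde ayer",          # toy examples standing in for the corpus
    "dolor de garganta y gripe fuerte",
    "hoy hace mucho calor en san luis potosi",
    "me duele la cabeza y tengo gripa",
]
tokens = [t.split() for t in tweets]

w2v = Word2Vec(sentences=tokens, vector_size=50, window=3, min_count=1, seed=1)

def tweet_vector(words):
    """Average of the word vectors, giving one fixed-length vector per tweet."""
    return np.mean([w2v.wv[w] for w in words], axis=0)

X = np.vstack([tweet_vector(t) for t in tokens])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # tweets in the symptom-related cluster(s) are kept as model features
```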
18 pages, 15002 KiB  
Article
Numerical Analysis of the Impact of Variable Borer Miner Operating Modes on the Microclimate in Potash Mine Working Areas
by Lev Levin, Mikhail Semin, Stanislav Maltsev, Roman Luzin and Andrey Sukhanov
Computation 2025, 13(4), 85; https://doi.org/10.3390/computation13040085 - 24 Mar 2025
Viewed by 88
Abstract
This paper addresses the numerical simulation of unsteady, non-isothermal ventilation in a dead-end mine working of a potash mine excavated using a borer miner. During its operations, airflow can become unsteady due to the variable operating modes of the borer miner, the switching on and off of its motor cooling fans, and the movement of a shuttle car transporting ore. While steady ventilation in a dead-end working with a borer miner has been previously studied, the specific features of air microclimate parameter distribution in more complex and realistic unsteady scenarios remain unexplored. Our experimental studies reveal that over time, air velocity and, particularly, air temperature experience significant fluctuations. In this study, we develop and parameterize a mathematical model and perform a series of numerical simulations of unsteady heat and mass transfer in a dead-end working. These simulations account for the switching on and off of the borer miner’s fans and the movement of the shuttle car. The numerical model is calibrated using data from our experiments conducted in a potash mine. The analysis of the first factor is carried out by examining two extreme scenarios under steady-state ventilation conditions, while the second factor is analyzed within a fully unsteady framework using a dynamic mesh approach in the ANSYS Fluent 2021 R2. The numerical results demonstrate that the borer miner’s operating mode notably impacts the velocity and temperature fields, with a twofold decrease in maximum velocity near the cabin after the shuttle car departed and a temperature difference of about 1–1.5 °C between extreme scenarios in the case of forcing ventilation. The unsteady simulations using the dynamic mesh approach revealed that temperature variations were primarily caused by the borer miner’s cooling system, while the moving shuttle car generated short-term aerodynamic oscillations. Full article
(This article belongs to the Special Issue Advances in Computational Methods for Fluid Flow)
9 pages, 915 KiB  
Article
Tree-Based Methods of Volatility Prediction for the S&P 500 Index
by Marin Lolic
Computation 2025, 13(4), 84; https://doi.org/10.3390/computation13040084 - 24 Mar 2025
Viewed by 134
Abstract
Predicting asset return volatility is one of the central problems in quantitative finance. These predictions are used for portfolio construction, calculation of value at risk (VaR), and pricing of derivatives such as options. Classical methods of volatility prediction utilize historical returns data and include the exponentially weighted moving average (EWMA) and generalized autoregressive conditional heteroskedasticity (GARCH). These approaches have shown significantly higher rates of predictive accuracy than corresponding methods of return forecasting, but they still have vast room for improvement. In this paper, we propose and test several methods of volatility forecasting on the S&P 500 Index using tree ensembles from machine learning, namely random forest and gradient boosting. We show that these methods generally outperform the classical approaches across a variety of metrics on out-of-sample data. Finally, we use the unique properties of tree-based ensembles to assess what data can be particularly useful in predicting asset return volatility. Full article
(This article belongs to the Special Issue Quantitative Finance and Risk Management Research: 2nd Edition)
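A hedged sketch of the general approach follows: realized-volatility features fed to a tree ensemble with a chronological out-of-sample split. The feature set, window lengths and synthetic returns are assumptions, not the paper's exact specification.

```python
# Tree-ensemble volatility prediction sketch on synthetic returns.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
returns = pd.Series(rng.normal(0, 0.01, 2000))     # stand-in for daily S&P 500 returns

vol = returns.rolling(21).std() * np.sqrt(252)     # 21-day realized volatility, annualized
features = pd.DataFrame({
    "vol_lag1": vol.shift(1),
    "vol_lag5": vol.shift(5),
    "ret_lag1": returns.shift(1).abs(),
})
data = pd.concat([features, vol.rename("target")], axis=1).dropna()

split = int(len(data) * 0.8)                        # keep the last 20% out of sample
train, test = data.iloc[:split], data.iloc[split:]

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(train.drop(columns="target"), train["target"])
pred = model.predict(test.drop(columns="target"))
print(dict(zip(features.columns, model.feature_importances_)))  # which inputs matter most
```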
19 pages, 1891 KiB  
Article
A High-Order Hybrid Approach Integrating Neural Networks and Fast Poisson Solvers for Elliptic Interface Problems
by Yiming Ren and Shan Zhao
Computation 2025, 13(4), 83; https://doi.org/10.3390/computation13040083 - 23 Mar 2025
Viewed by 94
Abstract
A new high-order hybrid method integrating neural networks and corrected finite differences is developed for solving elliptic equations with irregular interfaces and discontinuous solutions. Standard fourth-order finite difference discretization becomes invalid near such interfaces due to the discontinuities and requires corrections based on Cartesian derivative jumps. In traditional numerical methods, such as the augmented matched interface and boundary (AMIB) method, these derivative jumps can be reconstructed via additional approximations and are solved together with the unknown solution in an iterative procedure. Nontrivial developments have been carried out in the AMIB method in treating sharply curved interfaces, which, however, may not work for interfaces with geometric singularities. In this work, machine learning techniques are utilized to directly predict these Cartesian derivative jumps without involving the unknown solution. To this end, physics-informed neural networks (PINNs) are trained to satisfy the jump conditions for both closed and open interfaces with possible geometric singularities. The predicted Cartesian derivative jumps can then be integrated in the corrected finite differences. The resulting discrete Laplacian can be efficiently solved by fast Poisson solvers, such as fast Fourier transform (FFT) and geometric multigrid methods, over a rectangular domain with Dirichlet boundary conditions. This hybrid method is both easy to implement and efficient. Numerical experiments in two and three dimensions demonstrate that the method achieves fourth-order accuracy for the solution and its derivatives. Full article
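As a pointer to the fast-solver component only (the corrected finite differences and PINN-predicted jump corrections are not shown), the sketch below solves a Poisson problem with homogeneous Dirichlet conditions on a rectangle by diagonalizing the standard 5-point Laplacian with a discrete sine transform; the grid and test solution are assumptions.

```python
# FFT-type fast Poisson solver sketch: DST-I diagonalizes the 5-point Laplacian
# with homogeneous Dirichlet boundary conditions on the unit square.
import numpy as np
from scipy.fft import dstn, idstn

n = 127
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")

u_exact = np.sin(np.pi * X) * np.sin(2 * np.pi * Y)      # manufactured solution
f = -5.0 * np.pi**2 * u_exact                            # f = laplacian(u_exact)

fhat = dstn(f, type=1)
j = np.arange(1, n + 1)
lam = (2.0 * np.cos(np.pi * j / (n + 1)) - 2.0) / h**2   # 1-D Dirichlet Laplacian eigenvalues
uhat = fhat / (lam[:, None] + lam[None, :])
u = idstn(uhat, type=1)
print(np.max(np.abs(u - u_exact)))    # decreases as O(h^2) for this second-order stencil
```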
13 pages, 2116 KiB  
Article
Numerical Simulation of Capture of Diffusing Particles in Porous Media
by Valeriy E. Arkhincheev, Bair V. Khabituev and Stanislav P. Maltsev
Computation 2025, 13(4), 82; https://doi.org/10.3390/computation13040082 - 22 Mar 2025
Viewed by 165
Abstract
Numerical modeling was conducted to study the capture of particles diffusing in porous media with traps. The pores are cylindrical in shape, and the traps are randomly distributed along the cylindrical surfaces of the pores. The dynamics of particle capture by the traps, as well as the filling of the traps, were investigated. In general, the decrease in the number of particles follows an exponential trend, with a characteristic time determined by the trap concentration. However, at longer times, extended plateaus emerge in the particle distribution function. Additionally, the dynamics of the interface boundary corresponding to the median trap filling (M = 0.5) were examined. This interface separates regions where traps are filled with a probability greater than 0.5 from regions where traps are filled with a probability less than 0.5. The motion of the interface over time was found to follow a logarithmic dependence. The influence of the pore radius on capture by traps placed on the internal surface of the cylinders was investigated. For the first time, different dependences of the extinction time on the number of traps were found for different pore radii. Full article
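A deliberately simplified illustration of the Monte Carlo side of such a study is given below: random-walk particles on a 2-D lattice are absorbed when they step onto randomly placed trap sites, and the surviving fraction decays roughly exponentially at a rate set by the trap concentration. The geometry here is a flat periodic lattice, not the paper's cylindrical pores.

```python
# Random-walk particles captured by randomly placed traps on a periodic 2-D lattice.
import numpy as np

rng = np.random.default_rng(1)
L, n_traps, n_particles, n_steps = 64, 200, 2000, 400

traps = set(map(tuple, rng.integers(0, L, size=(n_traps, 2))))
pos = rng.integers(0, L, size=(n_particles, 2))
alive = np.ones(n_particles, dtype=bool)
steps = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

survivors = []
for _ in range(n_steps):
    moves = steps[rng.integers(0, 4, size=n_particles)]
    pos = (pos + moves) % L                      # periodic boundaries for simplicity
    for i in np.where(alive)[0]:
        if tuple(pos[i]) in traps:
            alive[i] = False                     # captured by a trap
    survivors.append(alive.sum())

print(survivors[::100])   # roughly exponential decay of the surviving population
```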
16 pages, 347 KiB  
Article
Introducing Monotone Enriched Nonexpansive Mappings for Fixed Point Approximation in Ordered CAT(0) Spaces
by Safeer Hussain Khan, Rizwan Anjum and Nimra Ismail
Computation 2025, 13(4), 81; https://doi.org/10.3390/computation13040081 - 21 Mar 2025
Viewed by 209
Abstract
The aim of this paper is twofold: introducing the concept of monotone enriched nonexpansive mappings and a faster iterative process. Our examples illustrate the novelty of our newly introduced concepts. We investigate the iterative estimation of fixed points for such mappings for the first time within an ordered CAT(0) space. It is done by proving some strong and Δ-convergence theorems. Additionally, numerical experiments are included to demonstrate the validity of our theoretical results and to establish the superiority of convergence behavior of our iterative process. As an application, we use our newly introduced concepts to find the solution of an integral equation. The outcomes of our study expand upon and enhance certain established findings in the current body of literature. Full article
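As a toy illustration on the real line (itself a CAT(0) space), the sketch below runs a Krasnoselskii-type averaged iteration x_{k+1} = (1 - lam) * x_k + lam * T(x_k), a standard device for approximating fixed points of nonexpansive-type maps; the map, step size and starting point are arbitrary choices, not the paper's examples or its new iterative process.

```python
# Averaged fixed-point iteration on the real line; map and parameters are assumed.
import math

T = lambda x: math.cos(x)          # cos is nonexpansive on R and has a unique fixed point
lam, x = 0.5, 1.0

for k in range(100):
    x_next = (1 - lam) * x + lam * T(x)
    if abs(x_next - x) < 1e-12:
        break
    x = x_next

print(x, abs(x - math.cos(x)))     # converges to the fixed point of cos (~0.739085)
```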
17 pages, 1513 KiB  
Article
Cascade-Based Input-Doubling Classifier for Predicting Survival in Allogeneic Bone Marrow Transplants: Small Data Case
by Ivan Izonin, Roman Tkachenko, Nazarii Hovdysh, Oleh Berezsky, Kyrylo Yemets and Ivan Tsmots
Computation 2025, 13(4), 80; https://doi.org/10.3390/computation13040080 - 21 Mar 2025
Viewed by 167
Abstract
In the field of transplantology, where medical decisions are heavily dependent on complex data analysis, the challenge of small data has become increasingly prominent. Transplantology, which focuses on the transplantation of organs and tissues, requires exceptional accuracy and precision in predicting outcomes, assessing risks, and tailoring treatment plans. However, the inherent limitations of small datasets present significant obstacles. This paper introduces an advanced input-doubling classifier designed to improve survival predictions for allogeneic bone marrow transplants. The approach utilizes two artificial intelligence tools: the first, a Probabilistic Neural Network, generates output signals that expand the independent attributes of an augmented dataset, while the second, a machine learning algorithm, performs the final classification. This method, based on the cascading principle, facilitates the development of novel algorithms for preparing and applying the enhanced input-doubling technique to classification tasks. The proposed method was tested on a small dataset within transplantology, focusing on binary classification. Optimal parameters for the method were identified using the Dual Annealing algorithm. Comparative analysis of the improved method against several existing approaches revealed a substantial improvement in accuracy across various performance metrics, underscoring its practical benefits. Full article
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)
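The cascading principle can be sketched loosely as follows: a first-stage model's output probabilities are appended to the original attributes and a second-stage classifier is trained on the augmented inputs. The sklearn stand-ins below replace the paper's Probabilistic Neural Network and Dual Annealing tuning, the data are synthetic, and the paper's specific input-doubling construction is not reproduced.

```python
# Two-stage cascade sketch: stage-1 outputs appended as extra attributes for stage 2.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=180, n_features=20, random_state=0)  # "small data"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stage1 = GaussianNB().fit(X_tr, y_tr)                      # stand-in for the PNN
X_tr_aug = np.hstack([X_tr, stage1.predict_proba(X_tr)])   # augmented attributes
X_te_aug = np.hstack([X_te, stage1.predict_proba(X_te)])

stage2 = GradientBoostingClassifier(random_state=0).fit(X_tr_aug, y_tr)
print(accuracy_score(y_te, stage2.predict(X_te_aug)))
```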
10 pages, 7877 KiB  
Article
A Molecular Dynamics Simulation on the Methane Adsorption in Nanopores of Shale
by Qiuye Yuan, Jinghua Yang, Shuxia Qiu and Peng Xu
Computation 2025, 13(3), 79; https://doi.org/10.3390/computation13030079 - 20 Mar 2025
Viewed by 135
Abstract
Gas adsorption in nanoscale pores is one of the key theoretical bases for shale gas development. However, the influence mechanisms of gas adsorption capacity and the second adsorption layer in nanoscale pores are very complex, and are difficult to directly observe by using traditional experimental methods. Therefore, multilayer graphene is used to model the nanopores in a shale reservoir, and molecular dynamics simulations are carried out to study the adsorption dynamics of methane molecules. The results show that the adsorption density of methane molecules is inversely proportional to the temperature and pore size, and correlates positively with the graphene layer number and pressure. The smaller adsorption region will reach the adsorption equilibrium state earlier, and the adsorption layer thickness is smaller. When the pore size is larger than 1.7 nm, methane adsorption changes from single-layer to double-layer adsorption. The peak of the second adsorption layer depends on the pressure and temperature, while the position of the second adsorption layer depends on the pore size. The present work is useful for understanding the dynamics mechanism of gas molecules in a nanoscale confined space, and may provide a theoretical basis for the development of unconventional natural gas. Full article
(This article belongs to the Special Issue Advances in Computational Methods for Fluid Flow)
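A post-processing sketch only (synthetic coordinates, not an MD trajectory): the kind of density profile across a slit-like pore that is used to locate the first and second adsorption layers discussed in the abstract.

```python
# Density profile across a slit-like pore from (synthetic) molecular z-coordinates.
import numpy as np

rng = np.random.default_rng(0)
pore_width = 2.0                                   # nm, illustrative
# synthetic z-coordinates: most molecules near the two walls, some in the middle
z = np.concatenate([
    rng.normal(0.19, 0.05, 6000), rng.normal(pore_width - 0.19, 0.05, 6000),
    rng.uniform(0, pore_width, 3000),
])
z = z[(z > 0) & (z < pore_width)]

hist, edges = np.histogram(z, bins=80, range=(0, pore_width))
centers = 0.5 * (edges[:-1] + edges[1:])
peak = centers[np.argmax(hist)]
print(f"main adsorption-layer peak at z = {peak:.2f} nm")
```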
20 pages, 2405 KiB  
Review
A Bibliometric Review of Deep Learning Approaches in Skin Cancer Research
by Catur Supriyanto, Abu Salam, Junta Zeniarja, Danang Wahyu Utomo, Ika Novita Dewi, Cinantya Paramita, Adi Wijaya and Noor Zuraidin Mohd Safar
Computation 2025, 13(3), 78; https://doi.org/10.3390/computation13030078 - 19 Mar 2025
Viewed by 330
Abstract
Early detection of skin cancer is crucial for successful treatment and improved patient outcomes. Medical images play a vital role in this process, serving as the primary data source for both traditional and modern diagnostic approaches. This study aims to provide an overview of the significant role of medical images in skin cancer detection and highlight developments in the use of deep learning for early diagnosis. The scope of this survey includes an in-depth exploration of state-of-the-art deep learning methods, an evaluation of public datasets commonly used for training and validation, and a bibliometric analysis of recent advancements in the field. This survey focuses on publications in the Scopus database from 2019 to 2024. The search string is used to find articles by their abstracts, titles, and keywords, and includes several public datasets, like HAM and ISIC, ensuring relevance to the topic. Filters are applied based on the year, document type, source type, and language. The analysis identified 1697 articles, predominantly comprising journal articles and conference proceedings. The analysis shows that the number of articles has increased over the past five years. This growth is driven not only by developed countries but also by developing countries. Dermatology departments in various hospitals play a significant role in advancing skin cancer detection methods. In addition to identifying publication trends, this study also reveals underexplored areas to encourage new explorations using the VOSviewer and Bibliometrix applications. Full article
(This article belongs to the Special Issue Computational Medical Image Analysis—2nd Edition)
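The core counting step of such a bibliometric analysis can be sketched with pandas as below; the records, document types and filters are made up for illustration and do not reproduce the Scopus export or the VOSviewer/Bibliometrix workflow.

```python
# Counting publications per year after filtering by document type (toy records).
import pandas as pd

records = pd.DataFrame({
    "year": [2019, 2020, 2021, 2021, 2022, 2023, 2024, 2024],
    "doc_type": ["Article", "Conference Paper", "Article", "Article",
                 "Review", "Article", "Conference Paper", "Article"],
})
filtered = records[records["doc_type"].isin(["Article", "Conference Paper"])]
print(filtered.groupby("year").size())     # publication counts per year
```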