Search Results (2,751)

Search Parameters:
Journal = Future Internet

40 pages, 470 KiB  
Systematic Review
A Systematic Review on the Combination of VR, IoT and AI Technologies, and Their Integration in Applications
by Dimitris Kostadimas, Vlasios Kasapakis and Konstantinos Kotis
Future Internet 2025, 17(4), 163; https://doi.org/10.3390/fi17040163 - 7 Apr 2025
Abstract
The convergence of Virtual Reality (VR), Artificial Intelligence (AI), and the Internet of Things (IoT) offers transformative potential across numerous sectors. However, existing studies often examine these technologies independently or in limited pairings, which overlooks the synergistic possibilities of their combined usage. This systematic review adheres to the PRISMA guidelines to critically analyze peer-reviewed literature from highly recognized academic databases at the intersection of VR, AI, and IoT, and to identify application domains, methodologies, tools, and key challenges. By focusing on real-life implementations and working prototypes, this review highlights state-of-the-art advancements and uncovers gaps that hinder practical adoption, such as data collection issues, interoperability barriers, and user experience challenges. The findings reveal that digital twins (DTs), AIoT systems, and immersive XR environments are promising emerging technologies (ETs) but require further development to achieve scalability and real-world impact, while certain fields have seen only limited research to date. This review bridges theory and practice, providing a targeted foundation for future interdisciplinary research aimed at advancing practical, scalable solutions across domains such as healthcare, smart cities, industry, education, cultural heritage, and beyond. Overall, the study found that the integration of VR, AI, and IoT holds significant potential across various domains, with DTs, AIoT systems, and immersive XR environments showing promising applications, but challenges such as data interoperability, user experience limitations, and scalability barriers hinder widespread adoption.
(This article belongs to the Special Issue Advances in Extended Reality for Smart Cities)

14 pages, 274 KiB  
Article
Multi-Class Intrusion Detection in Internet of Vehicles: Optimizing Machine Learning Models on Imbalanced Data
by Ágata Palma, Mário Antunes, Jorge Bernardino and Ana Alves
Future Internet 2025, 17(4), 162; https://doi.org/10.3390/fi17040162 - 7 Apr 2025
Abstract
The Internet of Vehicles (IoV) presents complex cybersecurity challenges, particularly against Denial-of-Service (DoS) and spoofing attacks targeting the Controller Area Network (CAN) bus. This study leverages the CICIoV2024 dataset, comprising six distinct classes of benign traffic and various types of attacks, to evaluate advanced machine learning techniques for intrusion detection systems (IDS). The models XGBoost, Random Forest, AdaBoost, Extra Trees, Logistic Regression, and Deep Neural Network were tested under realistic, imbalanced data conditions, ensuring that the evaluation reflects real-world scenarios where benign traffic dominates. Using hyperparameter optimization with Optuna, we achieved significant improvements in detection accuracy and robustness. Ensemble methods such as XGBoost and Random Forest consistently demonstrated superior performance, achieving perfect accuracy and macro-average F1-scores, even when detecting minority attack classes, in contrast to previous results for the CICIoV2024 dataset. The integration of optimized hyperparameter tuning and a broader methodological scope culminated in an IDS framework capable of addressing diverse attack scenarios with exceptional precision.
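For readers who want to reproduce the general approach, the minimal sketch below shows Optuna tuning an XGBoost classifier with macro-average F1 as the objective on an imbalanced multi-class problem. It is illustrative only: the authors' exact search space, preprocessing, and CICIoV2024 loading code are not given in the abstract, so a synthetic dataset stands in.

```python
# Sketch: Optuna-driven XGBoost tuning scored by macro-F1 on imbalanced
# multi-class data. Synthetic stand-in for CICIoV2024 (6 classes, benign-heavy).
import optuna
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=5000, n_classes=6, n_informative=10,
                           weights=[0.9, 0.02, 0.02, 0.02, 0.02, 0.02],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

def objective(trial):
    model = XGBClassifier(
        n_estimators=trial.suggest_int("n_estimators", 100, 600),
        max_depth=trial.suggest_int("max_depth", 3, 12),
        learning_rate=trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
    )
    model.fit(X_tr, y_tr)
    # Macro-average F1 weights minority attack classes equally with benign traffic.
    return f1_score(y_te, model.predict(X_te), average="macro")

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```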

39 pages, 4156 KiB  
Review
Enabling Green Cellular Networks: A Review and Proposal Leveraging Software-Defined Networking, Network Function Virtualization, and Cloud-Radio Access Network
by Radheshyam Singh, Line M. P. Larsen, Eder Ollora Zaballa, Michael Stübert Berger, Christian Kloch and Lars Dittmann
Future Internet 2025, 17(4), 161; https://doi.org/10.3390/fi17040161 - 5 Apr 2025
Abstract
The increasing demand for enhanced communication systems, driven by applications such as real-time video streaming, online gaming, critical operations, and Internet-of-Things (IoT) services, has necessitated the optimization of cellular networks to meet evolving requirements while addressing power consumption challenges. In this context, various initiatives undertaken by industry, academia, and researchers to reduce the power consumption of cellular network systems are comprehensively reviewed. Particular attention is given to emerging technologies, including Software-Defined Networking (SDN), Network Function Virtualization (NFV), and Cloud-Radio Access Network (C-RAN), which are identified as key enablers for reshaping cellular infrastructure. Their collective potential to enhance energy efficiency while addressing convergence challenges is analyzed, and solutions for sustainable network evolution are proposed. A conceptual architecture based on SDN, NFV, and C-RAN is presented as an illustrative example of integrating these technologies to achieve significant power savings. The proposed framework outlines an approach to developing energy-efficient cellular networks, capable of reducing power consumption by approximately 40 to 50% through the optimal placement of virtual network functions.
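As a toy illustration of why virtual network function (VNF) placement saves power, the sketch below consolidates VNFs onto as few servers as possible with a first-fit-decreasing heuristic, so idle servers can be switched off. The capacities and demands are made up, and the heuristic is a simplified stand-in, not the paper's optimization method.

```python
# Toy first-fit-decreasing placement of VNFs onto servers: consolidation
# lets unused servers be powered down. Hypothetical demands and capacity.
def place_vnfs(vnf_cpu_demands, server_capacity):
    """Return a list of servers, each a list of the VNF demands placed on it."""
    servers = []
    for demand in sorted(vnf_cpu_demands, reverse=True):
        for srv in servers:
            if sum(srv) + demand <= server_capacity:
                srv.append(demand)
                break
        else:
            servers.append([demand])  # power on a new server only when needed
    return servers

demands = [30, 10, 45, 20, 25, 15, 40]            # hypothetical CPU units per VNF
servers = place_vnfs(demands, server_capacity=100)
naive = len(demands)                               # one server per VNF
print(f"servers powered on: {len(servers)} vs naive {naive}")
print(f"power reduction ~{100 * (1 - len(servers) / naive):.0f}% (toy model)")
```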

26 pages, 9870 KiB  
Article
Comparative Feature-Guided Regression Network with a Model-Eye Pretrained Model for Online Refractive Error Screening
by Jiayi Wang, Tianyou Zheng, Yang Zhang, Tianli Zheng and Weiwei Fu
Future Internet 2025, 17(4), 160; https://doi.org/10.3390/fi17040160 - 3 Apr 2025
Abstract
With the development of the internet, the incidence of myopia is showing a trend towards younger ages, making routine vision screening increasingly essential. This paper designs an online refractive error screening solution centered on the CFGN (Comparative Feature-Guided Network), a refractive error screening network based on the eccentric photorefraction method. Additionally, a training strategy incorporating an objective model-eye pretraining model is introduced to enhance screening accuracy. Specifically, we obtain six-channel infrared eccentric photorefraction pupil images to enrich image information and design a comparative feature-guided module and a multi-channel information fusion module based on the characteristics of each channel image to enhance network performance. Experimental results show that CFGN achieves an accuracy exceeding 92% within a ±1.00 D refractive error range across datasets from two regions, with mean absolute errors (MAEs) of 0.168 D and 0.108 D, outperforming traditional models and meeting vision screening requirements. The pretrained model helps achieve better performance with small samples. The vision screening scheme proposed in this study is more efficient and accurate than existing networks, and the cost-effectiveness of the pretrained model with transfer learning provides a technical foundation for subsequent rapid online screening and routine tracking via networking.
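The two headline metrics in the abstract, mean absolute error in diopters and the fraction of predictions within ±1.00 D of ground truth, can be computed as in the sketch below; the arrays are placeholder values, not data from the paper.

```python
# Sketch of the screening metrics: MAE of predicted refractive error (D)
# and accuracy within +/-1.00 D. Placeholder values, not the paper's data.
import numpy as np

true_d = np.array([-2.50, -1.25, 0.00, 0.75, -3.00])  # ground-truth refraction (D)
pred_d = np.array([-2.40, -1.50, 0.20, 0.60, -2.80])  # model predictions (D)

mae = np.mean(np.abs(pred_d - true_d))
within_1d = np.mean(np.abs(pred_d - true_d) <= 1.00)

print(f"MAE = {mae:.3f} D")                          # paper reports 0.168 D / 0.108 D
print(f"accuracy within ±1.00 D = {within_1d:.1%}")  # paper reports >92%
```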

30 pages, 3565 KiB  
Systematic Review
Internet of Things and Deep Learning for Citizen Security: A Systematic Literature Review on Violence and Crime
by Chrisbel Simisterra-Batallas, Pablo Pico-Valencia, Jaime Sayago-Heredia and Xavier Quiñónez-Ku
Future Internet 2025, 17(4), 159; https://doi.org/10.3390/fi17040159 - 3 Apr 2025
Abstract
This study conducts a systematic literature review following the PRISMA framework and the guidelines of Kitchenham and Charters to analyze the application of Internet of Things (IoT) technologies and deep learning models in monitoring violent actions and criminal activities in smart cities. A total of 45 studies published between 2010 and 2024 were selected, revealing that most research, primarily from India and China, focuses on cybersecurity in IoT networks (76%), while fewer studies address the surveillance of physical violence and crime-related events (17%). Advanced neural network models, such as Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, and hybrid approaches, have demonstrated high accuracy rates, averaging over 97.44%, in detecting suspicious behaviors. These models perform well in identifying anomalies in IoT security; however, they have primarily been tested in simulation environments (91% of analyzed studies), most of which incorporate real-world data. From a legal perspective, existing proposals mainly emphasize security and privacy. This study contributes to the development of smart cities by promoting IoT-based security methodologies that enhance surveillance and crime prevention in cities in developing countries.
(This article belongs to the Special Issue Internet of Things (IoT) in Smart City)

17 pages, 2956 KiB  
Article
A3C-R: A QoS-Oriented Energy-Saving Routing Algorithm for Software-Defined Networks
by Sunan Wang, Rong Song, Xiangyu Zheng, Wanwei Huang and Hongchang Liu
Future Internet 2025, 17(4), 158; https://doi.org/10.3390/fi17040158 - 3 Apr 2025
Abstract
With the rapid growth of Internet applications and network traffic, existing routing algorithms often struggle to guarantee quality of service (QoS) indicators such as delay, bandwidth, and packet loss rate, as well as network energy consumption, for data flows with diverse business characteristics. They suffer from problems such as unbalanced traffic scheduling and unreasonable network resource allocation. Aiming at these problems, this paper proposes A3C-R, a QoS-oriented energy-saving routing algorithm for the software-defined network (SDN) environment. Building on the asynchronous updates of the asynchronous advantage Actor-Critic (A3C) algorithm and the independent interaction of multiple agents with the environment, the A3C-R algorithm can effectively improve the convergence of the routing algorithm. A3C-R first takes QoS indicators such as delay, bandwidth, and packet loss rate, together with the network energy consumption of each link, as input. It then creates multiple agents for asynchronous training, continuously updating the Actor and Critic within each agent and periodically synchronizing the model parameters to the global model. After training converges, the algorithm outputs the link weights of the network topology, from which intelligent routing strategies that meet QoS requirements and lower network energy consumption can be computed. The experimental results indicate that the A3C-R algorithm, compared to the baseline algorithms ECMP, I-DQN, and DDPG-EEFS, reduces delay by approximately 9.4%, increases throughput by approximately 7.0%, decreases the packet loss rate by approximately 9.5%, and improves the energy-saving percentage by approximately 10.8%.
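A hedged sketch of the kind of per-link score such an agent might learn to optimize follows: a weighted combination of delay, available bandwidth, packet loss, and link energy. The abstract names these inputs but not the actual weighting or normalization, so the coefficients below are assumptions.

```python
# Hypothetical per-link reward for a QoS/energy-aware routing agent.
# Weights and normalization maxima are assumptions, not the paper's values.
def link_reward(delay_ms, bandwidth_mbps, loss_rate, energy_w,
                w_delay=0.3, w_bw=0.3, w_loss=0.2, w_energy=0.2):
    # Normalize each metric to [0, 1] against assumed network-wide maxima.
    d = min(delay_ms / 100.0, 1.0)         # lower is better
    b = min(bandwidth_mbps / 1000.0, 1.0)  # higher is better
    l = min(loss_rate, 1.0)                # lower is better
    e = min(energy_w / 50.0, 1.0)          # lower is better
    return w_bw * b - (w_delay * d + w_loss * l + w_energy * e)

# A trained agent would turn such per-link scores into topology link weights.
print(link_reward(delay_ms=20, bandwidth_mbps=400, loss_rate=0.01, energy_w=15))
```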

24 pages, 3782 KiB  
Article
The New CAP Theorem on Blockchain Consensus Systems
by Aristidis G. Anagnostakis and Euripidis Glavas
Future Internet 2025, 17(4), 157; https://doi.org/10.3390/fi17040157 - 2 Apr 2025
Abstract
One of the most emblematic theorems in the theory of distributed databases is Eric Brewer's CAP theorem. It stresses the tradeoffs between Consistency, Availability, and Partition tolerance and states that it is impossible to guarantee all three simultaneously. Inspired by this, we introduce the new CAP theorem for autonomous consensus systems and demonstrate that, at most, two of the three elementary properties, Consensus achievement (C), Autonomy (A), and entropic Performance (P), can be optimized simultaneously in the generic case. This places a theoretical limit on Blockchain systems' decentralization, impacting their scalability, security, and real-world adoption. To formalize and analyze this tradeoff, we utilize the IoT micro-Blockchain as a universal, minimal, consensus-enabling framework. We define a set of quantitative functions relating each of the properties to the number of event witnesses in the system. We identify the existing mutual exclusions and formally prove, for a homogeneous system, that (A), (C), and (P) cannot be optimized simultaneously. This suggests that a requirement for concurrent optimization of the three properties cannot be satisfied in the generic case and reveals an intrinsic limitation on the design and optimization of distributed Blockchain consensus mechanisms. Our findings are formally proved using the IoT micro-Blockchain framework and validated through empirical data benchmarking of large-scale Blockchain systems, i.e., Bitcoin, Ethereum, and Hyperledger Fabric.
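To make the claimed tradeoff concrete, the toy model below expresses C, A, and P as functions of the number of event witnesses n. The paper defines its own quantitative functions, which the abstract does not reproduce; the forms here are invented stand-ins chosen only to show one way that raising n can improve consensus while degrading autonomy and performance.

```python
# Purely illustrative, hypothetical functions of witness count n; NOT the
# paper's definitions. They only demonstrate the shape of the C/A/P tension.
def consensus(n):     # more witnesses -> agreement easier to certify
    return 1 - 1 / (n + 1)

def autonomy(n):      # more witnesses -> each node depends on more peers
    return 1 / n

def performance(n):   # more witnesses -> more messages, lower throughput
    return 1 / (n * (n - 1)) if n > 1 else 1.0

for n in (2, 5, 10, 50):
    print(f"n={n:3d}  C={consensus(n):.3f}  A={autonomy(n):.3f}  P={performance(n):.5f}")
```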

23 pages, 2670 KiB  
Article
Database Security and Performance: A Case of SQL Injection Attacks Using Docker-Based Virtualisation and Its Effect on Performance
by Ade Dotun Ajasa, Hassan Chizari and Abu Alam
Future Internet 2025, 17(4), 156; https://doi.org/10.3390/fi17040156 - 2 Apr 2025
Abstract
Modern database systems are critical for storing sensitive information but are increasingly targeted by cyber threats, including SQL injection (SQLi) attacks. This research proposes a robust security framework leveraging Docker-based virtualisation to enhance database security and mitigate the impact of SQLi attacks. A controlled experimental methodology evaluated the framework's effectiveness using Damn Vulnerable Web Application (DVWA) and Acunetix databases. The findings reveal that Docker significantly reduces the vulnerability to SQLi attacks by isolating database instances, thereby safeguarding user data and system integrity. While Docker introduces a significant increase in CPU utilisation during high-traffic scenarios, the trade-off ensures enhanced security and reliability for real-world applications. This study highlights Docker's potential as a practical solution for addressing evolving database security challenges in distributed and cloud environments.
(This article belongs to the Collection Information Systems Security)
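As background on the attack class the framework is evaluated against, the sketch below contrasts a SQLi-vulnerable query built by string concatenation with a parameterized query. Parameterized queries are a complementary mitigation, not the paper's Docker-based approach; sqlite3 is used only for brevity in place of the DVWA and Acunetix test databases.

```python
# SQLi demo: string-built query vs parameterized query. Illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # classic SQLi payload

# Vulnerable: the payload rewrites the WHERE clause and leaks every row.
vulnerable = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())           # [('alice', 's3cret')]

# Safe: the driver binds the payload as a literal value, matching nothing.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # []
```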

19 pages, 4754 KiB  
Article
Balancing Prediction Accuracy and Explanation Power of Path Loss Modeling in a University Campus Environment via Explainable AI
by Hamed Khalili, Hannes Frey and Maria A. Wimmer
Future Internet 2025, 17(4), 155; https://doi.org/10.3390/fi17040155 - 31 Mar 2025
Abstract
For efficient radio network planning, empirical path loss (PL) prediction models are utilized to predict signal attenuation in different environments. Alternatively, machine learning (ML) models are proposed to predict path loss. While empirical models are transparent and require less computational capacity, they cannot produce accurate predictions in complex environments. While ML models are precise and can cope with complex terrains, their opaque nature hampers building trust and relying on their predictions with confidence. To fill the gap between transparency and accuracy, in this paper we utilize glass box ML, using Microsoft Research's explainable boosting machines (EBM), together with PL data measured in a university campus environment. Moreover, a polar coordinate transformation is applied, which reveals that the transmitting-angle feature has explanatory power beyond that of the distance feature. PL predictions of glass box ML are compared with predictions of black box ML models as well as those generated by empirical models. The glass box EBM exhibits the highest performance. The glass box ML, furthermore, sheds light on the important explanatory features and the magnitude of their effects on signal attenuation in the underlying propagation environment.
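A minimal, self-contained sketch of this setup, using the interpret library's Explainable Boosting Machine on synthetic path loss data with the polar (distance, angle) transformation, might look as follows; the synthetic propagation model and feature names are assumptions, not the campus measurement set.

```python
# Glass-box path loss regression with an EBM and polar features.
# Synthetic data stands in for the paper's campus measurements.
import numpy as np
from interpret.glassbox import ExplainableBoostingRegressor

rng = np.random.default_rng(0)
x, y = rng.uniform(-200, 200, (2, 500))  # Tx-relative positions (m)

# Polar transformation: distance and transmitting angle as features.
distance = np.hypot(x, y)
angle = np.arctan2(y, x)
X = np.column_stack([distance, angle])

# Assumed log-distance path loss with an angle-dependent obstruction term.
pl = 40 + 30 * np.log10(distance + 1) + 8 * np.abs(np.sin(angle)) \
     + rng.normal(0, 2, 500)

ebm = ExplainableBoostingRegressor(feature_names=["distance_m", "angle_rad"])
ebm.fit(X, pl)

# Global term importances show how much each feature explains attenuation.
exp = ebm.explain_global()
print(dict(zip(exp.data()["names"], exp.data()["scores"])))
```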

19 pages, 3479 KiB  
Article
Generative AI-Enhanced Intelligent Tutoring System for Graduate Cybersecurity Programs
by Madhav Mukherjee, John Le and Yang-Wai Chow
Future Internet 2025, 17(4), 154; https://doi.org/10.3390/fi17040154 - 31 Mar 2025
Abstract
Due to the widespread applicability of generative artificial intelligence, we have seen it adopted across many areas of education, providing universities with new opportunities, particularly in cybersecurity education. With the industry facing a skills shortage, this paper explores the use of generative artificial intelligence in higher cybersecurity education as an intelligent tutoring system to enhance factors leading to positive student outcomes. Despite its success in content generation and assessment within cybersecurity, the field's multidisciplinary nature presents additional challenges to scalability and generalisability. We propose a solution using agents to orchestrate specialised large language models and demonstrate its applicability to graduate-level cybersecurity topics offered at a leading Australian university. We aim to show a generalisable and scalable solution for diversified educational paradigms, highlighting its relevant features, and a method to evaluate the quality of content as well as the general effectiveness of the intelligent tutoring system on subjective factors aligned with positive student outcomes. We further explore areas for future research in model efficiency, privacy, security, and scalability.
(This article belongs to the Special Issue Generative Artificial Intelligence (AI) for Cybersecurity)
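At code level, the orchestration idea could be as simple as a router that dispatches each student question to a specialised tutor model, as in the hypothetical sketch below; the topic keywords and agent names are invented, since the abstract does not describe the framework's internals.

```python
# Hypothetical router dispatching questions to specialised LLM tutor agents.
# Everything here (keywords, agent registry) is an illustrative assumption.
TOPIC_KEYWORDS = {
    "cryptography": ["cipher", "rsa", "aes", "key exchange"],
    "network_security": ["firewall", "ids", "packet", "tls"],
    "governance": ["compliance", "iso 27001", "risk", "policy"],
}

def route(question, default="general_tutor"):
    q = question.lower()
    for topic, words in TOPIC_KEYWORDS.items():
        if any(w in q for w in words):
            return f"{topic}_tutor"  # name of the specialised LLM agent
    return default

print(route("How does AES key exchange differ from RSA?"))  # cryptography_tutor
```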

2 pages, 133 KiB  
Editorial
eHealth and mHealth
by Bernhard Neumayer, Stefan Sauermann and Sten Hanke
Future Internet 2025, 17(4), 152; https://doi.org/10.3390/fi17040152 - 31 Mar 2025
Abstract
eHealth (electronic health) and mHealth (mobile health) have been rapidly evolving in recent years, offering innovative solutions to healthcare challenges [...]
(This article belongs to the Special Issue eHealth and mHealth)
26 pages, 430 KiB  
Article
Practical Comparison Between the CI/CD Platforms Azure DevOps and GitHub
by Vladislav Manolov, Daniela Gotseva and Nikolay Hinov
Future Internet 2025, 17(4), 153; https://doi.org/10.3390/fi17040153 - 31 Mar 2025
Abstract
Continuous integration and delivery are essential for modern software development, enabling teams to automate testing, streamline deployments, and deliver high-quality software more efficiently. As DevOps adoption grows, selecting the right CI/CD platform is essential for optimizing workflows. Azure DevOps and GitHub, both under Microsoft, are leading solutions with distinct features and target audiences. This paper compares Azure DevOps and GitHub, evaluating their CI/CD capabilities, scalability, security, pricing, and usability. It explores their integration with cloud environments, automation workflows, and suitability for teams of varying sizes. Security features, including access controls, vulnerability scanning, and compliance, are analyzed to assess their suitability for organizational needs. Cost-effectiveness is also examined through licensing models and total ownership costs. This study leverages real-world case studies and industry trends to guide organizations in selecting the right CI/CD tools. Whether seeking a fully managed DevOps suite or a flexible, Git-native platform, understanding the strengths and limitations of Azure DevOps and GitHub is crucial for optimizing development and meeting long-term scalability goals.
(This article belongs to the Special Issue IoT, Edge, and Cloud Computing in Smart Cities)
Show Figures

Figure 1

26 pages, 584 KiB  
Article
GDPR and Large Language Models: Technical and Legal Obstacles
by Georgios Feretzakis, Evangelia Vagena, Konstantinos Kalodanis, Paraskevi Peristera, Dimitris Kalles and Athanasios Anastasiou
Future Internet 2025, 17(4), 151; https://doi.org/10.3390/fi17040151 - 28 Mar 2025
Abstract
Large Language Models (LLMs) have revolutionized natural language processing but present significant technical and legal challenges when confronted with the General Data Protection Regulation (GDPR). This paper examines the complexities involved in reconciling the design and operation of LLMs with GDPR requirements. In particular, we analyze how key GDPR provisions—including the Right to Erasure, Right of Access, Right to Rectification, and restrictions on Automated Decision-Making—are challenged by the opaque and distributed nature of LLMs. We discuss issues such as the transformation of personal data into non-interpretable model parameters, difficulties in ensuring transparency and accountability, and the risks of bias and data over-collection. Moreover, the paper explores potential technical solutions such as machine unlearning, explainable AI (XAI), differential privacy, and federated learning, alongside strategies for embedding privacy-by-design principles and automated compliance tools into LLM development. The analysis is further enriched by considering the implications of emerging regulations like the EU's Artificial Intelligence Act. In addition, we propose a four-layer governance framework that addresses data governance, technical privacy enhancements, continuous compliance monitoring, and explainability and oversight, thereby offering a practical roadmap for GDPR alignment in LLM systems. Through this comprehensive examination, we aim to bridge the gap between the technical capabilities of LLMs and the stringent data protection standards mandated by GDPR, ultimately contributing to more responsible and ethical AI practices.
(This article belongs to the Special Issue Generative Artificial Intelligence (AI) for Cybersecurity)
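Of the technical mitigations discussed, differential privacy is perhaps the easiest to sketch: clip each example's gradient and add calibrated Gaussian noise before the model update, as in DP-SGD. The clip norm and noise multiplier below are illustrative values, not recommendations from the paper.

```python
# DP-SGD-style gradient privatization sketch: per-example clipping plus
# Gaussian noise. Parameter values are illustrative assumptions.
import numpy as np

def privatize_gradients(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                        rng=np.random.default_rng(0)):
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Bound each example's influence on the update.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0, noise_multiplier * clip_norm / len(clipped), avg.shape)
    return avg + noise  # noisy gradient limits what the model can memorize

grads = [np.random.randn(4) for _ in range(32)]  # stand-in per-example gradients
print(privatize_gradients(grads))
```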
18 pages, 3210 KiB  
Article
GraphDBSCAN: Optimized DBSCAN for Noise-Resistant Community Detection in Graph Clustering
by Danial Ahmadzadeh, Mehrdad Jalali, Reza Ghaemi and Maryam Kheirabadi
Future Internet 2025, 17(4), 150; https://doi.org/10.3390/fi17040150 - 28 Mar 2025
Abstract
Community detection in complex networks remains a significant challenge due to noise, outliers, and the dependency on predefined clustering parameters. This study introduces GraphDBSCAN, an adaptive community detection framework that integrates an optimized density-based clustering method with an enhanced graph partitioning approach. The proposed method refines clustering accuracy through three key innovations: (1) a K-nearest neighbor (KNN)-based strategy for automatic parameter tuning in density-based clustering, eliminating the need for manual selection; (2) a proximity-based feature extraction technique that enhances node representations while preserving network topology; and (3) an improved edge removal strategy in graph partitioning, incorporating additional centrality measures to refine community structures. GraphDBSCAN is evaluated on real-world and synthetic datasets, demonstrating improvements in modularity, noise reduction, and clustering robustness. Compared to existing methods, GraphDBSCAN consistently enhances structural coherence, reduces sensitivity to outliers, and improves community separation without requiring fixed parameter assumptions. The proposed method offers a scalable, data-driven approach to community detection, making it suitable for large-scale and heterogeneous networks.
(This article belongs to the Topic Social Computing and Social Network Analysis)
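The KNN-based parameter tuning the abstract describes can be approximated as follows: estimate DBSCAN's eps from the distribution of k-nearest-neighbor distances rather than choosing it by hand. The percentile heuristic and the choice of k are assumptions; the paper's exact tuning rule is not given in the abstract.

```python
# Automatic eps estimation for DBSCAN from the k-distance distribution.
# Heuristic values (k, percentile) are assumptions, not the paper's rule.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.neighbors import NearestNeighbors

X, _ = make_blobs(n_samples=600, centers=4, cluster_std=0.8, random_state=7)

k = 5  # doubles as min_samples and neighborhood size
nn = NearestNeighbors(n_neighbors=k).fit(X)
dists, _ = nn.kneighbors(X)
eps = np.percentile(dists[:, -1], 90)  # knee-like cut on the k-distance curve

labels = DBSCAN(eps=eps, min_samples=k).fit_predict(X)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"eps={eps:.3f}, clusters={n_clusters}, noise points={(labels == -1).sum()}")
```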

19 pages, 2534 KiB  
Article
A Cross-Chain-Based Access Control Framework for Cloud Environment
by Saad Belcaid, Mostapha Zbakh, Siham Aouad, Abdellah Touhafi and An Braeken
Future Internet 2025, 17(4), 149; https://doi.org/10.3390/fi17040149 - 27 Mar 2025
Abstract
Cloud computing presents itself as one of the leading technologies in the IT solutions field, providing a variety of services and capabilities. Meanwhile, blockchain-based solutions emerge as advantageous as they permit data immutability, transaction efficiency, transparency, and trust due to decentralization and the use of smart contracts. In this paper, we consolidate these two technologies into a secure framework for access control in cloud environments. A cross-chain-based methodology is used, in which transactions and interactions between multiple blockchains and cloud computing systems are supported, such that no separate third-party certificates are required in the authentication and authorization processes. This paper presents a cross-chain-based framework that integrates a full, fine-grained, attribute-based access control (ABAC) mechanism that evaluates the attributes of cloud users' access transactions. It grants or denies access to cloud resources by inferring knowledge about the received attributes using semantic reasoning based on ontologies, resulting in a more reliable method for information sharing over the cloud network. Our cross-chain framework, implemented on the Cosmos ecosystem with integrated semantic ABAC, achieved an overall access control (AC) processing time of 9.72 ms.
(This article belongs to the Special Issue Cloud and Edge Computing for the Next-Generation Networks)
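A stripped-down version of the ABAC decision step might look like the sketch below, which grants access only when every request attribute satisfies the policy. The paper's framework goes further, inferring attribute knowledge via ontology-based semantic reasoning and anchoring decisions across chains, both of which this toy omits.

```python
# Minimal ABAC decision: permit only if every attribute rule is satisfied.
# The policy attributes and values are hypothetical examples.
POLICY = {
    "role": {"data-analyst", "admin"},
    "department": {"research"},
    "resource_sensitivity": {"low", "medium"},
}

def abac_decision(request_attrs):
    return all(request_attrs.get(attr) in allowed
               for attr, allowed in POLICY.items())

request = {"role": "data-analyst", "department": "research",
           "resource_sensitivity": "medium"}
print("PERMIT" if abac_decision(request) else "DENY")
```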