In recent years, deep learning has revolutionized fields such as computer vision, speech recognition, and natural language processing, primarily through techniques applied to data in Euclidean spaces. However, many real-world applications involve data from non-Euclidean domains, where graphs naturally represent entities and their complex interdependencies. Traditional machine learning methods often struggle to process such data effectively. Graph Neural Networks (GNNs) represent a crucial advance in applying deep learning to interpret and extract knowledge from graph-based data, opening up new possibilities for tasks such as node classification, link prediction, and graph-level analysis. This paper provides a detailed analysis of GNN methodologies, emphasizing their architectural diversity and wide-ranging applications. GNN models are systematically categorized into fundamental frameworks, such as message-passing paradigms and spectral and spatial methods, and advanced extensions, such as hypergraph neural networks and multigraph approaches. The paper also explores application domains such as social network analysis, molecular biology, traffic forecasting, and recommendation systems. In addition, it highlights critical open challenges, including scalability, dynamic graph modeling, and robustness against noisy or incomplete data. The paper concludes with proposed future research directions to improve the scalability, interpretability, and adaptability of GNNs in this fast-evolving field.
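For context, the message-passing paradigm named in the abstract is commonly written in the literature (this generic form is standard and not quoted from the paper itself) as a per-node neighborhood aggregation followed by an update, where h_v^{(t)} denotes the feature vector of node v at layer t, N(v) its neighbors, e_{uv} an optional edge feature, and M_t, U_t learnable message and update functions:

m_v^{(t+1)} = \bigoplus_{u \in \mathcal{N}(v)} M_t\left(h_v^{(t)}, h_u^{(t)}, e_{uv}\right), \qquad h_v^{(t+1)} = U_t\left(h_v^{(t)}, m_v^{(t+1)}\right)

Here \bigoplus is a permutation-invariant aggregator (e.g., sum, mean, or max); spectral and spatial GNN variants differ mainly in how M_t and the aggregation are defined.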
Publication details
2025, IEEE Access, vol. 13, pp. 62870-62891
Graph Neural Networks: Architectures, Applications, and Future Directions (01a Journal article)
Ponzi V., Napoli C.