TASK Quarterly

Two architectures of neural networks in distance approximation

Abstract

In this paper, we examine recurrent and linear neural networks to determine the relationship between the amount of data needed to achieve generalization and data dimensionality, as well as the relationship between data dimensionality and the required computational complexity. To this end, we also explore the optimal topology for each network, discuss potential problems in their training, and propose solutions. In our experiments, the amount of data needed to achieve generalization grew linearly with data dimensionality for feed-forward neural networks and exponentially for recurrent ones. The required computational complexity, however, appears to grow exponentially with increasing dimensionality. We also compared the networks' accuracy in both distance approximation and classification to the most popular alternative, Siamese networks, which outperformed both the linear and the recurrent networks in classification despite having lower accuracy in exact distance approximation.
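The Siamese approach mentioned above can be illustrated with a minimal sketch (not the authors' code, and using hypothetical toy dimensions): two inputs pass through the same shared embedding function, and the model's output is the Euclidean distance between the two embeddings. Weights here are random for illustration only; in practice they would be trained, e.g. with a contrastive or regression loss against known distances.

```python
import math
import random

random.seed(0)

# Hypothetical toy sizes, chosen only for this illustration.
DIM_IN, DIM_HID = 4, 8

# One shared weight matrix: both branches of the Siamese pair use it,
# which is the defining property of the architecture.
W = [[random.uniform(-1.0, 1.0) for _ in range(DIM_IN)]
     for _ in range(DIM_HID)]

def embed(x):
    """Shared one-layer embedding with ReLU activation."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W]

def siamese_distance(a, b):
    """Approximate the distance between a and b as the Euclidean
    distance between their shared embeddings."""
    ea, eb = embed(a), embed(b)
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(ea, eb)))
```

Because both inputs go through the identical embedding, the output is symmetric in its arguments and zero for identical inputs, which is why such models suit classification by similarity even when the exact distance values are less accurate.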

Keywords:

NN, Distance, Topology

Details

Issue
Vol. 28 No. 2 (2024)
Section
Research article
Published
2025-07-18
DOI:
https://doi.org/10.34808/tq2024/28.2/c
License:

Copyright (c) 2025 TASK Quarterly

This work is licensed under a Creative Commons Attribution 4.0 International License.

Authors

  • Wiktor Wojtyna

    Gdansk University of Technology
  • Jakub Sławiński

    Gdansk University of Technology
  • Radosław Tonga

    Gdansk University of Technology
