Document Type

Journal Article

Department/Unit

Office of the Vice-President (Research and Development)

Abstract

In this paper, we will consider the universal approximation properties of a recently introduced neural network model called the graph neural network (GNN), which can be used to process structured data inputs, e.g., acyclic graphs, cyclic graphs, and directed or undirected graphs. This class of neural networks implements a function τ(G, n) ∈ IR^m that maps a graph G and one of its nodes n onto an m-dimensional Euclidean space. We characterize the functions that can be approximated by GNNs, in probability, up to any prescribed degree of precision. This set contains the maps that satisfy a property called preservation of the unfolding equivalence, and it includes most of the practically useful functions on graphs; the only known exception arises when the input graph contains particular patterns of symmetry, in which case unfolding equivalence may not be preserved. The result can be considered an extension of the universal approximation property established for classic feedforward neural networks. Some experimental examples are used to show the computational capabilities of the proposed model.
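To make the abstract's notation concrete, the following is a minimal sketch of the kind of map τ(G, n) ∈ IR^m a GNN computes: each node carries a state that is repeatedly updated from its label and its neighbours' states until it settles, and an output function then maps the state of a chosen node to an m-dimensional vector. This is an illustrative sketch only, not the authors' implementation; the tanh transition, the weight matrices, and the dimensions are assumptions for the example.

```python
import numpy as np

# Illustrative GNN-style computation of tau(G, n) in R^m.
# All weights, dimensions and the tanh transition are assumptions,
# not the paper's exact parameterisation.

rng = np.random.default_rng(0)

STATE_DIM, LABEL_DIM, OUT_DIM = 4, 3, 2                 # state size, label size, m
W_label = rng.normal(scale=0.3, size=(STATE_DIM, LABEL_DIM))
W_nbr   = rng.normal(scale=0.3, size=(STATE_DIM, STATE_DIM))
W_out   = rng.normal(scale=0.3, size=(OUT_DIM, STATE_DIM))

def gnn_forward(adj, labels, node, iters=50):
    """Approximate tau(G, node): adj is an n x n adjacency matrix,
    labels is an n x LABEL_DIM array of node labels."""
    n = adj.shape[0]
    x = np.zeros((n, STATE_DIM))                         # initial node states
    for _ in range(iters):                               # relax toward a fixed point
        # transition: state from the node's own label plus summed neighbour states
        x = np.tanh(labels @ W_label.T + adj @ x @ W_nbr.T)
    return W_out @ x[node]                               # output in R^m for the chosen node

# toy undirected graph: a triangle with random node labels
adj = np.array([[0, 1, 1],
                [1, 0, 1],
                [1, 1, 0]], dtype=float)
labels = rng.normal(size=(3, LABEL_DIM))
print(gnn_forward(adj, labels, node=0))                  # m-dimensional (here 2-d) output
```

With small weights the update is roughly contractive, so the iteration approaches a fixed point, which is the behaviour the abstract's function τ(G, n) relies on.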

Publication Year

2009

Journal Title

IEEE Transactions on Neural Networks

Volume number

20

Issue number

1

Publisher

IEEE Press

First Page (page number)

103

Last Page (page number)

122

Refereed

1

Funder

Australian Research Council
