Which roads will be blocked by traffic jams tomorrow? What are the possible side effects of drugs? Where does fake news come from, and how does it spread? Many crucial questions require predictions in a connected world, and Graph Neural Networks (GNNs) allow us to make such predictions. While current AI systems can understand images, such as stop signs for self-driving cars, GNNs can make predictions about the physical world, such as cars arriving at an intersection even when they cannot currently be seen. Furthermore, GNNs can find better solutions for abstract problems, which can then be translated into direct improvements in industrial organization.
Since the questions we want to answer with GNNs are much bigger than just recognizing images, we need far more powerful computers to run them. However, the most powerful computers today are composed of many smaller units, and only by ensuring that all these units work together seamlessly can we use such computers efficiently. This is crucial both for extending the scope of what GNNs can predict and for limiting the energy consumption of these systems.
Graph neural networks (GNNs), which extend the successful ideas of deep learning to irregularly structured data, are a recent addition to the field of artificial intelligence. While traditional deep learning has focused on regular inputs such as images composed of pixels in two-dimensional space, graph neural networks can analyze and learn from unstructured connections between objects. This gives GNNs the ability to tackle completely new classes of problems, such as analyzing social networks and power grids, or uncovering molecular structures in computational chemistry. Some experts in the field also believe that graph neural networks, due to their capacity for combinatorial generalization, represent an important next step towards the development of general artificial intelligence. However, such tasks require vast amounts of computation, which can only be provided by parallel processing.
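To make the irregular computation concrete: a common form of GNN layer aggregates each node's neighbor features through a sparse adjacency matrix and then applies a learned transformation, roughly H' = σ(Â H W). The following is a minimal sketch of this pattern using SciPy sparse matrices; the toy graph, feature dimensions, and random weights are illustrative, not taken from the proposal.

```python
import numpy as np
from scipy.sparse import csr_matrix, eye

# Toy undirected graph on 4 nodes with edges (0,1), (1,2), (2,3).
rows = np.array([0, 1, 1, 2, 2, 3])
cols = np.array([1, 0, 2, 1, 3, 2])
A = csr_matrix((np.ones(6), (rows, cols)), shape=(4, 4))

# Add self-loops and normalize rows by degree (mean aggregation).
A_hat = A + eye(4, format="csr")
deg = np.asarray(A_hat.sum(axis=1)).ravel()
A_norm = csr_matrix(A_hat.multiply(1.0 / deg[:, None]))

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))   # input node features (4 nodes, 8 features)
W = rng.standard_normal((8, 4))   # layer weights (random stand-in for learned values)

# One message-passing layer: sparse neighbor aggregation, dense transform, ReLU.
H_next = np.maximum(A_norm @ H @ W, 0.0)
print(H_next.shape)  # (4, 4)
```

The key point for parallelization is that the expensive step, `A_norm @ H`, is a sparse-times-dense matrix product whose structure depends entirely on the (irregular) graph.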
It is well known that parallel computation for irregular problems is much more challenging than for regular ones, and GNNs are no exception. While traditional deep learning has been scaled up to run efficiently on entire supercomputers, GNNs currently do not scale to multiple processors. This proposal aims to overcome this limitation by drawing upon decades of experience in scalable graph algorithms and sparse linear algebra, and by adapting techniques proven effective for distributing graph computations over large parallel systems to GNNs.
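One classic technique from distributed sparse linear algebra that carries over to GNNs is 1D row partitioning: each worker owns a block of rows of the adjacency matrix and computes the corresponding rows of the aggregation product. The sketch below simulates the workers with a loop over row blocks; on a real parallel machine each block would live on a separate process and the feature matrix would be communicated between them. The matrix sizes and worker count are illustrative.

```python
import numpy as np

# Toy adjacency matrix and node features (dense stand-ins; real graphs are sparse).
rng = np.random.default_rng(1)
A = (rng.random((8, 8)) < 0.3).astype(float)
H = rng.standard_normal((8, 4))

# 1D row partitioning: each of p workers owns a contiguous block of rows of A
# and computes its slice of the product A @ H independently. Here a Python loop
# stands in for the parallel workers.
p = 4
blocks = np.array_split(np.arange(A.shape[0]), p)
partial = [A[idx, :] @ H for idx in blocks]

# Stacking the per-worker slices reproduces the serial result.
result = np.vstack(partial)
```

This decomposition is attractive because the per-worker work is proportional to the edges in its row block, which is exactly the load-balancing problem that scalable graph algorithms have long addressed.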
We aim to create a new computational framework that allows users to specify a GNN while the framework handles the task of distributing graphs over parallel machines, as well as automatically selecting and running the algorithms best suited to the computation. Recently, frameworks such as TensorFlow have made traditional deep neural networks accessible to a large number of users. In the same way, our goal is to create a proof-of-concept framework that will be a crucial enabler for the successful real-world application of GNNs.