Breaking conjectures in combinatorics with graph neural networks and reinforcement learning
Abstract
I will present some recent results on using reinforcement learning to find explicit constructions and counterexamples to several open conjectures in graph theory. Reinforcement learning is the branch of machine learning in which an agent learns, from feedback and rewards, which actions to take in an environment. I will explain how a reinforcement learning algorithm can be used to generate graphs that satisfy, or provide counterexamples to, conjectured properties. I will also discuss how replacing fully connected networks with graph neural networks can significantly improve the performance and scalability of this approach. Graph neural networks are neural networks that operate directly on graph-structured data and can capture its features and relational structure. This talk is based on the paper "Constructions in combinatorics via neural networks" by Adam Zsolt Wagner; the work presented is joint with Renato Faraone.
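To give a flavour of the method, below is a minimal sketch of a cross-entropy style search of the kind used in Wagner's paper. For simplicity, a plain vector of edge-inclusion probabilities stands in for the neural-network policy, and the batch size, elite fraction, and learning rate are illustrative choices, not the paper's settings. The score targets a conjectured bound of the form lambda_1 + mu >= sqrt(n-1) + 1 for connected graphs (where lambda_1 is the largest adjacency eigenvalue and mu the matching number), one of the conjectures treated in the paper; a graph with positive score would be a counterexample.

```python
# Sketch of a cross-entropy search for a counterexample graph.
# A probability vector over edges replaces the neural-network policy;
# the overall loop (sample, score, keep elites, update) is the same idea.
import numpy as np
import networkx as nx

N = 19                                   # number of vertices (illustrative)
all_edges = [(i, j) for i in range(N) for j in range(i + 1, N)]
E = len(all_edges)                       # number of potential edges
BATCH, ELITE_FRAC, LR = 200, 0.1, 0.3    # illustrative hyperparameters
rng = np.random.default_rng(0)

def score(edge_bits):
    """Reward: how far the graph is from the conjectured bound."""
    G = nx.Graph()
    G.add_nodes_from(range(N))
    G.add_edges_from(e for e, b in zip(all_edges, edge_bits) if b)
    if not nx.is_connected(G):
        return -float(N)                 # penalise disconnected graphs
    lam1 = max(np.linalg.eigvalsh(nx.to_numpy_array(G)))
    mu = len(nx.max_weight_matching(G, maxcardinality=True))
    return np.sqrt(N - 1) + 1 - (lam1 + mu)   # > 0 would refute the bound

p = np.full(E, 0.5)                      # edge-inclusion probabilities

for step in range(100):
    samples = rng.random((BATCH, E)) < p           # sample a batch of graphs
    rewards = np.array([score(s) for s in samples])
    elite = samples[np.argsort(rewards)[-int(BATCH * ELITE_FRAC):]]
    p = (1 - LR) * p + LR * elite.mean(axis=0)     # move towards elite graphs
    if rewards.max() > 0:
        print("possible counterexample found at step", step)
        break
```

In the talk I will describe how the probability vector above is replaced by a learned policy, and why parameterising that policy with a graph neural network, rather than a fully connected network, helps the search scale to larger graphs.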