The use of symbolic representations, along with their applications in many important tasks, has grown rapidly. Symbolic representations, in the form of Knowledge Graphs (KGs), constitute large networks of real-world entities and their relationships. At the same time, sub-symbolic artificial intelligence has become a mainstream area of research, and many studies focus on learning distributed representations from KGs, which are generated manually or automatically by processing text or other data sources. The workshop also targets the problem of capturing formal semantics in sub-symbolic systems. Its aim is to bring these two communities together to develop more effective algorithms and applications.
Biography: Maximilian Nickel is a research scientist at Facebook AI Research in New York. Before joining FAIR, he was a postdoctoral fellow at MIT, where he was with the Laboratory for Computational and Statistical Learning and the Center for Brains, Minds and Machines. In 2013, he received his PhD summa cum laude from Ludwig Maximilian University of Munich. From 2010 to 2013 he worked as a research assistant at Siemens Corporate Technology. His research centers on geometric methods for learning and reasoning with relational knowledge representations and their applications in artificial intelligence and network science.
Title: Geometric Representation Learning in Symbolic Domains
Abstract: Learning from symbolic knowledge representations is often characterized by complex relational patterns involving large amounts of uncertainty. Moreover, domains such as the Web, bioinformatics, or natural language understanding can consist of billions of entities and relationships. In these settings, representation learning has become an invaluable approach for making statistical inferences as it allows us to learn high-quality models and scale to large datasets with billions of relations.
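As a concrete, deliberately minimal illustration of such representation learning over relational data, the sketch below scores a KG triple with a bilinear model in the spirit of RESCAL; the toy entities, relations, and random vectors are purely illustrative and not part of the talk.

```python
# Minimal sketch (illustrative only): scoring a knowledge-graph triple
# (head, relation, tail) with a bilinear model in the spirit of RESCAL.
# Entities are vectors, relations are matrices; in practice both would be
# learned from data rather than drawn at random.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
entities = {name: rng.normal(size=dim) for name in ("alice", "bob", "acme")}
relations = {name: rng.normal(size=(dim, dim)) for name in ("works_at", "knows")}

def score(head: str, relation: str, tail: str) -> float:
    """Bilinear score e_h^T W_r e_t; a higher score means a more plausible triple."""
    return float(entities[head] @ relations[relation] @ entities[tail])

print(score("alice", "works_at", "acme"))
```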
Recently, new attention has been given to an important aspect of such methods, i.e., the geometry of the representation space. Methods such as hyperbolic embeddings and Riemannian generative models show that non-Euclidean geometries can provide significant advantages for modeling relational data, e.g., with regard to interpretability, scalability, and latent semantics.
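For intuition, the following sketch computes the distance function of the Poincaré ball, the hyperbolic model used by Poincaré embeddings; the two example points are made up. Distances grow rapidly near the boundary of the ball, which gives tree-like hierarchies the exponentially growing "room" they need.

```python
# Minimal sketch: the geodesic distance on the Poincaré ball (all points have
# norm < 1). Distances blow up near the boundary, which is what lets
# hyperbolic space embed trees with low distortion.
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
    """d(u, v) = arcosh(1 + 2*||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))."""
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u**2)) * (1.0 - np.sum(v**2))
    return float(np.arccosh(1.0 + 2.0 * sq_dist / denom))

root = np.array([0.05, 0.00])  # near the origin: "general" region of the ball
leaf = np.array([0.90, 0.10])  # near the boundary: "specific" region
print(poincare_distance(root, leaf))
```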
In this talk, I will provide an overview of our recent work on such geometric approaches to representation learning. I will first discuss how structural properties of relational data (such as latent hierarchies) are connected to the geometry of the embedding space and how methods such as hyperbolic embeddings allow us to learn parsimonious representations in these cases. Moreover, I will show how the embeddings can be used to discover latent hierarchies and applied to diverse tasks in NLP and bioinformatics. In addition, I will discuss how we can model flexible probability distributions over such geometric representations through Riemannian continuous normalizing flows.
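One way to read a latent hierarchy off such embeddings, as reported for Poincaré embeddings, is that the norm of a point tends to track its depth: general concepts sit near the origin and specific ones near the boundary. The sketch below, with made-up embeddings standing in for learned ones, sorts entities by norm as a rough hierarchy level.

```python
# Hedged illustration: sorting entities by embedding norm as a proxy for
# hierarchy depth in a Poincaré-ball embedding. The vectors are invented;
# real embeddings would be learned from relational data.
import numpy as np

embeddings = {
    "entity": np.array([0.02, 0.01]),
    "animal": np.array([0.30, 0.10]),
    "dog":    np.array([0.70, 0.20]),
    "beagle": np.array([0.88, 0.25]),
}

# Smaller norm -> closer to the origin -> higher (more general) in the hierarchy.
for name, vec in sorted(embeddings.items(), key=lambda kv: np.linalg.norm(kv[1])):
    print(f"{name:8s} norm = {np.linalg.norm(vec):.2f}")
```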
Papers must be formatted according to the CEUR style guidelines in the two-column style (no page numbers). See details on CEUR-WS. Papers should be submitted via EasyChair. Submissions can fall into one of the following categories:
Authors are encouraged to submit negative (i.e., failing) results, provided the paper makes a strong contribution and includes an analysis of the results.
Accepted papers (after blind review) will be published in a CEUR-WS companion volume.
At least one author of each accepted paper must register for the workshop for the paper to be included in the workshop proceedings.