Knowledge Dependencies in Large Language Models
Some of the most pressing issues with large language models (LLMs), such as the generation of factually incorrect text and logically flawed reasoning, may be attributed to the way models represent and recall knowledge internally. In this talk, we will evaluate how LLMs represent and use knowledge dependencies from two perspectives. First, we will consider the task of knowledge editing, showing that (a) editing a specific fact with existing editing methods does not implicitly update other facts that depend on it, and (b) some facts are hard to disentangle, so edits cannot always target a single fact in isolation. Next, we will consider the setting of latent multi-hop reasoning, where answering a complex query such as "the spouse of the head of state of X" requires composing two stored facts, and show that LLMs rely only weakly on knowledge dependencies when answering such queries. While these shortcomings could potentially be mitigated by intervening on the LLM's computation, they call for better training procedures and possibly new architectures.
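To make the failure mode concrete, here is a minimal toy sketch (not any specific editing method or model from the talk) of why directly editing one fact need not ripple to dependent facts. All entities, relations, and the `ToyKnowledgeStore` class are hypothetical placeholders standing in for knowledge stored in model parameters.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ToyKnowledgeStore:
    # Maps (subject, relation) -> object, mimicking fact triples a model stores.
    facts: dict = field(default_factory=dict)

    def recall(self, subject: str, relation: str) -> Optional[str]:
        return self.facts.get((subject, relation))

    def edit(self, subject: str, relation: str, new_object: str) -> None:
        # A "direct" edit rewrites one stored fact, loosely analogous to
        # localized weight-editing methods; it touches nothing else.
        self.facts[(subject, relation)] = new_object

    def two_hop(self, subject: str, rel1: str, rel2: str) -> Optional[str]:
        # Latent multi-hop reasoning: resolve the bridge entity, then hop again.
        bridge = self.recall(subject, rel1)
        return self.recall(bridge, rel2) if bridge else None

kb = ToyKnowledgeStore()
kb.facts[("Atlantis", "head_of_state")] = "Alice"
kb.facts[("Alice", "spouse")] = "Bob"
# A memorized shortcut for the composed query, much as a model may store a
# two-hop answer directly rather than deriving it from its parts.
kb.facts[("Atlantis", "spouse_of_head_of_state")] = "Bob"

kb.edit("Atlantis", "head_of_state", "Carol")
kb.facts[("Carol", "spouse")] = "Dan"

# Composing the two hops reflects the edit...
print(kb.two_hop("Atlantis", "head_of_state", "spouse"))  # -> Dan
# ...but the memorized dependent fact is stale: the edit did not ripple.
print(kb.recall("Atlantis", "spouse_of_head_of_state"))   # -> Bob
```

The same sketch also hints at the second finding: a model that answers composed queries via memorized shortcuts, rather than by chaining the underlying facts, is only weakly exercising its knowledge dependencies.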