Can we develop theoretical explanations for today’s AI systems?

August 4, 2023

In short: Misha Belkin, a leading AI theory researcher, makes a compelling case for a widespread effort to develop a more rigorous mathematical understanding of modern AI systems such as large language models. He argues that such understanding is not only required for engineering safe and robust systems, but also quite tractable despite the apparent complexity of these systems.

In The necessity of machine learning theory in mitigating AI risk, Misha Belkin makes the excellent point that a better theoretical understanding of today’s state-of-the-art AI techniques is practically required for engineering safe and robust systems. Moreover, he argues that there are many good reasons to believe we can make significant progress on such theoretical models even today.

…developing a fundamental mathematical theory of deep learning is a prerequisite for managing risk as our society transitions to wide use of AI technology. Theory in this context refers to identifying precise measurable quantities and mathematically describing their patterns, the way it is used in physics and engineering, rather than proving rigorous theorems.

It is difficult to see how deep learning systems with their human-like complexity can be controlled and guided in socially acceptable ways, or countered in adversarial situations, without a fundamental understanding of their principles.

…[the] effectiveness [of deep learning systems] relies on fundamental patterns in data, a “gravitational force” in the data solar system, rather than a serendipitous “alignment of the planets” in specific instances of data analysis. This universality hints at a continuity with other fundamental principles discovered in science and mathematics.