Seeing is Believing with MIT’s Ziming Liu
Ziming Liu discusses Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable, and its applications across AI and physics research.
Video Description
Ziming Liu is a Physics PhD student at MIT and IAIFI, advised by Prof. Max Tegmark. Ziming's research sits at the intersection of AI and physics. Today's discussion goes in depth on Liu's paper "Seeing is Believing," in which he presents Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable. Being able to see modules visually can complement current mechanistic interpretability strategies.
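To give a flavor of the core idea discussed in the episode, here is a minimal, hedged sketch of a BIMT-style wiring penalty: neurons are assigned positions in a 2D layout, and weights are penalized in proportion to the Euclidean distance they span, so strongly interacting neurons are encouraged to cluster into visible modules. The layer name, grid layout, and hyperparameters below are illustrative assumptions, not the paper's exact implementation (which also periodically swaps neuron positions during training).

```python
# Illustrative sketch of a distance-weighted L1 ("wiring cost") penalty in the
# spirit of BIMT. Assumes neurons are placed on a fixed 2D grid; the actual
# method in the paper additionally swaps neurons during training.
import torch
import torch.nn as nn

class LocalLinear(nn.Linear):
    """Linear layer whose input/output neurons are assigned 2D positions."""
    def __init__(self, in_features, out_features, layer_idx, spacing=1.0):
        super().__init__(in_features, out_features)
        # Inputs sit on row `layer_idx`, outputs on row `layer_idx + 1`.
        in_pos = torch.stack([torch.arange(in_features, dtype=torch.float),
                              torch.full((in_features,), float(layer_idx))], dim=1)
        out_pos = torch.stack([torch.arange(out_features, dtype=torch.float),
                               torch.full((out_features,), float(layer_idx + 1))], dim=1)
        # Pairwise Euclidean distance between each output and input neuron.
        self.register_buffer("dist", torch.cdist(out_pos, in_pos) * spacing)

    def wiring_cost(self):
        # Long connections cost more, pushing related neurons to be nearby.
        return (self.weight.abs() * self.dist).sum()

model = nn.Sequential(
    LocalLinear(4, 16, layer_idx=0),
    nn.ReLU(),
    LocalLinear(16, 2, layer_idx=1),
)

def total_loss(pred, target, lam=1e-3):
    task = nn.functional.mse_loss(pred, target)
    wiring = sum(m.wiring_cost() for m in model if isinstance(m, LocalLinear))
    return task + lam * wiring
```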
The Cognitive Revolution is a part of the Turpentine podcast network. To learn more: Turpentine.co
Links:
Seeing is Believing, authors: Ziming Liu, Eric Gan, and Max Tegmark https://arxiv.org/pdf/2305.08746.pdf
Ziming Liu: https://kindxiaoming.github.io/
TIMESTAMPS:
(00:00) Episode preview
(01:16) Nathan introduces Ziming Liu and his research
(04:50) What motivated Ziming's research into neural networks and modularity
(08:06) Key Visuals of "Seeing is Believing"
(08:44) Biology analogy & paper overview
(15:07) Sponsor: Omneky
(26:08) Research precedents
(30:00) Locality vs modularity
(37:40) Shoutout to TinyStories - https://www.youtube.com/watch?v=mv3SIgDP_y4&t=115s
(39:00) The Quantization Model of Neural Scaling
(1:00:00) How this work can impact mechanistic interpretability strategies
(1:27:00) Training
(1:46:30) Downscaling large-scale models
SPONSOR:
Thank you Omneky (www.omneky.com) for sponsoring The Cognitive Revolution. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.
MUSIC CREDIT:
MusicLM