RSS · Spotify · Apple Podcasts · Pocket Casts
Below are some highlights from our conversation as well as links to the papers, people, and groups referenced in the episode.
Some highlights from our conversation
“The second implication of RLHF is that prompt engineering will go away eventually. Like, it is something fleeting, and the prompt engineers… it’s just not a real job. Let’s face it. The reason prompt engineering will not be relevant forever is because of RLHF. Why prompt engineering even exists in the first place is because these systems are misaligned with what humans want, so we have to kind of coerce the model to give us what we want by typing out very unnatural sentences, to essentially trick the model into solving the task.”
“I’m still really amazed by how humans do this task. Because we’re doing the lowest level of control, right? Like, we do the keyboard and mouse controls. And if we want to be, like, stricter about the concepts, we’re sending neural signals to our fingers and then controlling the finger torques, the torques in each joint, to operate a keyboard and also use a mouse. It’s incredible how low level we are going, as humans, to do World of Bits, and we seem to have very little problem with our computational efficiency, though I guess procrastination is our unique problem. But otherwise, we’re very efficient. So I’m just wondering, like, maybe there’s a way to actually make the lowest level, the most general action space, computationally attractive and even, like, more efficient than we thought it would be.”
“When I was starting to play Minecraft, I watched YouTube videos. I also went to the wiki to look up what to do at first, and the wiki tells you, ‘Okay, these are the tools that you must craft, and you need to, like, prepare food, otherwise you will starve, and here is what kind of foods are good,’ right? It’s all in the wiki, and I also go to Reddit whenever I have a question. I treat that as a Stack Overflow, and the Reddit people give a lot of good advice. That’s how I played Minecraft even as a human. That gets me thinking, right: why shouldn’t our AI use all of this internet-scale knowledge? And if we want our AI algorithm to play this from scratch, it’s almost impossible, because exploration is intractable. If you just take random actions, what is the chance that you stumble upon a diamond? It’s almost literally zero, right? So that also inspired the algorithmic approach that we took.”
“What we want is to develop, or maybe discover, right, general principles of embodied intelligence. That’s what we wanna do. That’s what MineDojo and Avalon want to achieve, want to enable, right? Not just solving these particular 1,000 tasks in, kind of, the most brute-force way. So, yeah, just a word of caution to researchers: resist the urge to overfit, to cheat, to use things that are super specific to Minecraft that will not transfer elsewhere.”
Referenced in this podcast
- MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge by Jim Fan, et al.
- ImageNet Classification with Deep Convolutional Neural Networks by Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton
- Deep Speech 2: End-to-End Speech Recognition in English and Mandarin by Dario Amodei, et al.
- Prof. Yoshua Bengio
- SURREAL: Open-Source Reinforcement Learning Framework and Robot Manipulation Benchmark by Jim Fan, et al.
- World of Bits: An Open-Domain Platform for Web-Based Agents by Tianlin (Tim) Shi, et al.
- Mini World of Bits
- Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos by Bowen Baker, et al.
- ACT-1: Transformer for Actions
- WebGPT: Browser-assisted question-answering with human feedback by Reiichiro Nakano, et al.
- MineCLIP: Foundation Model for MineDojo
- CLIP: Connecting Text and Images by Alec Radford, et al.
- VIMA: General Robot Manipulation with Multimodal Prompts by Yunfan Jiang, et al.
- MetaMorph: Learning Universal Controllers with Transformers by Agrim Gupta, et al.
- Attention Is All You Need by Ashish Vaswani, et al.
- Boston Dynamics
- (DALL-E) Zero-Shot Text-to-Image Generation by Aditya Ramesh, et al.
- Stable Diffusion
- Ilya Sutskever
- Proximal Policy Optimization Algorithms by John Schulman, et al.
Thanks to Tessa Hall for editing the podcast.