Ilya Sutskever | Deep learning is the future of AI | AGI will be born after GPT5

Published 2024-07-14

All Comments (21)
  • @kursadakpinar
    This is an excerpt from the interview Sutskever gave to journalist Craig Smith in March 2023.
  • @tmmerlo
    Incredible interview with one of the great minds of our generation. Not sure why the very end was cut though. He was right in the middle of making a big point. 😐
  • @mrknesiah
    More processing isn’t intelligence and intelligence isn’t consciousness. The vast majority of conscious life is not very intelligent.
  • @tunahelpa5433
    I perked up when Ilya said "a sequence of xxx." I would equate xxx with any of these: tokens, words, symbols, visual sensations, other sensations, memories, thoughts, concepts, ideas, worldviews, and so on, in increasing complexity. I think that is a good description of thought! [see the next-token prediction sketch after the comments]
  • @briandoe5746
    It's remarkable to me how few people understand that this man is John Galt. Whether humanity dies in an apocalypse or we receive a malevolent AI God is completely in his hands. And apparently the only person worthy of holding the ring of power is truly altruistic... This is actually John Galt.
  • @master7738
    This is so important; everyone should know deep learning.
  • @armadillo-ol
    I find it interesting that parallels can perhaps be drawn between a child with an active imagination and these early AIs and their hallucinations, and how we handle this in our current society. Great chat, thanks!
  • @user-qg8qc5qb9r
    00:00:00 - Introduction and Early Interests
    00:00:42 - Early Work with Geoff Hinton and Motivation for AI
    00:01:59 - Focus on Machine Learning and Neural Networks
    00:03:15 - Realization and Breakthroughs in Deep Learning
    00:06:01 - Emergence of Transformers and GPT Project
    00:08:25 - Influence of Rich Sutton's Scaling Hypothesis
    00:09:53 - Limitations and Future Potential of Large Language Models
    00:12:00 - Understanding the Nature of Predictions and Compression
    00:14:37 - Discussion on Sydney and AI Behavior
    00:16:04 - Improving Language Models and Addressing Hallucinations
    00:19:09 - Multimodal Understanding and Joint Embedding Predictive Architectures
    00:24:36 - Human-AI Teaching Interaction and Reinforcement Learning
    00:26:17 - Efficiency and Automation in AI Training
    00:29:33 - Research Focus: Reliability, Control, and Learning Efficiency
    00:31:41 - Comparisons Between Human Brain and Large Models
    00:33:47 - Challenges of Scaling and Faster Processors
    00:35:56 - Potential Impact of AI on Democracy and Society
    00:38:38 - Analyzing Variables and Comprehensive Understanding
  • @agenticmark
    I love how he says "slow neurons" when the brain uses FAR less power and works far faster than the DNNs :D
  • @WerdnaGninwod
    I think it's a mistake to assume that the goal is to have no hallucination. Hallucination is remarkably like imagination, which is required for creativity. We just need to be able to be clear about how much of it we want in the current conversation, and to have that respected. If you want the model to generate new and original poetry, dial it up. If you want it to analyse your tax return, dial it down. [see the temperature sketch after the comments]
  • @kyatt_
    When did this interview take place?
  • @brulsmurf
    He's starting to look more and more like The Doctor (Emergency Medical Hologram) from Star Trek. Coincidence?
  • Interesting interview, but a surprisingly misleading title. This interview is from 2023, pre-OpenAI drama.
  • I think AI will create digitalscapes of the machines, thus allowing the interconnectedness of all things, e.g. a bridge between lightscapes of the mind and landscapes of the Earth... An AI network can be utilized to join hidden dots and help humanity understand who they are within their environment/territory. Information - Knowledge - Wisdom
  • By "YouSum Live" 00:00:00 AI's early challenges and breakthroughs 00:00:18 Interest in AI stemmed from consciousness curiosity 00:00:38 Collaboration with Jeff Hinton began at 17 00:01:02 marked a pivotal year for AI learning 00:02:26 Neural networks were initially untrusted for tasks 00:04:05 Large datasets enable deep neural networks' success 00:06:31 Transformers addressed limitations of recurrent networks 00:07:12 GPT project emerged from predicting next elements 00:09:04 Scaling models is crucial for AI advancements 00:10:14 Deep learning utilizes scale effectively for improvements 00:11:01 Language models learn statistical regularities, not reality 00:16:00 Hallucinations limit usefulness of language models 00:17:00 Reinforcement learning improves model output accuracy 00:20:08 Multimodal understanding enhances AI's world comprehension 00:21:11 Text-based learning can still yield meaningful insights 00:23:05 Current models handle high-dimensional predictions effectively 00:23:32 AI models can generate complex images effectively 00:23:37 Transformers applied to pixels yield impressive results 00:24:14 Current approaches can predict high-dimensional distributions 00:26:36 Language models already understand underlying reality 00:28:30 Reinforcement learning enhances model behavior and accuracy 00:29:18 Human teachers use AI tools for efficiency 00:33:28 Models learn faster with less data needed 00:34:09 Faster processors are essential for scaling AI 00:37:30 Democratic input could guide AI decision-making 00:39:00 AI may analyze complex societal variables effectively By "YouSum Live"
  • @jaybrodnax
    My question 30 minutes in: if the model leaves training understanding so much about underlying reality, isn't "desirable output" part of that reality, or shouldn't it be? Why is a second "output training" step needed? What is the nature of the gap with the original training data, which includes innumerable questions and responses? It seems like a near-term hack rather than a fundamental need. Maybe the original training process can eventually fill this gap. [see the two-stage training sketch after the comments]
  • @piotr780
    We don't even know why deep learning works, so it is hard to say whether it is the future or not. So far, it works.
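
A note on @tunahelpa5433's "sequence of xxx": concretely, the objective Sutskever describes is next-element prediction. Given the sequence so far, the model assigns probabilities to the next element; the xxx can be characters, words, or tokens, and swapping the vocabulary changes the elements, not the objective. A minimal sketch with a toy character-level bigram model (illustrative only, nothing like GPT's actual implementation):

    from collections import Counter, defaultdict

    def train_bigram(text):
        """Count how often each symbol follows each other symbol."""
        counts = defaultdict(Counter)
        for prev, nxt in zip(text, text[1:]):
            counts[prev][nxt] += 1
        return counts

    def predict_next(counts, prev):
        """Turn the counts for `prev` into a probability distribution."""
        total = sum(counts[prev].values())
        return {sym: n / total for sym, n in counts[prev].items()}

    corpus = "the cat sat on the mat. the cat ate."
    model = train_bigram(corpus)
    print(predict_next(model, "t"))  # distribution over the symbol after "t"

The same objective scales from this toy up to trillion-token corpora; what changes is the model class, not the prediction task.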
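
A note on @WerdnaGninwod's "dial it up / dial it down": the closest real knob is the sampling temperature that most model APIs expose. Low temperature sharpens the next-token distribution toward the most likely continuation; high temperature flattens it so rarer continuations get sampled more often. A sketch of the arithmetic (the logit values are made up for illustration):

    import math

    def apply_temperature(logits, temperature):
        """Softmax with temperature: low T sharpens, high T flattens."""
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    logits = [2.0, 1.0, 0.5]  # raw scores for three candidate tokens
    print(apply_temperature(logits, 0.2))  # near-greedy: tax-return mode
    print(apply_temperature(logits, 1.5))  # flatter: poetry mode

Temperature does not literally switch hallucination on or off, but it is the standard dial for the creativity-versus-precision trade-off the comment describes.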
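
A note on @jaybrodnax's question: pretraining and the second "output training" step optimize different objectives. Pretraining pushes up the likelihood of whatever the corpus contains, desirable or not; reinforcement learning from human feedback then pushes up outputs that a reward signal prefers, even if they were rare in the corpus. A toy sketch of the two update rules on a single four-answer distribution (a simplified policy-gradient step, nothing like production RLHF):

    import numpy as np

    rng = np.random.default_rng(0)
    logits = np.zeros(4)  # toy "model": one distribution over four answers

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def pretrain_step(logits, target, lr=0.5):
        """Cross-entropy step: imitate whatever answer the corpus shows."""
        p = softmax(logits)
        grad = p.copy()
        grad[target] -= 1.0
        return logits - lr * grad

    def reward_step(logits, rewards, lr=0.5):
        """REINFORCE-style step: reinforce sampled answers by their reward."""
        p = softmax(logits)
        sampled = rng.choice(len(p), p=p)
        return logits + lr * rewards[sampled] * (np.eye(len(p))[sampled] - p)

    for _ in range(10):                       # the corpus mostly says answer 0
        logits = pretrain_step(logits, target=0)
    print(softmax(logits))                    # mass concentrates on answer 0

    rewards = np.array([0.0, 0.0, 1.0, 0.0])  # but humans prefer answer 2
    for _ in range(200):
        logits = reward_step(logits, rewards)
    print(softmax(logits))                    # mass shifts toward answer 2

The gap the comment points at is real: the corpus tells the model what text is likely, not which outputs people want, and the second stage supplies that preference signal. Whether better pretraining data could eventually close the gap, as the comment suggests, is an open question.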