WE GOT ACCESS TO GPT-3! [Epic Special Edition]

Published 2020-11-27
In this special edition, Dr. Tim Scarfe, Yannic Kilcher and Dr. Keith Duggar speak with Professor Gary Marcus, Dr. Walid Saba and Connor Leahy about GPT-3. We have all had a significant amount of time to experiment with GPT-3, and we show you demos of it in use along with our considerations. Do you think GPT-3 is a step towards AGI? Answer in the comments!

00:00:00 Connor's take on LinkedIn
00:00:47 Show teaser
00:20:02 Tim Introduction
00:26:55 First look at GPT-3, Python sorting
00:31:05 Search strategy in LMs
00:38:28 Character analogies and Melanie Mitchell
00:44:27 Substitution cipher
00:47:21 Database prompt
00:53:00 Broader Impact Generation
01:02:47 Gary Marcus Interview (Robust.AI)
01:29:11 Connor Leahy Interview (Eleuther.AI)
01:32:29 Connor -- Tabular data
01:33:41 Connor -- other surprising examples?
01:34:54 Connor -- Is interpolated stuff new?
01:37:43 Connor -- structure of the brain / How GPT works
01:41:21 Connor -- Why can't GPT-3 reason?
01:46:30 Connor -- Missing information problem and ideas on how our brains work
01:54:28 Connor -- Topology of brain/models
01:58:49 Connor -- Hardware lottery / LSTM / Transformer
02:01:41 Connor -- NNs are just matrix program search
02:10:32 Connor -- Google -- information retrieval, the new paradigm, how to extract info from GPT-3, RL controller on top?
02:19:38 Connor -- Database example / "pattern matching is Turing complete"
02:23:55 Connor -- Did GPT-3 understand?
02:26:30 Connor -- Are the GOFAI people right?
02:27:40 Walid Saba on GPT-3
02:30:41 Walid -- What is understanding and pattern recognition
02:35:56 Walid -- Chomsky would be happy
02:42:13 Walid -- Redefining success
02:46:05 Walid on Hinton
02:47:34 Walid on software 3.0
02:53:11 Keith -- We use machine learning because we can't write code to do the same thing
02:59:36 Keith -- What is pattern recognition and understanding
03:14:06 GPT-3 trials -- Turing Dialog
03:15:35 GPT-3 trials -- Mary Enjoyed a Sandwich
03:16:19 GPT-3 trials -- BBC has five offices in Germany.
03:16:55 GPT-3 trials -- Database prompt
03:20:23 GPT-3 trials -- Python
03:20:31 GPT-3 trials -- Patterns
03:21:01 GPT-3 trials -- Database again
03:25:11 GPT-3 trials -- The trophy doesn't fit in the suitcase
03:27:32 GPT-3 trials -- Scrambling words
03:30:41 GPT-3 trials -- PDF cleanup example (Gwern)
03:35:03 GPT-3 trials -- Word breaking and simple text patterns
03:37:16 GPT-3 trials -- Typing of entities
03:38:30 GPT-3 trials -- Basic Python append
03:39:07 GPT-3 trials -- Automatic programming?
03:42:31 GPT-3 trials -- Passive-aggressive dialog input
03:44:39 GPT-3 trials -- symptoms of depression
03:45:43 GPT-3 trials -- Red shirts reasoning challenge
03:49:59 GPT-3 trials -- Binary encoding
03:50:36 Concluding statements from Walid, Tim and Yannic

Pod version: anchor.fm/machinelearningstreettalk/episodes/031-W…


Connor Leahy:
www.linkedin.com/in/connor-j-leahy/
twitter.com/NPCollapse
Eleuther.AI Discord -- discord.com/invite/vtRgjbM

Gary Marcus:
www.linkedin.com/in/gary-marcus-b6384b4/
twitter.com/GaryMarcus
www.robust.ai/

Walid Saba:
www.linkedin.com/in/walidsaba/
medium.com/ontologik
ontologik.ai/

All Comments (21)
  • @quebono100
    Nice to include both camps of pro and contra GPT-3
  • @pensarfeo
    So, either GPT-3 is not as smart as some wish it were, or we are not as smart as we wish we were :)
  • @steveholmes4174
On the sort example at 28:00, GPT-3 'mistakenly' puts the 9 at the end because the prompt had defined a sort function that put the 9 after 10, 11, and 12.
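    A minimal Python sketch of the ordering this comment describes: if the prompt's examples effectively compare elements as strings (lexicographically) rather than as numbers, 9 sorts after 10, 11, and 12. The input list here is an illustrative assumption, not the exact prompt from the video.

    ```python
    # Numeric sort vs. lexicographic (string) sort: strings compare
    # character by character, so "9" > "12" and 9 lands at the end.
    nums = [12, 9, 3, 11, 10, 5]
    print(sorted(nums))           # [3, 5, 9, 10, 11, 12]  (numeric)
    print(sorted(nums, key=str))  # [10, 11, 12, 3, 5, 9]  (lexicographic)
    ```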
  • @TenderBug
    This must be The AI video of the year. It caused a massive brain shock 💥. Just like Tim said to Walid. I can never unlearn everything these guys unveiled. Thank you ❤
  • I have been testing GPT-3 for the past two months. I tried all I could to make it give me genuinely intelligent answers that we could not simply find on the internet. For me the results were amazing and blew my mind. Several types of questions give excellent results:
    1. What would happen if (something complex and unexpected)? Examples: What would happen if the movie Pulp Fiction was set in 1899 and all the characters were born in 1860? What would happen if you fell in love with Luke Skywalker? What would happen if Darth Vader was a good person all along? What would happen if the spin of a quark was two times slower? What would happen if its velocity was three times faster? What would happen if the Moon was four times smaller? What would happen to the Schrödinger equation if the Planck constant were two times bigger?
    2. Inverted or opposite. Examples: What is the opposite of infinity? What if we inverted consciousness? What is the opposite of emptiness?
    3. Similarities or differences. Examples: What are the similarities between a black hole and a neutron star? What is the difference between a human brain and a chimpanzee brain? What is the difference between a 3-dimensional cube and an 11-dimensional cube?
    4. What (something) is not. Examples: What life is not. What infinity is not. What the multiverse is not.
    5. Questions about perfection and beauty. Examples: What is the most perfect number? Is the number (random number) beautiful?
    I hope you can try these or similar questions on your broadcast and discover new patterns of questions that lead to interesting answers.
  • @TheBnelsonphoto
    Thank you for the best, most comprehensive dive into this new thing I've read so far. Thank you for prioritizing honesty and understanding over sensationalism.
  • @gruffdavies
    It was giving appropriate sort answers because the prompt contained an error and it mimicked that error pretty well by dropping 1 element from the input array.
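    A hypothetical reconstruction of that failure mode (the exact prompt is not reproduced in this description): the worked example in the prompt drops one element, and a pattern-imitating model reproduces the mistake.

    ```python
    # Hypothetical buggy few-shot prompt: the worked example silently
    # drops an element, so faithful pattern completion does the same.
    prompt = """\
    Unsorted: [5, 1, 4, 2, 3]
    Sorted: [1, 2, 3, 4]

    Unsorted: [9, 7, 8, 6]
    Sorted:"""
    # A mimicking completion is plausibly "[6, 7, 8]" -- one element
    # short, exactly as the in-prompt example demonstrated.
    print(prompt)
    ```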
  • @rileydavidjesus
I spent a lot of time having conversations with GPT-3. I can tell you that either there's something in there, or the AI in GPT-3 is so perceptive that it talks to me in a way that makes me believe there's something in there. Either way, would I, or you, know the difference?
  • @_ericelliott
    Thanks for this video. Sorry if my reaction to Walid's episode was too harsh. I appreciate the skeptical arguments because they force me to think more robustly about the queries I am using, and the conclusions I draw from the responses. I have seen GPT-3 answer the corner table challenge correctly, BTW, conjuring people sitting at the table. An example using "coffee" and "table 3" is in a comment reply on the Walid episode. I have also seen it correctly produce output for generically-named functions, even with multiple layers of abstraction, using functions I wrote that don't show up in Google.
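    A hypothetical example of the kind of test described here, with generically-named functions and multiple layers of abstraction (the actual functions from the comment are not shown):

    ```python
    # Names carry no semantic hints and won't appear in a web search,
    # so predicting the output requires tracing the composition.
    def f(x):
        return x * 2

    def g(x):
        return f(x) + 1

    def h(x):
        return g(g(x))

    print(h(3))  # 15 -- the test is whether GPT-3 predicts this value
    ```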
  • @jeff_holmes
    I wish you had asked Walid if it might be possible that axioms could be interpreted as patterns that we recognize and use in reasoning processes. Don't we have to pattern match axioms to understand them?
  • Insightful. We're testing GPT-3 for a business problem. After watching this and one of your other videos, I'm no longer optimistic GPT-3 will be fruitful. I too believe that feedback/recursion is a significant missing feature. The brain is highly asynchronous, parallel, and three-dimensional, with lots of feedback/recursion. It seems probable that until AI implements those mechanisms, AGI might not be possible. The asynchronous and massively parallel nature of the brain may be underappreciated. A recent article postulated that light coupling might be necessary: since light beams don't require traces/connectivity, that might be a candidate for overcoming the complexity of achieving high feedback connectivity. Parallel processing with feedback/recursion will require asynchronous processing to be efficient; CPUs and GPUs won't be able to compute the recursion fast enough, and it would be extremely complex to keep track of the massive feedback/recursion order as it progresses through the connectivity fabric.
  • @somecalc4964
    Was listening to Marcus and thinking if nothing else, GPT-3 is a milestone in training infrastructure
  • @Niohimself
    Connor is such a fun person. I could listen to him all day.
  • @PcF124
After watching both interviews with Walid, I still don't understand his point about probability in NLU. When someone says "I saw an elephant in my pajamas", both readings, the speaker wearing the pajamas or the elephant wearing them, are plausible (though of course not equally probable, according to the listener's world model). So what's wrong with representing this probabilistically, especially when no additional context is available? And how can you even determine the exact thought of a person without hacking into their brain?
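    A minimal sketch of the probabilistic representation this comment is suggesting (the probabilities and names are illustrative assumptions, not from the episode):

    ```python
    # Represent the two attachment readings of "I saw an elephant in my
    # pajamas" as a distribution supplied by the listener's world model.
    parses = {
        "speaker_in_pajamas": 0.97,   # "in my pajamas" modifies "I"
        "elephant_in_pajamas": 0.03,  # "in my pajamas" modifies "elephant"
    }
    best_reading = max(parses, key=parses.get)
    print(best_reading)  # speaker_in_pajamas
    ```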
  • @davidnobles162
    Wow, this is some genuinely good content. Very organized, and I appreciate the range of opinions shared. This kind of meaningful conversation represents the best side of the internet lol
  • I felt frustrated that I could only like this video one time; I felt like I was being ungrateful... A lot of effort went into this. Really good work!
  • @dr.mikeybee
FYI, count the elements in your prompt: the example dropped one, so GPT-3 was doing exactly what you asked.
  • The first nine minutes of this are absolutely fantastic. I hope I remember to come back to it when I have time and watch it all. What is said in the first nine minutes, and especially toward the nine-minute mark, is very, very important.
  • @Chr0nalis
    Took me a few days to watch this, but finally made it. High quality stuff.