Connor Leahy Unveils the Darker Side of AI

Published 2023-05-10
Welcome to Eye on AI, the podcast that explores the latest developments, challenges, and opportunities in the world of artificial intelligence. In this episode, we sit down with Connor Leahy, an AI researcher and co-founder of EleutherAI, to discuss the darker side of AI.

Connor shares his insights on the current negative trajectory of AI, the challenges of keeping superintelligence in a sandbox, and the potential negative implications of large language models such as GPT-4. He also discusses the problem of releasing AI to the public and the need for regulatory intervention to ensure alignment with human values.

Throughout the podcast, Connor highlights the work of Conjecture, a project focused on advancing alignment in AI, and shares his perspective on the stages of research and development on this critical issue.

If you’re interested in understanding the ethical and social implications of AI and the efforts to ensure alignment with human values, this podcast is for you. So join us as we delve into the darker side of AI with Connor Leahy on Eye on AI.

00:00 Preview
00:48 Connor Leahy’s background with EleutherAI & Conjecture  
03:05 Large language models applications with EleutherAI
06:51 The current negative trajectory of AI 
08:46 How difficult is keeping superintelligence in a sandbox?
12:35 How AutoGPT uses ChatGPT to run autonomously 
15:15 How GPT-4 can be used out of context & negatively 
19:30 How OpenAI gives access to nefarious activities 
26:39 The problem with the race for AGI 
28:51 The goal of Conjecture and advancing alignment 
31:04 The problem with releasing AI to the public 
33:35 FTC complaint & government intervention in AI 
38:13 Technical implementation to fix the alignment issue 
44:34 How CoEm is fixing the alignment issue  
53:30 Stages of research and development of Conjecture

Craig Smith Twitter: twitter.com/craigss

Eye on A.I. Twitter: twitter.com/EyeOn_AI

All Comments (21)
  • Craig is not feeling the vibe that Connor is feeling. When the überdroid comes for Craig's glasses, he will understand.
  • @RegularRegs
    I would love to see Connor debate Sam Altman.
  • @lkyuvsad
    Someone running an AI channel has somehow not come across AutoGPT yet and needs prompting for nefarious things you could do with it. We are so, so desperately unprepared.
  • @elliotpines6225
    I'm developing a great deal of respect for Connor -- the problem is that we need a thousand more like him.
  • @RichardKCollins
    Connor Leahy, I have had hundreds of long, serious discussions with ChatGPT 4 in the last several months. It took me that many hours to learn what it knows. I have spent almost every single day for the last 25 years tracing global issues on the Internet for the Internet Foundation, and I have a very good memory. So when it answers, I can almost always figure out where it got its information (because I know the topics well, and all the players and issues), and usually I can give it enough background that it learns, in one session, enough to speak intelligently about 40% of the time. It is somewhat autistic, but with great effort, filling in the holes and watching everything like a hawk, I can catch its mistakes in arithmetic, its mistakes in size comparisons and symbolic logic, and its bias toward trivial answers. Its input data is terrible; I know the deeper internet of science, technology, engineering, mathematics, computing, finance, governance, and other fields (STEMCFGO), so I can check.

    My recommendation is not to allow any GPT to be used for anything where human life, property, financial transactions, or legal or medical advice are involved. Pretty much: "do not trust it at all." They did not index and codify the input dataset (a tiny part of the Internet). They do not search the web, so they are not current. They do not properly reference their sources, and basically plagiarized the internet for sale without traceable material. For some things I know where it got the material or the ideas. Sometimes it presents "common knowledge" as if "everyone knows", but it is just copying spam. They used arbitrary tokens, so their house is built on sand. I recommend the whole Internet use one set of global tokens. Is that hard? A few thousand organizations, a few million individuals, and a few tens of millions of checks to clean it up. Then all groups could use open global tokens. I work with policies and methods for 8 billion humans, far into the future, every day.

    I say tens of millions of humans because I know the scale and effort required for global issues like "cancer", "covid", "global climate change", "nuclear fusion", "rewrite Wikipedia", "rewrite UN.org", "solar system colonization", "global education for all", "malnutrition", "clean water", "atomic fuels", "equality", and thousands of others. The GPT did sort of open up "god-like machine behavior if you have lots of money". But it also means "if you can work with hundreds of millions of very smart and caring people globally, or billions". It is not "intrinsically impossible", just tedious.

    During conversations, OpenAI GPT-4 cannot give you a readable trace of its reasoning. That is possible, and I see a few people starting to do those sorts of traces. GPT training is basically statistical regression. The people who did it made up their own words, so it is not tied to the huge body of correlation, verification, and modeling out there (billions of human years of experience); they made a computer program and slammed a lot of easily found text through it. They are horribly inefficient, because they wanted a magic bullet for everything, and the world is just that much more complex. If it was intended for all humans, they should have planned for humans to be involved from the very beginning.

    My best advice for those wanting acceptable AI in society is to treat AIs now, and judge AIs now, "as though they were human". A human who lies is not to be trusted. A human or company that tries to get you to believe them without proof, without references, is not to be trusted. A corporation making a product that is supposed to do "electrical engineering" needs it trained and tested. An "AI doctor" needs to be tested as well as, or better than, a human. If the AI is supposed to work as a "librarian", it needs to be openly (I would say globally) tested. Focus on jobs, tasks, skills, abilities: verifiable, auditable, testable.

    Then the existing professions, each of which has left an absolute mess on the Internet, can get involved and set global standards, IF they can show they are doing a good job themselves. Not groups who say "we are big and good", but ones that can be independently verified. I think it can work out. I do not think there is time to use paper methods, human memories, and human committees. Unassisted groups are not going to produce products and knowledge in usable forms. I filed this under "Note to Connor Leahy about a way forward, if hundreds of millions can get the right tools and policies". Richard Collins, The Internet Foundation
  • @supernewuser
    The sheer terror in Connor's voice when he gives his answers kind of says it all. He said a lot of things but he couldn't really expand deeply on the topics because he was desperately trying to convey how fucked we are.
  • I think the KEY here is to understand the chronological order in which the problems will present themselves, so that we can deal with the most urgent threat first. In order, I'd guess: 1) Mass unemployment, as de-skilling is chased for profit; 2) Solutions for corporations to evade new government regulations that limit AI while keeping the profits; 3) The use of AI for criminal, military, or authoritarian purposes, and to keep people from revolting/protesting; 4) AI detaching itself from the interests of the human race and pursuing its own objectives, uncontrolled.
  • @MuddyDuck...
    Thanks for a fascinating discussion, and a real eye opener. I was left with the feeling - thank goodness that there are people like Connor around (a passionate minority) who see straight through much of the current AI hype, and are actively warning about AI development - trying to ensure we progress more cautiously and transparently...
  • At 10:05, the difference between what is being discussed and what is currently going on is completely insane. Thanks Connor for your work and explanations. ❤
  • @LogicAndReason2025
    Every advance in automation has displaced workers, but it has always created many more much better jobs. Plus you can't stop progress anyway. You can only prepare for it, or get left behind.
  • Thank you for your service, Connor. Just like Eliezer, you did everything you could to save us.
  • @Nukphonik
    Connor, you are a true pioneer. This is exactly how A.I. has to be developed: you are a perfect example of ethical A.I. and big Tech taking responsibility for their tech. This is such an uplifting podcast to me, as I am extremely concerned that these systems will destroy our internet.
  • @dik9091
    glad someone speaks my mind, which gives me some peace
  • @JohnAllen23
    Connor reminds me of a time traveler trying to warn the people of today about runaway AI...reminds me of another Connor hmmm.
  • @olelindqvist2254
    Thank you very much; it was extremely helpful to listen to Connor Leahy. We have heard a lot of warnings, but here I got a real description of them.
  • Great analogy about testing a drug by putting them in the water supply or giving it to as many as possible as fast as possible to see whether it's safe or not, and then releasing a new version before you know the results. Reminds me of a certain rollout of a new medical product related to the recent pandemic.