The Free Energy Principle approach to Agency

Published 2024-01-01
"Agency" extends beyond just human decision-making and autonomy. It describes how ALL SYSTEMS, interact with their environment to maintain their existence.

Watch behind the scenes, get early access and join the private Discord by supporting us on Patreon:
patreon.com/mlst
Public Discord: discord.gg/aNPkGUQtc5
twitter.com/MLStreetTalk

DOES AI HAVE AGENCY? With Professor Karl Friston and Riddhi J. Pitliya

According to the free energy principle, living organisms strive to minimize the difference between their predicted states and the actual sensory inputs they receive. This principle suggests that agency arises as a natural consequence of this process, particularly when organisms appear to plan many steps into the future.
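The core idea can be illustrated with a toy example (not from the episode): under standard Gaussian assumptions from predictive-coding tutorials, an agent's belief about a hidden cause is updated by gradient descent on precision-weighted prediction errors, which is equivalent to minimizing a simple variational free energy. All names and parameter values below are illustrative assumptions.

```python
# Minimal sketch of belief updating via free energy minimization.
# A single belief `mu` about a hidden cause is nudged toward the point
# that best balances the prior expectation against the observation.

def free_energy(mu, obs, prior_mean, sigma_obs=1.0, sigma_prior=1.0):
    """Gaussian free energy: sum of precision-weighted squared prediction errors."""
    sensory_error = (obs - mu) ** 2 / (2 * sigma_obs)
    prior_error = (mu - prior_mean) ** 2 / (2 * sigma_prior)
    return sensory_error + prior_error

def infer(obs, prior_mean, lr=0.1, steps=200):
    """Update the belief `mu` by gradient descent on the free energy."""
    mu = prior_mean
    for _ in range(steps):
        # dF/dmu with unit variances: -(obs - mu) + (mu - prior_mean)
        grad = -(obs - mu) + (mu - prior_mean)
        mu -= lr * grad
    return mu

mu = infer(obs=2.0, prior_mean=0.0)
# With equal precisions, the belief settles halfway between prior and observation.
```

With both variances set to 1, the fixed point sits at the midpoint between prior mean and observation; shifting the precisions shifts that balance, which is the sense in which perception "weighs" predictions against evidence.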

Riddhi J. Pitliya is pursuing her PhD in the computational psychopathology lab at the University of Oxford, where she works with Professor Karl Friston.
twitter.com/RiddhiJP



References:

THE FREE ENERGY PRINCIPLE—A PRECIS [Ramstead]
www.dialecticalsystems.eu/contributions/the-free-e…

Active Inference: The Free Energy Principle in Mind, Brain, and Behavior [Thomas Parr, Giovanni Pezzulo, Karl J. Friston]
direct.mit.edu/books/oa-monograph/5299/Active-Infe…

The beauty of collective intelligence, explained by a developmental biologist | Michael Levin
   • The beauty of collective intelligence...  

Growing Neural Cellular Automata
distill.pub/2020/growing-ca

Carcinisation
en.wikipedia.org/wiki/Carcinisation

Prof. KENNETH STANLEY - Why Greatness Cannot Be Planned
   • #038 - Prof. KENNETH STANLEY - Why Gr...  

On Defining Artificial Intelligence [Pei Wang]
sciendo.com/article/10.2478/jagi-2019-0002

Why? The Purpose of the Universe [Goff]
amzn.to/4aEqpfm

Umwelt
en.wikipedia.org/wiki/Umwelt

An Immense World: How Animal Senses Reveal the Hidden Realms [Yong]
amzn.to/3tzzTb7

What Is It Like to Be a Bat? [Nagel]
www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.p…

COUNTERFEIT PEOPLE. DANIEL DENNETT. (SPECIAL EDITION)
   • DANIEL DENNETT - Can we trust AI?  

We live in the infosphere [FLORIDI]
   • WE LIVE IN THE INFOSPHERE [Prof. LUCI...  

Mark Zuckerberg: First Interview in the Metaverse | Lex Fridman Podcast #398
   • Mark Zuckerberg: First Interview in t...  

Black Mirror: Rachel, Jack and Ashley Too | Official Trailer | Netflix
   • Black Mirror: Rachel, Jack and Ashley...  

Prof. Kristinn R. Thórisson
en.wikipedia.org/wiki/Kristinn_R._Th%C3%B3risson

All Comments (21)
  • @neon_Nomad
    These are the conversations i wish my friends had
  • @betel1345
    Love the attention to language and agents. I had been wondering if words can be considered as agents with markov blankets, and this conversation helps me think more
  • @stevemartin4249
    As an undergrad biology major some 50 years ago, I cracked open that Shaggy-Dog story of Wittgenstein's Tractatus, went forward and back in my readings of Russell, Whitehead, Kuhn, Popper, etc. Moved to Japan 40 years ago and went on to grad school at Temple University Japan (linguistics) and matriculated into the doctoral program. This discussion is what I had imagined would be taking place at the doctoral level. Far from it ... I now believe that "institutional education" is an oxymoron. A lot of the language of this podcast is beyond me, but I especially like Friston's dismissal of language as having an agency of its own ... but fascinating to imagine agency emerging in LLMs. This discussion is a great entry into reconsidering the foundations and assumptions of science and A.G.I. Will have to hit the slow speed option and listen to this a few times. Particularly interested in what Pitliya says at about 31:10 because I have experienced and seen so much marginalization of individuals by tightly knit in-groups in Japan, and members of those groups seem to be particularly drawn to rule-driven behavior (institutions) at the expense of empathy-driven behavior (communities). I can't help but cringe a bit when hearing the word "explain" used for the relationship between computational models and psychological phenomena. Might as well say that bossa nova explains my feelings. Perhaps "describe" would trim a bit of intellectual hubris from the dialog.
  • @Rockyzach88
    I love the upgrade from "human chauvinist" to "anthropocentric biases". Unless of course you find them individually more useful in specific cases lol.
  • @gaz0881
    Is there a difference between being agentic and being an agent? So agents as originators because of the existence of planning, and then a number of subordinate processes that are agentic (they do stuff), but the lack of planning behind them separates them out?
  • @BrianMosleyUK
    Fabulous so far... You're making these theories so much more accessible - I can't express how grateful I am for your work. 🙏👍
  • @betel1345
    Thank you mlst for sharing these fabulous conversations!
  • @garystevason1658
    I am thinking that we should perhaps ask AI itself for help with this solution. I'm an old-school AI guy (chess, backgammon, poker, pinball, etc.). And yes, those early deductive methodologies are likely too innocent compared to the new Armageddon inductive concepts - that is, it beat us just through speed and the number and accuracy of considerations possible. I am hoping that it may be possible to have a universal auditing function running simultaneously that ensures each AI plays nice. I just wouldn't, couldn't trust mere humans to police our proposed limitations, and yes, as I mentioned earlier: any limitations our friend, AI itself, recommends for itself. The machine isn't bad; it is the greedy malevolent users that need to be bridled by the auditing code.
  • I thought that I had an above average education etc .. but the use of English in this video was well beyond me ... and I am a Physicist! I will need to process & simplify the transcript using AI.
  • @teleologist
    Reinforcement Learning and Active Inference agents are definitely not agential. These frameworks have no deep explanation of where the reward function comes from (RL), or where the generative model comes from (Active Inference), and how both of these functions should change over time as the agent learns more about the world. The functions "encode" (so to speak) the normativity of the agent, but are always crafted by humans at the end of the day. Agents should be able to derive their own values from an evolving understanding of the world.
  • ⭐️⭐️⭐️⭐️⭐️ this is the most encouraging talk that I’ve heard. I think we are on the right track.
  • @Soul-rr3us
    Great conversations, really loved this one.
  • @stopsbusmotions
    I consider myself a guy who thinks that 'reality' is as it is, not as we want it to be, and the best we can do is to understand some of its aspects. On the other hand, I've also discovered that I would easily give up all my understanding in exchange for getting rid of such a burden as depression. So, squeezed between my curiosity about how the mind works and my desire to alleviate depression, I've found myself listening to this talk twice.
  • @memegazer
    Strongly disagree that we "can't know" anything about physical or objective reality. At a fundamental level we can know that distinctions are possible, for example, bc it would not be possible to demarcate some boundary of the knower if, in a real and objective ontological sense, distinctions did not exist or were not possible. Furthermore, the idea that "we can only know ourselves" would be utterly meaningless in any formal sense without a proper treatment to account for self-recursive computation with no distinctions or boundary conditions. This is why Wolfram's work is so important to my view, bc it illustrates at a fundamental level that some things are necessarily entailed in order to even form perceptions, regardless of the specificity of one's umwelt.