The First Neural Networks

Published 2024-06-13

All Comments (21)
  • @dinoscheidt
    I’ve been in ML since 2013 and have to say: wow… you and your team really deserve praise for solid research and delivery. I’ll bookmark this video to point people to. Thank you
  • @strayling1
    Please continue the story. A cliffhanger like that deserves a sequel! Seriously, this was a truly impressive video and I learned new things from it.
  • Got this recommended to me after getting my first digit recognition program working. The neural networks know I’m learning about neural networks
  • @fibersden638
    One of the top education channels on YouTube for sure
  • @hififlipper
    "A human being without life" hurts too much.
  • @dwinsemius
    The one name missing from this from my high-school memory is Norbert Wiener, author of "Cybernetics". I do remember a circa 1980 effort of mine to understand the implications of rule-based AI for my area of training (medicine). The Mycin program (infectious disease diagnosis and management) sited at Stanford could have been the seed crystal for a very useful application of the symbol-based methods. It wasn't maintained and expanded after its initial development. It took too long to do data input and didn't handle edge cases or apply common sense. It was, however, very good at difficult "university level specialist" problems. I interviewed Dr Shortliffe and his assessment was that AI wouldn't influence the practice of medicine for 20-30 years. I was hugely disappointed. At the age of 30 I thought it should be just around the corner. So here it is 45 years later and symbolic methods have languished. I think there needs to be one or more "symbolic layers" in the development process of neural networks. For one thing it would allow insertion of corrections and offer the possibility of analyzing the "reasoning".
  • @MFMegaZeroX7
    I love seeing Minsky come up as I have a (tenuous) connection to him as he is my academic "great great grand advisor." In that, my PhD's advisor's PhD advisor's PhD advisor's PhD advisor was Minsky. Unfortunately, stories about him never got passed down, I only have a bunch of stories with my own advisor, and his advisor, so it is interesting seeing what he was up to.
  • @tracyrreed
    5:14 Look at this guy, throwing out Principia Mathematica without even name-dropping its author. 😂
  • @PeteC62
    Your videos are always well worth the time to watch them, thanks!
  • @amerigo88
    Interesting that Claude Shannon's observations on the meaning of information being reducible to binary came about at virtually the same time as the early neural networks papers. Edit - The Mathematical Theory of Communication by Shannon was published in 1948. Also, Herb Simon was an incredible mind.
  • @stevengill1736
    Gosh, I remember studying physiology in the late 60s when human nervous system understanding was still in the relative dark ages - for instance plasticity was still unknown, and they taught us that your nerves stopped growing at a young age and that was it. But I had no idea how far they'd come with machine learning in the Perceptron - already using tuneable weighted responses simulating neurons? Wow! If they could have licked that multilayer problem it would have sped things up quite a bit. You mentioned the old chopped up planaria trick - are you familiar with the work of Dr Michael Levin? His team is carrying the understanding of morphogenesis to new heights - amazing stuff! Thank you kindly for your videos! Cheers.
  • @jakobpcoder
    This is the best documentary on this topic I have ever seen. It's so well researched, it's like doing the whole Wikipedia dive
  • @JohnHLundin
    Thanks Jon, as someone who tinkered with neural nets in the 1980s and 90s, this history connects the evolutionary dots and illuminates the evolution/genesis of those theories & tools we were working with... J
  • @HaHaBIah
    I love listening to this with our current modern context
  • You always bring up interesting topics. Keep it up, it's a great job 👍.
  • This was great. A more in-depth one would be awesome: the fall and rise of the perceptron, going from single to multiple layers.
  • @Wobbothe3rd
    Recurrent Neural Networks are about to make a HUGE comeback.
  • @TheChipMcDonald
    The Einstein, Oppenheimer, Bohr, Feynman, Schrödinger and Heisenbergs of AI. The McCulloch-Pitts neuron network and Rosenblatt's training paradigm took 70 years to get to "here" and should be acknowledged. I remember as a little kid in the 70s reading articles on different people leading the symbolic movement, and thinking "none of them really seem to know or have conviction in what they're campaigning for".
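
Several comments above mention the McCulloch-Pitts threshold neuron and Rosenblatt's "tuneable weighted responses". For readers curious what that actually looks like, here is a minimal sketch of a perceptron with Rosenblatt's learning rule — purely illustrative, using invented toy data (logical AND), not code from the video:

```python
# Sketch of a single perceptron (illustrative only).
# The thresholded weighted sum is the McCulloch-Pitts-style unit;
# the weight updates are Rosenblatt's training rule.

def predict(weights, bias, x):
    # Fire (1) if the weighted sum crosses the threshold, else 0.
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s + bias > 0 else 0

def train(samples, labels, lr=0.1, epochs=20):
    # Start with zero weights; nudge them toward each mistake.
    n = len(samples[0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            err = y - predict(weights, bias, x)  # -1, 0, or +1
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err
    return weights, bias

# Logical AND is linearly separable, so one layer suffices.
# (XOR is not -- that is the "multilayer problem" in the comments.)
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train(X, y)
print([predict(w, b, x) for x in X])  # [0, 0, 0, 1]
```

Swapping the labels for XOR (`[0, 1, 1, 0]`) makes this single-layer rule fail no matter how long it trains, which is the limitation Minsky and Papert highlighted and the reason multilayer networks were needed.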