One theme that emerged in the Twitter live blogging around NeurIPS-19 is that deep learning research (though not its applications) is "hitting a wall" because of the narrow set of problems it solves: optimization against clear objective and loss functions. At least two keynote talks highlighted the limits of current approaches and our woefully inadequate capabilities for the more interesting "general AI" problems. In particular, this fantastic talk by Blaise Aguera y Arcas proposes deeper simulation of biological systems (including neurons, evolution, and how biological systems learn), together with long short-term memory (LSTM) units in a simple topology, as the basis for adaptable metalearning. And in this talk, Yoshua Bengio explains some ideas for abstraction that he hopes will bring AI closer to general AI, including attention, consciousness, and causality.