Wednesday, December 25, 2019
The Fifth Head of Cerberus by Gene Wolfe
Saturday, December 21, 2019
Permanent Record by Edward Snowden
The Emperor's New Mind by Roger Penrose
Sunday, December 15, 2019
21st Century Life Skills
Saturday, December 14, 2019
NeurIPS 2019 in Vancouver
One theme that ran through the live Twitter coverage of NeurIPS 2019 is that deep learning research (though not its applications) is "hitting the wall" because of the narrow set of problems it solves: optimization problems with clear objective and loss functions. At least two keynote talks highlighted the limits of current approaches and our woefully inadequate capabilities for more interesting "general AI" problems. In particular, this fantastic talk by Blaise Aguera y Arcas proposes a deeper simulation of biological systems (including neurons, evolution, and biological learning) together with long short-term memory (LSTM) units in a simple topology as the basis for adaptable metalearning. And in this talk, Yoshua Bengio explains some ideas for abstraction that he hopes will bring AI closer to general AI, including attention, consciousness, and causality.