Saturday, December 14, 2019

NeurIPS 2019 in Vancouver

In Vancouver this week (2019 week 50), over 13,000 of your closest personal friends are at NeurIPS-2019. New this year, videos of the talks and their slides are webcast and recorded in real time here. If the conference's exponential growth continues, it will see over a million papers within four years. Personally, I no longer attend NIPS (now NeurIPS) or the International Conference on Machine Learning (ICML) unless I have a paper accepted or my company sends me there to recruit. In the words of Yogi Berra, "No one goes there anymore; it's too crowded."

One theme that hit the twitterverse live-blogging NeurIPS-19 is that deep learning research (though not its applications) is "hitting the wall" because of the narrow set of problems it solves: optimization problems with clear objective and loss functions. At least two keynote talks highlighted the limits of current approaches and our woefully inadequate capabilities for solving the more interesting "general AI" problems. In particular, this fantastic talk by Blaise Aguera y Arcas proposes a deeper simulation of biological systems (including neurons, evolution, and biological learning) together with long short-term memory (LSTM) units in a simple topology as the basis for adaptable metalearning. And in this talk, Yoshua Bengio explains some ideas for abstraction that he hopes will bring AI closer to general AI, including attention, consciousness, and causality.
