Ever since the beginning of digital computing, scientists have been fascinated by the concept of artificial intelligence (AI): a form of computation that mimics human-level reasoning and decision-making. What was a mere vision in 1950, when Alan Turing proposed the imitation game to assess whether a computer program is intelligent, affects all our lives today: there is hardly an area of society that is not enhanced by AI algorithms, with applications ranging from marketing and advertising, e-commerce, gaming, and communication to medicine and transportation. However, we are reaching an inflection point in AI research where predictive accuracy is no longer the key success criterion; instead, the amount of data, compute, and, ultimately, energy becomes the limiting factor for future AI algorithms. This change has profound implications for (1) the system-level aspects of machine learning: which digital technologies and hardware are best suited to trade off predictive accuracy against energy consumption? (2) the method-level aspects of machine learning: how can we achieve human-level data efficiency, where algorithms learn from a handful of examples and episodes rather than the thousands of training examples needed today? and (3) the theory-level aspects of machine learning: how do we merge the physical notion of energy with the notions of information and learning into one unifying theory? In this talk, I will discuss these three aspects and share research problems in each of these areas.
Conference report
English
UPC subject areas::Computer science::Computer architecture; High performance computing; High-performance computing (Computer science)
Barcelona Supercomputing Center
http://creativecommons.org/licenses/by-nc-nd/4.0/
Open Access
Conferences [11156]