
OR Society Blackett Memorial Lecture 2016: Machines that learn: big data or explanatory models?


Prof Andrew Blake, Director of the Alan Turing Institute, gave the 2016 Blackett Lecture to a packed house in Westminster Central Hall yesterday.

Before the lecture, OR Society President Ruth Kaufman introduced the evening and presented the annual medals and awards, both for the best OR papers of 2016 and to individuals who have made significant contributions to OR over their careers.

In her introduction, Ruth referred to “post-truth”, a newly added word in the Oxford Dictionary. She said that the role of OR has been, and would continue to be, important in providing an evidence base for policy-making and decision-making.

Professor Blake’s topic was Machines that learn: big data or explanatory models?

He explained that a key question about machines that learn concerns two distinct styles of learning. Will they turn out to depend more on probabilistic models that explain the data, or on networks that react to data and are trained on data at ever greater scale? In machine vision systems, for instance, this boils down to the comparative roles of two paradigms: analysis-by-synthesis versus empirical recognisers. Each approach has its strengths, and empirical recognisers especially have made great strides in performance in the last few years, through deep learning. It is a particular challenge to understand how the two approaches could be integrated, and already progress is being made on that.
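As a rough, hypothetical illustration of that distinction (not code from the lecture), the sketch below contrasts the two styles on a small image dataset: a generative model that explains the pixel data with per-class distributions, versus a small neural network trained end-to-end as an empirical recogniser. The use of scikit-learn and its digits dataset is my own assumption, chosen only to keep the example self-contained.

```python
# Minimal sketch (illustrative only, not from the lecture) contrasting
# a probabilistic model that *explains* the data with a network that is
# simply *trained on* the data.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB          # generative / explanatory
from sklearn.neural_network import MLPClassifier    # empirical recogniser

X, y = load_digits(return_X_y=True)                 # small 8x8 digit images
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Explanatory model: fits a per-class distribution over pixel values and
# classifies by asking which class best "explains" the observed image.
generative = GaussianNB().fit(X_train, y_train)

# Empirical recogniser: a small neural network trained end-to-end to map
# pixels straight to labels, with no explicit model of how images arise.
recogniser = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                           random_state=0).fit(X_train, y_train)

print("generative (explains the data):", generative.score(X_test, y_test))
print("empirical recogniser:          ", recogniser.score(X_test, y_test))
```

On a toy dataset like this the two approaches can perform similarly; Blake’s point is that the empirical approach has scaled far better as data and networks have grown, as the figures below show.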

He presented a number of examples of machine vision systems that illustrate this comparison. He showed how the error rate for image classification has fallen from around 30% in 2010 to around 5% now, and similar progress has been made with voice recognition. We are probably all familiar with Apple’s Siri, Google Translate and, more recently, the Amazon Echo. All of these demonstrate the improvements made not only in the accuracy of voice recognition, but also in “understanding” what the user is asking for.

You can watch a previous presentation by Andrew here.



