Sunday, January 17, 2016

Deep learning just got deeper – writing on blackboard for some forms of teaching?

'I think there is a world market for maybe five computers', so Thomas Watson of IBM NEVER said. It’s just one of many made-up and misattributed quotes (most of them pinned on Einstein) which pepper slides at education and tech conferences. But in a weird sort of way this often mocked quote (oh how we laugh) is turning out to be true. The only people with the computing power to solve the big problems may just be Google, Microsoft, Facebook, Amazon and IBM. They bring services to the cloud, power on tap, making AI a utility, like electricity. Nicholas Carr wrote about this in The Big Switch, but underestimated the ultimate reach of such cloud services.
Deep Learning
We’re in the Age of Algorithms. They find things for you on Google, stop porn appearing on Twitter, protect your savings and online transactions, filter out spam, and let you use and share files. The world of learning is not immune: there are five levels at which AI currently operates. But it is deep learning by software that is sprinting ahead at the moment.
Microsoft – image recognition
Visual recognition is an interesting, though just one, case of deep learning. Only last month Microsoft wiped the floor with the competition with its image recognition system. The point, of course, is not to mimic the human eye but to produce perceptual apparatus that is better – higher fidelity, more range on the electromagnetic spectrum and so on. It’s really the cognitive recognition of images that matters – that’s the hard bit.
It’s best to see neural networks not in terms of the meat brain, but in terms of layers of algorithmic maths. As these layers get deeper and more complex they can handle more complex tasks with higher rates of success. The problem with depth has been a law of diminishing returns: a gain made at one layer gets diminished as it passes through the layers that follow. The trick is to ‘preserve’ those gains by carrying them forward on a conditional basis, passing them only to other ‘relevant’ layers. Microsoft has pushed this to more than 150 layers.
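To make that ‘carrying gains forward’ idea a little more concrete, here is a minimal sketch, in plain Python and NumPy, of a skip connection of the kind used in very deep networks. It is a toy illustration under simplified assumptions, not Microsoft’s actual architecture: each block adds its transformation back onto its own input, so the signal from earlier layers is preserved as the stack gets deeper.

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    # One 'layer' whose output is added back onto its input, so the signal
    # entering the block is carried forward rather than washed out.
    out = relu(x @ w1)
    out = out @ w2
    return relu(out + x)

rng = np.random.default_rng(0)
dim = 16
x = rng.normal(size=(1, dim))
for _ in range(150):                     # roughly the depth Microsoft reported
    w1 = rng.normal(scale=0.1, size=(dim, dim))
    w2 = rng.normal(scale=0.1, size=(dim, dim))
    x = residual_block(x, w1, w2)
print(x.shape)                           # (1, 16): the signal is still there after 150 blocks

Without the '+ x' shortcut, stacking this many randomly initialised layers would quickly crush the signal towards zero; the shortcut is one simple way of keeping it alive.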
Given the increase in speed and reduction in cost of processing power, deep learning researchers also run many models and allow the software to learn through many iterations. Raw experimentation then produces optimised solutions. The resources needed to do this well are mind-blowing, with all but a few heavyweights excluded. The winners are likely to be those who have the deep pockets and deep commitment to succeed - these are the big tech companies.
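As a rough, hypothetical sketch of what ‘running many models’ looks like in miniature, the loop below tries many randomly drawn configurations and keeps the best-scoring one; the training function here is just a stand-in, not any lab’s real pipeline, and at industrial scale the same loop runs across vast clusters of machines.

import random

def train_and_score(learning_rate, depth):
    # Stand-in for a full training run; a real system would train a network
    # with these settings and return its validation accuracy.
    return 1.0 - abs(learning_rate - 0.01) - abs(depth - 50) / 1000.0

best_score, best_config = float("-inf"), None
for _ in range(200):                     # many cheap experiments
    config = {
        "learning_rate": 10 ** random.uniform(-4, -1),
        "depth": random.randint(10, 150),
    }
    score = train_and_score(**config)
    if score > best_score:
        best_score, best_config = score, config

print(best_config, round(best_score, 3))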
AI passes University entrance exams
I first heard about this from Professor Toby Walsh in Berlin, who stated that in November 2015 an AI programme had passed the entrance exam for Tokyo University, which includes maths, physics, English and history. This was the Todai Robot Project. Remarkably, it scored well above the national human average (53.8% against 43.8%), with its highest marks in maths and history. The point, of course, is NOT to get a piece of software or a robot into a top university. It is to act as the basis for research into the development of machine intelligence to solve problems.
AI predicts student performance (85%)
Other researchers, such as Chris Piech’s team at Stanford and Google, have developed AI that does detailed analysis of student performance as the student learns and predicts how they will perform on subsequent problems. Their approach used 1.4 million student answers to maths problems posed by the Khan Academy. As the internet and global education projects, such as Khan and MOOCs, throw off huge amounts of data, we are now in a position to exploit AI (a neural network) to be predictive on the basis of an enormous amount of real human data. We can, in a sense, bypass traditional cognitive psychology and use large data sets in conjunction with smart sets of algorithms to diagnose what students are likely to get right or wrong. More than this, it can tell us what went wrong and why. The accuracy presently stands at around 85%. This has obvious applications in terms of doing what a teacher can do, assessing and predicting performance, only better.
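The work behind this used a recurrent neural network trained on those 1.4 million answers. The toy sketch below is my own simplification, with made-up numbers rather than the Stanford team’s trained model, but it shows the basic shape of the idea: a hidden ‘knowledge state’ is updated after every answer, and from that state the system reads off a probability of getting each skill right next time.

import numpy as np

rng = np.random.default_rng(1)
n_skills, hidden = 5, 8

# Toy parameters; a real system would learn these from millions of answers.
W_in  = rng.normal(scale=0.1, size=(2 * n_skills, hidden))
W_h   = rng.normal(scale=0.1, size=(hidden, hidden))
W_out = rng.normal(scale=0.1, size=(hidden, n_skills))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(state, skill, correct):
    # Update the hidden 'knowledge state' from one answered question,
    # encoded as a one-hot vector of (which skill, right or wrong).
    x = np.zeros(2 * n_skills)
    x[skill + (n_skills if correct else 0)] = 1.0
    return np.tanh(x @ W_in + state @ W_h)

state = np.zeros(hidden)
history = [(0, True), (1, False), (1, True), (2, True)]   # (skill, answered correctly)
for skill, correct in history:
    state = step(state, skill, correct)

# Predicted probability of answering each skill correctly on the next attempt.
print(np.round(sigmoid(state @ W_out), 2))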
What’s the point

Why are these three successes, and many more like them, so interesting? Well, image recognition (along with speech and other forms of data) has already revolutionized search and fraud detection, and can be used in online assessment to authenticate students for online exams. Adaptive learning systems use this sort of analysis to present personalized learning to each and every student, according to their measured progress. This gets away from the obvious faults in one-size-fits-all, linear curricula and teaching. It also allows the system to track each and every student to a degree that is impractical for real teachers. This one-to-one diagnosis already works in all sorts of other areas of online activity, such as Google, advertising, Amazon, online dating and Netflix. There is every reason to suppose that it will work in optimizing learning journeys (see the sketch at the end of this post). The net results may be faster progression, less dropout and the ability to deliver at scale and volume, thereby lowering the currently skyrocketing costs in education. For me, the ultimate goal is to satisfy growing demand in the developing world, which we will never satisfy using our existing, expensive methods.

The point of projects like Todai is not that such a piece of software can pass an exam but that it can do things which graduates think are their sole domain. If a machine can do a graduate-level task in the workplace, as robots can in factories, then their jobs are under threat. The interesting point is the degree to which AI and deep learning will result in the erosion of middle-class professions, including teaching. Augmented intelligence and augmented teaching are already in operation. But the writing is on the blackboard for other forms of learning and teaching.
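As a footnote to the adaptive learning point above, here is one toy selection policy (my own illustration, not any particular product’s algorithm): given predicted success probabilities like those from the earlier sketch, pick the next exercise whose predicted chance of a correct answer is closest to a target level, so the student is stretched but not lost.

def pick_next_exercise(predicted_success, target=0.7):
    # Choose the exercise whose predicted probability of a correct answer
    # is closest to the target difficulty (e.g. a 70% chance of success).
    return min(predicted_success, key=lambda ex: abs(predicted_success[ex] - target))

# Hypothetical predictions for one student, e.g. from a model like the earlier sketch.
predictions = {"fractions": 0.95, "ratios": 0.72, "algebra": 0.40}
print(pick_next_exercise(predictions))   # -> ratios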
