Thursday, November 14, 2013

Big Data – bums on seats measures wrong end of learner

Education and training have always coveted data. But in any honest appraisal of this data collection we have to admit that it is largely the wrong data. There has, historically, been too much focus on start-point and end-point data. All dull inputs and outputs, it’s like judging a person simply by measuring what he eats and then excretes. It may even stretch to how long that process took to complete. What we need to focus on is the cognitive improvement of the learner. Here are six examples of the mostly superficial data that accounts for the vast bulk of what is collected in education and training:
1. Bums on seats
To measure attendees, or bums on seats, is to measure the wrong end of the learner. Yet this is what so much ‘contact time’ is in our colleges and universities. I once attended a talk by the Head of Training for a global bank, where she proudly showed that x number of meals had been served in her canteen on the training campus. And they wonder why banks failed?
2. Contact time
Contact time is essentially an excuse for not measuring what is learnt. Turning up is hardly a measure of learning; attendance is not attainment. In some cases contact time is even more illusory: in UK Higher Education they do not even count the number of students who turn up for lectures.
3. Course completion
Completion is not a measure of attainment or competence, yet so many courses measure simply this. We have already seen that turning up is not a great measure, and completion is often just a count of how many people hung around until the end.
4. Summative assessments
The problem with final test and exam data is that it’s all too late. The deed is done. Exams are too often the final act in learning and an end-point. As Professor Black has shown, this final mark so often stops even the best learners from trying any harder and marks the poorer students out as failures. There’s also the problem of cramming and short-term memory.
5. Happy sheets
The evaluation of education and training is plagued with end-point data. None more futile than the obsession with happy sheets, as they measure nothing. It’s a staple of classroom courses and often the only data that is collected. Yet it says nothing about what has actually been learnt.
6. SCORM
Even in online learning, SCORM was really just a packaging and delivery-tracking mechanism for LMS vendors, built on the false premise of learning objects. Although it provided a ‘standard’ for interoperability, it largely measured simple inputs and outputs.
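To make that concrete, here is a minimal sketch, in TypeScript, of the kind of data a SCORM 1.2 course typically reports back to an LMS through the runtime API (LMSInitialize, LMSSetValue, LMSCommit, LMSFinish). The stub adapter and the particular score and time values are illustrative, not any vendor’s real implementation; the point is that the tracked fields are essentially completion status, a score and time spent – inputs and outputs, not evidence of competence.

```typescript
// Sketch: what a SCORM 1.2 package typically reports to an LMS.
// The ScormApi interface mirrors the SCORM 1.2 runtime calls; the stub
// adapter below is purely illustrative so the example runs on its own.

interface ScormApi {
  LMSInitialize(arg: ""): string;
  LMSSetValue(element: string, value: string): string;
  LMSCommit(arg: ""): string;
  LMSFinish(arg: ""): string;
}

// Stand-in for the adapter an LMS would normally expose to the content window.
const api: ScormApi = {
  LMSInitialize: () => "true",
  LMSSetValue: (element, value) => {
    console.log(`tracked ${element} = ${value}`);
    return "true";
  },
  LMSCommit: () => "true",
  LMSFinish: () => "true",
};

// Everything the LMS learns about the "learning" is a handful of
// status/score/time fields from the cmi.core data model.
api.LMSInitialize("");
api.LMSSetValue("cmi.core.lesson_status", "completed"); // did they finish?
api.LMSSetValue("cmi.core.score.raw", "85");            // end-point test score
api.LMSSetValue("cmi.core.session_time", "00:42:10");   // time spent
api.LMSCommit("");
api.LMSFinish("");
```

Nothing in that data model says anything about whether the learner can now actually do something they could not do before.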
Missing data
What’s so often missing is the data on competence. We teach what is easy to test and test what is easy to teach. That means lots of academic knowledge tested through paper tests, from multiple choice to essays. The actual competence measured is often just the ability to cram and remember facts long enough to pass the test, after which they are quickly forgotten.
Conclusion
Most data collection in education and training skates over the surface with data about superficial attendance, end-point assessment and opinion. What’s missing is hard data on actual performance, competences and retained knowledge. What really matters is data collected from learners as they learn. This is when data really is needed so that we can help learners succeed. So much data focuses on the deficit model in education – failure, drop-outs.

7 comments:

Unknown said...

Having worked as an MIS manager for many years in FE, I can tell you that you *really* need to take care what you measure, because those measured optimise around that metric. Report SSR and class sizes rise; report attendance and you see bribes offered to get 'em in. The lesson is to measure, as directly as you can and as part of the process, the things you wish to improve (skills and competence), not via an abstract proxy (testing).

I could go on but you have already guessed that. Good post, Donald. Is there a happy sheet I can tick?

Crispin Weston said...

Hello Donald, looking forward to debating MOOCs with you in Berlin, where my other presentation is "Measuring capability - cornerstone of the new digital pedagogy" - precisely this topic.

I agree with you that (a) most data we currently measure is superficial and (b) the key is to measure competency/capability (I prefer the latter for reasons I will explain in Berlin). But it is not as easy as it sounds: what is important, I think, is to demonstrate the predictive reliability of statements about capability.

BTW, I can't see why people get so hung up about the "deficit model of education", or indeed about failure. Both seem to me to be essential to learning through practice, the implementation of formative assessment, and having clear learning objectives.

criticallearner said...

Great points.

However, I would like to add that summative assessments aren't just too late. They are also too early.

For most designs, a summative assessment tests short-term recall and, in the best cases, short-term application removed from the workplace. Assessing in this manner will in no way predict that skills will transfer to, and be sustained in, the workplace.

So exams being "too often the final act in learning and an end-point" is absolutely a problem, because at best a summative assessment is only a checkpoint that some skills may have been acquired and have the potential to be applied in the workplace. And the "check for transfer" measures I have seen have also been too early and too simplistic.

ContraryMan said...

While agreeing with most of this, it is difficult to find alternatives. Even as we move towards competency-based learning, learners and employers still want certification, so we still have to have assessments that don't cost too much. We still have to quantify the "volume" of learning so that we can aggregate it towards major awards, and despite extensive searching I have not yet found any measurement other than the "student effort hour", which is obviously quite variable. So, given that a lot of us agree with this, what are our next steps?

Donald Clark said...

First, if HE is serious about 'student effort hours', why don't they record lecture attendance? So even the base data is fictional.
Competence is the key and corporates have been doing this for years - as have many in vocational learning. It's not easy but you need to assess by observation of practice. Simulations do it particularly well if you want to automate the process. I'll be blogging on this soon.

Charles Jennings said...

An excellent article, Donald. The vast majority of what passes as 'learning metrics' is meaningless pap. Nothing to do with outputs and often little to do with learning.

While I can't agree with Brian that 'it is difficult to find alternatives' it is true that many certification and membership bodies still create a 'drag' by demanding or reporting activity measures as if they matter.

If you look at the annual ASTD 'State of the Industry' benchmarking report you'll find that like many others it's almost exclusively based on activity measures - training expenditure per employee, as a percent of payroll, number of days training per employee per year etc.

I've never understood why people think this type of data is useful. If people in my organisation spend more time in training than others what does that mean?
Does it mean:
[a] we're more focused on developing our people?
[b] we're less efficient at doing it, so we take longer?
[c] we're still using event-based training when we could be using more suitable on-the-job techniques?
[d] our recruitment processes are so poor that we have to spend more time, effort and money getting people up to speed than we should?
[e] (any number of other reasons)

I think you can break measurement down into two categories.

1. Stakeholder metrics - these are all about IMPACT - real-world measures of an increased ability to 'do', such as fewer errors, increased CSat figures, better sales, improved levels of innovation and product delivery, speed etc. In the education world this translates into demonstration of competence. Can people achieve tasks to a defined level? These are the metrics that really matter to senior leaders.

2. L&D metrics - these should be all about efficiency. If the class occupancy rate goes up, that's probably a good thing - more efficient use of resources - so long as the right channel is being used. eLearning completion rates may tell you something about the quality of design (alternatively, they may just tell you that you have a workforce that's either compliant or terrified of being seen as not following orders). Cost per hour of learning may tell you something about the efficiency of the process (but may also indicate cutting corners), etc.

Stakeholder metrics are the important ones. L&D metrics should only ever be used internally to improve process and make the skids glide a little better. Even then, they should be taken with a pinch of salt.

Donald Clark said...

Spot on Charles.