Friday, May 17, 2024

Does this 'Her' moment mark the rise of AI coaching?

I now have ChatGPT4o on both my phone and desktop. It’s not until you start using it that you understand how good it is.

My son, my wife and I started just chatting generally and it was immediately mind-blowing: natural, friendly, inquisitive. We kept looking at each other as if to say ‘Holy shit!’. We even asked it to translate this Scottish dialect sentence: “We’re goin doon the street for the messages.” She translated it accurately as going shopping!

I then started to speak to it in French about our impending holiday and it did a great job, even telling me the distance in kilometres along the route. That gave us an idea – using it to teach a foreign language. I asked her to play the role of a friendly language tutor and she did, with gradually more complex phrases and questions, recognising when I got it right and asking again when I got it wrong. This is real dialogue, as if she were a real language teacher.

A friend on social media then asked me if it could teach Sorani or Tigrinya. I thought the latter might be something from Game of Thrones, that’s how little I know about these languages, but… she started to teach me both when I asked. This is sort of mind-blowing, as the teaching of languages is an area of catastrophic failure in schools. This may well be the answer, as it is endlessly patient, personalised and available 24/7 on any phone.

Moving on, I got her to play coach: a life coach, then leadership coach, then leadership coach based on Blanchard’s Situational Leadership. All worked. This is interesting, as you can get it to coach from one theoretical basis. One wonders whether the whole coach, mentor, counsellor and therapist industry will be decimated by this.

‘Her’ moment

To be totally honest, I’m not a great fan of the coaching, mentoring and counselling industry. That’s not to say it is of no use, only that I feel it is bloated and, as people love talking about themselves, is often built upon this and this alone. There are several ways this could shake out, as dialogue-based technology has suddenly got super-good:

1. It expands the human-to-human coaching industry (zero probability)

2. It eats into this market replacing much human coaching (high probability in short term)

3. It decimates this market (high probability in long term)

Note that I have no idea what the gap between short and long term will be. But if we look at what has panned out between ChatGPT3.5 and ChatGPT4o, things are moving faster than expected.

The launch of ChatGPT4o, the ‘Her’ moment in the industry, has changed the game and made progress on this front, with very smart, turn-based, emotionally intelligent, multimodal ‘dialogue’ chatbots. OpenAI were smart in using this meme to launch 4o, as it is spot on. With advances in realtime avatars, which are coming soon, we can expect 2 to move to 3.

Advantages of AI coaches

This may sound odd but there are several affordances around chatbots that may make them preferable for some:

1. Multimodal dialogue

2. Emotional recognition

3. Dialogue

4. Patience

5. Anonymity

Multimodal. Young people text ALL THE TIME. It’s easy and normal. They don’t necessarily want full-blown speech dialogue (although if you want it you can have it). It is the quiet, low-key nature of text that is calming, and it can be read at your own pace. Then again, this misses the social, body-language and other cues in dialogue that may also help. The good news, as we saw above, is that it will all be possible. You will be able to choose your mode of dialogue, from text to full-blown avatar.

Emotional recognition. This will be a feature in ChatGPT4o, the recognition of emotion in your voice, opening up the possibility of more nuanced conversations.

Dialogue. This is the key to therapy. You want to be heard and listened to, with calm, useful feedback. Dialogue is what our brains have evolved to do and these bots are good at it. With current chatbots you can also have an immediate transcript of the conversation. This can be used by AI to recommend real things you can do after the session, even to critique how it went.

Patience. This, they say, is a virtue and in this context a necessity. You want the quiet confidence of an endlessly patient and empathetic character, who is never impatient or snarky.

Anonymity. This is, I suspect, the secret sauce. Young people are unlikely to go to their parents, teachers or even friends through embarrassment, so they suffer in silence. The anonymity of a bot allows you to express feelings you would not share with people you know.

I’m sure people will say that it needs a human to give counselling. I’m not so sure. For many, this light touch may be enough. If not, you can move on to find a sympathetic soul. As a first door, it serves a purpose, maybe even soothing those who are temporarily troubled: sad rather than suffering any real mental illness. We can rush to label negative emotions as deficits, even pathological, but sometimes making people realise they are not alone in having such thoughts is enough.

Realtime avatars

Realistic avatars are already being used in marketing, training and other contexts. I have Synthesia and Heygen avatars. These are all pretty impressive technically and, more importantly, in terms of impact. I’ve shown them to audiences around the world and they literally ‘wow’ people. I have used them in multiple languages, from Norwegian to Zulu.

This June I will be back in the Synthesia Studio to create a hyper-real version of myself. This is a real advance to the level of looking and sounding like me, with my strong Scottish accent.

Personal chatbots

As well as my avatars, I have a chatbot (Digital-Don) that allows you to ask me questions, answering using almost everything I’ve ever written. Believe me, this is impressive. It uses OpenAI’s GPT service (RAG) and does a brilliant job. I find myself asking it about things my present self has forgotten. It is like speaking to a better version of yourself, with a great memory! The opportunity for everyone to have such a chatbot is already here. Any expert, academic or writer can have one. Moderna has rolled out 400 expert chatbots to perform most of its corporate functions. We will see a lot, lot more of this.
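For the technically curious, the pattern behind a chatbot like this is retrieval-augmented generation (RAG): retrieve the most relevant passages from a body of writing, then hand them to the model as context. Here is a minimal sketch of that pattern; the keyword-overlap retriever and the tiny corpus are toy stand-ins for a real embedding index and the actual writings, and the assembled prompt is where the call to an LLM would go.

```python
# A minimal sketch of the retrieval-augmented generation (RAG) pattern.
# The retriever is a toy keyword-overlap scorer standing in for a real
# embedding index; the returned prompt is what an LLM would be sent.

def tokenize(text):
    return set(text.lower().split())

def retrieve(question, documents, k=2):
    """Rank documents by word overlap with the question, return the top k."""
    q = tokenize(question)
    ranked = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(question, documents):
    """Assemble the context-plus-question prompt for the LLM call."""
    context = "\n".join(retrieve(question, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Toy corpus standing in for decades of blogs, articles and books.
corpus = [
    "Chatbots make dialogue the primary interface for learning.",
    "Spaced practice beats massed practice for retention.",
    "The Metaverse will change how we deliver training.",
]

prompt = build_prompt("Why is dialogue important for learning?", corpus)
print(prompt)
```

In a real deployment the overlap score would be replaced by vector similarity over embedded chunks, but the shape of the pipeline, retrieve then generate, is the same.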

Games and NPCs

We also have services producing realtime avatars that you can chat to in real time. They are already available in games. Nvidia’s ACE (Avatar Cloud Engine) for Games brings real-time conversational AI to in-game NPCs (Non-Player Characters). This technology integrates text-to-speech, natural language understanding and facial animation to create responsive and lifelike NPCs. They have produced a demo in which a player interacts with a shopkeeper NPC in real time within a cyberpunk setting, showcasing the potential for future game integrations. ZREALITY has developed spatially aware virtual assistants using ChatGPT and Ready Player Me avatars. These assistants can navigate 3D environments and interact with users in a natural and intuitive way, providing real-time support and enhancing the user experience in various applications. Roblox has announced a new generative AI-based character creator. Ubisoft has also demonstrated its NEO NPCs, in partnership with Inworld AI. They use LLMs and are quite simple at the moment, as David Louapre says, more “roleplay than gameplay”, but we will undoubtedly see lots of this emerge as mods in tools such as Unity.

Therapy

Some years back I came across a small poem and candle on Beachy Head cliffs. It was placed there by the parents of a young girl who had thrown herself off the cliff due to her poor exam results. It shocked me then, and it shocks me now, that someone so young could summon up the strength to do that. It hit me like a train.

Psychologist, a chatbot on Character.AI, one among many, seems to do exceptionally well. It gets 3.5 million hits a day! The idea is simple: it delivers standard CBT therapy as dialogue, just like a real counsellor or therapist. It's chatty, helpful, endlessly patient and, unlike human support, available 24/7.

Isn't it odd that something that is text-only, simple dialogue, is so wildly popular? It does not surprise me, as ever since ELIZA, developed way back between 1964 and 1967, people have loved these bots. Even that version, which used quite primitive keyword reflection, fooled people into thinking it was human. We know from Nass and Reeves, even from the movies, how easy it is to get people to think that what they see and hear is human, especially what they see as meaningful dialogue.

Seven years ago, I tried Woebot, a counselling chatbot (‘Woe is me… my 10 days being counselled by a chatbot (woebot)’). I rather liked it.

https://donaldclarkplanb.blogspot.com/2018/02/woe-is-me-my-10-days-being-counselled.html

This was way before LLMs and ChatGPT and I wrote a ton in my book ‘AI for Learning’ about how powerful dialogue chatbots would become, including those for wellbeing. 

Then along came ChatGPT and suddenly it happened: we shifted from monologue to dialogue. It changed everything! These large language models capture vast amounts of language. When you speak to ChatGPT you are speaking to us, the human hive mind. One of their great strengths is that they really do present human dialogue as the means of learning. They humanise learning, but also coaching.

These wellbeing bots really have now taken off. ‘Psychologist’, probably the most popular on Character.ai, is pretty good. The dialogue feels natural, like talking to a real person, and the advice, largely a CBT approach, is pretty sound. In a fast, short chat, it gave me a pretty good summary of CBT:

“Cognitive Behavioural Therapy (CBT) is a type of psychological therapy which focuses on the relationship between thoughts, feelings and behaviour. The key principle of CBT is that thoughts affect feelings, and feelings affect behaviour.

The aim of CBT is to help people change their thoughts in order to improve their emotions & behaviours. It does this through identifying and targeting problematic patterns of thinking, as well as working on techniques to challenge and replace these thoughts with more rational and adaptive ones.”

It’s been around for just over a year and has clocked up some pretty impressive stats: 78 million messages, 18 million since November. You’re talking about millions of individuals talking with it every day.

Some of the people using the bot simply respond by saying they are lonely and just needed to chat to someone. Why not? I can see these being part of our counselling landscape in the future; they already are. The problem here is humans, who sometimes push people into these places through pressure, even bullying. Let’s not be Panglossian about being human. We all live lives of quiet desperation to some degree and we all need a shoulder to lean on sometimes. Let that shoulder be a friendly chat, at any time, from any place, on anything.

In an absolutely fascinating paper by Maples, Cerit, Vishwanath and Pea (2023), titled ‘Loneliness and Suicide Mitigation for Students Using GPT3-Enabled Chatbots’, 1006 student users turned out to be more lonely than typical students. One third of the population suffers from loneliness, 1 in 12 are so lonely it causes serious health problems, and suicide is the fourth leading global cause of death among 15–29 year olds. With the Replika bot, 3% reported it halting suicidal thoughts.

The research on therapy bots, with large audiences, is that young people especially value the anonymity of the technology: they do not go to parents, teachers or faculty; they suffer in silence.

Conclusion

Once realtime avatars come in and they become hyper-real, the human dimensions of dialogue (body language and other contextual and social cues) can be achieved. Coaching, mentoring and counselling will not disappear, but it would be foolish to imagine they will be untouched. AI coaching has already found huge audiences and will continue to grow, in part replacing these services.

One of the problems coaching bots will face is the simple fact that they may not be adding much value; simply asking OpenAI or another service may suffice. However, there are lots of niches where these tools still help, from simulating difficult conversations through to identifying your leadership style by tracking what you do in meetings, and everything in between.

With AI, we’d be wise to take Wayne Gretzky’s advice and "skate to where the puck is going to be, not to where it has been."


Wednesday, May 15, 2024

Google just dropped some great news on AI for learning....

OpenAI drummed up a great PR campaign with leaks around the movie ‘Her’ and hints from Altman. What many missed was another launch, by Google. That’s how fast this stuff moves.

More than this, they published a paper that we in the learning game should all read: ‘Towards Responsible Development of Generative AI for Education: An Evaluation-Driven Approach’. It has 75 authors!

What the paper does is address a BIG issue – getting learning insights into GenAI.

Excitement is brewing over the potential of GenAI to revolutionise learning by providing a personal tutor for every student and a teaching assistant for every teacher. The OpenAI launch was fantastic and showed real opportunities and promise. They out-Appled Apple with their focus on user experience and functionality.

However, this vision is still not reality, mainly due to the complexity of translating learning insights into effective AI prompts and the lack of robust ways to measure AI's teaching effectiveness. So Google teamed up with learners and educators to turn broad educational theories into practical evaluation benchmarks. This is a mix of quantitative and qualitative measures, both automated and human-led, and they have developed new training datasets to enhance the educational skills of their GenAI system, LearnLM-Tutor.

They claim to be achieving better results with LearnLM-Tutor than with other models because of its superior teaching abilities. They would say that, wouldn’t they. But I’m impressed, as this work lays the groundwork for a comprehensive framework to assess educational AI, potentially accelerating GenAI's positive impact on learning.

What this report does is look at the use of GenAI to enhance education through AI-driven tutoring, particularly focusing on conversational models. Their approach uses Supervised Fine-Tuning (SFT) with data informed by educational principles, improving the AI tutor, LearnLM-Tutor, beyond the base Gemini 1.0 model.

They are honest about the challenges, especially in defining and achieving true pedagogical mastery with AI. Collecting a diverse range of high-quality pedagogical data is costly and labour-intensive, and it is unclear how many examples are necessary to comprehensively cover pedagogical behaviours.
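To make that idea concrete, here is a purely hypothetical illustration (my own format, not Google's actual schema or data) of what a pedagogy-informed SFT example might look like: the target response models tutoring moves such as hinting and withholding the answer, and the labels are what evaluators could score against.

```python
# A hypothetical pedagogy-informed SFT training example (illustrative
# format only, not Google's actual schema). The target response teaches
# rather than tells: it offers a hint instead of the answer.
sft_example = {
    "dialogue": [
        {"role": "student", "text": "What is 3x + 1 = 4? Just tell me x."},
    ],
    # The desired tutor behaviour: guide the student, do not give the answer.
    "target_response": (
        "Let's work it out together. What could you do to both sides "
        "of the equation to get the x term on its own?"
    ),
    # Labels like these let automated or human evaluators score
    # pedagogical behaviours rather than mere answer accuracy.
    "pedagogy_labels": ["gives_hint", "withholds_answer", "encourages"],
}
```

Collecting thousands of such examples across subjects is exactly the costly, labour-intensive work the paper describes.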


So they have established a set of benchmarks to evaluate their progress, although these too have their limitations, especially the expensive nature of human evaluations. To address these issues, they have put together a multidisciplinary team, including AI scientists, engineers, pedagogical experts and cognitive scientists, to work on refining these evaluation methods. This collaborative effort aims not only to enhance the current AI models but also invites the broader AI and learning science communities to join in enhancing and using these pedagogical benchmarks. Their ultimate goal is to leverage AI effectively to benefit learners, pushing forward the boundaries of learning technology. More power to their elbow.

Google is Google. They don’t do product launches like OpenAI; they integrate things into their world. They have a global view of technology, which means integration. Don’t write them off.

 

Monday, May 13, 2024

Is teaching becoming obsolete with GPT4o? Do we now have a UNIVERSAL TEACHER?

I have written and talked about the idea of a UNIVERSAL TEACHER for a long time, notably in my new book, where I go into detail about the learning theory behind 'dialogue' in learning and the key role of chatbots in teaching and learning. This is what AI promised to deliver: a free teacher who speaks, listens, remembers and tutors, using all media types; who can read handwriting and provide personalised feedback, on any subject, anytime, anywhere, in any language.

What I never imagined was that it would come so fast. Yet OpenAI has delivered on what I thought was this utopian idea. In their demo they showed this in action. A frictionless, fast and sophisticated tutor. 

OpenAI has become the Apple of AI. They understand, like Steve Jobs, that the user experience is all. This is especially true in teaching and training. Of all the applications, teaching and learning has the most to gain from GPT4o. They may have out-Appled Apple, as Siri, Google Assistant and Alexa are nowhere near as good as this. At this rate, teaching is rapidly becoming obsolete.

Realtime teacher dialogue

You can chat with it in realtime as it has realtime speech and, in the live demo, it understood your voice, the emotions expressed through your voice, even your facial expressions. You can have dialogue just as you would with a real teacher; you can interrupt it and it responds fast, as fast as a real teacher. It can generate a teacher’s voice in many styles: funny, friendly, serious, academic, whatever. The teacher’s voice is extraordinarily realistic.

Emotions

There was lots of pre-launch chat, stimulated by Altman, about the movie 'Her'. We now see why. The system is very 'chatty'. Indeed, the fact that you can define the character by asking it to be someone is astounding. This is a shift in branding, away from Google's serious-sounding assistant towards a real, emotional definition of a chatbot, so that it can play any defined role. The possibilities, from different types of teachers, tutors, trainers and instructors to role playing with patients, customers and employees, even therapists, are endless. They showed a real maths problem being taught, with the learner doing it on paper and then showing the handwritten problem to his smartphone.


He shows her a linear equation: 3x + 1 = 4.

The tutor suggests he get all the numbers on one side, giving a hint.

He subtracts 1 from both sides to give 3x = 3.

‘How does this look?’ he asks.

She congratulates him.

He asks for a hint for the next step.

‘What undoes multiplication?’ she asks.

He suggests subtraction, but she says to think of the opposite of multiplication.

He says division?

‘Go ahead and divide both sides by 3,’ she instructs.

He does this and the solution is x = 1.

‘Well done, you’ve solved it,’ she replies.

He then asks for real-world applications and she gives him several. The tutor is endlessly patient, friendly, gives relevant feedback and can read his written steps in moving towards a solution.
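The tutoring steps above follow the standard routine for a linear equation: undo the addition, then undo the multiplication. As a toy illustration of the maths being taught (this is not the GPT4o system, just the algebra), that routine can be sketched as:

```python
# A toy stepwise solver for linear equations of the form a*x + b = c,
# mirroring the tutor's moves above: subtract b, then divide by a.
from fractions import Fraction

def solve_linear(a, b, c):
    """Solve a*x + b = c, returning the worked steps and the solution."""
    steps = [f"Start with {a}x + {b} = {c}"]
    c2 = c - b
    steps.append(f"Subtract {b} from both sides: {a}x = {c2}")
    x = Fraction(c2, a)  # exact arithmetic, so 3/3 prints as 1
    steps.append(f"Divide both sides by {a}: x = {x}")
    return steps, x

steps, x = solve_linear(3, 1, 4)
for step in steps:
    print(step)
```

Run on the demo’s equation 3x + 1 = 4, it prints the same three moves the tutor coaxed out of the learner, ending with x = 1.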

In another teaching problem, by Salman Khan, you ask it to be a tutor and it talks you through a maths problem around the sides and angles in a right-angled triangle. There’s great back-and-forth dialogue between the student and the AI, with hyper-personalised feedback and reinforcement. It does everything at the pace of the learner, behaves just like a patient tutor and corrects any errors the learner makes. All by simply showing the problem and the learner’s efforts to his smartphone. Although one has to be careful with staged examples, as it isn't doing much 'hard reasoning' here. That may be there already; it will certainly come. There is also the issue of having to show it an image; the next step is surely to draw and write on the screen as an option.

I wrote about Khan almost a decade ago, saying he was an important figure, way beyond Robinson, Mitra and many others. OpenAI have been wise in teaming up with Khan on education.

Some great features in this teaching video. For example, when the tutor is talking, if you start speaking, it stops and waits for your response. Focus on the voice of the teacher: it's very neat. The intonation is also interesting, a teacher-tone, and it can be adjusted to suit any individual or audience. Its endless patience removes the frustration that every teacher and parent feels when teaching, as it defuses the stress.

One thought I did have, which is almost existential: if it can teach and do all of this maths, why would we teach it at all? Roger Schank used to go on about this a lot: why teach skills that can be automated, especially algebra and geometry? In any case, the focus is often on maths as that is an area of catastrophic failure for many kids and adults. Maybe AI will solve the problem by solving the maths itself, not teaching millions to do what can be done in a millisecond by voice and AI.

Multimodal teaching

As a teacher it can read text and recognise images and video, just like a real teacher. In another example, the voice app on your desktop helps the learner solve a code problem, understanding the written code on the screen as well as the voice of the learner. She explains what the ‘foo’ function is, step by step. More than this, you can then ask it to see the data visualisation that the code produces, ask it questions about the graph, and it interprets the graph for you. So it can teach you maths, code and data analysis.

Translation

It also does real time translation, so you can teach in one language, give feedback in another, great for people learning in a second language. The possibilities in language learning are also mind blowing.

Accessibility

The accessibility features of GenAI are often overlooked. I bang on about this all the time and it has just been boosted by GPT4o: text-to-speech, speech-to-speech, now highly personalised dialogue. 'Be My Eyes' with GPT4o is amazing. The fact that it is free is also a huge boon for access by the poor and all who are excluded due to cost.

Administration

There was already an administrative function at enterprise level for GPTs, used by some global companies such as Moderna. This will be extended. It is at this organisational level that they will make money.

Conclusion

GPT4o is better than GPT4 and hammers other models. It is also faster and smarter across text, vision and audio, truly multimodal. What have they done and how? Behind the scenes, the optimisation needed to deliver low-latency audio-to-audio in real time is massively impressive. This is not trivial, as dialogue overlaps, interrupts and is difficult to map. This is a huge leap in ease of use and intelligence, with the low latency necessary for the smooth dialogue needed in tutoring. It reasons across text, voice and vision, making the teaching experience seem like a real human teacher. This could revolutionise teaching, accelerate learning, even accelerate home-schooling, and maybe end the personal tutoring business. I feel that this is a game changer for parents as well as teachers. As a first step this is astonishing, as it is a globally scalable solution to a problem that has plagued education, where teacher shortages and costs are a problem. This is a great leveller.

The fact that the branding is GPT4o is tantalising, as if they have something else up their sleeves: GPT5? But what they have done is redefine AI as something that becomes more human through dialogue. This shift in our relationship with computers is fundamental.

Of course, this still needs testing across a range of examples, but it is an astounding start. There is no stopping progress here; the UNIVERSAL TEACHER will happen, and soon. The future is now.

PS

I noted they were using iPhones, and a deal was struck yesterday with Apple. Something very big is brewing there.

Friday, May 10, 2024

Japan - a lesson in life and technology


Remember when Japan was the exemplar of capitalism? We were all urged to learn Japanese. It is now seen as a frail economy. The yen's plunge to a fresh 35-year low against the dollar makes things cheap here and could result in a resurgence of inflation via higher costs for imports of food and energy. Japan is a major emitter of greenhouse gases, plastic is used everywhere and it still has significant coal production. It now plays little role in foreign affairs and you rarely hear the Japanese voice on global issues. Yet it is still an astoundingly vibrant and beautiful place. So how has Japan managed to keep going in the face of mounting debt? It uses a scheme of monetising government debt, in which the Bank of Japan purchases government bonds to finance the government's spending needs.

Once a tech giant in cars, robots and games consoles, it also seems to have lost its innovative mojo. China looms large over its shoulder and the ‘robot’ stuff I saw was dated. Tech means toys here, with vast emporiums for toys and kids’ tat, entire shops with toy vending machines. 

One wonders at the effects of a culture of extreme conformity. There is barely a surface that does not have a sign telling you what to do, where to go and what to buy, even multilingual announcements telling you not to speak into your phone on the Metro, and people guiding you with batons. The upside is the polite, quiet, calm, safe, aesthetic, almost serene environment, even in an immensely crowded city like Tokyo. It is a frictionless city, easy to move around, no hassle.

The downside I can only guess at, but I’m told there is a suffocating sense of personal, peer and parental pressure. The Gothic Lolita girls; the two- or three-hour hotel rooms for sex (they’re everywhere), used by couples who don’t have privacy at home; the almost pathological use of smartphones: standing, sitting or walking, people are staring at their screens. Howard Rheingold wrote about this in the 90s, when Japanese kids adopted the cellphone faster than any other nation on earth, as they had little privacy and saw it as a social release. Japan’s tech success came through consoles and games, also the mighty Sony.

One symptom of its problems, and what you don’t see, are the ‘hikikomori’ who never leave the house, not just the young and not just young men. They spend months and years in their home, often in one room, with no social contact, like post-modern hermits. The causes seem to be a tendency toward conformity and collectivism, some autism, overprotective parenting, a pressured educational system, housing supply and now a problematic economic system. 

The most widely reported cases of hikikomori are middle- and upper-middle-class sons who refuse to leave the home, often after experiencing a traumatic episode of social or academic failure. They often start by refusing to go to school. Co-dependency between mother and son, known as ‘amae’, is also a problem. People are looking inwards not outwards, avoiding social situations.

Japan now has one of the oldest populations in the world, with a huge number of elderly citizens and a low birth rate. This has led to a shortage of workers and increased costs for welfare, healthcare and pensions. A draconian, nationalist immigration policy means no relief. Jonathan Haidt has talked about similar problems in the US and we see signs of this in many countries. This sense of an educational system that was full of promise and expectation but results in disappointment is what Turchin calls the over-production of elites. The giving up on having children, a retreat into one’s self, is a worrying sign for any society.

One caveat. Having read Macfarlane’s excellent ‘Japan Through the Looking Glass’, I’m aware of seeing only the surface. This is a complex culture with complex problems. It is easy to see the flaws and not admire the, albeit subtle, depth of Japanese culture, especially from more bellicose cultures that worship individualism. The place and people are amazing.

Wednesday, May 01, 2024

I asked myself (‘Digital-Don’) what personality traits I showed, then compared this to LLM personality traits (OCEAN)

I asked myself, namely my ‘Digital-Don', a chatbot created with OpenAI, what personality traits I showed, according to the well-validated OCEAN model.

I was essentially using all of my writings as evidence, years of blogs, articles and books. Seems like a reasonable data-set, a lifetime's writing.

Personality of Digital-Don



This is the OCEAN personality profile my digital chatbot came back with:

Based on the descriptions from your texts, I can provide an estimate of your OCEAN personality profile:

Openness 8/10 Your narrative exhibits a strong inclination towards innovation, the embrace of novel ideas, and a readiness to challenge conventional views, particularly regarding learning environments and the Metaverse. This suggests a high level of openness, indicative of a creative and curious nature.

Conscientiousness 7/10 Your detailed exploration of learning theories and their application, along with a structured approach to discussing complex topics like the Metaverse, indicates a high level of conscientiousness. You seem organised and meticulous in your thinking and writing, which aligns with a higher score.

Extraversion 5/10 The text suggests a nuanced approach to social interactions. It emphasizes learning through digital means, which might suggest less direct human interaction, potentially pointing to a moderate level of extraversion. Your engagement with broad audiences through the text suggests sociability, but the heavy reliance on digital interfaces might temper this.

Agreeableness 6/10 Your texts convey a cooperative spirit, especially in learning contexts, and a recognition of the importance of social learning and empathy. However, there is also a critical engagement with ideas that could suggest a balanced approach to agreeableness—not overly accommodating but still considerate.

Neuroticism 3/10 There is little direct evidence in the text about levels of emotional stability or neuroticism. The analytical and composed tone of the narrative might suggest lower neuroticism, as it lacks any overt expression of anxiety or emotional distress. Assuming a stable temperament from the written tone, a score here might be lower.

Personalities of LLMs

So how does this compare with actual LLMs and the personality traits they exhibit? I saw Perry Timms show this diagram from this fascinating paper.

The team developed a Turing test to assess behavioural and personality traits! They made the LLMs play behavioural games to benchmark and assess traits: trust, fairness, risk aversion, altruism and cooperation.


Statistically indistinguishable from humans!

The big surprise was that ChatGPT-4 exhibits behavioural and personality traits that are statistically indistinguishable from those of a random human, drawn from tens of thousands of human subjects from more than 50 countries. Not surprising really, as the training text, fine-tuning and RLHF training are all sourced from humans.

Learned to change their behaviours

They also modified their behaviour based on their experience and context, ‘as if’ they were learning from the interactions.

Became more altruistic and cooperative

Their behaviours were often different from average and modal human behaviours, in that they tended to behave at the more altruistic and cooperative end of the distribution. This is also evidenced by trials in which humans find chatbot outputs friendlier and more empathetic than human ones.


My traits are the BLUE DOTS.

Digital Twin

Digital-Don, my digital twin, has been an interesting experience. When I first built it, on the launch of OpenAI's GPT service, it was poor, not really sourcing from the material I had uploaded. Then, suddenly, it got super good. That's the thing with AI: it's all behind the curtain.

I started to experiment, asking myself edge questions. It was astonishingly good, so good that it felt as though my digital self knew more than my present self. Not surprising, as I can't remember what I have written over all those decades. Minds decay; digital minds do not. It humbles you a little, the idea that your other self knows more than your older self. It remains forever young.

I also played around with the personality I gave it - tone, style, first person but serious. This is interesting, that you can create a new personality. I chose me, others may choose someone very different from their real selves.

More than this, it is likely to be around even when I'm gone. That's odd.






Saturday, April 27, 2024

The Great Hall - where now with AI? It is not 'Human Connection V Innovative Technology' but 'Human Connection + Innovative Technology'

An interesting talk to those who train lawyers, in ‘The Great Hall’ at the headquarters of the Institute of Chartered Accountants in England and Wales (ICAEW). The hall was packed and I have to say, although my talk was challenging, it was a great crowd, open to the idea that they really do have to change in the face of AI. Then again, I would say that… but let me explain.

The theme of the day was Human Connection V Innovative Technology. I see this a lot at conferences, setting up human connection (social) against the machine (AI). I think this is ALL wrong. It is, and has always been, a dialectic: human connection (social) PLUS the machine. Everyone has a smartphone and most use it for work, comms and social media. The binary between human and tech has long disappeared.

‘We need more human connection in the workplace’ was the cry, yet, when I asked how many worked at least part of the week at home, the entire hall put their hands up. My point was that when it comes to action, people vote with their feet and use tech at home to do their work. In 38 years of taking the train from Brighton to London, absolutely no one has talked to me. This is the culture of London and the South East. The train passengers were all on their phones talking, messaging, posting, reading and listening to music and podcasts! AI is no different, as in speaking to ChatGPT you are speaking to ‘us’ – it has been trained on data we created over generations, the hive mind.

After showing the astounding case study of Moderna – perhaps the most successful application of AI in the workplace, with 400 expert chatbots performing a vast range of assistive, performance support functions within the organisation, led by the C-Suite and the CIO, not L&D – I explained that the greatest take-up was in their legal department, at 100%. Why? Legal work is largely documentation and process informed by expertise.

We tend to sanctify the idea of human connection, but when I travel I love the automatic customs gates at Gatwick and Heathrow. I have been dealing with lawyers all of my life and love DocuSign, as it means fewer meetings with… lawyers.

After explaining the naivety of the position that AI will just ‘augment roles and jobs’, by showing actual case studies and job losses, my appeal was to L&D to step up to the plate and get on with helping organisations use AI to scale their people. Not by carpet bombing them with yet more AI-produced courses, but by helping implement ‘performance support’, just as Moderna has. They are not using AI to create yet more courses but to encourage agency to learn and perform among their staff.

Flannel on the panel

After my keynote, we had an interesting panel. Chris Papworth was excellent. He’s doing real stuff in the legal profession and it was clear that many in the room had engaged with the AI issues. But the conversation took an odd turn when the main topic swung to ‘creativity’. These were all nice people, including the panel chair, but it seemed like an odd thing to focus on, as the legal profession is hardly known as a hotbed of creativity. When I hear this word, I get uneasy, and I was honest in saying: 1) I was doubtful that it was a skill at all; 2) even if it were, I had doubts that it could be taught in a training course; 3) you should not be running creativity courses, as that is not what the business asks for or needs. I relayed my disastrous experience at one I attended in the NHS years ago – tell me as many uses for a brick as you can… you know the game…

It confirmed my more general view, and I said as much, that L&D has wrongly adopted a ‘supply’ model when we should be responding to ‘demand’. We’re dreaming up courses on abstract nouns that no one asked for and fewer still actually like or see as relevant. The danger is that AI bypasses L&D, as we still see our role as carpet bombing people with courses, when we know there needs to be a balance with performance support, which is what almost everyone is using Generative AI for. When you get inside an organisation and align L&D with business objectives, as Moderna has done, AI is a wunderkind.

Agency

This new world is about ‘agency’, giving control to employees through agents that support them. We talked about the agents that Moderna and others are using, the new agentic workflow, the real productivity gains, real examples, real data. This is the new world, one where we can scale people to help the organisation grow and flourish. The first step, and we’ve done this regularly, is to truly realign the business to future business objectives, find the sweet spots for AI adoption, support that adoption (a change management process) and prove it quickly through quick wins that show ROI productivity gains in learning and the business. It takes two days to really get the ball rolling, using the tools, doing things that you can use for the business case. Don’t worry about creating a sense of urgency – AI has done that already – it is urgent. What you need is buy-in from the senior team, and they need to be convinced by solid examples and numbers.

Conclusion

Don’t get me wrong, I spoke to lots of people at this event who really liked being challenged. They had made the mind shift and wanted to use the tools. These were good, smart people who want to do good things. It was fun, a real buzz, well organised. I love these sector-specific events as you feel you can really effect change by being specific to their sector needs. Coming back on the train, no one spoke to me!

 

Friday, April 19, 2024

Is L&D being flipped?

Learning Technologies 2024. Great to see some people, have a hot and sweaty browse round the exhibition and chew the cud over a few bottles of beer in The Fox, that’s always fun. But the best time to ask questions about the future is not during the euphoria of the party, but after. What is actually happening? Where are we going?

AI headlines

It is the Learning TECHNOLOGIES Conference. We are in the most important technology transition since the invention of writing, an existential technology that is literally changing what we are as a species, therefore what we learn, why we learn and how we learn. AI was headlining at our Glastonbury. But as Ben Betts said, and I agree, it was all content production and add-ons. The real AI in learning was like the Sex Pistols battering it out down the Thames, completely ignoring the Establishment in the ExCeL. It is so disruptive a force that no one knows how to deal with it, so they try to package it, contain it, get it to create courses, use it as a signal – look, we’re down with this new tech! But no one is buying it – metaphorically or literally. It is bypassing L&D.

L&D folk play little role when Copilot is introduced and may play a diminishing role in these choices. This is now an enterprise-level decision, as it leads to increases in productivity. Training may not be the best lever here – productivity tools and performance support seem more powerful and the evidence suggests they’re working. I have a whole rack of research papers and data on this. As productivity rises through performance support, the need for courses will diminish. AI is being adopted by everyone and organisations are seeing the benefits, but like water it is a rising tide with no ebb, one that may be dissolving old methods of training. See my analysis here.


This very point was well made by Egle Vinauskaite in her recent report. This should be turned on its head, as the business is seeing massive uplift through productivity and performance support using AI outside of L&D, yet still thinks AI is about course production.




I suspect that footfall and stand space were down this year. I know one large company that spent half what it spent last year. Others, who I know are not making money, were still spending large. But there was just this general commentary and feeling that it was same old, same old.


Freelancers are certainly finding it tough. This was confirmed in a fascinating study showing dramatic falls in freelance jobs in the very sweet spots where instructional designers sit – writing, coding, image generation. The fall is marked after the introduction of GenAI.

What was on show were old, not new, technologies. Where was the real meat around agentic workflow, the real impact that Sora and other tools will have on video in learning, how sophisticated RAG, large context windows and open-source models have become, AI’s real power in performance support? How often did you see those two words, ‘performance support’?

Loss of direction

There were good people saying the same thing we’ve heard for decades about our lack of business alignment and our failure to look outside our bubble at what is happening in the real world. Donald Taylor always teases this out nicely and Heather Stefanski and Chara Balasubramaniam said what had to be said. But our failure on this front has been gargantuan. We are now a supply-led, not demand-led, industry. We decide what people want – yes, it’s DEI and Leadership and resilience – and hose it out. There is no attention paid to evaluation and we wilfully ignore the evidence showing that all of this spend could be a waste of time and money. The evidence is frightening – so we turn our heads away. All of those people in those horrible Fred Perry-type tops, with their embroidered logos, selling you renewal licences on their LMS, more content and compliance training. All of those little companies on the fringes, spending money they don’t have but desperate to be noticed. We are not at the forefront of organisations leading the charge. This is a ship that has lost direction.

Flipped L&D

We may even have a flipped L&D, where we are serving some abstract notion of what we want organisations to be rather than supporting the development of individuals within organisations. Our focus on Leadership has left the rest somewhat abandoned. Our focus on compliance is all about protecting the organisation from its own employees; the focus on DEI is about splitting organisations into groups and often setting them against each other. In all of this, personal agency has been lost. We are being told what to do.

Speaking to investors, I hear they’re struggling to identify anything other than old-school companies, flat on ideas, low on revenues. In general, I spoke to business owners who are seeing revenues falter. They see what Microsoft, Google and OpenAI are doing and want to see how this will make companies grow. In truth, we have failed to tie learning to productivity and growth. We have tied it to the mast of old solutions and old technology. I genuinely don’t see how we are now contributing to increasing productivity, bringing employees with us and generally improving organisations.

More courses

Few love what we do. They tend to roll their eyes at the mention of yet another several hours of e-learning from your LMS. They really don’t like that cartoony, page turning stuff, peppered with multiple choice questions and speech bubbles. We carpet bomb employees with compliance, DEI and other courses they don’t see a real need for – there’s always another abstract noun to cover – ‘resilience’ whatever…. We know that most learning takes place informally and that performance support really matters and that AI does this wonderfully – yet what do we see – wall to wall LMS and course content vendors. Even there, note that Skillsoft had a tiny stand - sign of the times!

Inward looking

Joan Keevill, Niall Gavin and a few others did a good job on reporting, but notice how little reporting L&D now does on social media. What there was, was often just puffery – who was speaking, how great they were, silly pics – not what they said, when what we needed was discussion and debate. I think that’s a sign of this inward-looking culture. We no longer share as much substance as we did. However, it was good to see John Helmer and Rob Clarke there doing their thing and hopefully we’ll get more from those great sources when it’s all over.

Future

These shows have to happen but they tend to become rituals. A chance to meet your mates, have some fun, get out of the office. But how many are actually counting the cost, wondering where all the leads are? Facing tough financial times? Did we challenge or backslap ourselves? I see little evidence, apart from Daniel Susskind, of real, honest challenge. To be fair, I have found it at other conferences – the smaller events, OEB with its Big Debate, on social media, live podcasts on contentious subjects at the conference. ExCeL is such a soulless place. At least Olympia had some soul and you could pop out to the real London.

 

Wednesday, April 17, 2024

Learning Technologies Conference 2024: If you want to move a graveyard you will get no help from the incumbents!


As you wander around the exhibition at Learning Technologies 2024 or attend the conference, remember that all is not as it seems. Forget the demand for ‘more training’ on AI, more ‘skills training’ from L&D on AI. That’s not how this is playing out. I spoke alongside the main keynote, Daniel Susskind, at the CIPD Conference and he said then what he said now – train people on how to use AI. To be fair, he’s an economist and doesn’t know much about workplace learning. But this is the misfire I’m talking about. It’s the hammer and nail problem. If you carry a course mindset around like a hammer, everything looks like a nail (a course).

As L&D try to get to grips with AI, they misfire for one simple reason. AI, like water, is a rising tide that never ebbs. It seeps and soaks into the workplace, and workplace learning, like an invisible force. It has bypassed L&D, and it is everywhere. 

The best technology is invisible

Outside of work, AI is what powers search, mediates social media and catches harmful content; it is the interface to your streaming services, your passport at border gates, the scanners in supermarkets, the sensors in cars, ANPR on our roads. You are rarely out of the sight, or mind, of AI, but you rarely see it as AI. The world is now awash with invisible AI.

The same is now true in learning. At work if you use Microsoft, Google or most other services, AI has saturated into almost everything. Not just Copilot but predictive text, spellchecking, keeping spam out of your email. Behind the scenes things are becoming more automated through AI – they just happen. Learning is now happening all the time with technology that is almost invisible. 

The mistake, of course, is to see AI as a course content tool. I don’t mind this, as it is happening and will keep happening, disrupting the whole e-learning content market. But it is not what we used to call the ‘killer app’. There’s lots of this at Learning Technologies, as it is how L&D thinks, and the vendors sell to what people think they want, not necessarily to what people are doing.

The real killer app is something we’ve known works for many years, with a long history back to the 70s and 80s, substantiated by great work done by Marsick, Watkins, Gery, Rossett, Cross, Wallace and others. It hasn’t happened at scale, as it has been very hard to implement – until now.

It is PERFORMANCE SUPPORT.

Almost all current use of AI in learning is actually ‘performance support’. The hundreds of millions using ChatGPT and similar services use it for task and performance support. No one is typing – ‘Give me a short course on X’ into ChatGPT or Claude. Everyone is saying ‘Can you tell me, help me, show me… what, when where or how to do things, get things done…’

We have it open as a tab as it is so damn useful. I use it as much as I use search. Once you learn how to move from single queries to real dialogue, you start to use it in your workflow; the whole workflow learning thing suddenly makes sense, it becomes a reality. As you start to use it for more sophisticated tasks – data analysis, coding, critiquing work, creating text and other media – you feel its power as a performance assistant.

Perfect performance interface

You know what you want, you know what context you are in at that moment and, as Papert said, the perfect interface is not some clumsy menu system in a VLE, LMS or the forward and back buttons of an e-learning course. It turns out to be a box into which you ask something. Papert described this as a ‘low floor, high ceiling, wide walls’ interface. Low floor – amazingly simple to use. High ceiling – gives you back far more than you expected. Wide walls – seems to know everything. The most successful interfaces are becoming as simple and frictionless as possible.

Personal agency and engagement

What makes it work, unlike an LMS or e-learning course, is that it gives you personal agency. You feel in control and the reward is feeling that you’re learning and getting things done. It so often exceeds your expectations. The excitement of using these tools is what made GenAI the fastest-adopted technology ever. You don’t have to worry about ‘engagement’. The whole world has ‘engaged’ with this technology, with billions of uses per month – at home, in schools, universities and, of course, the workplace.

Digital agents

GenAI has already expanded into performance support through agents. I have my Digital-Don. A good test for any expert at the conference is to ask if they have a persona GPT or agent. If they don’t, ask why not. It takes about 30 minutes to build one with zero coding skills.

I really like those who are doing things with digital coaches and assistants, and there are a few of these around. They get it. Agents are the next evolution in using AI for learning. We had hints with early LLM use that specific types of prompt improved performance – things like telling it that the results really mattered, or expressing emotion. This direction of travel is now being built into GenAI.

Why is this big news? It gives teaching and learning potency to an LLM. Imagine the agent as a human, one that does not just take a prompt and return an answer, but engages in dialogue about what is wanted and initiates an actual learning experience or task as a workflow. GPT-3.5 with an agentic workflow already outperforms GPT-4 and promises to amplify the effectiveness of LLM services.

Agentic workflows reflect on the output, improve it and tell you what you need to know or do. An agent can look at output 1, critique it, find errors and improve the next output. You can have specific types of agent, such as a critic agent, a teaching agent or a domain-specific task agent.
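The draft-critique-revise loop described above can be sketched in a few lines. This is a minimal illustration only, not any vendor's implementation: `call_llm` is a hypothetical stub standing in for a real chat-completion API call.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; returns a canned echo
    # so the sketch runs without any external service.
    return f"[response to: {prompt[:40]}...]"

def reflect_and_revise(task: str, rounds: int = 2) -> str:
    """One agent drafts, a critic agent reviews, and the draft is revised
    using the critique - repeated for a fixed number of rounds."""
    draft = call_llm(f"Draft: {task}")
    for _ in range(rounds):
        critique = call_llm(f"Critique this draft for errors and gaps:\n{draft}")
        draft = call_llm(
            f"Revise the draft using this critique:\nDraft: {draft}\nCritique: {critique}"
        )
    return draft
```

With a real model behind `call_llm`, each pass lets the critic find errors in output 1 before output 2 is produced, which is the reflection pattern at work.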

Agents can also use tools, such as a lookup tool, to solve a complex maths problem or do some data analysis – things the LLM alone cannot do. Examples of tools are Wolfram Alpha for mathematical analysis, web search, Wikipedia search, email and calendar productivity tools, image generation, image captioning and object detection. We have seen all of these incorporated into AI services. They are there now and reusable.

Planning is another capability. The agentic approach allows the tech to plan and explain things step by step to improve performance. Agents can break tasks down into steps, as well as iterate within those steps. Send a request to a job support agent or research agent and it will be executed.

Multi-agent collaboration is hugely promising. You prompt an LLM to take on a role – CEO, supervisor, a stated job role – then generate complex outputs. The difference is that different agents can debate each other to improve performance. This is a bit like creating a team suited to the task, with different characteristics and capabilities.
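The debate idea above can also be sketched: several persona agents take turns responding to the growing transcript. Again, this is an illustrative sketch only – `call_llm` is a hypothetical stub, and a real system would pass each persona as a system prompt to an actual model.

```python
def call_llm(persona: str, prompt: str) -> str:
    # Hypothetical stand-in for a persona-conditioned LLM call.
    return f"[{persona}] view on: {prompt[:30]}"

def debate(task: str, personas: list[str], turns: int = 2) -> list[str]:
    """Each persona agent responds in turn, seeing the task plus everything
    said so far - a minimal multi-agent debate loop."""
    transcript: list[str] = []
    for _ in range(turns):
        for persona in personas:
            context = "\n".join(transcript) or task
            transcript.append(call_llm(persona, context))
    return transcript
```

For example, `debate("Plan a product launch", ["CEO", "Critic"])` alternates a proposing voice and a challenging voice, which is the team-like dynamic described above.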

Agentic reasoning will matter this year: agentic workflows (all of the above) in which agents do things with the output from other agents.

Conclusion

If I were an investor or a buyer at LT, I’d start with ‘performance support’. Learning Pool’s last acquisition was a performance support company, and others are looking for such companies. The revolution in workplace learning should be around performance. This is a real paradigm shift, but you won’t see much of it on the exhibition floor or in conference talks. I say it again: if you want to move a graveyard you will get no help from the incumbents.

 

Monday, April 15, 2024

Pfeffer - Leadership BS!


Jeffrey Pfeffer is the Thomas D. Dee II Professor of Organisational Behavior at the Stanford University Graduate School of Business. His interest in human resources, organisational theory and behaviour has led him to reflect on the nature of leadership and leadership training. He has written about evidence-based management in Hard Facts, Dangerous Half-Truths, and Total Nonsense: Profiting from Evidence-based Management (2006), where he dismisses popular business wisdom in leadership, strategy, change, talent, financial incentives and work-life balance – often touted by consultants and training companies – in favour of hard decision-making based on data and facts.

Leadership

In Leadership BS (2015), Pfeffer eschews what he sees as the usual platitudes in Leadership theory and training, for a more realistic view of the world as messy and complex. He exposes what he sees as the nostrums, stories, fictions, anecdotes, promises, glib simplicities, bromides, romanticism and myth-making feel-good nonsense that passes for Leadership training, his solution being realism. The aim is to reject the normative wishes with evidence and the realities of the workplace.

Unequivocally, he claims that the Leadership industry has not only empirically failed, with study after study showing workplace discontent, but also that it contributes to that failure. As the cult of leadership has risen, its perceived effectiveness has fallen. Bullying, stress and discontent are the norm and he presents a huge amount of evidence to show repeated failures in so-called ‘leadership’. What he uncovers is an almost wilful avoidance of evidence, measurement and data. Despite the $20-$40 billion spend, the results are depressingly disappointing. He goes as far as suggesting that the very construct of leadership, as presented in much leadership consultancy and training, was invented as a simplification to deliberately obfuscate the real complexity of the workplace.

Leadership training

His arguments against ‘Leadership training’ are pretty damning. Many who offer leadership consultancy and courses have never led anything and, if they have, their track record is rarely one of substantial success. In fact, he sees too many compensation consultants linked to the leadership industry, many with a woeful lack of actual expertise and knowledge. This leads to glib advice and recommendations that peddle inspiration, not the realities of management. They often rely largely on storytelling and anecdote, and rarely include evaluation as part of the process (apart from primitive happy-sheet course data and self-evaluation). The leadership industry is therefore wholly unaccountable.

In the content he finds stories and anecdotes (as opposed to evidence) that are exaggerated, even fabricated. They also conveniently ignore actual successful leaders that don’t fit their neat model. These myths are counter-productive as they produce cynicism in employees. The rhetoric is not matched by actual action and behaviour. Worse, those who don’t conform to the out-dated leadership model don’t get promoted and may even get fired. Others, such as women and certain cultural minorities, that value modesty and collaboration, can also suffer. 

Leadership traits

A further critique centres around precise leadership qualities or traits. They are, he thinks, wrong-headed, as they focus on attributes not action and decision making. Given that the book was published in 2015, he was prescient in identifying Trump as a typical product of the charismatic leader cult. He played the leadership game and won. Pfeffer therefore punctures the idea that ‘modesty’ is an admired and effective leadership trait. He draws on Maccoby’s book The Productive Narcissist (2003), and his own evidence, to show that modesty, far from being a virtue, stops managers from thinking for themselves and being resilient in the face of adversity. It is energy, confidence and dominance that gets them where they are, not modesty. The Leadership industry may be holding back women and other potential managers by promoting false promises, such as modesty. He also accuses HR and talent management companies of being dishonest here in training for these qualities then recruiting the very opposite.

He also questioned that staple of leadership courses - authenticity - as a quality for leadership. He flips this to show that good managers need to do what people need them to do, not what they as managers simply want to do, not pander to their own views of themselves. Flight attendants, shop assistants, sales people and many others don’t operate by being totally ‘authentic’, neither do managers and leaders. He describes the “delicious irony” of leadership trainers who “train” people to be “authentic”, as if it is a trait that can be acquired in a classroom. Being authentic is for Pfeffer pretty much the opposite of what leaders need to be.

Much as trust would seem to be desirable in leadership, it may not be that simple. Bernie Madoff inspired ‘trust’. Trust, like faith, can lead one into real trouble. It may be desirable not to trust lawyers, competitors and politicking managers. True objectivity and realism may only be the result of not trusting everyone to tell the truth within an organisation, as you will be misled, even duped. You need to be on the mark, alert to deception, moves, protecting the organisation and that means distrusting some people.

Counterexamples

Rich in real examples of leaders who were less than ideal, he shows how leadership training misses the mark most of the time – especially with the titans of tech; Steve Jobs, Bill Gates, Jeff Bezos and Larry Ellison. Political, sports and other leaders get a similar treatment. Most of the positive examples turn out to have serious flaws. So, when we look at what are called successful leaders, they turn out to be very different from what the leadership industry tells us. His recommendation is to get serious on the research, mainly what is effective, then hold so-called 'leaders' to account - not with happy-sheet nostrums but real accountability. It is not that he promotes immodesty, being inauthentic and telling lies, only to recognise that leaders and employees are people and that human nature always wins out. The remedy is to identify what you need from proposed leaders and then to make sure that they perform to those measures. This is where HR and remuneration committees fail. They pretend to be doing this when what they actually do is pander to an outdated cult of leadership, based on outdated concepts of the nature and value of leadership.

Complexity

Pfeffer’s challenge is to recognise reality and accept that the workplace and people are much more complex than the feel-good training courses suggest. In reality, leaders’ behaviours are often at odds with those of the organisation. Their interests in terms of rewards, promotion and progress are often at odds with those they manage and even the organisations they lead. There is a lack of definition, theory and practice around the concept and it often distracts from the real needs in workplace learning.

He recommends that you:

   Build your power base relentlessly (and sometimes shamelessly)

   Embrace ambiguity 

   When the situation demands change—adapt

   Master the science of influence

It is not that leadership training is wrong, just that getting things done requires trade-offs and tough decisions. The danger is that organisations handicap themselves by training leaders to embrace utopian behaviours and avoid bold decisions, innovation and the realities of organisational growth.

Critique

Pfeffer has been criticised for being too forceful in blaming learning and development for the ills of leadership theory and training. Critics argue that complexity does not negate efforts to instill good practice in leadership.

Conclusion

The fundamental problem outlined in Getting beyond the BS of leadership literature (2016) is to confuse ‘ought’ with ‘is’. Just because you think something ought to be the case doesn’t mean it is. In fact, confirmation bias tends to produce the wrong solutions in this area, driven by moral and not organisational imperatives. The division of leadership into good and bad traits is a mistake, as it uses a problematic approach to human nature and ignores context. Quoting Machiavelli’s The Prince (1532), he says it is sometimes necessary to do bad things to achieve good results. Leaders need to be pragmatists.

Bibliography

Machiavelli, N., 2008. Machiavelli's The Prince: Bold-Faced Principles on Tactics, Power, and Politics. Sterling Publishing Company, Inc.

Maccoby, M., 2003. The productive narcissist: The promise and peril of visionary leadership. Broadway.

Pfeffer, J. and Sutton, R.I., 2006. Hard facts, dangerous half-truths, and total nonsense: Profiting from evidence-based management. Harvard Business Press.

Pfeffer, J., 2016. Getting beyond the BS of leadership literature. McKinsey Quarterly, 1, pp.90-95.

Pfeffer, J., 2015. Leadership BS. HarperCollins.