AXIOM Insights Podcast – Artificial Intelligence for Learning & Development

In this episode, we discuss artificial intelligence (AI) and its implications for learning and development. Tools like ChatGPT and other large language model (LLM) and generative AI tools can be a transformative element of workflows, including in the creation of text and visual learning content. But the adoption of AI goes deeper than learning content, and as AI is integrated across the enterprise, it represents a digital transformation that can position learning and development as a strategic partner in the business.

In this discussion, we hear from three experts in the theory and application of transformation technologies. 

Judy Pennington is a Management of Technology Fellow and the James J. Renier Chair in Management of Technology with the Technological Leadership Institute at the University of Minnesota. Judy’s career has focused on the intersection of people and technology, specializing in IT workforce transformation, IT leadership development, IT learning and development, culture, and change management. She was previously Managing Director and CIO Fellow with Deloitte Consulting and a member of the faculty and leadership team for the CIO Next Generation Academy.

David Nguyen, Ph.D., is a Management of Technology Senior Fellow and the Edson W. Spencer Chair in Innovation and Entrepreneurship with the Technological Leadership Institute at the University of Minnesota. David has a background in the startup world by way of research in human-computer interaction (HCI), and in the innovation space, has published numerous articles on novel technologies and interactions. As an inventor, he has more than 40 patents filed or issued. He earned his doctorate in Information and Computer Sciences at the University of California, Irvine.

Susan Nabonne Beck is Vice President, Learning Design and Strategy with AXIOM Learning Solutions. An experienced and expert learning and development practitioner, Susan specializes in e-learning design, development and strategy. Susan has extensive experience creating custom learning with an emphasis on user/learner experience and measurable outcomes, and her career includes roles leading learning and development teams for multiple Fortune 50 companies.

 

Additional Resources

Episode Transcript

Scott Rutherford

Hello, and welcome to the AXIOM Insights Learning and Development Podcast. I’m Scott Rutherford. This podcast series focuses on the ideas and best practices that drive organizational performance through learning. And today we’re going to spend some time exploring AI, artificial intelligence. I think to many of us, it feels like this technology has kind of happened to us almost overnight, especially with the public discourse and excitement around things like ChatGPT. It’s worth noting, I think, that GPT-1 came out six years ago, and it wasn’t really until GPT-3, four years ago, that we started to see broad public adoption and discussion. So it’s an overnight success that’s been a few years in the making. We’re going to talk about how it came about, how we ended up where we are, and where to go from here.

And for that, I’m really thrilled to have a panel of three expert guests: Judy Pennington and Dr. David Nguyen, both of whom are with the Technological Leadership Institute at the University of Minnesota, and Susan Nabonne Beck, who is the Vice President of Learning Design and Strategy here at AXIOM Learning Solutions. So, hi, everybody. Good to have you here.

So, David, I’m going to start with you for a little bit of the background, a history lesson, and let’s all sort of jump in on this. But I want to start from understanding: when we’re talking about AI and some of the associated terms, generative AI, large language models, what are we talking about? How does it work, at a very high level, so we understand the tool that we’re messing about with?

David Nguyen

So large language models are named large language models because they’re taking effectively the entirety of the Internet, from language, all the words, all the books, everything, and trying to figure out a model of how that all works. Right? So it’s looking at structures. An interesting story about this: I’m an immigrant to this country, and when I was going through school and learning grammar, my teachers used to tell me, just say it out loud, you can hear it. Right? I can’t hear these things. So things like, do we say the big red dog or the red big dog? There’s a rule, right? There’s structure there. But as a non-native speaker, I don’t have enough examples to figure out, is it red big dog or big red dog? And when the teacher says, just say it out loud, you can hear it, well, we can hear it because when we have enough examples of these things, it makes sense. There’s a structure there. So what’s really interesting about ChatGPT and what’s happening there is all they’ve done is take the whole corpus of the Internet, all text, and feed it into this model to try to figure out, what’s the structure here? What’s really mind-blowing is there’s structure in language, so much structure, such that when you ask it a question, it’s just filling out the rest of that structure, and it fills out this seemingly very articulate, intelligent response. So there’s a school that says, yep, it’s intelligent. And there’s the other school of thought that says, nope, it’s just outputting the structure in language that we fed it through all that training.
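
As a minimal sketch of the pattern idea David describes, consider the toy Python example below. This is not how a real large language model works (real models learn neural-network weights over tokens, not raw counts), but it shows how word order like “big red dog” can fall out of examples alone.

```python
from collections import Counter

# A toy corpus standing in for "the entirety of the Internet."
corpus = (
    "the big red dog ran . she saw a big red dog . "
    "a small old house stood there . the big red dog barked ."
)
words = corpus.split()

# Count every adjacent word pair the "model" has ever seen.
pairs = Counter(zip(words, words[1:]))

# No grammar rule is stored anywhere; there are only counts of examples.
print(pairs[("big", "red")])   # 3: the order English speakers "hear"
print(pairs[("red", "big")])   # 0: never seen, so never produced
```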

Scott Rutherford

So it’s finding the patterns in the well of knowledge that we’re providing to it to look through. Is that a fair summary of the summary?

David Nguyen

Yes. Patterns.

Scott Rutherford

In the terminology we used to hear before the onset or the popularization of artificial intelligence, we used to hear about machine learning a lot. So how is what we’re looking at now different or an evolution of machine learning?

David Nguyen

Yeah. So with machine learning, we were also trying to look at patterns, right? But we had, I don’t want to say small, because they were still big, large data sets. We had data sets where we assumed there were patterns, and that’s what machine learning does: you just feed it data, and it pulls out and tells you, here’s the pattern that I found. What’s really interesting with what’s happening with ChatGPT is all they’ve done is take that and make it bigger. The size of the data set is now bigger; effectively, they put every piece of writing that we’ve ever seen into the model to see what pattern comes out. To do that, they had to have a ton more computing power, right? And what was happening on that other end is we have these graphics processing units, the GPUs. Gaming was becoming a big thing, and bitcoin became a big thing, and those chips were just being pushed out and pushed out. So there were a ton more of these chips, and they’ve leveraged that spike and grabbed all that computing. So now you have a bunch more computing with a bunch more data, and this is what’s coming out. And a lot of the linguists, a lot of the computer scientists, a few assumed it would be so, but most of us are just blown away, like, wow, this seems like a normal human talking. But effectively, it’s looking for a pattern, and it’s outputting a pattern back to you.
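
As a toy illustration of the “feed it data and it tells you the pattern it found” idea, here is a minimal k-means sketch in Python. The data and numbers are invented for illustration; real machine learning systems work on far larger data sets and far richer models.

```python
import numpy as np

# Invented data set: two hidden groups of 1-D measurements.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(2.0, 0.3, 50), rng.normal(8.0, 0.3, 50)])

# Classic k-means: we never tell it where the groups are.
centers = np.array([0.0, 10.0])               # rough initial guesses
for _ in range(10):
    labels = np.abs(data[:, None] - centers).argmin(axis=1)
    centers = np.array([data[labels == k].mean() for k in range(2)])

print(centers)   # approximately [2.0, 8.0]: the pattern it "found"
```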

Scott Rutherford

And so part of that is the generative term, right? That’s generating something new. Maybe, Judy, is this something I could lean on you to help understand? What’s the difference between when we say generative versus older systems or earlier versions that I’m assuming were non-generative?

Judy Pennington

Yeah. Well, you know, I think it’s tied to this notion that there’s so much more available to us, and it happens in a way that is almost instantaneous. When we had a limited set of either language or data or something, that made it a little bit smaller, and generative kind of says, okay, you can ask a question out there and instantaneously get quite an answer. Now, the trick, of course, is: is the answer true? Is it right? Is it valid? Et cetera. And the other thing, and David, I was thinking about this as the exercise that we did in your class a couple weeks ago, the language part of this is different, too. One of the students in the class is a native Polish-speaking person. So we did a little exercise in class where he entered a question verbally in Polish to, I think it was ChatGPT, and it came back with the answer in Polish in, what, about 30 seconds? And, you know, it was a pretty good answer, and it was both a verbal answer and a written answer. Those are the kinds of things that may or may not have been available when we had a smaller set of information. So clearly, someone has the language translator embedded in there that can help. And that’s pretty amazing. It’s very amazing, I think.

David Nguyen

So here’s the model that I have in my head that makes a lot of sense to me. Think about if you’re trying to figure out the pattern of the orbit of a planet around the sun: you give the system a bunch of points in that orbit. The generative part is, it’s figuring out what’s in between those points, right? But with language, instead of a two-dimensional space, it’s tens or hundreds of thousands of dimensions. So it’s finding a pattern in language, and it’s generating the stuff in between the pieces of language, and what comes out is based on what’s there. Back to the planet example, it’s coming out like: you gave me this point, you gave me this point, I’m going to predict this point. With language, you gave me this 10,000-dimensional space; you gave me this thing and this thing, and I’m going to find a pattern in here. And that’s the mind-blowing thing, and that model makes sense to me.
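
David’s orbit analogy can be sketched directly in code. A polynomial fit stands in for the model here, purely as an illustration; real language models generate in spaces of thousands of dimensions with learned neural weights, not polynomial curves.

```python
import numpy as np

# Observed points along the "orbit" (the training examples).
theta = np.array([0.0, 1.0, 2.5, 4.0, 5.5])
x = np.cos(theta)                    # the hidden structure behind the points

# Fit a simple model to the observed points.
coeffs = np.polyfit(theta, x, deg=4)

# The "generative" step: predict a point the model never saw.
print(np.polyval(coeffs, 3.2))       # the model's guess, in between the points
print(np.cos(3.2))                   # the true value, for comparison
```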

Susan Beck

And with things like ChatGPT, or some of the graphics programs out there, Leonardo AI, for example, what you’re speaking of is the prompt that you’re giving it. You’ve given them some information, and the more information you can give them for your particular scenario, the better equipped that particular program is to give you back what you are looking for. Going back to your example of big red dog or red big dog, you would most likely get very different outputs from Leonardo AI if you put in big red dog or red big dog; it would treat those differently. What you’re speaking of is exactly the experience of an end user, and why we may not be getting what we’re looking for from any of the systems that we’re using. And it all comes back to what you are telling it, in order for it to go and pull from these vast databases of information.

Judy Pennington

That is the trick, Susan. The prompt is so important, and revising it, getting really good at asking the right question and then maybe asking two or three questions to get to the level that you want to get to, is so important. Absolutely.
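
As a quick illustration of that point about prompt specificity, compare a vague prompt with a detailed one. The `generate_image` function below is a hypothetical stand-in, not a real Leonardo AI or ChatGPT API call.

```python
def generate_image(prompt: str) -> str:
    """Hypothetical stand-in for an image-generation API call."""
    return f"<image rendered from: {prompt!r}>"

# A vague prompt forces the model to guess every unstated detail.
vague = "a dog"

# A specific prompt pins down the structure you want filled in.
specific = ("a big red dog, cartoon style, sitting on green grass, "
            "bright daylight, centered composition")

print(generate_image(vague))
print(generate_image(specific))
```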

Scott Rutherford

Yeah. The notion of prompt engineering, or understanding enough about how the system is working behind the scenes, which is sort of famously a black box, to give you some assurance that the result you get is going to be based on the information you would want to have curated. And Susan, I think this is really important where we’re trying to connect the dots in a learning environment, where not only are we looking to AI tools to help us be more efficient and connect the dots and fill in the gaps that we might be missing, but also we have to be comfortable or confident that the blanks in the middle aren’t going to be filled in with the wrong thing or misinformation.

Susan Beck

Yeah, yeah.

Judy Pennington

No hallucinating, right? No hallucinating.

Scott Rutherford

The famous hallucinations of GPT and, for that matter, DALL-E. I’ve done some experimentation with that myself and come up with some very creative new plant species and things like that.

Susan Beck

Oh, yeah. 100%.

Scott Rutherford

It doesn’t seem to be able to quite get reality right.

Susan Beck

Yeah. I think one of the biggest struggles as a learning and development practitioner is: how does AI fit into what I am doing today? And I started with the mindset of, it’s a collaborative partner, just like me picking up the phone and calling Judy and asking her something. I still need to articulate what I need her to help me with. With any of the AI programs that I’m using today, it is a collaborative partner, but I have to be able to be very specific on what I need from them.

Judy Pennington

Yep. Yep. That’s so true. It’s, you know, the trust but verify. And if you don’t ask me the right question, it’s likely I might come back to you with something that is partially inaccurate, totally inaccurate, or whatever. And I think what is very engaging about some of these tools is that you speak to them in a language like you would speak to another person. So it sort of seems freaky that you’re talking to, theoretically, a machine and saying, could you help me with this? Or, I’m struggling to do this, or whatever, and then it comes back with something and it’s like, whoa. But it really is the trust but verify.

Susan Beck

100%.

Scott Rutherford

In our initial chat a few weeks ago, as we were talking about putting this episode together, Judy, you had used a comparison. And I’m paraphrasing you, so please correct me, but you said, using an AI tool is like working with a consultant.

Judy Pennington

Yes.

Scott Rutherford

Let’s explore that, because I think that helps make it a little bit more concrete and tangible for me. I understand what it’s like to work with a consultant. Theoretically, you’re working with someone who has broader perspective, broader expertise than you do and can give you sound advice based on that breadth of knowledge.

Judy Pennington

Right. Or not!

Scott Rutherford

Right.

Judy Pennington

You know, I have been on both sides in my career. I spent a number of years with American Express and other financial services institutions, and then I spent about the last 15 years of my career with consulting organizations; I was a managing director with Deloitte. So I got to see it from both sides. And oftentimes when I was inside a company and using consultants, I actually loved using them, because I always learned something. They did have a different perspective; they had seen maybe the issue we were dealing with twelve times before. However, I couldn’t turn my brain off. I couldn’t, say, abdicate to Scott because Scott is way smarter than me and so he’ll give me the right answers. I would work collaboratively with Scott and come up with the right answer for whatever it was that I was trying to solve. On the other end of the continuum, there are some people who micromanage a consultant. So Scott comes in with a good idea, and because it’s not my idea, I sort of say, well, nice, Scott, thank you very much, but I don’t know that we’re going to do that, and we might miss out on a really good direction. It occurred to me that it’s not dissimilar. Anytime you collaborate with someone, you have a partnership with them, right? Sort of in between abdication and micromanaging. You kind of come in the middle and you share ideas and you go back and forth. And that truly is kind of how it is with these AI products. And it’s weird because it’s a machine, but, you know, it is what it is.

Scott Rutherford

So we’re talking about the use of AI as a foil or collaborator, which I think is one well-established use case, and I work with it like that. Sometimes it can be a tool to aid brainstorming or to fill the consultant role. I also wanted to think about it from the perspective, in our world, of the consumer of learning content. There’s a lot of discussion about how do we, as a function, as the function of learning, provide relevant learning at the moment of need. And a lot of that is based around curation of content. So another way to apply a large language model would be to base it on a company’s internal body of knowledge, to help employees find what they need at the moment they need it.

David Nguyen

When these models are made, right, there is a part of the training process called unsupervised training. This is the part where you just feed it the entirety of the Internet: sentence after sentence after sentence, book after book after book. And what you’re telling it is: if this string of words is coming in, predict the next word. Then you show it the next word, and you change things around: okay, make it more like that. That’s unsupervised training, where you’re having it learn the structure of the language. Then there’s a second part, which is the supervised part of the training, where you give it things in your domain, things in your company, and you train it beyond just what’s called the foundation model, the model with just language, just unsupervised language. You put on top of that the supervised things of: in my space, in my company, when I give you this set of strings, you give me these sets of strings. Right? And that’s the part I think you’re talking about now, where to fully leverage this as a company, you know, Big Co X, you have to have that data, and that’s super important. For a long time, most companies either threw that data away because they didn’t know how to use it, or they stuffed it somewhere and now they’re like, oh, maybe it’s good, right? So now they have to go and clean it up to see if there’s structure there.
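
A rough sketch of what that supervised, company-specific layer can look like in practice: prompt/completion pairs collected from internal knowledge. JSONL pairs like these are a common shape for fine-tuning data, but the exact schema varies by provider, and every example below is invented.

```python
import json

# Hypothetical company-specific training pairs, layered on top of a
# general-purpose foundation model during supervised fine-tuning.
examples = [
    {"prompt": "How do I file a travel expense?",
     "completion": "Submit the expense form in the travel portal within 30 days."},
    {"prompt": "Who approves purchase orders over $10k?",
     "completion": "Your department VP, via the procurement workflow."},
]

with open("company_finetune.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```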

Judy Pennington

Well, you know, David, it’s also what we talked about earlier, about proprietary and non-proprietary. This happened in the olden days of knowledge management, when knowledge management sort of grew up as a tool that organizations used. The trick was always, how do you get the right information in there for others to learn from? And, you know, the old garbage in, garbage out: if you don’t have the right people giving you, this is the direction our company is going in, or these are the technical tools that we want to confirm use of, whatever it is that you’re trying to do; if you have someone in your organization who just loves to see themselves in voice or print or thought or ideas, which may or may not be in alignment with what you want, you have a problem, right? And so it’s similar to what happens now. It’s just that you have an easier way of accessing more information, and it’s a fabulous tool for that, I think. And it gives it to you instantly, right? You don’t have to say, I have to call David Nguyen or I have to email David Nguyen to get an answer to this. You just enter a prompt and say, I need to learn about generative AI; give me all the brilliance that is in David’s mind.

David Nguyen

Well, this is where things get really interesting for me. If you’ve ever mentored anyone, part of what you’re doing is you’re modeling for them how structures work, how things work. So you tell them, just pattern match me at the beginning, and we can work through how it all works. What’s interesting is at the beginning, at the novice level, when they’re pattern matching, they’ll tell you, I’m totally pattern matching. But sometimes what they’re pattern matching is just weirdness, and you say, what? No, that part’s unimportant. But it’s only unimportant because, as an expert, you see it: yes, those parts are unimportant. When you train a new person, or when you train an AI model, unless you tell it, it doesn’t know which part of the structure is unimportant and which part is important. Right. And the challenge is, because it is a black box, it’s difficult to figure out when this AI model is ready, because has it internalized the right parts, and not pattern-matched some oddness from over here somewhere? And sometimes that oddness turns out to be, oh, that’s brilliant, because we didn’t think of it as an important part. But it’s that black box, I think, that is super interesting.

Scott Rutherford

It can be very powerful. I do like the notion, though, Judy; you alluded to knowledge management and sort of the disciplines that have come before, that led us to this point. I’m of the mindset that there’s a continuum that we can learn from there. Because, you know, with the whole notion of knowledge management, or even more recently, in the Internet of 15 years ago, we had workplace training struggles where companies were struggling with employees who were going onto YouTube and finding an answer to the question that was the wrong answer.

Susan Beck

Yeah.

Scott Rutherford

Because it didn’t rely on the internal volume of knowledge, on the internal curated content. So I think that the challenge then, with GPT or a generative tool that’s recommending answers, is, again, excluding the wrong direction from being institutionalized.

Susan Beck

Something as simple as a chatbot within a system, on-demand answers of how do I complete a particular form, whatever it may be. In working with some of our clients, often the question is, well, where do we get the information for that? How did you train people on it before? I’m hoping that information exists somewhere. Yeah, well, it’s only in XYZ. Let’s formalize, then, how we’re capturing that information so it can be used as more than just a checkbox of, oh, they’ve already been trained on it. Let’s be able to feed these types of systems so that they can learn those internal processes. And I think, David, if I’m understanding correctly, that would be the unsupervised part, correct? If we’re just feeding data. And then the supervised, I think, would be: okay, in our industry, we don’t call them a representative, we call them a specialist. So if they type in representative, they actually mean specialist, so pull up anything specialist. But something as simple as a chatbot, which so many people use daily on Internet sites or wherever it may be. Companies are now realizing, wait a second, this is a very relatable piece of AI software that we can use in our everyday business. It’s going to be efficient and save time over picking up the phone and calling David. He’s not always available.
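
A minimal sketch of the representative-versus-specialist idea: normalize company vocabulary before matching a question against an internal knowledge base. Every term and answer below is hypothetical, and a production chatbot would use a trained model or embeddings rather than simple string matching.

```python
# Company-specific vocabulary: map what people type to the official term.
SYNONYMS = {"representative": "specialist", "rep": "specialist"}

# A stand-in for the curated internal knowledge base.
KNOWLEDGE_BASE = {
    "specialist": "Specialists handle tier-2 client questions; see the Client Ops guide.",
    "expense": "Submit the expense form in the travel portal within 30 days.",
}

def answer(question: str) -> str:
    # Normalize wording, then look for a known topic.
    words = [SYNONYMS.get(w.strip("?.,"), w.strip("?.,"))
             for w in question.lower().split()]
    normalized = " ".join(words)
    for term, reply in KNOWLEDGE_BASE.items():
        if term in normalized:
            return reply
    return "No match found; route to a human."

print(answer("Which representative handles my account?"))
```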

Scott Rutherford

What’s interesting, and I had come across a model, this is new to me, it was published by the Learning and Performance Institute out of the UK, and they’ve done some work to try to assess where organizations are at, their level of maturity with the use of AI. I shared this with the panelists before the call, briefly, but I’ll post it on the episode page of the website, because I think it is a useful framework. They put out a six- or seven-point scale going from haven’t started, don’t know anything about AI, to fully operationalized, optimized usage. And as part of that release, they said they’re not seeing companies beyond the first couple of steps on that continuum, which is essentially: we know it exists, we know there’s potential, but it’s being used generally in sort of ad hoc, small, experimental methods and ways. It hasn’t become systemic yet.

Judy Pennington

So those models are so helpful, and I like the one that you sent through, Scott. They’re very similar to the software development lifecycle model out of Carnegie Mellon, and we have a data analytics model out of the University of Minnesota that’s very similar. And I find those so incredibly helpful, because part of this is, what is the organization ready for? You can’t just go to level six, let’s say, of AI overnight if your organization is freaked out by it, if your senior leaders are freaked out and don’t think it’s going to be helpful, or if you’re not set up with the right information available, such that a bad actor could get in there and show bad stuff. So you need to sort of build toward it. It doesn’t necessarily mean that it’s going to take six years to get there, but you really do have to understand where you are.

Scott Rutherford

And that’s an interesting point, too, because I’d like to connect that to the notion of the strategic value of the learning operation. Of course, we’re coming into this conversation with an interest in the learning and development organization, but one of the challenges we’ve had in our vertical, perennially, is how do we integrate ourselves and connect ourselves to organizational strategy? How do we become a core function of the business? L&D is not the only function of the business looking at AI tools. So is this an opportunity, then, if we’re looking at how to approach AI tools and systems systemically within learning: is this a moment where we should be talking to the COO, CFO, and other members of the C-suite and their respective organizations to say, should we be looking at how the enterprise uses AI?

Judy Pennington

Absolutely, yes.

David Nguyen

Can I “yes, and” that?

Scott Rutherford

Yeah, yeah, go ahead, please do.

Judy Pennington

And then I’ll add.

David Nguyen

So I like where all this is going. And the “and” part for me is, I think I can make it muddier. I can make it worse. It gets muddier because timing matters, right? In business, timing matters. Like, if you’re Uber before cell phones became a thing, you’re dead at the starting gate. What’s really difficult right now at the C-suite level, and at any level with AI, is that seemingly every week, every month, there’s a new level. So the very challenging part for a lot of companies, I can see, is: if we use our understanding of AI as it is right now and apply that to our internal workings, well, next month we’re going to have to reset, because things have shifted. And that’s a super difficult challenge. I suspect there are people who want to go all in, but if they go all in now, in a month or two they have to retool. And that’s another challenge that makes the whole process really muddy, beyond, you know, what level are we at? Are we at the beginner level or the expert level, when the tool itself is shifting underneath us?

Judy Pennington

Yeah, I would actually get back to what you were talking about a minute ago, Scott. I think getting all of the C-suite involved matters, at least in understanding where their heads are at. There’s a lot of information out there right now about these tools and the goodness of them and the scariness of them, the unethical behavior that could happen, et cetera, et cetera. So I would suggest treating this like any other big transformational activity that you want to do, and I would include L&D in this, and HR, and make sure that the viewpoints are known. Security is a huge one, because you’re going to have somebody saying it isn’t secure, you can’t use this tool, and so on and so forth. But finding out the lay of the land, talking to everybody and asking the questions, what have you seen? What do you believe about this? That’s part of the activity. I think this is still a term that’s used out there: the technical fluency of your senior team. You want to kind of get on the same level of understanding about what it is, what’s good about it, what’s bad about it, and then you can start understanding.

It isn’t going to go away. That’s the thing. We have had this conversation amongst the university people here: do we use it? Do we allow students to use it? Is it a tool that should be out there? Well, the truth is, it is not going away. It’s going to be available. And what we need to do is make sure our students learn how to use it ethically and understand the top side of it, the bottom side of it, and what they need to do to use it appropriately. And the same thing happens for every company that you’re going to deal with. So I think the L&D people need to become very flexible and very strategic and very change-able, whether the tool today is AI or, six months from now, we’ll have a podcast and some new thing is the new shiny object out there. How do you understand, is it right to use? Is it not right to use? But I think you for sure need to start with the C-suite and at least understand where the good stuff is. And like all good change management strategy, find who already thinks it’s cool, work with them to demonstrate how it can be so helpful, and then communicate it and celebrate it to the rest of the organization. And guess what? Everybody’s going to go, I want to do what Susan’s doing, and then they’re going to be asking for it. But it can’t just be a check-the-box that we’re putting out a training program on AI and think it’s done. That’s not going to work properly.

Susan Beck

Develop those use cases, celebrate the successful ones, and learn from the ones that did not work out. But all in all, they’re all use cases that you present, almost like a grassroots effort. Learning and development has been using this for a while, if you think about it, with things like WellSaid Labs for AI voiceover. We’ve been using it for several years now, before AI became this threatening, looming, it’s-going-to-take-my-job thing. Well, no, we’re just using it because it’s quick and easy for big jobs; we’re still using voice actors. But my point is that there has been an effort, I think, going on within learning and development for quite some time around AI, so that we can come to the C-suite and say, listen, we’ve actually been using it for quite some time, and this is how, and this is where we see it going. Kind of working backwards from knocking on the door of the C-suite right at the beginning.

Scott Rutherford

Yeah, that’s an answer to the ‘how should we use it’ question across the enterprise. But what we haven’t talked about here explicitly is the need for learning to lead the organization and help people: how do we use it? How can we use it to train people on the tool itself, maybe using the tool, which would be the greatest line of looping code that I’ve written in a podcast in a long time? But we have to train people at the same time as we’re using the tool, don’t we?

David Nguyen

Yes. I see two levels of how AI as a tool can be used. There’s some low-hanging fruit, and then there’s some higher-hanging fruit. And I think one of the reasons we often talk past each other is this notion of the fear of missing out. So C-level people are just like, oh, it’s the next new shiny thing, we just need to add it. It’s like blockchain: oh, we’ve got to add blockchain because it’s a thing. And I think there needs to be a more intentional, more deliberate understanding of at which level we want to insert this. So the low-hanging fruit, I think, is things like the everyday business practices, sending email, things that we do generally, where if we do the same thing every day, you realize there’s a pattern here, and we can probably apply AI to that pattern. The higher-hanging fruit is: as a business, we have data that is proprietary, that is part of something that we can get to, that no one else can get to. Then we use that internal knowledge that is actually in all this data to solve our business problems. But there are two levels, and one of us might be talking down here while the other one of us is talking up here. It’s the same as when you’re chatting with an AI bot: when we use pronouns, what does each pronoun point to? I think it’s important for us to see there are things that we can do now that are good enough, and not confuse that with the heavier, much harder things, like, how do we solve the actual problems of our industry?

Scott Rutherford

And how do you identify the problems that your competitors haven’t yet identified using the tool?

Judy Pennington

Well, and it goes back to the biggest question that we always have to ask, what problem are we trying to solve? And if we don’t identify that appropriately, then we kind of scatter plot all over the place. So being very clear about what is it that we’re trying to learn is really important.

David Nguyen

And I think it’s important to bring this to the C-suite, to give them that distinction: here are all the different things that we’re actually talking about. Where do we want to spend our initial foray, our initial resources? Trying to address the hard problem right away might be a play, might not be. Or do we want to do the low-hanging fruit first?

Judy Pennington

Sure.

David Nguyen

Do we teach our team what AI is, what structures are, what patterns are, to get them to start seeing the problems that they see every day in their context? And the problem finders, I think, are going to be the big secret weapon in AI, because you give a problem to the system and it’ll find a structure and find a solution, but you have to identify: this is the business problem that we need to solve.

Judy Pennington

You know, one of the things this reminds me of, and I think it fits: one of the courses in our program is digital transformation. And the definition of digital transformation is kind of whatever any company wants to make it; it’s really hard to have anybody say, this is the answer. So, you know, it seems to me similar to the struggle that we’re having in how to describe AI. But one of the concepts that we talk about there is that there’s an act of digital transformation where you take how you do business now and you completely transform it into a more technical, sort of digital way of doing it. Think Domino’s Pizza and their new way of ordering, totally online, and sometimes even by way of a drone you can get a pizza. That’s more transformation. And then the second one is digital optimization, which is not quite as dramatic as transformation. And it might be more palatable to an organization to say, we’re not going to change everything right at this moment with AI, but we’re going to optimize a few key things. And that’s where I think L&D can really be helpful: in understanding what are those few key things, strategically, that we want to get better at, whether it’s customer service or whatever it might be, and you want to optimize it. Now, it’s kind of a sneaky, backdoor way of saying you’re going to end up at transformation, but it’s a more palatable term, and that might be what your company is ready for. And I think L&D people who are strategic thinkers and, again, excited about the change that goes on are really, really important.

Scott Rutherford

The term transformation in and of itself can be daunting. People get afraid of it. It’s big and it’s sort of hairy and sounds expensive. But if you break it down and say, well, actually what we’re doing is we’re helping you, as a functional contributor or a team or a department, do your job more effectively, and stop there, that’s what we’re doing. And we’re doing that a bunch of times, and the sum total of that is the transformation. But we don’t start, I don’t think, at that umbrella level necessarily.

David Nguyen

Well, can I make things muddier?

Scott Rutherford

Go ahead.

Judy Pennington

Yeah, that’s what you do so well.

David Nguyen

So in the innovation space, you guys have probably heard the term incremental innovation versus disruptive innovation, right? One is the step-by-step, little-step changes; that’s incremental. And the disruptive one tends to be, you just change the entire game. And I think this is where one ought to share with one’s team: what is it that we want to do? Do we want to do the incremental thing, which in some contexts makes more sense, or do we want to do the disruptive thing and change the game? And I think what’s really interesting about AI is there is a model called the task-artifact cycle; I’ll quickly explain it. The task is the thing that you’re doing, the problem you’re trying to solve. And humans, once we have a problem or a task that we need to do, we create an artifact, a tool, to solve that problem. But what’s interesting is, once we have this tool, that problem is no longer the same problem. It shifts to a different problem, because we’ve already solved this one; it creates a different problem. So then we create a new tool, a new artifact, which then creates a new task, and that cycle just keeps on going and going. And I think what’s interesting about AI is that it might accelerate that cycle. Now we have to think about what is the actual task that we want to do. Do we want to do the incremental innovation stuff of making that task faster, better, somehow more optimized? Or do we want to go upstream and realize that this is something that we do, but the thing that we’re actually doing is up here? So forget this part; go upstream to the source of that task, the source of that problem, and solve that. Once we solve that, everything else disappears, because the low-hanging fruit, the chain down here, is just an artifact of that big problem. Have I made it muddy enough?

Scott Rutherford

I think so, but it also reminds me of one of the oldest management adages, and this goes back; I’m trying to remember who to attribute it to, and I think it may be George Patton, so I’ll have to correct myself afterwards. But the long and the short of it is, you know, don’t tell someone how to do the job. Tell them what to do, and let them surprise you with their ingenuity.

Judy Pennington

Yeah.

Susan Beck

Uh huh. Yeah.

Judy Pennington

You know, since we’re looking at little things like that, one of the things that sort of crystallized for me probably four or five years ago is, you know, we talk about what are the skills and the competencies that people need to be successful in this time of such rapid change, of which AI is just one of those examples that swirls around us all the time. I believe one of the skills or the competencies, name your favorite title, is something called intellectual humility. Over the years, we have gotten models in our heads, and things have worked for us in a certain way, and that’s what we believe to be good and true. And because of all the disruptive change that’s going on around us, whether it’s AI or other technologies or just best practices that we’re learning from other organizations, we might not want to let go of what we think is right. And the best leaders, the best practitioners, if you think about it, are those that can step back and say, huh, you know, guess what? I used to think it was A, and now I’m really understanding it’s C, and I need to understand what that is. So you need to have that intellectual humility to be able to step back and say, there is a better mousetrap out there, and I need to start using it. I work with technology people all the time; that’s been my whole career. I sort of work at the intersection of people and technology. And all too often, we have gotten rewarded for what we know that nobody else knows. Right? So we’re the hero in the hallway, and we’re smarter than everybody else because we know something you don’t know. And that’s not the culture that many of us want. We want to have that: no, let’s step back and be open to new ways of doing things. And I think AI is going to be one of those tools that is exponentially going to help us all get way better.

Scott Rutherford

Let me lean into that with the idea of a closing thought, which is, I know we don’t have crystal balls and I’ll say, if I had a crystal ball, I probably would have retired by now.

Judy Pennington

That’s a very small one.

Susan Beck

Yeah, but we have AI!

Scott Rutherford

We do have AI. But in our room here, if we look forward, say, three to five years, with the evolution that we’ve seen so far, what are you most excited to see come from this sort of emergent world of AI? And Susan, I’m going to throw it to you first. What are you excited to see? What do you hope to see?

Susan Beck

Well, first and foremost, the adoption: more willingness to adopt it in our field and get beyond the idea that AI is going to take our jobs as instructional designers and developers. Many of the systems out there now are saying, oh, we can develop a course; just tell us what the subject is and who your audience is, and we’ll develop the course for you. There will never be a time, at least in my lifetime, where the human element doesn’t need to come in. So we then become more of the higher-level consultants on the AI output. So I am full-force embracing it, and I look for adoption, number one. I think the next biggest thing is really honing some of these tools from an L&D perspective. In other words, stop using them as the next Google; it’s not a search tool. Use it more as that collaborative partner, and I mentioned that earlier. That is the other thing I’m hoping to drive my colleagues and my network toward: really thinking of it like calling up Susan. Instead, use some of the AI tools out there to get some ideas that Susan probably doesn’t have. And I know I’m speaking of myself in the third person.

Scott Rutherford

Thanks, that’s great, Susan. David, why don’t you go next?

David Nguyen

So this is my idealistic self talking, and my idealistic self has been, what’s the word, wrong, so often. But I like this self. I think what’s interesting about AI is it’s going to help us solve problems faster. And my hope, then, is that we stop looking at the current problem so we can go upstream and figure out what’s the actual problem, what’s the actual problem that we’re trying to solve. In education, I see a lot of fighting now over: if AI can write essays for students, what does that mean about education? And my question to that group is, what’s the actual problem that you’re trying to solve by having students write essays? The essay is not the thing that you’re trying to teach them; the knowledge is not how to write an essay. An essay is just a tool. The knowledge and understanding and learning, that ought to be the thing that we’re trying to teach. And if there’s now a tool to write essays, can we go upstream and figure out what’s the actual problem here that we’re trying to solve? I hope that we can go upstream more.

Scott Rutherford

Yeah, what do we really want to accomplish?

David Nguyen

I hope that we can go upstream more.

Scott Rutherford

Judy, what do you think?

Judy Pennington

I’m going to go way upstream. I agree with Susan and David, what you both said, but I have thought this from the very beginning and I want to believe it to the core of my being. I think AI in some form is going to help cure cancer. And I don’t mean that just sort of on the side, but I think there’s so much power in being able to get data from so many different sources, synthesize it, and maybe not come up with the answer, but give our research scientists, you know, three or four ideas that then they can go explore in further depth. I don’t think any of this will ever not need human touch. I don’t believe in that at all. But I think the power of being able to aggregate a whole bunch of stuff is going to be amazing. And that’s what I believe and that’s what I want to believe. And you’re not going to talk me out of it.

Scott Rutherford

It’s exciting and we will see how it develops and I’m excited to be a part of it myself. So Judy Pennington, David Nguyen and Susan Beck, thank you each very much, I enjoyed talking to you. Thanks very much.

Judy Pennington

Thank you.

David Nguyen

Thank you.

Susan Beck

Thank you.

Scott Rutherford

This has been the AXIOM Insights Learning and Development Podcast. This podcast is a production of AXIOM Learning Solutions. AXIOM is a learning and development services firm with a network of learning professionals in the U.S. and worldwide, supporting L&D teams with learning staff augmentation and project support for instructional design, content management, content creation and more, including training delivery and facilitation, both in person and virtually. To learn more about how AXIOM can help you and your team achieve your learning goals, visit AXIOMlearningsolutions.com. And thanks again for listening to the AXIOM Insights podcast.
