AXIOM Insights Podcast

The award-winning AXIOM Insights Learning & Development Podcast series highlights conversations with experts about supporting and improving organizational performance through learning.

Artificial Intelligence in Learning and Development

In this bonus episode of the podcast, we are resharing a recording from a live stream discussion. We are joined by Dr. Guido Minaya (CEO and Chief Learning Officer at Minaya Learning Global Solutions) and Susan Nabonne Beck (Vice President of Learning Design and Strategy at AXIOM Learning Solutions).

The discussion covers the current experience of using AI tools in L&D, and the pros and cons of how AI tools can be used to enhance the learner experience and the development of learning content. The discussion also explores the opportunity created by enterprise adoption of, and experimentation with, AI, as the L&D function becomes a resource for AI skills development and for helping the organization manage AI-related risks.

Listen to the audio version of this episode on your favorite podcast app, Spotify, Apple Podcasts, or by clicking the Play button below.

Listen to This Podcast Episode

Related Resources

Episode Transcript

Scott Rutherford
Hi, I'm Scott Rutherford and this is a bonus episode of the AXIOM Insights Learning and Development podcast. We've focused on AI in several episodes of the podcast series, and we'll have links to related episodes on the page for this one at axiomlearningsolutions.com/podcast.

This conversation was originally presented as a live stream. In it, you'll hear about how L&D is using, and can use, generative AI, and how L&D leaders can navigate the use of AI in the learning function, including the pros and cons of using AI in the creation of learning content. The discussion goes further to explore the larger issue of how L&D can support the adoption of AI throughout the enterprise. And so with that, I'm going to hand it over to the discussion with my live stream guests: Dr. Guido Minaya is the CEO and Chief Learning Officer of Minaya Learning Global Solutions, and Susan Nabonne Beck is the Vice President of Learning Design and Strategy at AXIOM Learning Solutions.

So let's begin with looking at some successes within the L&D team. What do you think is working? And I'm going to start that one with Susan.

Susan Nabonne Beck
Yeah, so some things that are working today may completely go unnoticed. For example, if your company has an LXP, you're currently using AI. LXPs are built on AI to give the learner what they need or want as far as learning goes. So you're using it today and you may not even notice. Things like Grammarly: if you're using Grammarly, that's AI-based. So you've been using it all along and you didn't notice. Where it's going, or where it is right now from a baby-steps perspective, is that we are now learning that we can partner with AI as a collaborative coworker, the one that used to be next to us, or a phone call away, or reachable via Teams or Slack. Now we can use AI to ask questions, to refine our work, or to get a starting place for creating content outlines, or even storyboards if you're doing e-learning, or instructor-led content ideas for interactivities. It is that collaborative partner. What would I ask that person next to me if I had a chance? We can now use AI for that.

Scott Rutherford
I think it's maybe a good moment to bring in a clip of a podcast that we had done recently with our instructor and technologist John Kidd, talking about using AI as a bit of a brainstorming foil to avoid starting from a blank sheet of paper. Why don't we watch this, and then we can respond and build off of what John says.

John Kidd
Whether I'm writing code, whether I'm generating content, whether I'm building outlines, whether I'm building a learning path because the tools are really good for people like me that don't do real well with a blank sheet of paper, if that makes sense. Give me a starting point. And as long as I understand what I'm trying to accomplish, man, these tools are fantastic. I love it because I can have it give me, for instance, if I was trying to build a learning path of taking a group and a technology from point A to point Z, I'll have it build the path for me, and then I'll go in and look at it and say, no, that's not correct. I wouldn't do that. But it gives me something to edit rather than building something from scratch. The same thing is true with course outlines.

You know, build me an outline for, I don't know, pick your technology. Well, the outline that it builds oftentimes is an outline that's going to take months to accomplish. So obviously I'm going to go back in and say, no, that doesn't make sense, or yeah, that's a good order, and then do the editing on top of it. So I guess what I'm kind of circling the field with, and I'll land the plane on it, is that you and I still need to be the expert on it. I cannot take what's given to me as gospel, so to speak. It's going to be a tool for getting me over that initial creative hump, if you will, for moving me down the path to the goal that I'm trying to accomplish.

Scott Rutherford
With that as context, Guido, I'd love to bring you into the conversation, and I'm going to build a little bit on what John was saying in that clip, which is to say there's a utility to these tools to perhaps elevate the knowledge of the learning professional: to say, well, I'm not going to invest the five hours of work it's going to cost me developing a course outline when I can put that into AI, generate it more quickly, and then edit and tweak based on my expertise.

Dr. Guido Minaya
Right, right. I think Susan covered great points, as did John. I think the one interesting element where we're still looking for more definition is intellectual property. Whose property is that? Generally, where it stands as of today: if we develop the outline, if we say to, let's say, ChatGPT or whatever tool we're using, here's my framework, now I want you to enhance it, now I need this additional way of looking at it for this audience, then we're introducing that initial outline. It could still be a rough outline, but that's a first step where it's our intellectual property going in and the tool is assisting us. That's one caveat to consider. But absolutely, this 5x's or 10x's our capabilities and allows us to refine, refine, refine and get to that end product much faster and in a more effective way. It definitely sets up our teams to be much more productive and thoughtful, and to refine the content that we're developing for our clients.

Susan Nabonne Beck
Yeah, and I think as a new instructional designer or developer, it also gives you a sense of confidence, right? If I can put my outline into ChatGPT, for example, or Copilot, something we use day to day, and it gives me feedback of, yeah, this looks great, maybe add this or that, I have a vote of confidence from this all-powerful AI. Right. But I completely agree with you. Like John said, it is a matter of getting beyond the blank page.

Dr. Guido Minaya
Right. Just one added element: are we talking about the free version of some of these tools versus the paid version? In the paid version, you get some phenomenal capabilities that further your ability to rely on those tools. As an example, you could upload different kinds of outlines. First off, all of that information is protected just for you, or for your client or your company. And then as you're uploading outlines of past courses that you've developed, it's learning from that. As you ask it questions, it's leveraging that framework and documentation to answer the next question you ask it. So there are some pretty powerful tools, depending on which plan of the tool you're using.

Scott Rutherford
And I guess this comes back to what I would posit is a mischaracterization of the potential of AI, which is to say there's perhaps a misconception that all we should be doing with AI is striving for the faster and cheaper.

And I think that misses the point: we can strive for the better and really upend the whole model. You know, the old adage was that you can have it fast, cheap, or good, but you have to pick two of the three. I don't know, but I suspect that AI disrupts that triangle a little bit.

Dr. Guido Minaya
Yes, yes, definitely. I mean, it's tempting to say we'll be able to do this faster and cheaper. Really, it's the personalization, the alignment, the higher level of potential output you can have that specifically speaks to that target student audience. So it's elevating us. Are we going to be more productive? Yes, but we'll also be able to spend our time really seeing what is of highest value for that learner audience we're developing for or delivering to. And that's really what drives value and performance. At the end of the day, our stakeholders, our leadership, are looking for that next level of performance from our workforce. Therefore, this is spot on to what we as a company, and they as leaders, need to be competitive in the marketplace.

Susan Nabonne Beck
Sure. I think it adds a level of tailoring as well that may have taken more time previously. Now we have the ability to quickly tailor, or ask the questions necessary to tailor, and get ideas of how to tailor, where in the past we could have been sitting for hours trying to think of those tailoring points. So I think it does give us a level of efficiency that we didn't have before.

Scott Rutherford
One thing I wanted to interject here, and I guess this is a little bit of an aside, and I don't want to belabor it for folks who understand how generative AI works, but it does tie back into a point that you were making earlier about what's the source of knowledge? What's the intellectual property behind this?

Now, I'm probably going to mischaracterize this by using a broad brush, but when you put a query into ChatGPT or any of the other tools, the words you type in are converted to numbers; they're tokenized, as they call it. Then those are compared against the pattern the tool has developed behind the scenes. This leads to some very interesting glitches within the systems. The famous one being shared around right now is: how many Rs are there in the word strawberry? And the tools can't get it right. They'll come back and say, well, strawberry has two Rs. Actually, it has three. But because of the tokenization, and the way the data feeds into and out of the system, it has trouble handling some of those things, because all it's really doing is pattern recognition and pattern extension. Again, I'm going to go to a clip here. Dr. David Nguyen was on our podcast a little while ago; he's a technologist with the University of Minnesota. So let's hear his perspective here.
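An aside for readers: the tokenization point can be sketched in a few lines of Python. The vocabulary below is invented purely for illustration; real tokenizers (such as GPT's byte-pair encoding) learn vocabularies of tens of thousands of subword pieces, but the principle is the same: the model receives token IDs for chunks, never individual letters, which is why letter-counting questions trip it up.

```python
# Toy subword tokenizer: a model never sees individual letters,
# only IDs for chunks like "str" + "aw" + "berry".
# (These merges are invented for illustration, not GPT's real vocabulary.)
VOCAB = {"str": 101, "aw": 102, "berry": 103}

def toy_tokenize(word):
    """Greedily split a word into known subword chunks, longest first."""
    tokens, i = [], 0
    while i < len(word):
        for piece in sorted(VOCAB, key=len, reverse=True):
            if word.startswith(piece, i):
                tokens.append(piece)
                i += len(piece)
                break
        else:
            raise ValueError(f"no token covers position {i}")
    return tokens

tokens = toy_tokenize("strawberry")
print(tokens)                      # ['str', 'aw', 'berry']
print([VOCAB[t] for t in tokens])  # [101, 102, 103] -- what the model "sees"
print("strawberry".count("r"))     # 3 -- the count a letter-blind model misses
```

From the model's side, "strawberry" is just the ID sequence, so "how many Rs?" has to be answered from learned patterns rather than by inspecting letters.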

Dr. David Nguyen
Yeah, so large language models are named large language models because they're taking effectively the entirety of the Internet's language, all the words, all the books, everything, and trying to figure out a model of how it all works. So it's looking at structure. An interesting story about this: I'm an immigrant to this country, and when I was going through school and learning grammar, my teachers used to tell me, just say it out loud, you can hear it, right? I can't hear these things. So, do we say the big red dog or the red big dog?

There's a rule, there's structure there. But as a non native speaker, I don't have enough examples to figure out is it red big dog, big red dog? When the teacher says, just say it out loud, you can hear it. We can hear it because when we have enough examples of these things, it makes sense. There's a structure there.

What's really interesting about ChatGPT and what's happening there is all they've done is taking all the corpus of the Internet, like all text, and just fed it into this model to try to figure out what's the structure here. What's really mind blowing is there's structure in language.

There's so much structure such that when you ask it a question, it's just filling out the rest of that structure and it fills out this seemingly very articulate, intelligent response. There's a school that says, yep, it's intelligent. And there's the other school of thought that says, no, it's just outputting the structure and language that we fed it through all that training.
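Dr. Nguyen's "filling out the structure" point can be sketched with a toy bigram model: count which word tends to follow which, then predict the most frequent follower. The three-sentence corpus below is made up for illustration; a real LLM does this over trillions of tokens with a neural network rather than raw counts, but the idea of extending learned structure is the same.

```python
from collections import Counter, defaultdict

# Tiny bigram "language model": learn which word tends to follow which,
# then "fill out the structure" by predicting the most frequent follower.
corpus = (
    "the big red dog ran . "
    "the big red dog slept . "
    "the small red dog ran ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most common word observed after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("big"))   # red  -- "big red dog", never "red big dog"
print(predict_next("red"))   # dog
print(predict_next("dog"))   # ran  (seen in 2 of the 3 examples)
```

Note that the model never learned a grammar rule for adjective order; with enough examples, "big red" simply becomes the structure it extends, which mirrors the "say it out loud, you can hear it" intuition.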

Scott Rutherford
To build on that commentary and bring this back into the realm of the learning and development team: let's say, for example, that you're training a closed generative AI system on your own company's body of knowledge. I'm going to give you first the chance to explore how we operationalize this. What the tool is going to be able to do is index your body of knowledge, your best practices, perhaps all of your learning materials, and then suggest back to a user: okay, here's what we think might naturally come next. Is that coaching? Is that nudging? How do you see that being actioned?

Dr. Guido Minaya
And that's before even getting into digital agents, or agentics, right? We'll take ChatGPT as an example. Really, as was explained, it has learned from billions and billions of documents, and when you ask a very specific question, it's predicting, based on those billions of documents, what an answer should be. It may be incorrect in that it leveraged these reference points versus those reference points. So what you're able to do is set up your own specialist, in essence, with the standard ChatGPT approach, and say: you are a seasoned instructional designer who is most familiar with the best models, the ADDIE model, all the best models available. And you can provide more context: you also know multimedia design, microlearning, evaluation.

So it's taking all of those billions of documents and making the corpus, the body of knowledge, more narrow, in the sense of: okay, I'm in this lane, my answer should be based on this lane. And that's just the easy starting point. Then you're actually able to upload documents and say, okay, here's how we specifically do these learning development processes in our company. You're able to upload examples of courses. All of that then becomes that more narrow body of knowledge, and it's able to give you a much better answer. Is it exactly 100% of what you may want? Probably not. But you are that much closer as you do that kind of mini-training, even before you get into digital agents.
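The "set up your own specialist" idea above can be sketched using the widely used chat-message format (the `system`/`user` role names follow the OpenAI-style convention). The persona text and the pasted outline are illustrative placeholders, and the actual API call is deliberately omitted; this only shows how the persona plus uploaded context narrows the model's lane before the real question is asked.

```python
# Sketch of "setting up your own specialist" via a system prompt,
# using the OpenAI-style chat message format. The persona and the
# outline snippet are illustrative placeholders; the network call
# itself (e.g. a chat-completions request) is omitted.
persona = (
    "You are a seasoned instructional designer, fluent in ADDIE, "
    "multimedia design, microlearning, and evaluation."
)

past_outline = "Module 1: Orientation\nModule 2: Core skills\nModule 3: Practice"

messages = [
    {"role": "system", "content": persona},
    # Supplying prior work narrows the model's "lane" before the question:
    {"role": "user", "content": f"Here is how we structure our courses:\n{past_outline}"},
    {"role": "user", "content": "Draft an outline for a course on AI literacy."},
]

print(messages[0]["role"])   # system
print(len(messages))         # 3
```

Every answer the model gives in this conversation is conditioned on the persona and the example outline, which is the "more narrow body of knowledge" effect described above, done by hand instead of through a document-upload feature.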

Susan Nabonne Beck
You brought up a great point. A word of caution, if you will: if you are new to prompt engineering, or even to using any of the AI tools where you can ask questions, please do not just copy and paste. Do yourself a favor and actually look through what was produced, even if you feel like your prompts were perfect and you have given them all the information possible. I call them a "them," I don't know why. I also say please and thank you, just in case, you know, they take over the world. I want them to like me.

But anyway, read what was given back. Do not simply cut and paste and send it off to your manager or client or whomever. Because, to summarize pretty much everything we've talked about for the last few minutes, there is still room for error in AI, and more importantly, the specific thing you're looking for, it just may not get. So yeah, a word of caution there. Don't just copy it and send it on over.

Scott Rutherford
Well, it seems like if we have any continuing anxiety about AI taking our jobs, the one thing that we should not do is take the human element out of the interaction.

Dr. Guido Minaya
No, no, exactly. And that's why the term AI and HI, human intelligence, is so critical. Because AI doesn't have empathy. With AI, the creativity is not there. It doesn't have the ability to collaborate with others before arriving at an answer, the critical thinking component. So now those super soft skills are even more critical in this new dimension we've entered. And that's why, from a learning leader perspective, the question is: how are you going to further develop those skills within your workforce? Because they become even more critical as we move down the path of digital workers, digital agents participating in our workforce.

Scott Rutherford
Yeah. And that's probably a good moment to pivot and say: besides what we in the learning and development teams can do with AI, it's really about how we step forward and support the organizational change that every organization is facing at some level. How do we leverage these AI tools, and how do we create competitive advantage for the enterprise based on them?

Guido, I'm going to give you a first stab at this. We mentioned prompt engineering as a discipline for the L&D team; teaching prompt engineering to the enterprise is maybe not more important, but equally important, for sure.

Dr. Guido Minaya
Absolutely. I'd say that's a key skill set, and there are so many ways to direct our workforce to improve their prompt engineering skills, as well as developing a course on that to assist our unique situation. But really, it's also AI literacy. As we've all seen, and maybe how we each initially faced AI, you're initially a little bit intimidated: what is this incredible new world? Should I even be dabbling in it? Really, it's helping our workforce see that these are phenomenal tools, and the sooner you're able to understand how to leverage them, when to leverage them, and why to leverage them, the better. You don't have to start big with robotics or computer vision; start with the basics, and you will continue to learn and expand your capability set. But yeah, I would say prompt engineering and AI literacy, with ethics included in that mix as well.

Scott Rutherford
Susan, I want to get your thoughts. I know this is something we've talked about a lot offline, but it really is a strategic alignment question at some level: how do you put together AI skills training for the enterprise that meets the organization's goals and isn't just throwing stuff out there?

Susan Nabonne Beck
Yeah, absolutely. I think we go back to the good old ADDIE model. We need to analyze; we need to scope out what the organization's actual goals are and then do a skill-gap analysis: where is the organization's workforce on AI? What have they been using thus far? An industry like banking has probably been using AI from an agentic perspective, with agents or bots, for quite some time, so they may be more familiar with AI and trust it a bit more, where other industries may be completely at the ground floor of it and hesitant to learn it. So we do have to do that sort of analysis to get a starting point for the organization, and then it's baby steps. It can't go from 0 to 100 immediately. Why? Because that learning process and that trust-building process are incredibly important.

You may not have the opportunity to be a prompt engineer, you may not be in an AI tool like ChatGPT, but you may be trusting an AI tool such as WellSaid Labs, to take an L&D example, for those of you using AI voices out there. If you used one a year or two ago, you know it is much different today. The pronunciations are better, the inflection is better than it was two years ago, and you've now come to trust that it is growing from where it was; it's actually growing all the time. I swear it changes hourly, but that's my opinion.

So from a larger organization perspective again, back to basics. We're doing some very, very deep analyzing and scoping of where the workforce is and more importantly, really where the organization wants to go or plans to go with AI.

Scott Rutherford
Right. Guido, go ahead.

Dr. Guido Minaya
Yeah, just adding to that: right now I'm wrapping up, this is my last week of, a UC Berkeley course on business strategy and implementation of AI. The reason I took this two-month course was to get the lens of a leader: how are they looking at it as they start making decisions for their large organizations? You've got the Johnson & Johnson example that trained 56,000 employees on AI literacy. Great. That is the absolute exception. In many cases, just as you have AI startups on applications and all that, within larger companies you have these independent projects taking place on AI, but there's not that system-wide approach to development. So as learning leaders, we are not only able to, we are going to be called upon to help the company raise the bar in terms of AI. And we are in such a fun, unique, strategic position to lead, because it's not going to be the IT side of the house, it's not going to be the HR side of the house. This rests on us to really expand that AI capability set within the company. And the more we learn about it, the more we use it internally, and the more we, as Susan said, do that analysis and gap analysis, the readier we'll be to provide very high value to our company.

Scott Rutherford
Yeah. And that builds really well into, actually, I'm going to cut away for another little quote here. This is again from our podcast. Judy Pennington is also with the University of Minnesota, but her experience is largely in large consulting firms. And we talked about this as an opportunity that should empower L&D as a change agent in helping the organization manage an enterprise-wide change initiative.

Judy Pennington
I think getting all of the C-suite involved, at least in understanding where their heads are at, is key. There's a lot of information out there right now about these tools, the goodness of them, the scariness of them, the unethical behavior that could happen, etc. So I would suggest, like any other big transformational activity that you want to do, and I would include L&D in this, and HR, making sure that the viewpoints are known. Security is a huge one, because you're going to have somebody saying it isn't secure, you can't use this tool, and so on and so forth. You want to get on the same level of understanding about what it is, what's good about it, what's bad about it, and then you can start understanding that it isn't going to go away. That's the thing. We have had this conversation amongst the university people here: do we use it? Do we allow students to use it? Is it a tool that should be out there? Well, the truth is, it is not going away. It's going to be available. And what we need to do is make sure our students learn how to use it ethically and understand the top side of it, the bottom side of it, and what they need to do to use it appropriately.

And the same thing happens for every company you're going to deal with. I think the L&D people need to become very flexible and very strategic and very changeable, whether the tool today is AI or, six months from now, when we'll have a podcast about some new thing, whatever the new shiny object is that's out there. How do you understand, is it right to use? Is it not right to use? But I think you for sure need to start with the C-suite and at least understand where the good stuff [is], and like all good change management strategy, find who already thinks it's cool, work with them to demonstrate how it can be so helpful, and then communicate it and celebrate it to the rest of the organization. And guess what? Everybody's going to go, I want to do what Susan's doing. And then they're going to be asking for it. But it can't just be a check-the-box that we're putting out a training program on AI and think it's done; that's not going to work properly.

Scott Rutherford
So yeah, Susan, you got a little bit of a name-drop there. But really, what she's talking about is the realization that if your senior leadership isn't working with L&D to support the learning needs around AI in your organization, it's a little bit of an ostrich, head-in-the-sand moment, because the tools are there and people are using them. So the question becomes, strategically for the organization: are you going to use them in a fully rolled-out, structured way, or allow it to be haphazard, in the shadows, with all the risks that could entail?

Susan Nabonne Beck
I don't know if you want to take this one, but I personally think, as I mentioned earlier, it's all about baby steps. Choose that starting point, use it as a way to get people familiar, and then you're going to have continuous upskilling and reskilling. Things are, like I mentioned, changing all the time. So our job probably won't be done until we're on to the next new thing.

Dr. Guido Minaya
And really, when you think about large companies, and even small and medium-sized companies, you've got the accounting function, the sales function, the customer service function, not to mention production, manufacturing, procurement, HR, and, if you're in a hospital setting, patient experience.

So all of those individual processes that we're already contributing to training around those processes now have this transformational set of technologies that are going to completely change how training is done.

So how do we work with the division that might be readier than the others? Great, we've got that division; we'll tackle that. But let's not forget our own L&D department. I remember when I did my dissertation for the doctorate, it looked at three different corporate universities. We were looking to see if we could get traditional instructors to become virtual instructors. This was back in 2005; I was with Accenture, and a couple of Accenture colleagues were helping me with this analysis. We were doing a deep dive on evaluations: how did those compare to traditional instruction?

That took us months, and that can now be done in a heartbeat. So for us: what are the processes we see that are taking us a lot of time? If you're an L&D leader, what's taking time? And also, what are you not even getting to? You've got all of these evaluations from months, years of delivering this content. How can you relook at that data to potentially come up with a new set of insights about how you are delivering training? All of that's available not only at the macro level of the much larger organization, but within L&D. How can we vastly improve the L&D function?

Scott Rutherford
I think that's a really interesting point, because we all want to be data-driven, but one of the challenges, just in terms of human workflow and bandwidth, is that a lot of the information we're going to get back about human behavior is going to be qualitative. If we want to do meaningful analysis on that, we need to code it and turn it into structured data, not unstructured qualitative data. So, Guido, what you're saying is: use the AI tool to leverage the reams of information that are already sitting somewhere.

Dr. Guido Minaya
Absolutely. And one point that came up during the Berkeley class is that established companies have years and years of customer data, customer experience data, patient experience data. The AI tools are great, they're going to help us, but if you don't have the data to leverage those tools with, you won't get new insights. The biggest value we're going to see in AI is going to be for companies that are able to leverage that data: new insights, new products, new services, because they never had the time or the manpower to really address it. And now they have a solution to do that.

Scott Rutherford
So let's move on a little bit. I know that you made a reference to agentic AI.

Depending on which analysts you talk to, we are perhaps into the third generation of generative AI, having moved from simple chat interaction to chatbots to agentic AI. Help me define what that means for folks who are not familiar with the term.

Dr. Guido Minaya
I think, as we've been discussing, we have ChatGPT and other large language models where you give a prompt and, depending on the capability, you may get a very narrow answer based on how you've asked and which features of ChatGPT you're activating.

And it'll give you an answer; it'll do an action for you. With agentics, and there are more and more all the time, I think as of last week there were at least 1,200 available digital agents. They could be in productivity, in finance, in customer service, in sales. These are digital agents that you give a specific task, as you would give another employee in your company, and given that the task is defined for them, they can complete that task.

So as an example, take the travel-agent kind of support you might get. You're going to Europe, and you need hotels in three different countries on certain dates. You can ask your digital agent: all right, here are the dates, I need you to book the flights, book the hotels, and let me know the end result. And that agent will make those calls, will connect with those hotels, will finalize your air travel, and either completely do it for you, if you give it access to your credit card, or it'll come back and say: here's the best itinerary, what do you want me to do? So that's a basic example of how you can leverage a digital agent, taking it not only from "what are the available flights?" but all the way to "completely finish that booking for me." Take that to finance, take that to sales, take that to customer service. You've got different functions that can be moved to a digital agent to complete.
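The travel example above can be sketched as a toy agent loop: the agent is given a goal, breaks it into tool calls, and either completes the booking or pauses and reports back for approval. Everything here is illustrative: the tools return canned data, and a real agent would wire them to live booking APIs and use an LLM to plan which calls to make.

```python
# Toy "travel agent" sketch: a digital agent is given a goal, plans
# tool calls, and either completes the booking or asks for approval.
# The tools return canned data; names and fields are invented for
# illustration, not from any real booking API.
def search_flights(route):
    return {"route": route, "flight": "AX100", "price": 420}

def search_hotels(city, dates):
    return {"city": city, "dates": dates, "hotel": "Hotel Demo", "price": 150}

def run_agent(goal, auto_book=False):
    # Plan: one flight search plus one hotel search per city in the goal.
    plan = [
        ("flights", lambda: search_flights(goal["route"])),
        *[("hotel:" + c, lambda c=c: search_hotels(c, goal["dates"]))
          for c in goal["cities"]],
    ]
    itinerary = {name: tool() for name, tool in plan}
    # Without payment access, the agent pauses and asks what to do next.
    itinerary["status"] = "booked" if auto_book else "awaiting approval"
    return itinerary

result = run_agent(
    {"route": "BOS-CDG", "cities": ["Paris", "Rome", "Madrid"], "dates": "June 1-10"}
)
print(result["status"])   # awaiting approval
print(len(result) - 1)    # 4 planned items: 1 flight + 3 hotels
```

The `auto_book` flag is the credit-card question from the transcript: with access granted, the same loop completes the booking; without it, the agent stops at "here's the itinerary, what do you want me to do?"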

Scott Rutherford
Right. And at this time I'm going to bring up a graphic you had shared with me, so you can talk us through it. This speaks, I think, to that multiplier effect of the pace of progress and change.

I don't know, Guido, if you can see the graphic there.

Dr. Guido Minaya
It's not showing up yet, but no worries. Yeah, if it's the graphic we talked about, Scott, basically it's looking at human evolution and the progress we've made. Ah, there you go. So basically, up until 2025, we were progressing at a certain level; we were increasing in terms of our productivity and capabilities. And now we enter the world of agentics.

We started entering it in 2024, but 2025 is where it is being adopted more and more. And the level of productivity, the level of progress that we're going to be able to make by leveraging these digital agents, these coworkers that are now digital agents, is going to be phenomenal. So that's exactly the point we're at. And this was from January of 2025; we're already moving quickly up that line.

So if you take a look at it from an L&D perspective: all right, now I'm in sales, or I'm in customer service, and there's this digital agent that can also assist. We're going to have to develop curricula that take into account that a process now includes these digital agents working alongside you, enhancing your capability set, doing tasks that may be extremely repetitive, so you can be focused on tasks that are much higher value. And this is where we're at now. Many of our companies haven't gotten there yet, or they're just starting to use them, but as they see more and more high value, we're going to see them in our workplace, right?

Scott Rutherford
Yeah. Susan, I think this is a perfect spot for you to comment, because we're talking about how do we bring this out, how do we use that hockey stick of productivity within the learning function, and I'd love to hear your thoughts.

Susan Nabonne Beck
Well, what I'm most excited about are these agents being able to analyze trends across all learning ecosystems within a company, or predict skill gaps that could potentially arise, or, most importantly, recommend the organization-wide upskilling or shifts in skills that need to happen to keep up. I think we're primed, if you will, to start incorporating these agents to support what we've been doing all along. I've been in L&D my entire career, so it's incredibly exciting for me to think, wow, now we have something that will flag for us when we need to do something.

Because we're never taking the human element out of it. Remember that we mentioned that earlier; our jobs are here to stay, because even with an agent, it still needs that human element. The other thing is the idea of this agent being able to be a co-creator, if you will, alongside me: to recognize a skill that needs to happen, draft the outline, possibly even take scoping documents that we've done in the past, ask questions from those documents to ensure they're still relevant, and then produce, again, analyses of those strategic gaps, or, you know, create the target that we want to get to in the future.

Scott Rutherford
And I think this might be a good moment to move on, actually. We had opened the door through one of our registration forms for folks to submit questions in advance, and we do have a few here from the audience, which we'll get through.

Dr. Guido Minaya
If I could just jump in. Of course, the one reference I think we're going to provide folks, right, is that IBM explainer video on agentics. Man, 12 and a half minutes. It is a phenomenal depiction of where we started and then how agentics really operates. It still leverages large language models and a number of other things. So it gives you a better view of really what they are.

Scott Rutherford
Yeah, and we'll include that link. It's an explainer video that the three of us had watched before coming on here, and they did a really nice job taking us through the last few years and explaining the evolution. And they did some nice video effects on it too. But IBM has budget. So yeah, we'll share that link after the live broadcast here, for folks who want to do a little bit of a deeper dive into understanding how all that fits together and what it's really looking at: not just, okay, where are we in 2025, but how do we anticipate what 2026, or let's be honest, the second half of 2025, might have in store for us.

But yeah, let me go back to one of the questions here. This is from Joel, and Susan, I'll ask you to take a first swing at this: do you have advice about using AI to create personalized learning at scale while still keeping everyone aligned, learners aligned with learning pathways and objectives?

Susan Nabonne Beck
So I think, at the minimum, thinking about learning experience platforms, LXPs, on top of, or in lieu of, the traditional LMS, to be able to really hone in on that learner's needs to do their job, but more importantly their wants. If they're looking at career pathing, for example: what's my next step? What do I need to do to get to that next step?

But it's using what's available out there today without getting too far ahead of yourself: what can be done today? LXPs are available across the board in many different sizes and shapes. The other thing would be, we talked a little bit about taking scoping documents and, if you don't have an agent, asking something like ChatGPT or Copilot: what are potential gaps that this learner may face in these situations?

Going back to being a prompt engineer, you have to ask the right questions. But if you ask those right questions, these things can be identified and then you plan accordingly for that upskilling or reskilling if it happens to be that.

Scott Rutherford
All right, I'm going to move to another question. This is one Gretchen had sent in. She says, what are some tips to maintain ethical practices? This is a little bit of a big one; this could take a whole half hour if we really dig into it. But what are some tips to maintain ethical practices and embed them in training and communications around AI usage when speed and productivity are critical markers of success? Guido, I'm going to maybe give you a first run here. What advice would you give? Because we're going back to the whole notion that there's a focus on speed, but we also need due care.

Dr. Guido Minaya
Well, I think it depends on the size of the organization. For large organizations, definitely your governance team. Right. Your governance team for the learning organization should definitely address this in terms of providing some level of ethical standards at every layer: content creation, data handling, and even prompting, you know, skill sets on prompting. So in a much larger organization, that's how they would normally address it.

In other organizations, smaller organizations like RenoMac, what they did is they actually had AI champions, and their role was to model and to provide that kind of guidance to their colleagues and coworkers. But yeah, it absolutely should be addressed and incorporated at every level, whether it's content creation or data handling.

Scott Rutherford
Okay. And the final question I have here. And I'm looking, and there seems to be an issue with the live stream to LinkedIn, which I apologize for; if anybody's having trouble there, we will post the video to LinkedIn for anyone who wasn't able to connect. But I do know we have some folks watching live on the YouTube stream, so if you have questions there, feel free to put them in the chat and we'll try to grab those here. But Abby had written to us in advance when she signed up, and Susan, I'm going to give you a first stab at this one: do you have any advice about the use of AI in the measurement of learner behavior or learner progress? And do you think this creates ethical concerns?

Susan Nabonne Beck
Tough one. Well, not a tough one, but it's definitely one of those things you have to think about. Let's work backwards, starting with the ethical concerns. Taking the human element out of it at this point, I think, may present an ethical concern, because I talked about trust, right? We aren't at the point that we're trusting that AI can learn to be more subjective. I'm choosing my words carefully for this. In short, I think that there is an ethical concern right now if that data is collected and just used as is, if we're taking out the human element altogether. Absolutely.

Will that be true a year, two, three years from now? I can't say. But as of right now, I think that there is an ethical concern about collecting that data and using it as is today. More importantly, without letting the learner know that this data is being collected and potentially used as well.

As far as measurement, using AI to measure, I think we should, because again, efficiencies in what we do: use AI to collect it. But it's a matter of ensuring that we're communicating that we are collecting it and what we're collecting it for, and then not just using it as a cut and paste, without the learner knowing and without us putting our fingers on it.

Scott Rutherford
Yeah, and the oversight and the control. And I'll let you jump in here. This is a little bit of what we talked about earlier. We can't just let it go to the wild. We need to be able to ensure that as it's ingesting whatever information, whether it's learner performance, whether we're trying to gauge whether a learning intervention created behavior change in an organization, we're going to have a certain set of inputs we're going to be able to gather to measure that. But that can't just be taken as gospel coming out of the AI tool.

Dr. Guido Minaya
So, yeah, clear data policies, learner consent, transparency: all of those are important as we collect more and more data. And even as we look at this monumental shift, it is a cultural shift within an organization as to how they're going to transform. Right now we're in the optimization stage of how we're going to leverage AI, then we're going to move into the transformative stage, and then we're going to get into the complete redesign of some of our major processes.

As long as we have clear data policies, learner consent, and transparency with our workforce, you're that much closer. But behavioral data shouldn't be used to be punitive. It should be used for the evaluation of growth for that individual. And that's how we need to get to its best use.

Scott Rutherford
Thanks for watching this bonus episode of the AXIOM Insights Learning and Development Podcast. For more resources and links to related materials, you can visit the episode page at axiomlearningsolutions.com/podcast.

This has been the AXIOM Insights Learning and Development Podcast. This podcast is a production of AXIOM Learning Solutions. AXIOM is a learning and development services firm with a network of learning professionals in the US and worldwide, supporting L&D teams with learning staff augmentation and project support for instructional design, content management, content creation, and more, including training delivery and facilitation, both in person and virtually. To learn more about how AXIOM can help you and your team achieve your learning goals, visit axiomlearningsolutions.com. And thanks again for listening to the AXIOM Insights Podcast.
