In this episode of the AXIOM Insights Learning and Development Podcast, we’re joined by technologist and technical training expert John Kidd to discuss the evolving role of artificial intelligence (AI) in learning and development (L&D). The discussion explores the capabilities of large language models (LLMs) such as ChatGPT, Gemini, and Grok, examining how these tools can enhance performance by automating tasks and accelerating content creation. That potential comes with a cautionary note about the limitations of AI and the need for human oversight to ensure accuracy and quality. John also highlights the importance of teaching prompt engineering and integrating AI into training processes to boost efficiency and innovation across technical and non-technical roles.
This discussion highlights how and why AI is not about replacing jobs but augmenting them, allowing professionals to focus on more strategic tasks while leveraging AI to streamline repetitive functions. This episode offers advice for organizations exploring the use of AI to support L&D operations as well as broader business operations.
Episode Transcript
Scott Rutherford
Hello and welcome to the Axiom Insights Learning and Development Podcast. I’m Scott Rutherford. In this podcast series, we focus on driving organizational performance through learning. And in this episode, we’re talking about artificial intelligence, more specifically developing skills throughout the organization to support and optimize performance using generally commercially available AI tools.
And for this, I’m happy to be joined by John Kidd. John is a technical training expert and technologist. He has, I’ll say, several decades’ experience in internal systems and product development.
John’s company, KiddCorp, is a provider of training and consulting for technical skills and technology enablement. And so, John, it’s great to talk to you. Thanks for being here.
John Kidd
Oh, you bet. Thank you for having me. It’s good to be here.
Scott Rutherford
So I wanted to start, and I don’t want to retread all of the technical discussion about how large language model, or LLM, AI works. For those of you listening, we’ve done a whole other episode on that. If you want to go back into our archives, there’s an episode called Artificial Intelligence for Learning and Development with David Wynn and Judy Pennington from the University of Minnesota, and we get into the weeds there.
So what I was hoping, John, we could do is focus on the current state of LLM AI, just to understand: okay, where are we now? We obviously have what I’m going to call legacy tools – OpenAI’s ChatGPT – and I know “legacy” is a strange term, right?
John Kidd
No kidding.
Scott Rutherford
And then we have newer entrants. We have newer versions of Grok. We have DeepSeek out of China. So can you help walk us through what the offerings are, and whether there are differences we need to be aware of? Where do you begin with this sort of menu of options?
John Kidd
Yeah, that’s a good question. There are some precursors to that, right? I think it starts with answering the question: what are you trying to accomplish?
Who’s the target group that you’re trying to enable with it? I’ll try to break that apart, and you can direct me, Scott, if I’m going down a path that doesn’t make sense.
Scott Rutherford
No, lead on, lead on.
John Kidd
So if we’re working with technologists, I mean, the people that are really writing code, not everybody is going to be involved in building a large language model.
And I think that needs to be put out there. The majority of us are going to be consumers of these large language model tools. But if we are dealing with a group of technologists, having a basic understanding of the algorithms that are used to build those large language models is a good thing.
OK, but ultimately the question has to be answered: what are we trying to accomplish with that? The majority of us are going to be using tools like Gemini, ChatGPT, Claude, even Grok for that matter.
Somebody out there is going to not like this definition, but as a replacement for what we would usually do a Google search for, because it becomes a little bit faster, a little bit more interactive, right? So that’s kind of the baseline. The difference between these tools today is largely wrapped around the data they’re consuming into the model, and whether there’s a cutoff date they’re working with. Like ChatGPT, what was it?
How far back does its training data go? It’s like four- or five-year-old data. Obviously, that’s beginning to change, because now we’re dealing with Gemini, which is virtually real time. And if you look at Grok on X, Grok is pretty much real time. So I think that’s one of the main differentiators in the market today.
Basically, it’s how quickly new data is being consumed into these models that we’re interacting with, which, depending on what you’re trying to accomplish, is going to have an impact, right?
If I want my gen AI tool to help me analyze the latest Texas school choice bill that they’re trying to put through – not to get political, which I won’t…
Scott Rutherford
Right.
John Kidd
…Then I’m going to go to something like Grok, or I’m going to go with Gemini, right? Because they have the latest and greatest updates. ChatGPT is kind of playing in that area as well.
When it comes to technical subjects, to be honest with you – assistance with coding or something like that – I would say it’s somewhat of a level playing field. But for the most part, those are the main differences between these tools today.
Scott Rutherford
So there’s the currency, or the currentness, I guess is a better way to phrase it, of the data that the tools are ingesting.
John Kidd
Yeah.
Scott Rutherford
There’s also something that I think we’ve been grappling with in looking at AI tools, at least so far, which is managing the hallucination factor. What’s your experience been in terms of the tools? If they’re more and more up to date, which is good, they’re taking in more and more current information.
But we’re still seeing results come out of queries, and they get circulated around the Internet, things to point and chuckle at, where a tool has simply made up references or made up articles. There was a story in the news I was reading a week or so ago about a pleading that went before a judge citing nine cases.
And I think eight of the nine cases cited were just completely made up. So that’s still happening with these tools, where somehow they’re creating an alternate reality. For an organization, or for someone who’s just a user, doesn’t that create, I don’t want to say distrust, but it’s got to make you pause and say, well, okay, how much can I really rely on this thing?
John Kidd
Well, you know, that’s a great point. I wouldn’t use these tools to file my taxes, how about that? Because you’re right.
You start asking some very intricate, specific questions about something like the tax code. Probably six months ago, in an experiment, I did exactly that.
I wanted to find out the value of the response that I was getting.
Scott Rutherford
Right.
John Kidd
Because remember, we use the term artificial intelligence, right? But an LLM is not a reasoning tool. It’s not a logical, human-reasoning type of mechanism.
It’s basically predicting the next words, the next content, based on the build of the model itself.
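(A toy illustration of what John is describing: a language model, at its core, picks a likely next token given the preceding context. This tiny bigram sketch is a deliberate oversimplification of a real LLM, with an invented ten-word corpus, but the predict-the-next-word mechanic is the same.)

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM learns from vastly more text, but the core
# mechanic is the same: predict the next token from what came before.
corpus = "the model predicts the next word the model predicts the next token".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Greedily return the most frequent next word seen in training."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("predicts"))  # "the" always followed "predicts" in the corpus
print(predict_next("token"))     # never seen mid-sentence, so no prediction
```

There is no reasoning here, only frequency: the model confidently continues a phrase whether or not the continuation is true, which is exactly the seed of the hallucination effect John describes next.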
And so you’re exactly right. I mean, there is a hallucination effect. Anything I get out of any of these tools that I’m going to be depending on, I’m going to double and triple check the facts, because it’s not always 100% accurate. And this also plays into one of the big areas where I think there’s a lot of value – pick your tool – and that’s the development area, for software engineers like myself.
You know what? I just need a quick example of a snippet of Python code, right?
Scott Rutherford
Right.
John Kidd
Or a snippet of Java code, or Golang, or whatever you’re working with. And it does a really, really good job providing that. But…
Scott Rutherford
Now is that also because – oh sorry, I don’t mean to interrupt – but is it also because, when you’re developing code through an LLM… and I will say I’m marginally technical, technical enough to be dangerous, but the way I would look at it is: if you get a code snippet from GPT, you can find out pretty quickly whether or not it’s going to work, and then iterate if it doesn’t.
John Kidd
That’s exactly right. And so the point is that I’m always going to take whatever it gives me and try to understand it to the depth it really needs to be understood, rather than trusting what has been given to me and just integrating it into my code base, if that makes sense.
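(John’s verify-before-you-integrate habit can be sketched in a few lines. The `dedupe` function below is a hypothetical stand-in for something an LLM might hand back; the point is the assertions around it, exercising the snippet against the cases you actually care about before it touches a code base.)

```python
# Hypothetical snippet as an LLM might return it: remove duplicates
# from a list while preserving the order of first appearance.
def dedupe(items):
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

# Before integrating, exercise it against realistic inputs,
# including the empty and all-duplicate edge cases.
assert dedupe([3, 1, 3, 2, 1]) == [3, 1, 2]
assert dedupe([]) == []
assert dedupe(["a", "a", "a"]) == ["a"]
print("generated snippet behaves as expected")
```

A minute spent on checks like these is what turns "it compiled" into the kind of understanding John talks about next.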
Scott Rutherford
Right.
John Kidd
I’m going to make sure that I understand what it’s done, that I understand the pieces of it, because it’s going to be up to me to make sure that it’s maintained and that it works exactly like I expect it to. For most software engineers, what I’m saying makes total sense. It’s like, yeah, of course I’d do that.
But you’d be surprised at the content that’s actually generated by the LLMs in different technologies. I was working with Terraform with a group this week.
Some of the stuff that I saw come out was not even ballpark correct. Or, probably more important, and this is the nuance that I think is the current concern: it functions, but is it the best way to actually implement what we’re trying to implement?
You know what I mean? The fact that it works correctly, that’s great, but is it maintainable? Is it built the way that we really want to build it?
And so I think that’s the back end of it that’s going to be important as well.
Scott Rutherford
So I wanted to ask your opinion on the opportunity with AI. I think we’ve been talking about the opportunity to train staff within an organization, whether that’s technical staff or non-technical staff.
But I wanted to back up a second and recognize that a lot of the folks listening to or watching this episode are in learning and development, and they’re also trying to figure out: how do I use AI in the development of my learning content for my organization? So it’s a slightly different lens on that.
Do you have experience or advice for the L&D professional who’s trying to shorten their content creation cycle, accelerate delivery, or lower the cost of delivering content using AI?
Are we at the point where that makes sense, or are there quality risks that you see in embracing AI too quickly?
John Kidd
You went right to it. It’s the quality risk. But again, to me, it goes back to exactly what we just got through talking about, whether I’m writing code, generating content, building outlines, or building a learning path. Because the tools are really good for people like me that don’t do real well with a blank sheet of paper, if that makes sense. Give me a starting point.
And as long as I understand what I’m trying to accomplish, man, these tools are fantastic. I love it because, for instance, if I’m trying to build a learning path that takes a group through a technology from point A to point Z, I’ll have it build the path for me.
And then I’ll go in and look at it and say, no, that’s not correct, I wouldn’t do that. But it gives me something to edit rather than building something from scratch. The same thing is true with course outlines.
You know: build me an outline for learning – I don’t know, pick your technology. Well, the outline that it builds oftentimes is an outline that’s going to take months to accomplish.
So obviously I’m going to go back in and say, no, that doesn’t make sense; yeah, that’s a good order. And then do the editing on top of it.
So I guess what I’m circling the field with – I’ll land the plane on it – is that you and I still need to be the expert. I cannot take what’s given to me as gospel, so to speak. It’s a tool for getting me over that initial creative hump, if you will, for moving me down the path to the goal that I’m trying to accomplish.
Scott Rutherford
Right. Well, the oldest advice in the world is don’t reinvent the wheel. What you’re describing sounds a lot to me like you’re saying, well, just don’t reinvent the wheel. Use the tool to give you the foundation, which is going to be maybe not great, but good.
And then adjust and use your human expertise on top of that to make it right.
John Kidd
Yep, exactly. And in general, across the board, I think that is a really core piece of advice in using these tools.
Something I was thinking about ahead of us doing this today, Scott: I’ve been involved in quite a few different cloud adoptions, from Azure to AWS to GKE. And the statement that I consistently make about this is, it’s never about the technology.
It’s always about the culture. It’s always about the human interaction, if you will. And I don’t think we’re talking about that enough, quite frankly, when it comes to these tools, because the tools will do whatever I ask them to do, for the most part.
The real crux of the matter is inquisitiveness. That’s what we need. We need inquisitive people to be able to get from these tools what they can actually provide for us. The good question, you know what I mean?
Because the tool is not going to lay out the question for you. It’s up to you to ask the questions. And that sounds like such a simple thing to say, but it’s a really important piece of this whole thing, because you’re not going to get from it what you don’t ask of it.
Scott Rutherford
Right. And the flip side of that, too, is if you’re not trained to ask the right question, or to know how to manipulate the levers, to be a little bit physical about it. One of the promises of AI is accelerating innovation, accelerating time to market, reducing costs, and those are enterprise-wide benefits. If your staff are grasping at the levers without purpose, a lot of those savings go away.
John Kidd
Yep, they do. They do go away. And I think the path you’re going down there, Scott, is the idea – or not the idea, but the skill – of learning how to interact with the tool that you’re using, the prompt engineering type of thing, because the way you pose the question is going to have an impact on effectiveness.
Scott Rutherford
Right.
John Kidd
And that’s not an overly complex skill. It’s more of an orientation, right? On how to interact with your tool of choice, whether it’s Gemini, ChatGPT, Claude, Grok, whatever it is that you’re using. Learning how to prompt it to get what you need from it is an important skill.
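(One way to picture the "orientation" John describes: give the tool a role, a task, constraints, and an output format instead of a bare question. The sketch below just assembles such a prompt as text; the field names and example values are illustrative, not tied to any particular tool or vendor API.)

```python
def build_prompt(role, task, constraints, output_format):
    """Assemble a structured prompt: role, task, constraints, format.

    Every major chat tool accepts free text, so this structure is a
    discipline for the asker, not a requirement of the tool.
    """
    lines = [
        f"You are {role}.",
        f"Task: {task}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        f"Respond as: {output_format}",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    role="an experienced L&D course designer",
    task="draft a one-day course outline introducing Git to non-developers",
    constraints=["assume no command-line experience",
                 "include two hands-on exercises"],
    output_format="a numbered outline with estimated timings",
)
print(prompt)
```

The habit, not the helper function, is the durable skill: the same four-part shape works typed straight into Gemini, ChatGPT, Claude, or Grok.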
Scott Rutherford
Right. And it sounds like that might be one way to look at it. There’s a little bit of anxiety, I think, about training for AI right now, which is: okay, how do I get ahead of this thing and train in a way that’s going to be meaningful and relevant six months or a year from now as these tools continue to evolve? We’re chasing a ball that’s rolling away from us, at some speed in many cases.
John Kidd
Right.
Scott Rutherford
But it sounds like what you’re describing is: okay, let’s focus on the basics of understanding how to manipulate the tool, with prompt engineering perhaps as the core of that, because that’s going to be a durable skill even as the technology evolves, right?
John Kidd
Exactly. And, not to be an advertisement here, but as a big fan of instructor-led training, this is one of the areas where I think it excels, because in a lot of the courses I teach, I’ll actually include interaction with either ChatGPT or Gemini, and I’ll show students.
As we’re going through learning brand new concepts, I’ll pull over ChatGPT and say, well, let’s ask that question and see what the AI gives us for this. You know what I mean?
And so the point I’m trying to make is that I think we’re at the point where we need to begin incorporating the actual usage of these tools into the context of the training that we’re offering to employees, students, wherever they’re coming from, because it’s not going to go away.
And as you said, it’s only going to accelerate. So beginning to break that ice, showing how it can really assist, I think that’s huge. And I think that’s an important piece of that whole learning and development arena.
Scott Rutherford
So as an instructor, how do you approach advising, or maybe prescribing, AI learning for different audiences in the organization?
We’ve talked a bit about technical audiences, and I think perhaps the path there is clearest. If you’re talking about solving a coding problem, and you can use the tool as a resource or an interrogatory to help you generate code, that makes sense to me very clearly. But how do you work with, say, a mid-level manager or someone in the C-suite, and say: how should you be using or thinking about AI in a way that’s going to help your role?
John Kidd
Well, for me, I think there’s some basic orientation. Okay, this is what it is, this is how you get to it, and these are the basics of how you interact with it.
But then I think it’s very much an interactive type of activity, because what I like to do, depending on the group, is ask: what’s your job role? What is it that you do day in and day out? Okay, that’s great. What questions do you normally have through the course of your day? What is the information that you want at your fingertips, to be able to easily get access to?
And then I actually take them through different scenarios that they may encounter during their day, based on what they’ve told me, and we actually do it. Because I think seeing it, and interacting with it based on yesterday’s problem that you were trying to solve, seeing how you could have done it with ChatGPT or Gemini or the tool that your company is using.
I think that’s important, because that’s how you make it really concrete. And you remove the illusion of AI, because I think it has, I don’t know, kind of a movie effect to it.
You know, the we’re-going-to-be-taken-over-by-the-AI-bots kind of thing. You know what I mean? But you break down that barrier.
Scott Rutherford
Well, yes. Go ahead.
John Kidd
To say, look, we just got an answer to the question that you had. And obviously you need to know whether the data you’ve been presented with is factual and accurate, based on what you already know, but this is how you can get that information very, very quickly.
Scott Rutherford
So I did want to build on what you were just saying, in terms of maybe the technological skepticism. When we were prepping for this, I had used the phrase that AI is kind of the dog that caught the car. Now what do I do with it, now that I’ve caught it?
It reminds me of where we were in business in, frankly, the middle ’90s with the internet and the web. There were a number of businesses, I remember at the time, who looked at the web and said, you know, it’ll never catch on.
This is a distraction. Why are we putting so much money and effort into this? It’s never going to amount to anything. We’ve proven that wrong. But it was a technology that many businesses were trying to learn.
Build the plane while you’re flying it, and then try to adapt to an unknown future. That seems to be where we are today with AI, where there’s a potential for businesses to really embrace the transformation of their business model, not just implement AI on a tactical level.
John Kidd
Oh, no, I totally agree. And I think your analogy of the adoption of, maybe not the web as much, but the cloud…
Cloud adoption with one of the big three providers is a direct parallel, in my most humble opinion, to the whole AI thing.
With the cloud, I’ll never forget a conversation that I had with, I guess you would call him an operator, an admin of their ecosystem, their compute world. I’ll leave the company name out of it.
Scott Rutherford
Uh-huh, right.
John Kidd
And we were discussing the impact of moving into the cloud on the group that he managed. And he said, well, frankly, all of my folks that do operations are really concerned about their future.
As we discussed this, the idea that the cloud was going to take over their jobs was very prominent. I think we’re experiencing the exact same thing right now with AI, with gen AI as well, all across the board.
The answer to that, though, was not that it’s going to take your job. You just need to learn what the next evolution of your role is.
How do I adopt that capability into what I do? And a lot of those folks found out that the cloud didn’t take their job. AWS didn’t take their job.
What they had to learn was the infrastructure of the cloud provider, and then adapt to it. Well, the same thing is true with these AI tools that we have. It’s not that they’re going to take over your job. I think we’re quite a ways away from that. But how can I use them to make what I do more efficient?
How can I get to the finish line quicker because I’m using these tools? How is it going to help my project team, and in what ways would it be beneficial?
Because I think that’s where the focus needs to be. It’s not going to all of a sudden kick out your whole legal department, the folks that manage the insurance at your company, your developers, across the board.
Scott Rutherford
Yeah.
John Kidd
It’s not there to replace them. So being able to adopt the benefits of it is huge. And I think being able to see that is a very big part of the training, because I think companies are out there scratching their heads, going, what in the world do we do with this? It’s out there.
They’re very concerned about not getting company information into these tools, so much so that access to them by their employees has been totally locked out. I think we need to find a way to begin to open up the doors a little bit.
Many companies have accomplished this in many different ways, so that people can begin integrating this into their daily workflow, if that makes sense.
Scott Rutherford
I think it does, and I’d bring it back to an example within learning and development. Let’s say you have a task and you’re developing a module, and you’re going to spend five hours putting together assets and organizing a learning flow.
Well, if you could, as the L&D professional, put AI into that process and say, I’m not going to do the graphics manually, I’m not going to do the voiceover with a human, perhaps.
You might not spend fewer than five hours, but you might spend those five hours differently. And my hope for AI would be that the quality of the product increases, because you’re enabling your experts, the people you’re paying to do the job, to use their skills more effectively.
John Kidd
Yeah.
Scott Rutherford
The human element can be supported, I think, by the AI enablement.
John Kidd
Yeah, absolutely. And going back to another conversation with that same manager about his operations team:
The other thing that they were really concerned about at that time was the blossoming of DevOps. Oh, my gosh, this whole thing is going to automate my job away.
No, it’s not. It’s going to make what you do more efficient. And it’s going to allow you to focus your attention on those other problems that you’ve been wanting to look at but haven’t had the time to look at. Now that we can automate a lot of that stuff, hey, I can go solve those problems. The same thing is true here.
I mean, I think it really, truly is. And in a lot of the classes I teach, especially the technical classes, I’ve really been able to demonstrate that. Oh, wow, I didn’t know it could give me that information.
Somebody asks a question: what would that look like if I put it together in, you know, a Python subroutine? Well, let’s take a look at that real quick. And I’ll pull over ChatGPT and have it generate something for me. Even though it’s not close to production quality, it gives you an idea of what it would look like.
And we accomplish that in a matter of, what, three or four minutes, rather than spending the time going back through the whole thing and working out how to put it together. And, as you well know, Scott, an example is worth a thousand words.
I mean, it’s like a picture. Give me an example of how something functions; that’s a great teaching tool. And that’s, I think, a really big benefit today of what these tools can actually do for us.
What I find funny at this point in history, and I think your analogy with the cloud is right on the money, is this:
There are many things that we do in our job roles, and I’m not just speaking technically, it’s across the board, even in business roles, that if we could offload them, it would be huge.
Think about it: I just wish I had a tool that could summarize the highlights of all these documents, right? Because I normally spent hours, maybe days, on that type of activity previously. Hey, guess what?
We do have those tools. So it’s about learning where they’re going to be helpful, so that we can adopt them and move forward, right?
I think we’re going to become more efficient as a result, to be honest with you.
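(The document-summarization offload John mentions is also a task whose basic shape is easy to prototype. This toy extractive summarizer scores each sentence by the frequency of its words and keeps the top scorers in original order; it is an illustrative sketch of the task, not an LLM and not production code.)

```python
import re
from collections import Counter

def summarize(text, max_sentences=2):
    """Score sentences by word frequency; return top scorers in order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))
    # Each sentence's score is the summed frequency of its words.
    scored = [(sum(freq[w] for w in re.findall(r"\w+", s.lower())), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(scored, reverse=True)[:max_sentences]
    # Re-sort the winners by position so the summary reads in order.
    return " ".join(s for _, i, s in sorted(top, key=lambda t: t[1]))

doc = ("AI tools help. AI tools help teams summarize documents. "
       "The weather is nice.")
print(summarize(doc, max_sentences=1))
```

Even a baseline like this makes John's point: a task that once took hours of skimming reduces to a function call, and the human's job shifts to judging whether the summary is faithful.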
Scott Rutherford
So, John Kidd, I appreciate your time. Thanks for coming on the podcast and great to talk to you.
John Kidd
Oh, you too. Thanks for having me.