Demonstrating results from organizational learning and development remains one of the most challenging areas for Learning and Development professionals. In this episode, we speak with two experts on the topic of learning measurement, and explore how we can gain alignment with our organizations about the intended outcomes of learning and the metrics we can use to report on results.
Our guests for this discussion are: Kevin M. Yates, the “L&D Detective” and one of the industry’s most respected voices on developing and communicating meaningful results from learning, and Susan Franzen, vice president of strategy and leadership at AXIOM Learning Solutions and an expert on developing learning and measurement strategies linked to behavior change.
This episode includes discussion of additional resources, which are available below.
Related Resources
Episode Transcript
Scott Rutherford
Hello and welcome to the AXIOM Insights Learning and Development Podcast. I’m Scott Rutherford. This is a podcast focused on the people, processes, content and techniques that drive organizational performance through learning. Today we’re diving into learning measurement with two guests. We’ll be joined later this episode by Susan Franzen, an expert in how learning programs can drive durable changes in behavior within organizations. But first, I’m thrilled to be joined by one of the industry’s leading voices on measuring results from learning. You may have come across Kevin M. Yates in one of his many conference presentations and articles. Kevin describes himself as an L&D Detective, which tells you a little about his approach: finding the right information to answer questions about the impact of learning. I started my conversation with Kevin by playing devil’s advocate, looking at the Kirkpatrick model, the four levels of evaluation, which is the most pervasive model for feedback and evaluation in learning and development. At the same time, L&D metrics often remain reactive at the first or perhaps second level, and simply gauge whether the participant enjoyed the learning experience or took basic knowledge away, rather than measuring behavior change. So that’s the context of my playing devil’s advocate, and I’d like to pick it up now with Kevin M. Yates.
Scott Rutherford
Anyone who has been in the field for a few years has heard a number of presentations and read a number of articles making the case that, with Kirkpatrick, everyone spends all their time at level one, the immediate-feedback, post-training-survey, “smile sheet” world, which doesn’t demonstrate impact. And the prevailing conversation often seems to be: well, yes, but people do that because it’s easy, and we really can’t get to levels two, three and four. So maybe let’s start by being kind to Kirkpatrick, and we can pick it apart as we go further in the conversation.
Kevin M. Yates
Absolutely.
Scott Rutherford
What does the Kirkpatrick model do well – what does it get right?
Kevin M. Yates
I believe the Kirkpatrick model does a great job of setting the foundation for measuring results. Think about the four levels that are measured. Level one is learner reaction, and that gives you a good signal about the experience people are having with the learning. It’s a survey, it’s a signal, it’s an immediate reaction, and we want to keep that context for it. Level two measures knowledge acquisition, and that can be through testing and/or assessment. Levels three and four are all about measuring performance and business outcomes. So what the Kirkpatrick model does a great job of is setting the foundation for the areas of focus for measuring results from training and learning. There are opportunities beyond the Kirkpatrick model, but I do believe it does a great job of setting the foundation. That’s what I believe the Kirkpatrick model gets right.
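For quick reference, here is a minimal sketch (in Python) of the four levels Kevin describes, expressed as a simple lookup table. The example metrics are illustrative choices, not taken from the episode.

```python
# Kirkpatrick's four levels of evaluation, with illustrative
# (not episode-specified) example metrics for each level.
KIRKPATRICK_LEVELS = {
    1: {"focus": "Reaction", "example_metric": "post-session survey score"},
    2: {"focus": "Learning", "example_metric": "test / assessment results"},
    3: {"focus": "Behavior", "example_metric": "observed on-the-job performance"},
    4: {"focus": "Results",  "example_metric": "business outcomes (e.g., retention)"},
}

for level, info in sorted(KIRKPATRICK_LEVELS.items()):
    print(f"Level {level}: {info['focus']} -> {info['example_metric']}")
```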
Scott Rutherford
So you mentioned measurement frameworks other than Kirkpatrick. What models do you look toward to drive a learning measurement strategy?
Kevin M. Yates
Yeah, shameless plug here. I don’t know that it’s so much a model as it is a methodology, but I have created the L&D Detective Kit for solving impact mysteries. It provides a methodology for how to measure results, measure impact, measure how training and learning influences behavior, actions and performance, and then ultimately how that influence on behavior, actions and performance impacts people’s ability to achieve business goals. As you say, Scott, there are a few models out there: you have the Kirkpatrick model, you have the ROI Institute with Jack and Patti Phillips and their ROI model, you have TDRp with David Vance and the Center for Talent Reporting, and I believe there’s a Bersin model as well, if I’m not mistaken. So there are a few models and methodologies out there. But here is where I am focused, and this is what you’ll see in the L&D Detective Kit.
They say, Scott, that the shortest distance between two points is a straight line, I believe it goes something like that. Surveys are important, I totally get it, because you want signal, you want sentiment, you want opinion. But what our business partners and stakeholders care most about is the extent to which our training and learning solutions contribute to and support employee performance, and ultimately help drive business goals. So my methodology is that shortest distance between two points: I am narrowly focused on connecting the dots between training and learning, performance outcomes, and business outcomes. That’s not any specific model per se. I don’t follow any specific model. I take a little bit from here and there to inform my decisions and my perspective, but in terms of a specific model, I don’t have one that I subscribe to. Or rather, I subscribe to them all, if that makes sense.
Scott Rutherford
Sure, sure. Let me follow up on that notion of the shortest distance between two points with a question about alignment. To be aligned, you need agreement with your stakeholders both about where you want to go and about where you’re starting from. And it seems to me the pitfall that can happen in L&D is when we start working without getting that clarity and alignment. I’m reminded a little of Simon Sinek here, and what he said about the importance of “why” in leadership: you have to first focus on the why before you can look at the how and the what of what you’re doing. I think it’s the same in learning. You have to have the conversation and be interactive with your stakeholders: what are you getting at, and what’s really important to you? It may not be a dollars-and-cents measure; it could be that they want to see lower turnover in the department for new hires after their first year on the job, for example. So let’s talk more about alignment, both in getting alignment on where you’re starting and on where you want to go.
Kevin M. Yates
Yeah, that’s a great question. As a profession, we’ve been saying this for years: start with the end in mind. I think we say it more than we do it. I’m not exactly sure what the roadblocks are; I do have some ideas for why we say it more than we do it, but we can talk about that later if time permits. I want to get directly to answering your question. For me, the conversation doesn’t start with “here’s your training” or “we’re going to create some training.” The conversation starts with: What are your business goals? What problem are you trying to solve? What are the performance requirements to achieve those goals? How do you describe where performance is today versus where it needs to be in order to achieve those goals? What are some of the blockers preventing people from executing and performing in the way required to achieve those goals? What are some of the other contributors, beyond training and learning, to achieving that business goal? What are some of the other teams that have a vested stake in moving the organization toward those goals? What are the risks and threats to performance? What happens if we don’t achieve the goal? What are the measures or metrics we can use as a signal to determine the extent to which people are performing in the way required to achieve the goal? And what are some of the metrics and measures we use as a signal for the specific business goal itself? If you notice, Scott, none of what I just talked about had anything to do with training. That is all about getting insight into business requirements, business goals, performance requirements, performance goals, blockers, threats, opportunities for success, and risks. That is where the conversation starts. In my mind it doesn’t start with “we need training”; it starts with the conversation I just laid out. Because if you are able to align your training and learning solution with the answers to those questions, then ultimately you have a training or learning solution that is aligned to business needs. The other great thing about that conversation, Scott, and those specific questions, is that you’re going to determine one of two things, both of which are helpful: either training or learning is the solution, or training and learning is not the solution. That’s the beauty of those questions. And a shameless plug here: those questions are in the L&D Detective Kit, which your listeners can download from my website. The kit describes why you’re asking those questions and what you’re going to do with the answers.
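To make that discovery step concrete, here is a minimal sketch of an intake record capturing the kinds of questions Kevin lists. The field names and the example values are my own paraphrase for illustration, not the actual contents of the L&D Detective Kit.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingRequestIntake:
    """Discovery answers gathered before agreeing to build any training."""
    business_goal: str                 # What are your business goals?
    problem_to_solve: str              # What problem are you trying to solve?
    performance_today: str             # Where is performance today?
    performance_required: str          # Where does it need to be?
    blockers: list = field(default_factory=list)            # What is preventing performance?
    other_contributors: list = field(default_factory=list)  # Contributors beyond training
    invested_teams: list = field(default_factory=list)      # Who else has a stake?
    risks_if_missed: str = ""          # What happens if we don't achieve the goal?
    performance_signals: list = field(default_factory=list) # Metrics signaling performance
    business_signals: list = field(default_factory=list)    # Metrics for the goal itself

# Hypothetical example, echoing the turnover example from earlier in the episode.
request = TrainingRequestIntake(
    business_goal="Lower first-year turnover for new hires",
    problem_to_solve="New hires leave before their first anniversary",
    performance_today="High first-year attrition",
    performance_required="Measurably lower first-year attrition",
    business_signals=["first-year turnover rate"],
)
print(request.business_goal, "->", request.business_signals)
```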
Scott Rutherford
And we’ll have a link to that L&D Detective Kit on the episode page at https://axiomlearningsolutions.com/podcast, along with some other resources for this episode. Back on the topic of partnership: in an ideal world, you would have that partnership of trust with your stakeholders in an organization, and you could have the conversation about what’s really important to the business. But that’s the best case. In the real world, there are always moments where the training person is approached with a request that’s more or less “I just need some training.” So how would you advise someone to move away from that order-taking conversation to one that’s more about the why behind the request?
Kevin M. Yates
That’s a great question, and I think it’s firmly rooted in reality, because what I have described is the ideal situation where you can have that conversation. My 30-plus years as a training, learning and talent development professional tell me that the opportunity to have that conversation is not always there. So let’s just tell the truth about that, Scott; let’s be honest and forthright. There are times when our business partners and stakeholders don’t have the inclination, the energy, the time or the interest to have that conversation. But let’s not be defeated. When we are trying to have that conversation and we sense or get direct pushback, one way we can respond is to say: I hear the urgency, I totally get that this is a high priority with a quick turnaround. But in the process, what I’d like to do is have this conversation so that we can create a win-win. The win-win is that by having this conversation, I am better able to align training and learning to exactly what you need, and I am then better able to come back to you with a recommendation for how we can measure the extent to which we’ve achieved the goals you’ve laid out. So the win for the training and learning team is that we’ve aligned specifically to your training request, and we are able to create a plan to demonstrate results. The win for you, Ms. or Mr. Business Partner or Stakeholder, is that you have that same evidence to take back to your organization and your team, to say: here’s how training and learning is moving the needle, or sustaining it, for what we needed training and learning to do. That is one way you can try to shift the conversation. But again, there will be times when you can’t. So my recommendation is to keep looking for the opportunities. Don’t stop trying to have that conversation just because you’ve experienced pushback several times, or had to move fast several times without getting that information. In summary: try to have that conversation in the way I just described, but recognize that there will be times when you can’t, and don’t let that discourage you from continuing to try. Does that make sense?
Scott Rutherford
I think it does. I wonder if you could describe for me, maybe as an anonymized example, a time you’ve worked to measure a learning program and to find a metric that was not dollars and cents. My presumption is that many stakeholder conversations are going to be about measuring something other than revenue. So what have you found to be a meaningful metric that wasn’t directly tied to financials?
Kevin M. Yates
If I understand your question correctly, I do have an example, from a time I worked in a business where we were creating a series of training and learning solutions to support managers, people leaders, because we were seeing in our employee engagement scores that employees felt disconnected from their managers. They didn’t feel as though their managers were supporting them in career growth and career aspiration. So to your point, that has nothing to do with dollars and cents; it’s all about a skill or capability for people leadership. We created those training and learning solutions to support not only managers, in how to have the conversation about career growth and career development, but also employees, because it is a two-way conversation. The manager doesn’t own it; the manager supports it. The employee really owns his or her growth and career trajectory, and the manager supports that. So we created training and learning to support managers and employees, and ultimately what we were trying to do was improve an employee engagement score. I don’t remember the exact wording, but it was something like “I believe my manager supports my career growth and development,” and the scores were low. In response, we wanted to empower managers to have engaging, meaningful, relevant, actionable conversations with their employees, so they could be in a better position to support employees in their career growth and development. And the metric we used was that item from the employee engagement survey. We were fortunate in that we did see a shift. It wasn’t a huge shift; it might have gone up one or two points. But that’s growth. Even if it had only gone up one point, that’s an improvement, because that skill is acquired over time, and managers get better at that kind of conversation over time. So Scott, that would be an example of where we used a non-financial metric: an employee engagement score as a signal for the effectiveness of training and learning, as it relates to providing learning solutions and performance support for managers so they could have more engaged, more meaningful, more relevant conversations with their direct reports about career growth and advancement. Does that make sense?
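As a worked illustration of the kind of signal Kevin describes, here is a tiny sketch that tracks a single engagement-survey item before and after a program. The item wording and the scores are hypothetical, not the actual figures from his example.

```python
# Illustrative only: one engagement-survey item used as an impact signal
# for a manager-development program. Numbers are hypothetical.
baseline = {"My manager supports my career growth": 62}   # % favorable, pre-program
follow_up = {"My manager supports my career growth": 64}  # % favorable, post-program

for item, before in baseline.items():
    after = follow_up[item]
    delta = after - before
    print(f"{item!r}: {before} -> {after} ({delta:+d} points)")
```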
Scott Rutherford
It does, it does. I also wanted to talk to you about data. When we talk about measurement, we’re gathering data to support that measurement. In some cases this will come from learning systems, and in others from various other data sources. What do you think the challenges and opportunities are in managing the data around learning in today’s workplace and learning environment, when we have unstructured learning, multimodal learning, and the emergence and continued evolution of new forms of technology? What are the challenges and opportunities driven by this proliferation of systems and data?
Kevin M. Yates
Yes, I might come at this a different way, Scott, but I’m going to try to answer your question. I think the biggest challenge relates to what I call learning data, which covers the things you alluded to: How many hours of training did we offer? How many hours of training did employees complete? What are the different modalities we offer? What’s the average length of time it takes someone to complete learning?
What’s the turnaround time for a training request? That’s what I call learning data. Now, you have to keep in context where my focus is, where my strength is, where my talent is, and that is measuring impact. So when I think about the operational data, which is what I just described, I’m separating it from impact data, because impact data comes from the business. When we’re talking about impact data, we’re talking about data that shows employee performance and data that shows business performance: cycle time, sales, customer service, or any of a number of data points that organizations use to measure their own effectiveness. So when you ask me about the challenges and opportunities as they relate to data, I think the challenge is around being clear about how we classify data. Let’s be clear in classifying operational data for training and learning as operational data; it is not impact data. And there’s nothing wrong with that, because the data that shows how many, how much, how often gives us a view into our effectiveness as a training or learning organization. So the challenge is making sure we call the data what it is. Operational data gives us insight into learning efficiency, how effectively and efficiently we’re running our training business. Then there’s impact data. Impact data is where the rubber hits the road, and it’s data we don’t own. If we talk about people performance data, HR more likely than not owns that; when we talk about business performance data, obviously the business owns that. But where we gain insight into the impact of our training and learning solutions is in connecting the dots between those solutions and the outcomes we see in people performance data and business performance data. So if I were to summarize the challenges and opportunities for how we use data, how we view data, our perspective about data: the challenge is ensuring that we call learning data what it is. It is not impact data; I don’t believe it is, anyway, this is my perspective. It is operational data, and that’s okay, there’s nothing wrong with that. And the opportunity, I believe, is for us to do a better job of partnering with business metric owners and business data owners outside of L&D, because that’s where the secret sauce is. Those data are the signals we can use to determine the effectiveness of our training and learning solutions. That’s where the rubber hits the road at the end of the day. Did I answer your question, and most importantly, did it make sense?
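Kevin’s distinction between operational data and impact data can be expressed as a simple tagging scheme. A minimal sketch, with hypothetical metric names chosen in the spirit of his examples:

```python
from enum import Enum

class DataClass(Enum):
    OPERATIONAL = "operational"  # how much / how many / how often we deliver
    IMPACT = "impact"            # people performance and business performance

# Hypothetical metrics, tagged the way Kevin distinguishes them.
metrics = {
    "training hours offered": DataClass.OPERATIONAL,
    "completion rate": DataClass.OPERATIONAL,
    "turnaround time for a training request": DataClass.OPERATIONAL,
    "sales cycle time": DataClass.IMPACT,               # owned by the business
    "customer service scores": DataClass.IMPACT,        # owned by the business
    "employee performance ratings": DataClass.IMPACT,   # owned by HR
}

for name, kind in metrics.items():
    print(f"{kind.value:>11}: {name}")
```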
Scott Rutherford
Yes, that makes sense. Before we wrap up, I did want to ask you to take us into the L&D Detective Kit. What’s in the kit, and how is it useful to someone who is just starting out on their learning measurement journey?
Kevin M. Yates
Thank you for asking that question, Scott. I believe the L&D Detective Kit does a great job of laying out the specific actions for getting to a point where you are enabled to measure the influence of training and learning. It is a step-by-step process from beginning to end. It begins with identifying the specific questions you should ask requesters of training, which is what I like to call the beginning of the investigation. There is some information gathering, some investigation, that you need to conduct in order to inform decisions for how to design training and learning solutions that have potential for measurable impact. So the L&D Detective Kit starts with that. Then it takes you through what to do with the answers to those questions: how to use them to inform instructional design decisions, which takes us back to the alignment we were talking about earlier. It identifies, based on what you’ve learned from asking the questions, what you are going to design. Then, what methodology do you put in place to measure the extent to which training fulfills its purpose, based on what you discovered during the investigation? And it finishes up by showing you how to present the results. Now, the good news, and here’s what I think is really cool, and this is the feedback I’ve received: I’ve also included templates so that it is actionable for a training and learning organization. It’s not just me writing about what to do; the templates are really the how-to. So the L&D Detective Kit walks you through the steps from beginning to end for how to collect the information to design training and learning solutions with measurable results, and it also provides the templates you can use to actually do the work. You can read about how to do it, and then the templates show you how to do it.
Scott Rutherford
Well, Kevin M. Yates, learning and measurement detective, thank you for taking the time – pleasure to talk to you.
Kevin M. Yates
Thank you so much, Scott, good to see you. Good to talk to you.
Scott Rutherford
And now let’s continue on the topic of learning measurement with my second guest. Susan Franzen is vice president of strategy and leadership at AXIOM Learning Solutions, and she has extensive experience working with organizations to change behavior and to measure outcomes. So Susan, as you look at identifying and reporting on metrics about learning, what do you see typically being done well, and where are the opportunities?
Susan Franzen
That’s a great question, Scott. I think the intention is done well; I think the execution needs a lot of work. I’ve seen organizations that are more data-driven be more effective, because it’s already embedded in their culture to evaluate and report on the work they’re doing, while organizations that are a little less structured around data tend to struggle more with it. There’s a lot of time and energy spent figuring out what topic we need and how we deliver a program on that topic, but not really a lot on the bigger picture: how are we going to know that we’re actually making progress toward it? And do we have staff within the learning organization whose sole responsibility is to measure and report on learning outcomes?
Scott Rutherford
So it’s a question of what success looks like, and then which of the data points you can select to form a meaningful answer. And of course there’s tension between the easy measurement, how happy the learner is when they leave the room, and how impactful the learning is. Can you take me through a model I know you use, the Program Results Map? How do you apply it to measuring the impact of learning programs?
Susan Franzen
Sure. The Program Results Map was originally developed by an organization called EvaluLEAD and tested with about 18 different organizations. It looks at things from an individual perspective, from an organizational or systemic perspective, and also from a mission or community-change perspective. And it looks at those things over three different stages, if you will. Episodic is what happens immediately after, which would be level one, maybe level two, of the Kirkpatrick model. Then developmental covers a somewhat longer period of time: how is that person evolving, and how is the organization being impacted by that evolution? That’s more like level two to level three for Kirkpatrick. And then transformative would be more like level four for Kirkpatrick: how has this person fundamentally shifted their behaviors or their mindsets as a result of the program? I think with any measurement tool, it’s hard to say “I behave this way” or “I accomplished this thing” because of one particular learning program, because we take inputs from our environment in so many different ways. But we learn a lot once we start to collect more information from people, whether it’s hard data in terms of retention, promotions, things like that, or more experiential data, such as people using a different language in the organization, or people reflecting on their experiences and saying, “I am doing this because I went through this program, and this was my aha moment during that leadership or learning experience.”
This model is really most effective when you start it up front, similar to starting with level four of the Kirkpatrick model. We look at the vision and mission of the organization and its strategies, and then at how the learning program needs to fit into that. So this isn’t just “we need a program on communications because people are struggling”; this is “how does communication impact our ability to achieve our mission and vision, or a specific strategy?” By looking at this and asking what we expect or want people to be able to do in support of that vision or mission, we build a framework by which all of the design of the program is evaluated. It also helps us figure out how we’re going to measure these things: what are the nice-to-haves, the things we would measure if we had the time and resources, and what are the things we absolutely need to measure to know that we’re moving the needle in preparing people to achieve that mission and vision for the organization? To kick that off and make it effective, you need to have a fairly in-depth consultative conversation with your stakeholders, and to be able to say, and this is a conversation that is all too familiar to a lot of folks in learning: I’m not an order taker; I’m not just going to go and do training because you’ve asked me to. We need to stop and talk about the why, and scope the why, because without that you can’t establish your targets. When we use the Program Results Map, we use it both globally, looking at the organization overall and all of its learning programs, and as a framework for individual learning programs. Doing it globally is probably 40 to 50 hours of collaborative time, going through a series of prescribed questions and allowing ourselves to go down some rabbit holes to explore other relevant things that come up as we respond to those questions. And then we look at how we start hanging the outcomes and measures for each program onto that overall umbrella.
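Structurally, the Program Results Map Susan describes is essentially a three-by-three grid: three perspectives crossed with three stages of results. A minimal sketch follows; the example entry in each cell is an illustrative guess, not content from the episode or from EvaluLEAD.

```python
# Program Results Map as a 3x3 grid: perspective x stage.
# Cell contents are hypothetical examples of outcome types.
PERSPECTIVES = ["individual", "organizational", "community/mission"]
STAGES = ["episodic", "developmental", "transformative"]

example_outcomes = {
    ("individual", "episodic"): "reaction / knowledge gained",
    ("individual", "developmental"): "new behaviors practiced on the job",
    ("individual", "transformative"): "fundamental shift in mindset",
    ("organizational", "episodic"): "program participation",
    ("organizational", "developmental"): "changed team practices",
    ("organizational", "transformative"): "culture and strategy shifts",
    ("community/mission", "episodic"): "awareness of mission goals",
    ("community/mission", "developmental"): "growing external engagement",
    ("community/mission", "transformative"): "measurable mission-level change",
}

for p in PERSPECTIVES:
    for s in STAGES:
        print(f"{p:>17} x {s:<14}: {example_outcomes[(p, s)]}")
```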
Scott Rutherford
We were talking about transformative effects. That sounds like something that would require many months or years to report on, perhaps, to make meaningful change in a culture or in people’s behaviors. What’s the typical timeframe to see this in motion at the various levels?
Susan Franzen
There are things you collect right away, which would be the level one evaluation: what were people’s initial reactions? And then there’s longitudinal data that you collect as well. We sometimes use artifacts. One of our favorites is to ask people: what does leadership mean? What is leadership in your organization? And then, over the course of several years, as people continue to go through that same leadership experience, how does the definition of leadership change for the organization? That gives us really strong insight into whether we’re moving the needle. Are we actually making progress on how we thought we needed to define leadership up front? Or is this program producing a completely different pathway that we had not considered, but that is supporting the strategy and the vision and mission of the organization even more effectively than what we had originally envisioned?
So there are things you can measure right off the bat, there are things you measure six months, a year, two years out, and then there’s the longer-term impact. The tool we use, the Program Results Map, helps us create dashboards that can be updated for executive leadership and other interested stakeholders, showing how these changes progress over time.
Scott Rutherford
So can you give me some examples of what measurements would show up on that dashboard?
Susan Franzen
Sure. On the individual level, it might be retention; it might be promotion. If we look organizationally, it might be how that department or group is operating differently in relation to the entire organization. And if we look at the more community-based environment, it might be how many of the people who went through this program volunteer or serve in leadership roles within their industry or in nonprofit organizations. But it depends on whether those things are important to an organization. Each time you go through this, there are some standards: recruitment, retention, promotions, those kinds of things are pretty standard. Beyond that, it depends on what matters to the organization. If it’s a very values-driven organization, it might be how frequently people are out in the community in leadership roles. If it’s a profit-driven organization, it’s how this is contributing to our ability to increase revenues, subscriptions or memberships. So it’s figuring out what’s important to the organization and then making sure that you’re tying your measurements back to that.
Scott Rutherford
I think that’s interesting, because it’s a blending of the qualitative and the quantitative in that report card. What I’m hearing is that you’re not rushing to the dollar sign or to ROI, and not just because it’s harder to tie something to ROI.
Susan Franzen
I think, you know, if you put salespeople through a sales training program, you might see an immediate spike in revenues and sales after that, because people are applying what they learned, or you might see revenues go down shortly after that, because people are practicing new skills and it’s taking them a little bit of time. So you really want to give a timeframe: in the next six to eight months, here’s where we would expect sales to be as a result of this learning program. But again, it takes time, and it takes the resources and the commitment to measure that and to figure out what goes into that dashboard.
We actually stepped in to work with an organization that had a leader development program they had been using for about nine years, and they were looking for a new partner. We came in with some ideas and strongly suggested that we build out the Program Results Map. So we took them through this consultative process. It was done virtually, this was actually pre-pandemic, but done virtually, where we asked a series of questions to get them thinking about what the goals of the organization were, how learning needed to support those goals, and then what needed to happen within learning programs in order to support those goals. From that we started to put together what the dashboard would look like. We used some quantitative information from the program, such as pre- and post-assessments and the evaluation forms. We used some quantitative organizational data, such as how many change projects were championed by those individuals over the course of a year, and how many alumni from the program came back to be engaged in the program the following year.
Then we looked at other pieces that might fit into it, such as leadership roles inside the organization, leadership roles outside the organization, title changes, things like that. It took us about 40 hours working with the client to develop the detail behind the map and to determine what should go into that dashboard. Then, as we started looking at the sessions and what kinds of experiences and activities went into them, we looped that back to the Program Results Map to ask: does this actually meet what we said we were going to do? How does it align? What kinds of data or information is it going to produce? Language was an important piece. If we’re successful, there should be a common leadership language developing over time. Whether that’s a model the organization sets up, like Adaptive Leadership or the Four Frames by Bolman and Deal, you should start to see some more subjective shifts, with people actually using that language as they go about their day. So with this particular organization, what we determined was that we needed to develop several artifacts to track varying perspectives on leadership over the course of the year, and also to use them as a communication tool to inform executive and senior leadership about how emerging and mid-level leaders were actually experiencing the organization. That was used to help inform culture and culture change: how do you intentionally create a culture that drives where you want the organization to go, and use learning as a vehicle to get there?
Once you’ve got your Program Results Map put together, then as you’re designing the program and thinking about what kinds of experiences or activities need to go into a particular leadership or learning topic, you can look at each of those and ask: if we did this particular experience, does it have a measurable outcome? If it does, how might we measure it, and how frequently? That helps you refine the measurement piece, using your map as the framework for designing, developing and delivering the content.
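Pulling Susan’s examples together, here is a minimal sketch of what rows on such a dashboard might look like. The metric names and update cadences are hypothetical composites of the ones she mentions, not a prescribed layout.

```python
# Illustrative dashboard entries derived from a Program Results Map.
# Each row pairs a metric with its perspective and a review cadence.
dashboard = [
    {"perspective": "individual",     "metric": "retention of program alumni",                  "cadence": "annual"},
    {"perspective": "individual",     "metric": "promotions / title changes",                   "cadence": "annual"},
    {"perspective": "organizational", "metric": "change projects championed by alumni",         "cadence": "annual"},
    {"perspective": "organizational", "metric": "alumni returning to support the program",      "cadence": "per cohort"},
    {"perspective": "community",      "metric": "alumni in industry or nonprofit leadership",   "cadence": "annual"},
]

for row in dashboard:
    print(f"[{row['perspective']:<14}] {row['metric']} (updated {row['cadence']})")
```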
Scott Rutherford
And having the map documented helps with the stakeholder alignment we’ve been talking about: the map is something you can show and circulate and say, this is what we’re chasing, this is what we’re after.
Susan Franzen
Exactly, yeah. And it helps you set a standard for your programs as well.
Scott Rutherford
My thanks to Susan Franzen and to Kevin M. Yates. If you’d like to learn more about Kevin’s L&D Detective Kit, or the Program Results Map framework Susan discussed, there will be links to both on the podcast episode page, https://axiomlearningsolutions.com/podcast.
This podcast is a production of AXIOM Learning Solutions. AXIOM is a learning services company providing learning professionals with the people and resources needed to accomplish virtually any learning project. AXIOM provides on-demand staff augmentation from a network of thousands of vetted professionals with expertise in instructional design, learning strategy, training delivery, learning technology, administration, and much more. And AXIOM can provide complete project outsourcing for learning projects, including custom content creation. If you’d like to learn more about AXIOM, or to discuss your learning project and how we can help, contact us at https://axiomlearningsolutions.com. And thank you for listening to the AXIOM Insights Podcast.