The learning metrics that matter: A data masterclass with Figma’s Eric Grant
Data. It’s loved by some, feared by many, but there’s no arguing that it’s an essential tool in a leader’s toolkit. But how do we turn numbers into knowledge?
Eric Grant [00:00:00]:
When I was at Uber, I just saw that the people around me who were most successful at advocating for resources, getting tools, standing up their programs, and frankly, getting promoted, success as it was sort of defined, were those that were using data really well to tell their story. I really believe this, and it's probably the reason we're talking about it today: it's one of, if not the hardest, problems to solve in our arena. And there's something really fundamental, a belief I have, that working on the hardest problem in your professional area is probably going to pay dividends in some way.
Hannah Beaver [00:00:42]:
You're listening to How to Make a Leader, a leadership development podcast from Big Think+, where we take the best ideas from the biggest minds in learning and development and distill them into actionable insights. I'm your host, Hannah Beaver. Today's conversation is a topic that is scary for some but exciting for others. And that topic is data, specifically how to best use data to measure the effectiveness and impact of learning programs. I have to admit that I fall into the camp of people who think data can be pretty intimidating. But luckily, today's guest is a data enthusiast with 13 years of experience in learning and development and a master's degree in data science and analytics from Georgia Tech. Eric Grant is currently on the learning team at Figma, the team behind the rapidly growing collaborative design tool loved and used by many. He has a very impressive background working on various L&D teams at The Trade Desk, Coinbase, LinkedIn, and Uber.
In today's conversation, Eric Grant will dig into how data can unlock new ways to track learning, successfully enhance training outcomes, and support organizational growth. Stay tuned until the end of the episode, where he'll share his favorite data tools and resources and the AI that's shaping the future of learning. Eric, great to have you on the podcast today. I'd love to start out by learning how you began your journey into learning and development, learning analytics, and data. Could you walk me through the journey that took you to where you are today?
Eric Grant [00:02:21]:
Yeah, totally. And thanks, Hannah, for having me. I got into learning and L&D as a fluke, at least in my mind. It actually started when I got a college job. For those familiar with Epic Systems, it's one of those companies that everybody deals with every day but nobody knows what it is. It's at doctor's offices. It's based in Madison, Wisconsin, near where I went to school. I got a job there and they stuck me on the training team.
And training there is this huge business-imperative thing. Everybody who buys the software used to have to come out and go through this training. And so I got to see this really dynamic, really powerful team in action and kind of learn what training is, that it's a job and that it has strategic elements and all these wonderful things. So that was in my mind, and I didn't necessarily think it was somewhere I'd go, but it sort of planted a seed. And then I finished school, I went to South Korea for a year, I taught English, I had this wonderful adventure. I came back and got a job, again randomly, kind of a fluke again, at a startup as employee number eight.
And as we grew, they asked, what do you want to do with your career? What job do you want to do? And the only jobs I had really seen, outside of working at the mall, were training and development at Epic, which was kind of inspiring, and teaching in South Korea. So I said, maybe I could do that here and teach our new hires, since we were expanding really fast, how to do this job. And I just kind of fell into it that way, which I feel like is sort of commonplace for learning folks.
Hannah Beaver [00:03:55]:
Falling into L&D by accident is, surprisingly, a pretty common response that I've gotten from talking to folks in person and on a couple of these podcast conversations too. So not unique there, but always very interesting.
Eric Grant [00:04:07]:
Yeah, it really is. I don't know if people know this is sort of a career path super early on, so a lot of people come in with different skill sets, which makes it really unique and dynamic to be part of this function. Yeah, you fall into it. The analytics part, where I came into data, came out of this: I was doing L&D at the startup, and then I moved over to Uber in its early days as it was growing. And when I was at Uber, I just saw that the people around me who were most successful at advocating for resources, getting tools, standing up their programs, and frankly, getting promoted, success as it was sort of defined, were those that were using data really well to tell their story.

And the times I was not getting what I was advocating for, I felt, often came back to not having data to talk about what we were measuring, or how we were measuring it, or what success would look like, or what kind of benefit we'd get from an investment, whether it be time, money, resources, et cetera. So I thought, this is something that's holding me back. What can I do about it? I was maybe just going to take like a boot camp, and then COVID happened, and I found this online program, and three and a half years later I got my master's in data science, which probably was overkill for what I needed to get out of data, but has really, really served me well in the learning world. I wrote my application essay for Georgia Tech, for my program, about bringing analytics into learning, and I really believe, and it's probably the reason we're talking about it today, that this is one of, if not the hardest, problems to solve in our arena. And there's something really fundamental, a belief I have, that working on the hardest problem in your professional area is probably going to pay dividends in some way.
Like, everybody's working on it and struggling. And so companies, places, orgs, people that want to work on that are looking for help, looking to bring you in, or looking to have you be part of that conversation. And I've found that I've really, really enjoyed working on this problem with people. So, yeah, that's how learning and data and my intersection sort of happened by chance, and here we are.
Hannah Beaver [00:06:25]:
I love that, and I'm very excited to talk data, data, data today, because as you mentioned, it's kind of a universal problem to solve, an issue to tackle, something that everyone wants to know more about. So it's going to be a fascinating episode. You're at Figma today, so could you tell me more about your current role and what you do? And I'd love to hear a little bit more about the culture of learning at Figma, if there's anything unique there that you've maybe not seen at other organizations, or something that you're excited about with the learning programs today.
Eric Grant [00:06:53]:
Yeah, totally. So I am at Figma. I've been here almost a year, and I run the learning program for our support function. I really focus on functional training: what do our support specialists and agents need to know to answer customer questions? This is a lot about our tool, our billing, our admin, our account management, all these things that people ask about. We need to be knowledgeable and confident and empowered enough to be able to answer those questions. I've done learning for this support, customer service environment before. It's what I did at Uber, I did it at Coinbase, and now I'm doing it at Figma.
And one thing that's similar about all of those, and interesting about this customer service space, is that unlike a lot of learning folks, I have a ton of data to work with, direct performance data. Most of my audience has most of their work tracked every day. So I can see how fast we answer tickets, customer satisfaction on tickets, reopen rates, replies, these things. And if we run a focused training on one of those areas, we can see: what was our performance like before that training? What was it like after? And so we have something closer to direct attribution, causation, or I would say influence, of learning on the performance of people, and a way to track it, which is really interesting. I've been in learning roles where that certainly isn't the case, and I'm happy to talk about those too. But in my world, we have a lot of data, and we are directly looking at the influence of learning programs on performance, which is tracked.
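A minimal sketch of that before-and-after cut, assuming a hypothetical pandas ticket export (the file name, column names, and training date are all invented for illustration, not Figma's real schema):

```python
import pandas as pd

# Hypothetical ticket-level export; every name here is made up.
tickets = pd.read_csv("tickets.csv", parse_dates=["solved_at"])
training_date = pd.Timestamp("2025-03-01")

# Label each ticket as handled before or after the focused training.
tickets["period"] = (tickets["solved_at"] >= training_date).map(
    {False: "before", True: "after"}
)

# Compare the tracked performance metrics across the two windows.
summary = tickets.groupby("period").agg(
    avg_csat=("csat_score", "mean"),
    reopen_rate=("was_reopened", "mean"),
    avg_handle_min=("handle_time_min", "mean"),
)
print(summary)
```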
One thing that's unique about Figma is that I feel like our customers often know more than our specialists do. And that's because our customers are in Figma sometimes all day, every day, certainly a lot. And my audience, our specialists, are in support tools answering customers each day. They're not really in Figma. That wasn't the case at Uber. You can use Uber a lot, but you don't necessarily know how it works, so we could have our agents know more about it than customers. This is a little bit different.
And so from a learning standpoint, that's a huge kind of headline challenge for me to solve: how do I give people the right amount of knowledge and confidence and expertise and experience to actually answer these really difficult, nuanced questions, sometimes from power users? And that's fun. That's a really fun challenge to think about. And Figma is such a specific tool that does these specific things, it really requires specific training and approaches to those trainings.
Hannah Beaver [00:09:34]:
So we’re talking about data, obviously, other topics too. But our main focus today will be around data. So a simple question. Why is data so important when tracking the success of learning programs?
Eric Grant [00:09:46]:
It's a great question. I think I have two answers, or two perspectives, that I take and talk about with this. The first is that for most folks, I think we've agreed this kind of holy grail concept is to give the right learning to the right person at the right time. We want to do that, we're trying to do that. That's sort of the goal. But even in that sentence there are three rights, so three variables that you have to get right: learning, person, time. How do you know? How do you figure out what is right for all three of those? Each of them has its own context and nuances and changes and complexities. What is the right learning program? Who is the right audience? When is the right time to give it to that audience, in their career, in their day? Is it after lunch? There are lots of things.
And so the best way, really the best method and tool we have, to hone in on answering those questions, getting to the core of those variables, is data, right? So why is data important? Because it helps us target from the beginning the thing that is going to be most effective in getting to the outcome. It's a way to think through and try to answer those variables in that equation: right learning, right person, right time. So I think about that a lot, and that helps me with the why. The second one, also trying to be super simplistic with this, is a very basic equation, which has many sub-equations to it. The thought exercise I do with myself and my team is: for any person, at any time, for the next 30 minutes of their day, would it be better for that person to take a training, any kind of learning, or do their work? That's the equation. Is it more valuable to spend the next 30 minutes learning or doing your work? And these are the equations that we're doing in a corporate environment each day. If we want someone to learn, we have to tip the equation so that the value of that time spent learning is better than the time spent working. If your work is super urgent, if it has to get done in the next 30 minutes or by the end of the day, the value of time spent working is going to win.
You should do that. You should always do that. If the learning is super powerful, going to be super impactful, going to change your skill set in a dynamic way, it's really effective and valuable to do that learning, to do that training. And you can think of really extreme examples of this: if you have a mentorship opportunity with the person you think is the smartest, best person in your space, that's really valuable time spent learning. So, super basic equation. But again, how do you actually answer the value of those things, with all the sub-variables and equations that are part of it? Again, it's data. Getting the data that tells us what the value of learning is going to be is very hard to do.
But that's what we're trying to do. And then the value of work for the next 30 minutes, hour, days, weeks, et cetera. The way we solve and get through the complexity of these is using data to try to uncover the actual measurements behind them.
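Written down, the thought exercise is just an expected-value comparison over the same block of time; a sketch, not a formula from the episode, with V_learn and V_work as informal stand-ins:

```latex
% Eric's learn-vs-work thought exercise as an expected-value comparison.
% V_learn and V_work are informal stand-ins, not terms he defines.
\text{spend the next } t \text{ minutes learning} \iff
\mathbb{E}\left[V_{\mathrm{learn}}(t)\right] > \mathbb{E}\left[V_{\mathrm{work}}(t)\right],
\qquad t = 30~\text{min}
```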
Hannah Beaver [00:12:50]:
What do L&D teams get wrong in measuring learning programs?
Eric Grant [00:12:56]:
Few things we get wrong or we’ve over indexed on or under indexed on as we’ve gone. One of them is measuring the wrong thing and looking at like what are we actually trying to measure that is going to make sense. So with sales counterparts, I talk about this sometimes support too. But Sales I think has this really great example of like everything that sales enablement, sales training teams want to do is so that sales people can sell more as revenue. Like the outcome, the great outcome you get with sales is revenue. Right. But not every training is focused on increasing revenue. And so to try to tie that training into an outcome that it’s not trying to produce is probably doing some kind of false measurement.
So if you want a new sales methodology and you do a training, this is the sales methodology we want to use. And you want to say, okay, the people that took this, are they driving more revenue? The training is not about driving more revenue necessarily. The training is about the methodology and how well you understand the methodology. It’s the methodology’s job to maybe go increase revenue down the line. But if you’re just measuring your part as the training person, the thing that you want to get the measurement on is do you understand this methodology? Are you able to apply it? Are you confident applying it? Do you have any roadblocks to applying it? And can we give you more training to help with those? That’s it. Like your working on a knowledge, understanding, application training. Don’t sort of equate the wrong outcome, even if it’s the right business outcome. And I think we’ve learning in the last few years has really started getting better at focusing on business outcomes.
Eric Grant [00:14:34]:
That was a problem we had three, four years ago. We're getting better, but now we might have overdone it, to where we're always looking at these core, powerful business outcomes, but we're not always actually doing training in service of them. So trying to tie to those might be a false equivalency. The second thing, probably more applicable, is just not doing as much surveying, not relying on surveys as much. And I know learning folks maybe know this and maybe get frustrated with me saying it. I've certainly talked to folks about that. But surveys, of course, we know they're subjective, and we know they don't mean a lot when you say this course got a 4.8 out of 5 on how well people liked it, or NPS, or these things. It's just not a great measurement of the program itself and what it set out to do. It's a measurement of how somebody feels at a certain time about something. And if the people doing the training know you, they're going to rate you positively. There are all sorts of biases in there. So I try as much as I can to go assessments over surveys, trying to get people to answer questions right or wrong. I love asking confidence questions in those assessments.

So we get another layer of data, and we can use assessments to show knowledge and capability instead of just reflection.
Hannah Beaver [00:15:56]:
So you have a hot take when it comes to data collection. And that is do not use anonymous surveys. Talk to me about that.
Eric Grant [00:16:04]:
With anonymous surveys, what you lose with anonymity is not worth what you might gain in perceived honesty. And so we think that people will be more honest in anonymous surveys. And there’s some research that says that that’s actually not true. But with anonymity you lose the ability to filter, sort and categorize any results you get. And business leaders are generally more interested in the insight of why one group or role might answer or feel differently about something than another, which you can do if you gather names and identities and categories about those people. With anonymous surveys, you can’t do that.
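To make the point concrete, here's a minimal sketch of the segmentation you give up with anonymity, assuming a hypothetical identified survey export (all file and column names are invented):

```python
import pandas as pd

# Hypothetical identified survey export with made-up columns:
# name, role, region, tenure_bucket, rating
responses = pd.read_csv("survey_responses.csv")

# With identities attached, you can slice results by any category you can join in.
by_role = responses.groupby("role")["rating"].agg(["mean", "count"])
by_segment = (
    responses.groupby(["region", "tenure_bucket"])["rating"].mean().unstack()
)

print(by_role)     # e.g., do managers rate the program differently than ICs?
print(by_segment)  # an anonymous survey can't produce either of these cuts
```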
Hannah Beaver [00:16:44]:
What organizational or structural barriers do you see for L&D teams when they're trying to take data-driven approaches in their learning programs?
Eric Grant [00:16:54]:
I think one of the barriers, and this isn't a total error or totally wrong, is that we want to be a likable team. We carry a bias that learning is good. I've fully bought into that bias myself, so, self-admitted. And we see ourselves often as a benefit for the employees or organizations we're working with, whether we're internal or a consultancy. We want to be perceived as good and likable, and for what we're providing to be seen as a perk, a benefit, a net positive for people. And the reason that's a barrier, while not totally wrong, is that it gets in the way of objectivity, of trying to measure things, and of doing what I think is actually the best thing to do at its core, which is to make somebody better at their job. They will like you more in the long run, I think, not in every case but generally, if you're able to help somebody be better at their job, improve their skill set, get promoted, than if you just give fun trainings for the sake of being likable.
And so when we do CSAT measurements and NPS measurements, those are measurements of likability and enjoyability, when we want the objectivity of: did this help you? Did this help you with knowledge? Did it help you with retention? And this is a barrier. I think truthfully it came from, and this is where it gets a little spicy, the mid-2010s, 2015 or so, when we had this whole slew of research come out showing that companies with good learning programs had better employee engagement and better employee retention. And CEOs and leaders saw that and started investing in their learning programs. LinkedIn shows the investment in learning programs from 2015 to 2020, and with COVID until now, has gone up significantly, which is great for us. And I think we were like, yeah, we deserve it. But a lot of this is based off seeing learning as this perk to keep people around and keep them engaged.
That's all fine and good, but again, it's a barrier if we see ourselves that way, instead of doing what I think we should and want to be doing, which is making people better at their jobs. Not happier, not more willing to stick around, but actually gaining skills and getting better at their jobs. If we look at ourselves as this perk for engagement and retention, and we're just here so people stay longer and feel good and like being around us and take our programs, those are, A, not things that are easy or good to measure, and B, not really where we can be most effective, most strategic, and really help companies move needles on talent. So a barrier to me is this want and need to be likable, and having been positioned a little bit as a perk instead of a real change agent or lever for talent change.
Hannah Beaver [00:19:44]:
I love the way you phrased that. We haven't really talked about that directly, and it makes so much sense, the likability piece. Obviously there are other barriers, like time, like how can we win someone's time? And I think that's all connected as well. When measuring performance, what are the key indicators that we're looking for?
Eric Grant [00:20:10]:
I think there are a few key indicators that are consistent across performance, depending on, or really not depending on, the different roles that you're training. Change is one, right? If you're doing any kind of uplevel or upskill, you want to see change. And that's hard. How do you think about identifying and measuring something that has changed from before? Are they doing anything differently that we can track, see, measure, account for, et cetera? In much the same way, application is one too. We don't really want to train just so something sticks in the back of somebody's mind. We want to see the thing that's in their mind come out into action. So the change is that application of something.
Sometimes that happens immediately, sometimes over a long period of time. And again, it's a huge challenge to think about how to measure these things. One I measure a lot is confidence, which I think is probably a very underrated part of performance. Especially if you find it very difficult to measure change, application, and behavior with the metrics or methods you have, you can ask for confidence. You can always ask about confidence. And that's one that, while subjective like the survey stuff, actually is something somebody can define for themselves. With a Likert scale of agree, very much agree, strongly agree, sometimes I don't even know how I would define those, but I can define whether I feel confident about something or not.
And confidence is huge. It's a huge part of how we show up at work. This gets into the psychology of imposter syndrome, the psychology of leadership, of being a good manager. A lot of it comes down to confidence, as some studies have found. So if you're looking at: I gave you something, I think you should try this out, how confident are you that you can do this? That's the application question. How confident are you in actually being able to make this change? That's a personal question, but also a culture question for your company.
Like, I see what you're saying, but that's not going to fly at this company, we don't do stuff like that. So I love asking about confidence. I love asking about confidence in assessment questions, I love asking about it in surveys. I just like it as a key indicator. It could be a very powerful signal for performance, or it could not, but it's a good signal to have. I'll say this, and maybe this is getting a little ahead of ourselves here.
This question is going to get answered in the next two to five years in ways that are going to change a lot, because I think we're trending toward a place where everybody's performance is going to be tracked in a way similar to my audience's, where lots of different things get tracked all through your day. And it might not be performance per se, but it's rolling these pieces of data up into performance that could show things. So, for instance, Slack is already capable of tracking how long your messages are, how long it takes you to write a message, how long it takes you to respond to a message, all these things that would not be hard for them to put out and charge companies a lot of money for. And some companies would be like, yes, we'd love to have that. Okay, so now you have interesting quantitative data around communication. Same thing with emails, same thing with calls. Microsoft Teams and Viva are already able to track all of these things about how you communicate. So it's not performance, but a company could say, hey, we're a Slack company.
I want all my employees in Slack more than email. Let me make sure I can measure that and see that it's happening. I want us to be very responsive. I want us to write short Slack messages. I want our Slack messages to look and sound like this. And AI can tell you Eric Grant is 37% on board with the company's way of using Slack, but Hannah's at 77%. And now we've turned that quantitative data into not just performance, but subjective performance based on how a company wants it to be. And this is crazy new territory, because with the question you asked me about what to look at for performance, I'm saying there are a couple of things I'm really interested in, but they're hard to track.
A couple of years from now, if we go down this route, they will not be hard to track. And companies can kind of tweak what performance looks like within these skills, with a bunch of quantitative data to measure it. And now learning people have a deluge of data and have to decide how we want to go create that performance: change how someone uses Slack at this company versus their last one, change how they email, change how they do calls, change how often they're in meetings, all these things. So again, taking this question and running a few years ahead of where we might be going, I don't know if we'll go there, but I expect some places for sure will. Learning then becomes much more responsive to performance indicators like these and the data we're getting from data teams, and we start really thinking about performance in that sense.
Hannah Beaver [00:25:12]:
How do you think about measurement in areas that are harder to quantify, like soft skills or company culture?
Eric Grant [00:25:18]:
It's hard. This is very hard. I don't want anyone to think that it's easy or that I think it's easy. Even though I talk about measurement in L&D a lot, some of these things are really difficult. So, soft skills, which I talked about a little: I think today they're very hard to measure, and tomorrow, meaning two to five years from now, they'll be less hard to measure, but tracked in a way that we need to be cautious about. A skill like communication today remains pretty esoteric, but we have a sense of what makes a good communicator or not. And I think AI is going to get good enough to start telling us whether people are actually hitting what we've preordained to be good communication, and then different companies will change what they consider to be good communication. Amazon has already defined its communication standards, versus another company that might use PowerPoints and things like that.
So we can kind of measure that in a way. Team culture and things like that? Really hard. We do engagement surveys, we see these things come out, we see what people want to grow in, and it's still hard. Every learning person I know gets these things: our learning survey says people want to invest in skills, they want to be better at this, they want to know about AI, they want to know about data, they want to know about this. And then these learning folks, myself included, but lots of people I talk to, say, great, we have Coursera, we have LinkedIn Learning, we built a program, take AI. And then we get 10% sign-up rates, no activation, no utilization.
And we're perplexed: you said you wanted this, and we're giving it to you. Part of that goes back to the value of time spent learning versus the value of time spent working. You have to show somebody that the 30, 60, 90 minutes they're in a session is actually going to be more valuable and important than doing the work. And we are pretty trained to think that the most important thing we're going to do for the next hour is our work, especially between 9 a.m. and 5 p.m. This is what we do. If we're on Amazon.com, that is seen as not the valuable thing to do, even though we might be buying something our pets need, et cetera, et cetera. So it's a really hard question, and one I think data will really help with as we get better at it and get more objective data. But I don't have silver bullets for folks. I think there's a lot we can do around it.
There's a lot learning can do as a strategy partner, showing people how to participate in learning programs to create these changes. But also, the last thing I'll say on that is, I don't think people are expecting us to be miracle workers with this. Be clear with your business if they want to do a soft skill training, or a training that's aimed at something around culture. We have a culture at Figma around play. Say I'm really interested in accelerating that culture, so for 20 minutes of this training we're going to do something really playful, still learning, but really playful.
I just say, we're not quantifying that. You can see it's play, they can see it's play. I'm not going to bang my head against the wall trying to figure out how I get data to show that the play happened or was successful or met its mission. I think leadership kind of gets that part. Where we can measure, we want to be able to do that, and then not just show leadership but show our audience, so that they see the value of time spent learning could be higher than the value of time spent working. And then we create these good feedback cycles where people actually take this stuff.
Hannah Beaver [00:28:57]:
Yeah, I think that's interesting too, the fact that not everything is meant to be quantitatively tracked. For example, in terms of productivity, studies show, and I feel like this is more commonplace in today's work culture, that if you've put in a few hours in the morning, it's going to be more productive to go on that 30-minute walk and listen to music or nature, and then come back to work.
Eric Grant [00:29:19]:
We have this in our lives too. We don't want everything tracked or seen as beneficial; we also want time to do leisurely things. I want my Apple Watch to tell me how well I slept or calories burned on a workout, but I don't need it to tell me if I'm being a good friend or something. It would be interesting, I guess, to see that a few times, but after a while you're like, I just want to spend time with my friend. That's it. I'm not trying to make this a trackable optimization event. I'm sure some people would, but I think most wouldn't.
And we're in that same boat. Don't overthink everything that we do. But if you can track where tracking is possible, I think you buy yourself the luxury of saying: we're going to do this, I'm not tracking it, here's why we're doing it. Call it a day. Put it out, ship it.
Hannah Beaver [00:30:08]:
Can you give an example of a time that qualitative data changed the way that you were defining or interpreting quantitative measures of a learning program?
Eric Grant [00:30:18]:
I talked about confidence before, which I consider to be a qualitative piece of data. And so at Figma, and previously, as much as I can, though not every tool makes this very easy, I like to ask an assessment question and then a sub-question on it: I'm confident, or I'm uncertain. This gives two layers of data on the question. You can form a matrix then: where we're correct and confident, that's good. Incorrect and unconfident, that's also good, we don't know that and we're not confident.
But there are these two danger areas, especially where we're confidently incorrect, right? If we have a lot of that, we really need to focus on whatever that piece is, because our folks think they know something that they don't know. Very dangerous. We all have this; there are a million things I think I know that I don't know. That certainly changed how I think about assessments, and where we repeat things and where we see things. So I like looking at this. And one thing I do with assessments that I talk about a lot: for almost every training program we put out, I like to have assessments, say five questions for a basic training. Two or three of those questions are related to the training we just did. The other two or three are related to much earlier trainings, three months ago, six months ago, 12 months ago. And I'm just looking to see how well we retain things, and again, the same thing with confidence in there: how well did you retain confidence in your ability to answer this, know this, retain this, et cetera? You start to get really dynamic data with this that forms a more robust picture of people's feelings and knowledge than just whether they got the question right or wrong. So confidence data has massively changed, over the last five, ten years of my career, how I think about these things.
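A minimal sketch of that correct/confident matrix, assuming a hypothetical per-question export where `correct` and `confident` are boolean columns (all file and column names are invented):

```python
import pandas as pd

# Hypothetical assessment export: one row per (learner, question),
# with correctness plus the "I'm confident / I'm uncertain" sub-question.
# Assumes 'correct' and 'confident' load as booleans.
df = pd.read_csv("assessment_results.csv")

# 2x2 matrix: correct/incorrect vs. confident/unconfident,
# as a share of all answers.
matrix = pd.crosstab(df["correct"], df["confident"], normalize="all")
print(matrix)

# The danger zone: confidently incorrect answers, grouped by question.
# These are the topics people think they know but don't.
danger = (
    df[(~df["correct"]) & (df["confident"])]
    .groupby("question")
    .size()
    .sort_values(ascending=False)
)
print(danger.head())
```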
The other thing I'll say about this: learning people, I think, are really good at this and don't need me to say it, but for those that struggle with it, probe your stakeholders about what they mean. I got assigned a training once where they said, we want to train managers on feedback, because managers are too nice. And you're like, okay, so is the outcome you want for managers to be mean? I don't think that's what you want. I'm making an assumption here, but let me check that. I think I understand what you're saying, but what do you actually want the outcome to be? And where we kind of got to was: we want them to be nice, but capable of delivering hard feedback, constructive feedback, effective feedback, for people that need that growth. So it's not the "nice" thing.

That's a headline, easy-summation kind of way to put it, but until you probe it, you haven't uncovered what they actually want. So there's almost a qualitative kind of intake: what are you feeling the problem is? We get that a lot, and then we need to say, quantitatively or measurably, what are the things you'd want to see somebody do after this training, that we can check happened, or that somebody else or their manager can see? So there's also an intake part of this, not necessarily qualitative data, but a qualitative kind of feed that you want to turn into quantitative measures of something.

I think we're good at that. I think we understand the "nice" thing versus actual outcomes. But it's always a good exercise for me, when somebody tells me how they're feeling, to ask: what do you actually want us to try to do about it, or see change in people?
Hannah Beaver [00:33:45]:
How does partnership factor into success in measuring learning programs?
Eric Grant [00:33:51]:
It matters a ton, both on the stakeholder front and on the help-with-measurement front. On the stakeholder front: learning is a service team. We are here to serve our leadership, our stakeholders, our requesters, so you want to partner with them. And again, that idea of turning their qualitative ask into a quantitative or measurable outcome, certainly partner with them on that. They will probably have an idea of the good behaviors they want to see out of a trained group that you're working with. So really do that. On the data part, that's where a lot of learning folks get tripped up.
I don't have the data, I don't have the data, how am I supposed to measure anything? I don't have the data. So ask for data. Some people, I ask, have you asked? And it's kind of, no, I don't know how to ask. And I get that data science people can be somewhat intimidating. And it's not just asking for one data set, because it's rarely that clean. It's, I want pieces from this data set and this data set and this data set. And asking for that can be a little intimidating.
But ask; people probably want to help you. And if you give them a why: I'm interested in looking at this beforehand and afterward, I'm interested in how things change, or in insights between groups, people might want to help you. So first is ask, and really try to get over that. Anyone listening to this can message me on LinkedIn, and I'll help you think about how to ask a data science person, how to communicate with a data science person. Totally get that it's intimidating. The other thing is to do your best with the limitations you have. I've asked HR many times for different pieces of data. I want that categorical data, right? Categories of people: are they a manager, what region are they in? Tenure is not categorical, it's how long they've been here, but can I bucket it: they're a new hire, they've been here one to four years, five to ten years, et cetera. I've asked HR, can I get performance review data? Can I get last promotion dates? And I've gotten no's on those, for sure. Then I've asked, if I anonymize this for you, will you give me an anonymized data set back? And they've said yes to that, not on every topic, but we've basically done: I'll give you the list of people, you assign codes, you give me back promotion dates with the codes, and then we never have to know who anybody is. Because all I'm looking for is not when this person was promoted; I'm looking, across my data set in aggregate, for any kind of correlation, trend, or pattern between those who did something and got promoted, or didn't get promoted, or got this performance review. So you still get what you want. Try to think a little creatively and work with people.
I think in that case HR is not saying no because they don't want to help you or they think you're silly for asking. They're working with their limitations; they're not supposed to share this information, or they've been told not to. So can we come up with a solution for it? Again, I think people are open to trying to help you if you approach it that way. So yeah, one tip.
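The code-swap workflow Eric describes might look something like this sketch (the files, columns, and the one-year promotion window are all invented for illustration):

```python
import uuid

import pandas as pd

# Hypothetical learner roster with training completion flags.
trained = pd.read_csv("trained_learners.csv")  # columns: email, completed_training

# 1) Generate an opaque code per person; HR receives only (email -> code).
trained["code"] = [uuid.uuid4().hex for _ in range(len(trained))]
trained[["email", "code"]].to_csv("codes_for_hr.csv", index=False)

# 2) HR returns promotion data keyed by code, never by name or email.
promotions = pd.read_csv("promotions_by_code.csv")  # columns: code, promoted_within_year

# 3) Join on the code and look only at the aggregate pattern:
#    do people who completed the training get promoted at a different rate?
merged = trained.merge(promotions, on="code")
print(merged.groupby("completed_training")["promoted_within_year"].mean())
```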
Hannah Beaver [00:36:39]:
Anecdotally, I know most of our partners have seen the most success and had the most buy-in when they've seen partners in other departments truly invested in the learning process. Our first podcast episode was with Jamie Blakey, who's in L&D at S&P Global, and she mentioned that this kind of cross-departmental collaboration was really instrumental in how they could build such a strong culture of learning across the entire organization. So I think what you say really rings true. We love to celebrate wins around here, so I'd love to give you a little platform to boast. Can you share a recent win or a successful initiative that you've worked on, and maybe why it was so successful?
Eric Grant [00:37:29]:
Yeah, totally. So one of my biggest wins was a couple of years ago now. Two years ago I was at Coinbase in support. We looked a lot at first contact resolution, FCR we called it: what percent of cases were we able to solve on the first reply, versus the customer having to ask more questions. And Coinbase was doing a few million tickets or interactions a month, I think at that time, or maybe hundreds of thousands, maybe a couple million over a quarter. And so if 25% of those need extra replies, that's a lot of extra replies that start clogging up your network, and you can't get to new customers who are asking questions very quickly.
So we were really big on it. FCR has all these interesting complications, and we were trying to move the needle on it, doing everything we could. We said, okay, let's run a training on first contact resolution. With my team, we asked: who's good at this? What are they doing? Let's look at those tickets, let's interview those folks. What helps you get to that first contact resolution? And it's a lot about educating customers, providing resources, taking an extra minute to fully answer a question rather than trying to get it out fast, and thinking longer term: if I can take one extra minute on this ticket and solve it, that's better than having it reopen and take five extra minutes next time to answer the question I missed. So we put together a workshop, a training, and practice. Four hours.
A lot of psychology, not a lot of "you have to click this button and this button and this button." It's: what is going to drive FCR? And I'll say one really big part of that was that we had never really stressed with our audience, with agents, that FCR was so important to us. They knew generally that it's tracked and we want to do well on it, but we said, hey, this is the metric. If you can move this on your own, you'll look good, we'll look good, customers will feel good. It's a good metric for everybody; we all want it. Really try to get this. And that was the first time we had really done that; we later changed our onboarding to mention it.
Anyway, we ran this with 2,000 people. We improved first contact resolution by, I might get the numbers wrong, maybe 5 or 6%, which at scale is tens of thousands of replies that we didn't have to do. We measured it in two important ways. We did before and after: before they took the training, what was their first contact resolution, and what was it afterward? And then we did a control and treatment group, because we had to train 2,000 people and it took us about four months to do it. So on a rolling basis, every week, we looked at who had been trained and who hadn't.
Because if everybody's getting better, that 6% is not really 6% that we've influenced. Everybody was getting a little bit better, about 1 or 2%, so we took credit for, or influenced, about 4 raw percentage points there. The cool thing about this, and the reason it was such a big deal, is that we had data teams that helped us with a calculator: this is the cost per ticket, based off all the tickets we get and our total cost in support. And so when a ticket reopens, it costs us this extra amount of money. So if we saved, say, 18,000 replies from happening over eight weeks, and we could put a dollar amount on those, we could actually talk about how much money we saved the company. It took months of doing this.
This was not overnight, but I was able to put a document and story together. We spent about $40,000 on that training, counting everybody's time, development, resourcing, et cetera, and we saved Coinbase about $800,000 based off solving those tickets on first contact. So a 20x ROI: $40,000 cost, $800,000 savings. I wrote it up in a document; there's a redacted version on my LinkedIn if anyone's interested in how we did it and how I wrote it up. But that was a huge win that really showed what learning can do, and it was our biggest initiative on FCR that whole year, maybe even ever, to single-handedly do that. So a really nice initiative and big win, and shout-out to my team there that really made it happen.
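A back-of-envelope sketch of the two measurements he describes, a rolling control/treatment comparison plus the ROI arithmetic, using illustrative numbers in the same ballpark as the episode (the file, columns, and cost per reply are invented):

```python
import pandas as pd

# Hypothetical weekly rollup: 'group' is 'treated' or 'not_yet_trained'
# (the rolling control), 'period' is 'pre' or 'post' the training wave.
weekly = pd.read_csv("fcr_by_week.csv")
means = weekly.pivot_table(index="group", columns="period", values="fcr", aggfunc="mean")

# Difference-in-differences: the treated group's lift minus the lift
# everybody was getting anyway (e.g. 6% - 2% = ~4 attributable points).
lift = (means.loc["treated", "post"] - means.loc["treated", "pre"]) - (
    means.loc["not_yet_trained", "post"] - means.loc["not_yet_trained", "pre"]
)
print(f"Attributable FCR lift: {lift:.1%}")

# ROI arithmetic from the episode; cost_per_reply is an invented stand-in
# for the finance team's cost-per-ticket calculator.
replies_saved = 18_000
cost_per_reply = 44.50
training_cost = 40_000
savings = replies_saved * cost_per_reply       # ~= $800,000
print(f"ROI: {savings / training_cost:.0f}x")  # ~= 20x
```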
Hannah Beaver [00:41:34]:
What tools or resources do you suggest to L&D program leaders who are looking to use more data-driven approaches?
Eric Grant [00:41:43]:
The first thing I'm going to suggest is not necessarily a tool for you to go buy and use, but just to use AI tools yourself, or with your team, to practice data. A data visualization tool, a survey tool, all of these most of the time can only be as good as your ability to use them. So if you're feeling unconfident about data, a tool might not solve that for you. Maybe it will; probably it won't. But what will help is practice, and AI is amazing for that. You can go on any of these sites, Claude, OpenAI, and say: give me a fake set of learning data, a hundred learners. I want assessment scores, I want performance, I want anything that's going to be relevant to you, and I want categories. And they'll give it to you. Then what you should say, because it always does this, is: make it not obviously correlated, because they'll always correlate assessment to performance; they kind of think how we think and want that. And then go practice on that. Take that data and visualize something in Google Sheets, in Excel; start with those tools. Those are much more important than going to get Looker, Tableau, Mode, or Culture Amp or any of these survey tools. Those will help you visualize, but you need to understand what's being visualized within them and why it's being visualized that way. So that's first and foremost: program leaders, get some practice and reps in on fake data. It's never been easier to get it.
Eric Grant [00:43:11]:
You can get 20 different versions of it. I had my team this year graph my Strava running data, just to practice. Just to see: if I gave all four of us the same set of data, or different ones, and said graph anything to find an insight, would we graph the same thing? We didn't. We all graphed interesting things, and we got creative with it, and it was fun, but it was great practice just to have a data set we could work on. So I'm happy to talk about tools, but I don't think there's a silver bullet tool that really does it. AI can help you visualize and make sense of data, and you should try that if you have it available. But I think the best thing AI is able to do right now is give you an infinite playground for practicing things. And data fluency and comfort is going to come from practice, not from tooling, I think.
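If you'd rather generate the practice set locally than ask a chatbot, a sketch like this produces the same kind of deliberately noisy fake data (every column here is invented):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 100

# Fake learner data of the kind Eric suggests asking an AI for;
# roles, regions, and scales are all made up.
df = pd.DataFrame({
    "learner_id": range(1, n + 1),
    "role": rng.choice(["agent", "specialist", "lead"], n, p=[0.6, 0.3, 0.1]),
    "region": rng.choice(["AMER", "EMEA", "APAC"], n),
    "assessment_score": rng.normal(78, 10, n).clip(0, 100).round(1),
    "confidence": rng.integers(1, 6, n),  # 1-5 self-rating
})
# Performance is mostly noise with only a faint link to assessment scores,
# so the "insight" isn't handed to you on a plate.
df["performance"] = (0.1 * df["assessment_score"] + rng.normal(70, 12, n)).round(1)

df.to_csv("practice_learning_data.csv", index=False)
print(df.head())
```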
Hannah Beaver [00:44:03]:
Is there an AI tool, and we've covered this somewhat when we were talking about tracking data around communication, but is there an AI tool that's either recently released or set to be released, with a buzz around it, that you're excited about and think could really make a difference in the world of L&D?
Eric Grant [00:44:21]:
Yeah, yeah. The off-the-shelf big ones right now, ChatGPT, Claude, are really, really powerful, and can be for L&D. The L&D questions I ask them, they're really good at answering; they give you great answers. So I just use them as a playground to throw my mind at and see what happens. On the learning front specifically, there are a few tools I think are really great. Sana is doing AI and LMS.
It's been really fun to see them progress. And now Articulate and a lot of other places are getting toward this "buttonification," as I saw it called earlier, of instructional design, where you can dump a PDF in and it will make a course. I've seen many of these and they're really, really fascinating, so pay attention to that. Because I do functional knowledge training, I've been very into the adaptive, spaced-repetition learning platforms: Learns Well, Obrizum, Learn2Win. There are a bunch I've seen that are really cool, and I think that technology is going to get really, really, really good.
The proxy, if that doesn't make any sense to you, is Duolingo. Duolingo is adaptive. If I start it and you start it, Hannah, we start Portuguese, quickly we're learning different things, right? Maybe you're well advanced, or maybe we're both starters, but we're learning different things, and it's just asking you questions. If you get one wrong, you see it again. If you get it right, you don't see it for a little while, and then it comes back.
That's that spaced-repetition, adaptive piece. So I'm working right now on building that for how we learn Figma, the Figma-lingo Duolingo version of that, and there are some really cool AI platforms helping me with it. And the last one: I think Arist is doing Slack and text message learning, and they're using AI and building some really cool AI features in there too, for L&D teammates and course creation and some of this stuff. So yeah, there's amazing stuff happening right now in AI in learning that can help with data, personalization, customization, pathways, a lot of cool stuff.
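The scheduling core of that Duolingo-style behavior is tiny. Here's a Leitner-box sketch (the intervals and the sample Figma question are invented, and real adaptive platforms use far more sophisticated models):

```python
from datetime import date, timedelta

# Leitner-style spaced repetition: each card lives in a "box", and higher
# boxes are reviewed less often. The intervals here are illustrative.
INTERVALS = {1: 1, 2: 3, 3: 7, 4: 21, 5: 60}  # box -> days until next review

def review(card: dict, got_it_right: bool, today: date) -> dict:
    """Move the card between boxes and schedule its next appearance."""
    if got_it_right:
        card["box"] = min(card["box"] + 1, 5)  # seen less and less often
    else:
        card["box"] = 1                        # wrong: back to daily review
    card["due"] = today + timedelta(days=INTERVALS[card["box"]])
    return card

card = {"prompt": "What does auto layout do?", "box": 1, "due": date.today()}
card = review(card, got_it_right=True, today=date.today())
print(card)  # right answer: next due in 3 days; a miss brings it back tomorrow
```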
Hannah Beaver [00:46:19]:
So fantastic. Well, that wraps up our questions for today. Really appreciate your time and insights, Eric Grant. I have personally learned a lot, so I’m very confident that our listeners will have learned a lot too. So really appreciate it.
Eric Grant [00:46:32]:
Yeah, of course.
Hannah Beaver [00:46:34]:
For more from Eric Grant, we've included his LinkedIn in the show notes. For more from How to Make a Leader, subscribe to make sure you never miss an episode. We'll be back next month, and every month, with another L&D expert. Thanks for listening, and if you enjoyed today's episode, please consider leaving a review on your chosen podcast platform. We'll catch you next time as we learn How to Make a Leader.
In this episode of How to Make a Leader, Figma’s Eric Grant shares why data fluency isn’t all about the tools—it’s about hands-on practice. He explains how consistently working with data drives better learning insights and leadership decisions, and argues that understanding the data behind the tools matters most.
You’ll learn:
- Lessons learned in data analytics from his time at Coinbase, Uber, and Figma
- Why (and how) practicing with data is more valuable than relying on tools
- Ways to measure learning program impact (his answer might surprise you!)
- The transition from a workplace with not enough data to one with too much
- New AI tools to watch out for
Things to listen for:
(00:00) Introduction to Eric Grant
(06:53) Eric's role and the culture of learning at Figma
(09:34) The importance of data when tracking the success of learning programs
(12:50) What L&D teams get wrong in measuring learning programs
(15:56) Why surveys shouldn’t be anonymous
(19:44) Key indicators when measuring performance
(25:12) Measuring soft skills or company culture
(33:45) Partnership’s role in the successful measurement of learning programs
(37:29) A recent win: improving first contact resolution at Coinbase
(41:34) Data tools and resources reshaping L&D
(44:46) AI for course creation and instructional design
(45:19) Duolingo as an example of adaptive learning
To learn more about Eric and his work, check out his LinkedIn profile.