The Next Big Question


Episode 8
Hosted by: Drew Lazzara and Liz Ramey

Dan Wulin

Head of Data Science and Machine Learning

Wayfair

Prior to joining Wayfair, Dan held positions with The Boston Consulting Group. Dan is our first podcast guest with a Ph.D. in physics, which he received from the University of Chicago.

What Does It Take to Increase Data Velocity in Organizations?


OCTOBER 11, 2020

This time on The Next Big Question, we talk to data and analytics leader Dan Wulin, the head of data science and machine learning at Wayfair. Dan shares insights on how to increase data velocity in organizations, from the readiness of your data to the structure of your teams. We discuss how database technology, skill sets, and organizational readiness can impact the ability to move quickly and successfully implement machine learning models.


Drew Lazzara (00:13):

Welcome to The Next Big Question, a weekly podcast with senior business leaders, sharing their vision for tomorrow, brought to you by Evanta, a Gartner company.

Liz Ramey (00:23):

Each episode features a conversation with C-suite executives about the future of their roles, organizations, and industries.

Drew Lazzara (00:32):

My name is Drew Lazzara.

Liz Ramey (00:34):

And I'm Liz Ramey. We're your cohosts. So, Drew, what's The Next Big Question?

Drew Lazzara (00:40):

Well, Liz, this week we’re asking, what does it take to increase data velocity in organizations? To help us tackle this big question is Dan Wulin, head of data science and machine learning for Wayfair. Prior to joining Wayfair, Dan held positions with The Boston Consulting Group, but he’s also the only guest we’ve had on the show to hold a Ph.D. in physics, which he received from the University of Chicago. Needless to say, Dan is skilled at putting data to work. Utilizing data to make decisions is one of the most fundamental human cognitive capabilities. It’s nothing new, but large organizations still struggle to glean coherent insights from data at the necessary speed and scale. In our conversation with Dan, he offers a practical definition of data velocity, discusses the organizational structure necessary to achieve it, and describes what the future looks like for organizations that can harness it.

Before we speak with Dan, we want to take a moment to thank you for listening. To make sure that you don’t miss out on the next Next Big Question, subscribe to the show on Apple Podcasts, Spotify, Stitcher, or wherever you listen. Please rate and review the show, so we can continue to grow and improve. Thanks, and enjoy.

Drew Lazzara (01:58):

Dan Wulin, thanks for being on The Next Big Question. Welcome to the show.

Dan Wulin (02:03):

Yeah, thanks for having me, guys -- really excited to have the conversation.

Liz Ramey (02:08):

Dan, we are very excited, as well, and we just want to take a moment to thank you for participating in the podcast, but we also want to take some time to get to know you a little bit on the personal side. So, I have a couple of questions that I would love to ask you.

Dan Wulin (02:23):

Yeah, yeah, let's go for it.

Liz Ramey (02:25):

Awesome. Well, we'll start out with – just, you know, what do you do with your family? What's kind of your favorite family activity?

Dan Wulin (02:33):

Yeah, that one is -- the answer is boring, but easy to get to. So, we really enjoy hiking, especially with COVID and everything. You know, living in the Boston area, it's just really great to head outside and go for a nice long hike, get away a little bit. Outside of that, I would say we all really enjoy food. Granted, we have pretty young kids, but as much as possible, we like to try new foods and experiment.

Liz Ramey (03:04):

That's great. That's great. So, Dan, if you could play any video game, and you were the actual character inside of that game, what video game would you be in?

Dan Wulin (03:19):

Yeah, so that's a good question. I mean, one, for context, I love video games. I don't play them nearly as much as I used to. So, my initial reaction -- like, I love Call of Duty and Gears of War and those types of games, but I wouldn't actually want to be in the game. So, if I had to pick one where I'm specifically in it, I would probably do something like Minecraft or Animal Crossing, where it's more kind of just, like, aimlessly building stuff in a really pleasant kind of setting.

Liz Ramey (03:50):

You know, that's so funny, ‘cause when I was writing the game -- I mean, that question -- I actually thought to myself, which one would I be in? And I thought of Animal Crossing, ‘cause this is my son's new favorite game. So yeah.

Dan Wulin (04:03):

Yeah. Strangely addictive, but good. 

Liz Ramey (04:05):

Yeah.

Drew Lazzara (04:06):

Unfortunately, you're both wrong. The answer is the Mega Man series, because you get to fight giant robots and get a bunch of new capabilities. But you know, those are great answers, too.

Liz Ramey (04:18):

Okay. So, Dan, you're taking a trip to a surprise island. You don't know which Island you're going to. You're going to be there for a week. You can bring one survival item, one person, and one food item. What would those three things be?

Dan Wulin (04:35):

So how much of a survival scenario is it? Is it pseudo-vacation or, like, actual, like you're on a…

Liz Ramey (04:44):

Like actual you, yeah. You actually are going to need to use this to help you survive.

Dan Wulin (04:50):

You have to, like, scrounge around. It's, it's tough. So, my initial reaction is my wife, just because, especially ‘cause of coronavirus, frankly, going on dates and things like that, it's just been harder. So, to some extent, going away for a week sounds really awesome. But if it's to a deserted island, I'm not sure I would want to make her suffer that. But, I'll just, I'll assume it's my wife, and she would put up with that. For… you said an item and then food?

Liz Ramey (05:21):

Yeah, one food item. So, like, you know, yeah. But you get a… What's your favorite food, maybe, that you would bring or something?

Dan Wulin (05:28):

I mean, if it was pure survival, I would do Soylent… like, it hits every dimension. You get the water, the vitamins, and protein and all that. If it's food for enjoyment? I mean, honestly, a hamburger -- just lots of hamburgers. Yeah.

Drew Lazzara (05:46):

You'd be the one person who could gain weight in a survival situation with your pile of hamburgers.

Dan Wulin (05:51):

I mean, that would be ideal.

Liz Ramey (05:54):

Well, you know, I figured that I would add a little bit of complexity into that type of question since we are going to be talking about machine learning, which is very complex in its nature. So, why not add complexity to the personal questions, as well? So, last question, Dan, if there was a hashtag that represented you, what would it be?

Dan Wulin (06:20):

To be honest, that's one I struggle with -- I'm not a Twitter user. I mean, I use it for, like, news updates and sports updates and things like that, but definitely not a power Twitter user. So, I don't have a great hashtag game. If I had to keep it simple, I would say, like, hashtag nerd. I'm just a nerdy guy, to be honest. I have lots of hobbies, like comic books, video games, all that sort of stuff. So, it's a pretty convenient descriptor, I guess.

Liz Ramey (06:49):

Well, nerd is the new cool, Dan.

Dan Wulin (06:52):

Yeah, hopefully. I keep on hearing that. It doesn't feel that way, but I'll roll with it.

Drew Lazzara (07:00):

Well, Dan, thanks so much for kind of playing along with us. It's great to get to know you a little bit on a personal level. We are here today to talk about what it takes to increase data velocity in organizations. And I know this is something that is near and dear to you. And so, before we get into that conversation of how you make that happen in your organization, I was hoping we could start out with kind of a working definition for what you mean by data velocity. What specifically does that mean for you, and how do you define that for your peers across the business?

Dan Wulin (07:31):

Yeah, for sure. And, one thing for context, just to be super clear. So, I'm head of data science at Wayfair, and for sure we're huge consumers of data. We work with lots of teams that are producing data and handling it. But frankly, when I talk about data velocity and some of the things that I'll mention, that's really leaning on the work that a lot of other teams are doing, and what I personally do is a fraction of it. But I'll do my best to represent the full picture. In terms of, like, the highest-level definition, you could think of it as -- how long does it take for you to go from getting raw data to taking some kind of action or drawing some kind of insight from it?

And then, if you drill down a little bit, you can think of, okay, well, I can think in terms of an analyst, where maybe that means I want to get the data, put it in a dashboard, or run some sort of analytics over it. You could think of it in terms of a data scientist, where from their view, it could be more of -- how long does it take to build a machine learning model and get it deployed so it's customer facing? And the third big use case, which you can't overstate at all, is other engineers. So, if you think of engineering teams as producing data, there are going to be lots of other engineering teams who are consuming that data, or there could be potentially. So, for them it's going to be all around latency -- like, how long does it take for them to get data from the time that it's generated to when it's actually processed? So, overall, it can sound very simple, or like something you can keep compact at the highest level, but you really have to get pretty deep into the weeds, or into specific user stories, to really put some meat behind what you mean when you say data velocity.
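[Editor's note: to make the engineering framing above concrete -- data velocity as the gap between when data is generated and when it's usable -- here is a minimal, hypothetical Python sketch. The Event structure and field names are invented for illustration, not Wayfair's actual pipeline.]

```python
# Data velocity in the "latency" sense Dan describes: the time from
# when a source system emits an event to when a consumer can use it.
# All names here are hypothetical, for illustration only.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Event:
    generated_at: datetime   # when the source system emitted the event
    processed_at: datetime   # when the pipeline made it queryable

def end_to_end_latency(events: list[Event]) -> timedelta:
    """Average time from generation to availability across a batch."""
    gaps = [e.processed_at - e.generated_at for e in events]
    return sum(gaps, timedelta()) / len(gaps)

# Example: two events processed 5 and 15 minutes after generation.
now = datetime(2020, 10, 11, 12, 0)
events = [Event(now, now + timedelta(minutes=5)),
          Event(now, now + timedelta(minutes=15))]
print(end_to_end_latency(events))  # 0:10:00
```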

Liz Ramey (09:27):

Dan, have you seen data velocity within organizations -- maybe including Wayfair or other organizations -- really increase over the past, I would say, five years or so? And if so, what sort of factors are really helping with that velocity increase?

Dan Wulin (09:50):

Yeah. I mean, it's -- increases have definitely been significant in the last five years and, frankly, even beyond that -- just with new technology evolving over time. If I had to think of Wayfair specifically, and I think it's mirrored in other organizations, things that have been really big enablers for data velocity -- one, I would think of self-service tooling, which is a big trend, meaning enabling folks like analysts and data scientists to interact directly with data and produce whatever assets they're working on. Whereas in the past, maybe to do that same sort of thing, they would have had to partner with an engineer or something of that nature. So, I think self-service tooling is a big piece.

There are a lot of core technology bits, too. If you think of, I don't know the right time span, but five, 10 years ago -- probably more like 10 years ago -- everything was around SQL and databases and this sort of static and batch way of thinking about data and information. Whereas a lot of companies now think of it more in terms of -- how do I handle incoming streams of data as they come in, in real time, and turn that into something that you can use? So, I think there's just a lot of technology that's been built around enabling that. And probably the last one that I will list is -- this one's definitely the last five years or so -- more around organizational structure. So, like, given this notion of self-service, that you want teams to be able to run autonomously, given the technology that's around nowadays, there has been a lot of thinking around how you structure technical teams, including engineers, analysts, and data scientists, in a way where they can run the fastest and in a pretty autonomous way. So, those are, like, three big pieces, and I don't want to misrepresent it. Like, within Wayfair, we've made a lot of progress, but we certainly have a long way to go.
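[Editor's note: a hypothetical sketch of the batch-versus-streaming shift Dan mentions. Instead of recomputing a metric over a static table, a streaming consumer maintains a running aggregate as events arrive. A real system would use a stream processor; this is illustrative Python only, and the metric is made up.]

```python
# Streaming, as opposed to batch: update a metric incrementally as each
# event arrives, rather than re-running a query over a static table.
from typing import Iterable, Iterator

def streaming_order_rate(sessions: Iterable[bool]) -> Iterator[float]:
    """Yield the running conversion rate as each session event arrives.

    Each incoming event is True if the session ended in an order.
    """
    orders = total = 0
    for converted in sessions:
        total += 1
        orders += converted
        yield orders / total

# Example: three sessions stream in; the second one converts.
for rate in streaming_order_rate([False, True, False]):
    print(f"{rate:.2f}")   # 0.00, 0.50, 0.33
```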

Drew Lazzara (12:00):

Yeah, absolutely. It's interesting, Dan, to hear you lay out that definition and some of the factors that have increased velocity. You make it sound very clean and simple, but I know obviously in practice it can be a much more complicated situation. And, when I think about it, tools like machine learning, the rich tooling around machine learning, the data science discipline -- all of those things do exist. But on the other side of what's been successful, I'm wondering, why do organizations still struggle to effectively leverage all those things that have been evolving for several years? Where do you see the friction points when organizations are trying to move toward better data velocity?

Dan Wulin (12:37):

You know, for sure, one piece is pure technical capability. Does the organization, or the team, have the right technical skill set in place to be able to do what they need to do? And that's something that, you know, any team has to think through at any given stage. But I think for companies that are really early on in their journey, that will be a barrier -- like, spinning up an organization that can run really fast in terms of just data or machine learning is not necessarily an easy thing to do.

Outside of that, I would say there's a notion of organizational readiness. So, you could have the right people in place, but then you also have to have a philosophy or buy-in that getting data right, incorporating algorithms, and doing those sorts of things -- there needs to be buy-in that that's the right path to go, because, you know, what I've seen is a lot of the time you can drive immediate value with those sorts of activities, right? So, maybe there's a case where a team can get a report generated a lot faster than they used to, and there's tangible value to having that reporting available. But there could be cases where it's more around, you want to be able to share data in a better way, with the understanding that then people can consume it and use it more effectively. Like, that's a case where it's much harder to make an immediate business case for it, but by doing it over the long term, it's going to pay off. And if you don't have the buy-in philosophically, it's going to be really hard to make those long-term investments comfortably. And probably the last one that I would lay out is, you know, just even starting with this stuff. So, I mentioned the talent, but something like machine learning, you can't really do it effectively until you have a lot of your core data sets in a state where the scientists can act on them. You know, I've definitely seen cases where there's appetite for machine learning, there are credible problems that the team could work on, but the data is just simply not in a space that's ready for it. And it can take, you know, quarters or, you know, potentially years to get there.

Liz Ramey (14:50):

I've seen that so much, Dan, in organizations where they almost want to leapfrog the kind of clean-data stage, right? And also gathering the right data -- like you said, building those core data sets -- in order to provide the right insights. Have you seen personally where organizations have leapt over that really important part and tried to institute machine learning or analytics and massively failed because they didn't step back and get their data right?

Dan Wulin (15:39):

Yeah. So, I mean, based on cases that I know externally, based on projects that I've seen internally, you cannot sidestep or skip that stage at all. And you know, I could speak to examples that we've had at Wayfair, where in pockets of the data science efforts, you could have all the buy-in in the world from the business team. You can have a fantastic business case for building machine learning and, frankly, even sometimes a viable proof of concept, where you have a model that works at least on a scientist's, you know, development machine. But then when it comes down to what it actually takes to get this thing implemented in a way that's useful for our customers or for suppliers or whatever it is -- that's when you need the data in a really clean, predictable state. And in the absence of that, you're just going to have to go retread ground and figure it out. And that's speaking from lived experience. You know, like I said, I can't think of an external case where a company was able to sort of leapfrog getting the data right and then just jump to more of, like, the analytics and machine learning.

Drew Lazzara (16:59):

Dan, I'm curious, we hear a lot about this data quality issue from the CDO communities that we work with. And I'm just wondering, is it a matter of organizational discipline to get the data in a place where it's more workable for some of these advanced tools, or is there something in kind of the way it's communicated? What are some of the factors that you think keep organizations from saying, ‘Hey, let's hit time out and get this right,’ as opposed to kind of barreling forward?

Dan Wulin (17:28):

I would say one thing that certainly comes to mind -- and I think this is true everywhere -- is there is always going to be this instinct of, I want to take what we have and get to value as quickly as I can. And that's a really good instinct, to be honest, and I think it's the right instinct the majority of the time. But then where it creates problems -- I can use an example from Wayfair, with a metric we talk about all the time, something like conversion rate or order rates. So, what is the likelihood of a customer placing an order during a session on the website? Something that can sound even that simple can be very nuanced in how you define it. So, like, am I considering all customers? Am I considering just new ones? Repeat ones? Do I care if they came from a certain marketing channel, or if they went to a certain part of the website? Do I care if they placed an order seven days after, 30 days after, or whatever it is? So, if you don't consider those details, you can end up in a spot where you're using the data either sub-optimally or just erroneously. So then if you're like, okay, well, what do I need to do to not end up there? You need to have people who are technical, either working on the analytics teams or engineering teams, ideally both, who can speak to those nuances in a way that is going to resonate with somebody who cares very much about impact and wants to move as fast as possible. So, I've definitely seen that you need that credible counterbalance to the instinct to move fast, where you can, when appropriate, kind of pump the brakes and think about things in a more nuanced way.
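[Editor's note: a hypothetical sketch of the definitional nuance Dan walks through here. The "same" conversion-rate metric gives different answers depending on which customers you include and which attribution window you use; the data structures below are invented for illustration.]

```python
# One metric name, several definitions: conversion rate depends on the
# customer cohort and the attribution window you choose.
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class Session:
    is_new_customer: bool
    days_to_order: Optional[int]   # None if the session never led to an order

def conversion_rate(sessions: Iterable[Session], *,
                    new_customers_only: bool = False,
                    window_days: int = 7) -> float:
    """Share of (possibly filtered) sessions with an order inside the window."""
    pool = [s for s in sessions if s.is_new_customer or not new_customers_only]
    converted = [s for s in pool
                 if s.days_to_order is not None and s.days_to_order <= window_days]
    return len(converted) / len(pool)

sessions = [Session(True, 2), Session(True, None),
            Session(False, 10), Session(False, 1)]
print(conversion_rate(sessions))                           # 0.50 (7-day window, all customers)
print(conversion_rate(sessions, window_days=30))           # 0.75 (wider window, same data)
print(conversion_rate(sessions, new_customers_only=True))  # 0.50 (new customers only)
```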

Liz Ramey (19:19):

So Dan, this can be kind of a long journey, or it can be a quick journey, for organizations to really amp up their data quality, but then also get to the point where they're able to utilize machine learning technologies and analytics for making business decisions and getting great business outcomes. But taking a step back, how do you build credibility for, I would say, these two areas that need to be built first -- meaning the data and these analytic technologies? How do you build the credibility before the business is able to actually see the outcome of what those things can do and deliver?

Dan Wulin (20:09):

I would say what's specific about the data space, or the data science space, is you want to start small. And you want to start small in a way where you can show very concrete value that scales. And I'm trying to use my words carefully here, because one thing you see in ML -- and I think I kind of said this before -- is you can build a proof of concept quickly, and it's very easy to fall into a trap of -- you have some idea for a model that you want to build, say it's, like, product recommendations or something like that. You build the model and show some results off, and then everybody gets excited. But then when you actually go to roll the thing out, it can take a lot of work to make that happen. So, you want to be deliberate around, like, yeah, of course I want the proof of concept, but what I really want is to get something that's exposed to customers, or whoever, that we can look at and measure in the wild.

So, you want to focus as narrowly as you can to make sure that you're not spending time on all sorts of stuff, and you can take something to some version of MVP completion as quickly as possible. And what I've seen is, when you do that, especially if it's early on, you'll have a pretty good sense of what the top business priorities are. So, there shouldn't be a huge mystery in terms of where you can drive leverage. And then you just try to go with laser focus to demonstrate that you're able to achieve that. And then in the process, you learn a lot about what the operating model between the orgs should be. The scientists learn more about the business area they're working with, and conversely, the business learns more about what the scientists can do. So ideally, you have some MVP delivered at the end of it, you have much more familiarity and rapport with each other, and then you can comfortably scale things up from there.

Drew Lazzara (22:20):

Dan, you also talked earlier about this idea of organizational structure serving your data velocity. So that's a big part of how organizations can get to be moving quicker in these areas. As you're talking to organizations that are maybe trying to mature themselves in this area, what would be your ideal vision for what organizational structure looks like to capitalize? Is there a particular way that you think is most effective for organizations to design themselves so that they can capture these benefits?

Dan Wulin (22:50):

Yeah, so, I mean, if I had to speak to data in general and then machine learning separately. For data in general, nothing I'm going to say is, like, my original thinking or anything really that innovative. You know, I think the answer that the industry has converged on is that you want teams that are as autonomous as they can be, that are delivering very concrete products and/or data and/or services. And they're able to own that, from developing it to ultimately implementing it. So, an example in the Wayfair space is there's a team that's responsible for much of the customer behavior tracking. They own that in a really end-to-end way, where they're responsible for collecting data from the raw logs and then translating that into a form where other teams can consume it. So, you know, just within the engineering and data space, for sure that sort of example, or that structure, is ideal.

In terms of the ML space, I would say the same things, or very similar things, around autonomy. You want teams of data scientists that can execute with as little communication overhead with other teams as possible. So that means you need self-service tooling, where they're not necessarily having to partner with engineers or data engineers to pull data for their models or deploy their models in a production setting. And the other piece that's probably more unique is I think there's a huge amount of value to having some kind of hybrid organizational structure. And it can take many different forms, but what I mean is, you don't want a fully central team, and, unless you're a very large company, you probably don't want a fully decentralized team, because what I've seen in practice, and this is true at other companies, is you want to capture the benefits of centralization as well as decentralization.

So, like decentralized, that means you want your teams really closely partnering with subject matter experts on the business side. You want them partnering really closely with the engineering teams that are working in the same area. And then on the centralization side, you want to make sure that the data science products or the machine learning products you're building are getting reused appropriately, best practices are getting shared across teams, that there are credible and strong career paths available for folks on the team. So, like on the ML side, I would say the hybrid org structure seems to be one that folks are converging on generally. And then, enabling teams to be as autonomous as they can for sure is key when you're structuring things.

Liz Ramey (25:57):

This has always been really fascinating to me. I used to work for a data and analytics research firm. And so many of our clients spoke about this federated model that they were using, kind of this hub-and-spoke, like you're talking about, where there are autonomous teams. But then there's also a central point. I'm so curious, though, as we want to kind of spread out, democratize data, really build these autonomous teams to execute while using these self-service tools -- who kind of fact checks that, right? If two teams in the organization are asking very similar questions, but they are working with two different groups of data scientists, who is actually checking the outcome of the research that they're doing?

Dan Wulin (26:59):

Yeah, so that's an interesting question. And there are a few pieces embedded in there. So, I think one is almost this notion of redundancy. And I do think that's a natural danger if you federate or fragment a group very, very early. So, one way we get around that is by having this kind of hub-and-spoke structure, where teams are enabled to work with the business groups and engineering groups that they're partnering with, but there's still a central vision or strategy for what's getting built. So, an example of that within Wayfair is we have models that understand our product imagery. And the first use case we used that for was visual search on the website, where we allowed users to submit photos, and then we'd find products that look similar to those. That's the sort of thing, though, that you can plug into product recommendations, you can use for merchandising use cases, and so on. So, because of the hub-and-spoke model of having these teams that are distributed, but also having them kind of feed back information, we were able to understand that, ‘Hey, there's demand for these things across many use cases.’ Rather than having X many people distributed throughout the org working on it, let's have some number smaller than that building an incredibly solid central platform for them to use. So, I think it's very important to have that kind of feedback mechanism where you can recognize where there are redundant use cases and then build a more general solution. That's the example that comes to mind.
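[Editor's note: a hypothetical sketch of the kind of shared capability Dan describes -- one central image-embedding similarity primitive that visual search, recommendations, and merchandising teams could all reuse. This is not Wayfair's actual system; the embeddings below are random stand-ins.]

```python
# One central similarity primitive, reused across teams, instead of each
# team rebuilding nearest-neighbor search over image embeddings.
import numpy as np

def most_similar(query: np.ndarray, catalog: np.ndarray, k: int = 3) -> np.ndarray:
    """Indices of the k catalog embeddings most similar to the query (cosine)."""
    q = query / np.linalg.norm(query)
    c = catalog / np.linalg.norm(catalog, axis=1, keepdims=True)
    return np.argsort(c @ q)[::-1][:k]

rng = np.random.default_rng(0)
catalog = rng.normal(size=(1000, 64))              # stand-in for learned image embeddings
query = catalog[42] + 0.01 * rng.normal(size=64)   # a photo very close to item 42
print(most_similar(query, catalog))                # item 42 should rank first
```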

And then, the other thing you said around fact checking, that one's really interesting to me. And at least the way I've thought about it is I think you get there by evolving your team from more centralized to more decentralized, where you need a very solid culture and community of analytics folks who are basically holding each other accountable for accuracy and doing the right thing. And, I do think it's very easy to slip away from that if you have teams that are federated from the get-go. That said, if you do build a culture where accuracy, and just driving the business and measuring the results the right way, are top priorities for the team, that's usually something that I think you can sustain once you build it up. Once you have a sufficiently large group, then you can federate it out, and there's much less of a risk. Whereas, like I said, the inverse, where it starts federated -- that's one where I've seen it, mainly outside of Wayfair, just not work well.

Liz Ramey (30:00):

So, do you think it's easier, then -- especially that culture piece -- do you think it's easier for digitally native organizations?

Dan Wulin (30:05):

I think so. I would say that, on the basis of… I'm guessing, on average, digitally native organizations will have more of an openness to things like test and learn. So, like, ‘Hey, let's do an analysis or build a model and actually test it in the wild, and then use that to determine whether or not we go forward.’ And that's obviously one thing that you need to do to keep yourself kind of honest and doing the right thing, whereas organizations that aren't digitally native, I suspect, don't have that muscle. So, for sure that's something that they would have to build out to get things right.

Drew Lazzara (30:50):

Dan, I'm hoping we can kind of close the conversation by thinking a little bit about what the future looks like. So if we imagine that organizations are able to kind of roughly follow the path that you've laid out during this conversation, and they get to a place where data velocity is a core capability, what does the future look like for an analytics-driven business? What are some of the outcomes that you think organizations could achieve that maybe they aren't now because of this data velocity gap?

Dan Wulin (31:20):

Yeah, so there are a lot of them. I mean, if I had to go through a few that come to mind, I would say one is the ability -- I mean, it's going to sound redundant with data velocity, but I'll say what I mean -- the ability to move fast. And the key is to move fast confidently. And you can do that when you have data, or you have results, that are highly trustworthy because there's a really robust team, a really robust technical pipeline, that's producing them. And that's a game changer, in my opinion, because I've personally worked on projects where the data was more trustworthy than on others, and you could just move so much faster and with so much more confidence, in a way that can be liberating, to be honest. So, I think you can act faster. Related to that, you just get information faster. So rather than having to wait whatever it is -- if it's a week, a month -- you can speed up that cycle of learning and become much more reactive to the business environment that you're operating in.

And the third that I would say, which is more on the machine learning side, is I think you can innovate much more quickly, as well. Where if you think of the kind of baby state -- you create a machine learning capability, and maybe it masters one or two use cases. It can become almost magical when you have the data in a great spot and you have a sufficient number of machine learning folks, where if you want to do something really creative in terms of, like, how you're pulling customer data, what sort of things you're making predictions on, and so on -- once you have that baseline ecosystem, the speed to getting to those kinds of innovative outcomes is just much faster and much, much cheaper. And this kind of goes back to something I said earlier: all these things, it's hard to do a one-to-one business case of, like, ‘Hey, you should do this because you get this,’ because it is all contingent on things like innovation and R&D and so on. But again, in my experience, in my lived experience, in what I've seen elsewhere, it pays off to get this right.

Liz Ramey (33:46):

Dan, you referred earlier to organizational maturity. And we're seeing that that's really been evolving at kind of a rapid pace over the last five years, I would say -- even a little further back -- but it's moved very quickly. And organizations are becoming more mature, and they're becoming more capable with utilizing data and analytics to make business decisions. But with that, and with kind of the current state of organizational maturity, I would love to understand -- what do you think the future of an analytics-driven business looks like?

Dan Wulin (34:32):

It's partly what I was saying on the previous topic, where things just move faster, right? And if organizations are changing and catching up, almost by necessity you have to do that, as well. So, it's not just good enough that, for your own purposes, you're going to go from having the data fresh every day to having it fresh every 30 minutes. You need to do that because your competitor's doing that. And if you don't, they're going to eat your lunch. So, I think it accelerates the need to have to do these things and to have to innovate. From more of a customer-facing lens, again, kind of tying to the last topic, I think it dramatically increases the scope for really interesting and innovative experiences that would not have been possible without the kind of data ecosystems that we have today. And again, I use the example of visual search at Wayfair. We're able to take that sort of information -- text information about the products, kind of how customers and suppliers talk about them -- and mesh that with customer behavioral information, and do all sorts of things that add value to the customer experience in ways where we're not really asking for much input from them. So, they just kind of get it naturally as part of their experience on the website. And there are tons of examples like that elsewhere. So, I think there's a ton of upside just from a consumer point of view… any of us or anyone else, just people shopping or working with companies that are increasingly leaning into data and AI.

Liz Ramey (36:23):

Yeah, we actually spoke a couple of episodes ago to Steve Lavin from Redbox. And we talked about consumer behavior, and the data that they're capturing with consumer behavior, and how they're utilizing that to make better business decisions. And it's just really fascinating, especially with the rate of change that's occurring right now because of COVID and other factors and how behaviors are rapidly changing.

Dan Wulin (36:49):

Yeah, exactly. And, you know, I think that's one of the benefits of being digitally native: in many ways, the foundation to be able to react to times like this is there in a way that it might not be for a non-digitally-native organization.

Drew Lazzara (37:09):

Dan, before we let you go, we always like to end these episodes with two questions around the future. And the first one comes from our previous guest. This was Rebecca Sinclair, who is the chief people and communications officer for American Tire Distributors. And Rebecca actually has done a lot to bring analytics capabilities into the HR function. She was asking very generally about how you help bring information into different specific areas of the business, which I think is something you've covered quite a bit in our conversation. But are there things tactically that organizations can do to drive some quicker wins when it comes to pairing things like data science and analytics with specific business units, like sales or HR?

Dan Wulin (37:51):

I think one thing I didn't say is that it can be really easy to have a narrative in your mind of how you want to apply these things. But then when you start parsing out what it would actually mean to use data in this way, it can get really challenging pretty quickly. So, my advice is -- and this is something I said -- keep it simple, and don't be afraid to keep it simple. And it might feel like, in some ways, that you're sort of sacrificing the dream or whatever it is. But especially in the early days, it really benefits you to do something that's not going to take many quarters, that's going to demonstrate value. Because at the end of the day, you're going to learn a lot about what it means to be successful doing these things. And those are necessary steps to get to the actual cool and fancy stuff. So, high level: don't be seduced by the fancy stuff, and don't be afraid to keep it simple.

Drew Lazzara (38:50):

Makes a lot of sense. Dan, our last question is -- when you think about connecting with other leaders across the business, what would be your next big question for them?

Dan Wulin (39:00):

One thing I really like talking with folks about, and understanding how they balance it, is thinking through, how do you manage maintaining existing products? And, in this case, it could be, like, a machine learning model or something like that. How do you think about maintaining that versus doing pure innovation? It's sort of like the classic innovator's dilemma. But within an organization, it can be tricky to figure out how you do that in a structured way, where you're keeping yourself honest and keeping the team focused on delivering what's immediately in front of them, but also not shutting off the ability to potentially find something that could be a step-change difference.

Liz Ramey (39:44):

Yeah, Dan, that's fantastic. And this entire conversation has just been really thrilling and insightful. And I just want to say, from Drew and me, thank you so much for being our guest on The Next Big Question.

Dan Wulin (39:58):

Yeah. Thanks so much for having me.

Drew Lazzara (40:00):

Thanks, Dan. Appreciate it.

Liz Ramey (40:03):

Thank you, again, for listening to The Next Big Question. If you enjoyed this episode, please subscribe to the show on Apple Podcasts, Spotify, Stitcher, or wherever you listen. Rate and review the show so that we can continue to grow and improve. You can also visit Evanta.com to explore more content and learn about how your peers are tackling questions and challenges every day. Connect, learn, and grow with Evanta, a Gartner company.