Estimating & communicating ROI for data science projects
Transcript
This transcript was generated automatically and may contain errors.
Hey, everybody. Welcome to our hallway chat. I haven't come up with a final name for this, but this is our experiment to see if y'all like these. But my thought process here is, you know, how some people say their favorite part of conferences are those conversations they get to have in the hallway, or when you get to talk with someone about something you just learned or how it applied to you, or maybe something you wish a presentation covered.
So in one of our data science hangouts in early November, somebody had asked: are you ever asked to estimate the ROI (return on investment) from an analytics project before the business is willing to invest in anything, and how do you approach that? A shout-out to Nick, who asked it. There was a lot of conversation about this in the chat and people wanting to talk more about it.
So here we are. I posted on LinkedIn to see who'd be open to sharing their process for this and would love to introduce two leaders joining us today. They'll both share their thoughts to kick off the conversation. But this is a group discussion about all things ROI, to share ideas and our own experiences to help each other. Like the data science hangouts, it's totally okay to just listen in here. But if you have follow up questions on the topic or want to provide your own perspective on what's worked or hasn't worked, there's a few ways you can contribute.
But just to let everybody know, like the Data Science Hangouts, this session will be recorded. So if you want to go back and listen again, I'll be sure to share that to the Posit YouTube. But before jumping in, just a reminder, we're all dedicated to making our community events a welcoming environment for everybody. We love hearing from everybody, no matter your years of experience, the languages that you work in, titles or industry.
Introductions
But with all that, thank you so much for spending time with us today. I'd love to introduce both Joe Powers, Principal Data Scientist at Intuit, and Derek Beaton, Director of Advanced Analytics at St. Michael's Hospital. Do you both want to maybe kick things off by sharing a little bit about each of your roles and saying hello before we dive into ROI? Derek, do you want to lead off? Sure, thanks. So I'm Derek, Director of Advanced Analytics in a unit at St. Michael's Hospital in Toronto. Our unit is Data Science and Advanced Analytics. We are an applied healthcare unit, so we put stats, ML, and AI tools into production for mostly clinical use, but there are quite a few operational or staffing solutions that we put into place as well. We're not research. It's all about getting things actually done and into the hands of users.
Overall, our group is about 30 people. We technically work across three hospitals, St. Michael's, St. Joseph's, and Providence, all in Toronto as part of the Unity Health Network. We are mostly an R house: maybe 70% of our solutions are in R, and then we've got a variety of things in Python, including some new stuff we've recently launched. We work in pretty much any sort of analytics, AI, and ML domain. So imaging, clinical prediction, patient flow, forecasting, you name it, we probably do a little bit of it.
Awesome. Joe, you want to go next? Yeah, sure. So I'm Joe Powers. I work at Intuit. You're probably familiar with some of our flagship products like QuickBooks, TurboTax, Mailchimp. I lead experimentation, which has a huge amount of ROI involved in it, largely around investigating new methods for running faster and more accurate A/B tests for product and marketing decisions. And when you move a whole company to a new statistical method, you're shifting their decision-making process, and people want to know: why is the juice going to be worth the squeeze on this? So when Rachel brought this topic up, I was happy to join in, because I feel like it's one of those hidden-curriculum topics that's central to doing your job as a data scientist, but absolutely no one teaches it.
Derek's project intake process
So when a project comes our way, it usually comes from someone on the ground: a physician, someone in HR, or say a clinical manager. They have a problem, they know data can solve it, and they work with a few of us on the team, primarily the directors and the product managers, on filling out and detailing a project intake form. One of the most critical parts of that form is a key section called evaluation metrics. Right off the bat, you should have a good idea of whether you're trying to address mortality, admission, length of stay, financial issues, or human effort generally, across the types of problems you want to solve in the hospital. There is no "other" box, and needing one is usually an indicator that we probably aren't going to take this on, because it's hard to define or hard to measure.
But a lot of these are generally easier to track, measure, and assess the impact of. From there, we work with our collaborators on how to define the primary metric. So are you trying to do something to reduce length of stay? Then we're going to look at how to do that, as well as tracking the actual length of stay pre and post deployment. And when we do this, we have discussions early on about the expected level of change. So if you think you can bring length of stay down by, say, 30 minutes for all patients in a certain unit, like an ICU, so you bring the overall number down quite a bit, we'll get that written down.
We have fairly deep and detailed discussions at this intake stage about what the metrics are, how we're going to measure and track them, and what level of change is expected. A lot of the expectation comes from knowing the hospital or the system in general, and knowing the types of changes that exist at other sites, where you can point to literature or something else somewhere and say, look, they saw a 10% reduction, so we would expect about the same. So a lot of this is based on targeting something: identify a key healthcare economics metric, and work out how much of a difference we think we're actually going to make.
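Derek's "expected level of change" framing translates directly into a back-of-the-envelope value calculation at intake time. As a minimal sketch (every number below is invented for illustration, not a real hospital figure):

```python
# Back-of-the-envelope value of a hypothetical length-of-stay reduction.
# Every number here is invented for illustration.
admissions_per_year = 2_000   # ICU admissions in the target unit
reduction_hours = 0.5         # expected change: 30 minutes per patient
cost_per_bed_hour = 100       # fully loaded cost of a bed-hour, in dollars

bed_hours_saved = admissions_per_year * reduction_hours
annual_value = bed_hours_saved * cost_per_bed_hour
print(f"{bed_hours_saved:.0f} bed-hours saved, roughly ${annual_value:,.0f}/year")
```

Writing even this much down at intake makes the later pre/post comparison concrete: you know in advance what "it worked" should look like in hours and dollars.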
If something isn't, say, a huge benefit, or if it falls into that "other" box, this is where we say maybe we're not going to take this on, or we have to dig really deep into the types of things that might yield value overall, for the unit, the hospital, or the whole network of ours, before considering taking it on. So we get pretty deep into how we actually assess and decide what to do before we fully commit.
Joe's simulation approach
I can just echo some key things that Derek said around identifying those key metrics. A lot of people talk about stakeholders; I tend to think about gatekeepers. You have lots of stakeholders, but really think about which ones are the gatekeepers who are going to block or allow your project to roll out. What are the metrics that they really care about? And then, establishing a baseline: there's very likely a dashboard that those leaders are regularly referring to that you can anchor to as a shared reality. And then, what is a reasonable lift that you could expect? That could be based on literature review. It could be based on qualitative testing.
If you looked at 20 users in a UX setting and you said, oh, our invoice completion rate went from 20% to 90% with the new invoice flow, that is actually a great anchor point that you could now start thinking about. Like, okay, that's in an idealized setting, but when it moves into the product, what's reasonable to expect? So a lot of times you're just dealing with the best data or best information you can get your hands on. And that can run the gamut from maybe it was captured in your data lake to running a few surveys or doing some qualitative sessions. But any information is better than none.
So yeah, key metrics, baselines, reasonable lift. And then actually that's where for me, it becomes a data simulation task after that. And I'm happy to share a little bit because nobody ever taught me how to do this, but it's just become my go-to for the workflow.
So I'm in RStudio right now and I've loaded the tidyverse. And so you can imagine, yeah, my baseline is this rate A. So maybe I have a 50% rate. Maybe this is a conversion rate or a click rate, whatever is that key metric for your gatekeepers. And maybe I'm just going to explore some reasonable lift. Maybe I think I could lift this a 10th of a percent with some new program. I've got 100,000 customers and I'm going to run 4,000 simulations. So yeah, I can execute this now in this cell. So I'm using the binomial distribution. If you're not familiar or you're not comfortable using these likelihood functions, I think this is something just really worth leaning into. A lot of abstract statistics become really concrete and intuitive once you get comfortable simulating data with likelihood functions.
So all I'm doing here is I'm running 4,000 simulations with 100,000 trials, and I can set the probability of success on each trial. And this is going to output a tally of events. But then once I divide that by the population, I'll always get a rate. So maybe this outputs 50,000 events, but when that divides by 100,000, now I get a 50% rate.
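A minimal sketch of that simulation: Joe's demo is in R with the tidyverse, but the same logic in Python with NumPy (using the illustrative numbers from the conversation, not real product data) looks like this:

```python
import numpy as np

rng = np.random.default_rng(42)

baseline = 0.50      # rate A: the key metric for your gatekeepers
lift = 0.001         # a hoped-for tenth-of-a-percent lift
n_customers = 100_000
n_sims = 4_000

# Each simulation draws a tally of successes from the binomial
# distribution, then divides by the population to recover a rate.
events = rng.binomial(n=n_customers, p=baseline + lift, size=n_sims)
rates = events / n_customers

print(rates[:3])     # each simulated rate lands near 50.1%
print(rates.mean())
```

In R the equivalent is the same single call, `rbinom(4000, 100000, 0.501) / 100000`.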
So yeah, my first simulation gives 50.1%, the second one 50.2%. Would you be able to describe a project you might be working on where you would be doing this? So this is really very close to the idea of a power analysis. If you were introducing this 0.1% improvement, you could start asking questions like: how likely are we to detect it? The smaller the change, the harder it is to see; with 100,000 users, the observed rate would only come out larger than baseline about 68% of the time. And you could explore the range of differences that you might observe, like a 95% interval of those potential differences.
The very concrete examples that I could give are like when I'm exploring a new A-B testing method, what I'm going to do is I'm going to simulate the test data from 100,000 A-B tests. And then I can start asking very practical questions from those simulated data, like how often did I detect the superior treatment condition? How long did it take to detect the superior treatment condition? How many additional conversions did we yield because we detected the better experience earlier and rolled it out to 100% earlier in the season? Those are the kind of return on investment questions that I can explore using very basic data simulation that my gatekeepers, my stakeholders really care about.
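The "how often did I detect the superior condition" question can be explored with a few more lines on top of the same simulation. This is a generic power sketch using a plain two-proportion z-test, not Intuit's actual testing method, and the rates are the illustrative ones from above:

```python
import numpy as np

rng = np.random.default_rng(7)

def detection_rate(p_a=0.500, p_b=0.501, n_per_arm=100_000, n_tests=4_000):
    """Simulate many A/B tests and count how often arm B is declared the winner."""
    rate_a = rng.binomial(n_per_arm, p_a, size=n_tests) / n_per_arm
    rate_b = rng.binomial(n_per_arm, p_b, size=n_tests) / n_per_arm
    # Two-proportion z-test via the normal approximation.
    pooled = (rate_a + rate_b) / 2
    se = np.sqrt(2 * pooled * (1 - pooled) / n_per_arm)
    z = (rate_b - rate_a) / se
    return np.mean(z > 1.96)  # share of tests where B wins at the 5% level

power = detection_rate()
print(f"Superior arm detected in {power:.0%} of simulated tests")
```

With a 0.1% true lift and 100,000 users per arm, the detection rate comes out in the single digits, which is exactly the kind of sobering number worth putting in front of gatekeepers before committing to a program.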
Communicating ROI to stakeholders
Through all means possible. As I mentioned gatekeepers earlier: think about whose approval, whose buy-in you need for a project to get greenlit. That person is going to have a lot of people demanding their attention. They've got a lot of fires to put out. How are you going to get your idea in front of them so that it sticks?
You give a 20-minute presentation, and 20 minutes later, if you succeeded, they're only going to remember one or two things from it. So the clarity of your communication matters. We had a former CEO who loved the expression, repetition doesn't spoil the prayer. Just repeat, repeat, repeat, and keep driving home that central message. And if they have an urgent problem and you're offering the solution clearly, you're not going to have to offer it that many times.
And so, anyhow, through communication: start sending word out in Slack channels, offer peer-supportive trainings through Zoom and live meetings, get your peers on board, and work through your manager to get face time. There are usually recurring executive presentations where you can get in front of your leaders to pitch your potential solutions. And if you're coming in there not only with the performance of your solution but with the return on investment in a metric they care about and recognize, this is all stuff that greases the skids to make sure your project gets a mandate and goes into practice.
And like we said earlier, just be ready on the tail end to verify that you delivered what you promised, because that really puts a bow on it and cements the relationship. In the future, it's going to be that much easier for you to get that face time and the approval to put your projects into production.
Estimating time investment
Yeah. I was wondering how you would estimate or measure your own time investment, because I think the other half of this equation is understanding how much effort it will take for you to do that, and also how accurate those estimates are.
I have only seen that accurately measured with survey data. I've tried to use calendar data for these kinds of projects; it is meaningless. The loyalty of people to their calendars, the accuracy, the coding: just don't even try, would be my advice. But asking a good survey question of the key people, surveying analysts, for instance: in the past week, how much time did you spend on these parts of your work? Is that a typical week? Anchoring surveys to a specific time period can really improve the accuracy of responses. But honestly, to answer your question, survey data is the best signal you'll get. It's not great, but it's the best you can do.
Derek, do you want to add anything there? Yeah. So estimating ahead of time, we don't. It's really hard to make a guess, especially when a lot of the stuff we do is pretty variable and unknown, especially in the early parts of projects. Measuring later, we're probably a bit better at, once we're really in the weeds. A lot of that comes from our product managers and the way we've set up our sprint cycles. At least within small windows, we have a pretty good sense of what we're doing and how much effort we're putting into the respective projects.
But this is definitely not something we estimate very far in advance, like at the ROI project intake stage; it's more within the project: is this going to take you a week, two weeks, one day, stuff like that. Yeah, exactly. If it's going to take you, let's say, eight months to make this tenth-of-a-percent increase, is that worth it? And then there are other factors: maybe in a couple of years we know things will change, so maybe that tenth of a percent isn't even going to last us that long.
Obviously, I think a lot of the time the benefit needs to be unquestionably worth the time investment. It often just doesn't come up, because the reward-to-investment ratio should be high enough that it's not a hang-up. If it's getting that marginally close, it's probably not something to pursue, because let's face it, most ideas fall flat.
Handling underperforming projects
Yeah, I guess I was just wondering how you handle it if a project that you had to pitch and fight for under-delivers or underperforms, as you said. Mostly I'm wondering about the trust aspect with the decision makers or stakeholders, and how that impacts any future proposals you have for projects.
So you've got to remember: A, there's variability in the world, and B, you can't control everything. The thing you can control, though, is transparency. Your ROI projections should reflect a range to begin with. Even just a mean and an interval is a great anchor point, just to remind them that we don't know the future with certainty: this is the reasonable range of outcomes we might expect. And then be transparent throughout, without drowning them in detail, about how you arrived at your numbers and what you saw at the end of the season.
There have certainly been projects, even successful projects, that along the way fell short because we were unaware of some key factor. And then you learn, because it doesn't deliver the way that you expected it to. Never during that did I feel like the leaders who had invested in us were upset or frustrated. It was: look, we leaned into a difficult problem and we learned along the way. And be clear: this is what we overlooked, and this is the correction we put in place as we pursue it next time. That transparency and honesty is the most you can do.
But again, just get them thinking ahead of time that there's a range of outcomes we could experience, even if we did introduce this true improvement of, say, 1% or 2%. It's sampling variation; it could show up in our customer sample for the year as a lot of different values. That's not how normal people think. Normal people think very causally, narratively, precisely. They don't think in terms of variance. So you have to condition your key stakeholders to expect that kind of variance, and they'll appreciate it, in my experience.
So, to the under-delivered part: this is probably going to be very different from what Joe said. We have maybe four different ways in which we can anticipate and/or watch for potentially or actually under-delivered projects. First, early on, in our baseline, we try to establish what the effects really might be. If it looks like it's not that big of an issue, or not feasible for us to pursue, and we suspect it would under-deliver on the potential impact, we're not going to do it. Second, during model building, if we do not cross certain acceptable thresholds, in particular things like precision, recall, or PPV and NPV, we will not go forward with it. We have some deployments where you just absolutely do not miss a prediction; if the models aren't performing well enough, we will not go forward.
Third and fourth, we have two types of monitoring: we monitor the models, and we monitor the intervention. The model may target one thing while its use is meant to impact something else, and those are the primary metrics that I showed. If we're not seeing an actual impact, or something is going in the wrong direction, that for us is under-delivering, and you just stop. We've been talking a lot about this in the context of one of our projects where, in the post phase, it's been deployed, it's being used, it made a big impact and dropped readmission rates, and we're seeing it drift back up over time. So we have this really uncomfortable question right now: if we pull it, does everything just jump back up, or is this a natural trend back up, where something about the deployment a year later is not quite working?
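The kind of post-deployment drift Derek describes can be watched with very simple tooling. This sketch uses invented data (not St. Michael's actual monitoring) to flag when a readmission rate creeps back toward its pre-deployment baseline:

```python
import numpy as np

rng = np.random.default_rng(1)

pre_rate, post_rate = 0.15, 0.10   # readmission rate before / right after deployment
months = 24

# Simulate monthly rates that slowly drift back toward the old baseline.
true_rate = np.linspace(post_rate, pre_rate, months)
observed = true_rate + rng.normal(0, 0.005, size=months)

# Alarm rule: the 3-month rolling mean recovers past the midpoint
# between the post-deployment rate and the pre-deployment baseline.
rolling = np.convolve(observed, np.ones(3) / 3, mode="valid")
threshold = (pre_rate + post_rate) / 2
alarm_month = int(np.argmax(rolling > threshold)) + 3  # end of the offending window

print(f"Rolling mean first crosses {threshold:.1%} at month {alarm_month}")
```

In practice you would plot this alongside control limits, but even a rolling mean against a pre-agreed threshold turns "it seems to be drifting" into a concrete, dated alarm.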
I'm thinking also, as we talk about this, about those metrics that your stakeholders all align on and value. Explicitly redefining those metrics is, in some cases, really valuable. And this goes to that underperformance issue: two people can mean completely different things by the same metric. So explicitly define it, and then think ahead about the potential ways things can go awry, and how you can get ahead of that.
I think that's a lot less important in industry. It matters in medicine, so to Derek's case, this is not his approach. But if you think about it, in tech, a lot of times a false positive is a free mistake: all the development costs are sunk by the time the test runs. If you release some minor change that turns out to be a false positive result, no one's harmed by it. It's actually the false negatives in tech that are really dangerous. The reason I'm bringing this up is that I work on experimentation, so the metrics that matter to me and my stakeholders are things like accuracy and duration, or speed. Accuracy is a term we can all throw around, but you can completely, and justifiably, redefine it. You just need to get your stakeholders on board: hey, look, we're not going to key our index on false positive rates, because that's not what matters to us as a business.
Aligning with business strategy
What I think about with this is: what does the company need? And I think the example Derek showed was just so perfect for this. Whether it did this on purpose or not, it wasn't in your face, but it said: these are the metrics the company actually cares about; how is what you're doing moving those metrics? What I tend to see is passionate data scientists going off, seeing stuff, and saying, oh, I bet I could do this. But it doesn't come from a direct line to strategy. Yeah, I 100% agree. Put it in the metrics that your non-technical stakeholders care about, which is probably not area under the curve. It's lives saved, empty beds, et cetera. Put it in those terms.
And my next point was: why are they asking about ROI? Sometimes they ask about ROI because it's a multi-billion-dollar decision, and the CFO needs to know, and there, 100%, you need to go dig into this. But sometimes they ask about ROI because they don't think it's valuable, and they're really just asking you to prove your value. And sometimes there are projects that are devoid of any kind of corporate metrics, so they're asking, what's the value? You can do all the work you want; you're never going to convince them. So you've got to pull it back to strategy somehow and bring that forward.
I guess I don't see those as competing. A good strategy should be anchored to a clear goal, and then the company or your leaders align resources to ensure the successful pursuit of that goal. Some things, like skill development among the data scientists and analysts, are intuitively worth investing in; they should pay dividends. That will be very, very hard to measure, and the best measurement you're going to get is probably anecdotes and case studies. But is it still worth doing? Yes. Again, though, you're going to have to be transparent upfront that we're not going to be able to measure this with X percent increased conversions or decreased mortalities. And yet, where do the moonshot ideas come from?
It's interesting. I think both Joe and Ann raised really important points here. I like to ask myself: what are the underlying assumptions behind anything that we're doing? Whether it's on the strategy level, what is the end goal here, and how do I know that I'm on the right path? That being said, you also want to leave enough space for the moonshots, but I challenge everybody when they say, hey, I want to do X. I'm like, why? And what are the metrics that you are hoping to drive? When you showed the intake form, it was intriguing, because that is an existing system that you have. The question I find myself asking is: how long does it take before you know that you've got a good test, and when does the impact kick in? Because some things, like in product development for us, when we're building new features or new products, could take 18 or 24 months before you know whether that investment was a good one.
But I think it's good discipline in general to say: hey, here's what I'm assuming is going to change, or here's what I'm assuming is going to get better, and here's how I'm going to measure it. And then use that as a way to learn and get better. The moonshot one is a good question. I think you might want to be okay with saying, hey, 5% to 10% of all the investments that I make are going to be these kinds of moonshots, and I'm okay if they fall flat on their face.
The rule of pi and managing workload
There was a comment earlier about your personal time investment, and it resonated with me. I used to build furniture, before I was in tech. And estimating how long a new piece of furniture was going to take was really hard. When I took really good notes, what I learned was this: if I had built something similar before, I had to triple my estimate. And if I hadn't built anything similar before, I had to multiply by nine. Friends would say, oh, you're terrible at doing estimates, and I'd be like, no, I'm awesome at doing estimates. Then, when I got to grad school, I found that the mechanical engineers had a name for this. They called it the rule of pi: everything takes 3.14 times longer than you remember it taking.
And so the point of all this was: as you think about your own time investment, you've done all the ROI work, but make sure you have the bandwidth to see this through successfully. If you've done something really similar before, triple your estimate. If you've never done something similar, multiply by nine. And I can tell you, on my own projects, that ninefold factor is usually all the stuff you didn't think of. Okay, you put the model into production; well, now you need to train every relevant user, and build an enormous amount of documentation and an enormous amount of tracking. This is all the stuff that was not part of the model development. If you don't have time for it, your life's going to be really miserable for the rest of the year while you're working nights and weekends to see it through. So just really over-budget for time, and you can have a happy life and get things going.
I just want to add one more thing to what you just said, Joe. I love this; I'm now going to think of that ninefold rule as pi squared. But the other thing that I think is really important is to remember that the more things you take on simultaneously, the more your work in progress increases, the more you're context switching, and things will actually take longer than you even realize. I know this is kind of meat and potatoes for a lot of people, but it's so easy to get sucked in and say, oh, this is not going to take long, I'll do this, I'll do this. Before you know it, you're doing 17 things at the same time, and that just slows the living daylights out of you.
Presenting ROI as real options
So, I come at this from early on in my career, when I was actually in charge of doing projects like this and presenting them as budget items for the Coast Guard to Congress. And you get your ass grilled. Later, when I ran a data science team at an insurance company, all of our projects were ROI: the goal of this project is to shave X percent off this cost, it's worth X, here are all the variables, these are what we think the ranges are, this is the project we should do.
On a financial basis, I always frame things as what we call real options. Your first option is your exploratory POC. What's that going to cost? What's the benefit? The benefit's probably close to zero, but it's going to be an investment cost. But then you have an option to kill, an option to expand, or an option to do something else. And if you can think about some of those options, you can price them out. So I had to present all my projects over five years: here's the return, here's the discounted value.
And so I'd have these presentations to people, and you could go through those discussions, and we would do variable sensitivity. It goes back to never presenting a point estimate: it's a range between X and Y. And if you get really fancy, you can actually convolve them all in a simulation and say, here is the entire range of outcomes, and this is what it looks like. No, it's not a bell curve; it never is.
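The "convolve them all in a simulation" step is a plain Monte Carlo over the input ranges. A sketch, with every distribution and dollar figure invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # simulated scenarios

# Each input is a range, not a point estimate (all numbers invented).
project_cost = rng.triangular(50_000, 75_000, 100_000, size=n)  # dollars
cost_reduction = rng.uniform(0.05, 0.15, size=n)                # share of cost base shaved
cost_base = rng.normal(1_000_000, 100_000, size=n)              # annual cost being targeted

annual_benefit = cost_reduction * cost_base
roi = (annual_benefit - project_cost) / project_cost

# Report the entire range of outcomes, not a single number.
lo, mid, hi = np.percentile(roi, [5, 50, 95])
print(f"First-year ROI: median {mid:.2f}x, 90% interval [{lo:.2f}x, {hi:.2f}x]")
```

The resulting distribution is typically skewed rather than a bell curve, which is exactly the point of presenting the interval instead of a single mean.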
So when you talk about ROI, that's how I've always thought about it, and that's how I've always been forced to present it. Because if I'm asking my division leader, president, whoever, to spend 50, 75, or 100 grand, or to dedicate person-hours (2,000 hours a year, and maybe I want to dedicate 1,000 hours of a data scientist who costs me X amount of money), what am I going to get for that? That's how I've had to present it and pitch it, and get people to sign off and say, yes, we will give you the money, or we will give you the time to do that.
And to the earlier point, it's all about what metrics you're going to improve. You're either going to improve revenue, cut costs, or improve efficiency. You have to have that in mind. Otherwise, it's just R&D, and if it's just R&D, then you've got to pitch it as that: hey, this is R&D, we may not have anything that works, but we're going into a space we've never been before. Sometimes you've just got to be honest about that.
Redefining metrics and closing thoughts
Yeah, I think the question is this: we were talking earlier about false positives and true positives, and in a classification problem you can tune that, and you can associate costs with those various outcomes. I'm in property and casualty insurance, and we work on models that go into pricing, and it's way more complicated than just true and false positives. We build a pricing model and try to estimate losses for a given set of features, and then all that information goes over the wall to actuarial partners and product partners, who have certain business objectives they have to hit. So the models actually get tweaked and modified, and then they layer in additional things like discounts and so forth.
And so when I go in thinking, okay, I want to improve these models, maybe I'm bringing in a new third-party data source, for example, and I want to figure out if it's a worthwhile data source to include. I have to figure out whether the model is improving, but then I also have to know whether it's worth buying, right? So the model is decoupled in some ways from the final outcomes, in really complex ways, and I'm wondering if there's some way to think about this fuzzy connection between model improvement and ROI at the end of the day.
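One back-of-envelope way to frame the "is the data worth buying" question is to convert the offline model improvement into dollars and compare it against the vendor's price. Every number in this sketch is a hypothetical input you would have to estimate yourself:

```python
# Back-of-envelope check: does a third-party data source pay for itself?
# All figures below are hypothetical inputs, not real estimates.

def data_source_roi(premium_volume, loss_ratio_improvement, annual_data_cost):
    """Annual net benefit and ROI of a data purchase.

    premium_volume: total written premium affected by the model ($/yr)
    loss_ratio_improvement: expected reduction in loss ratio from the
        improved model (e.g. 0.001 = 0.1 points), measured offline
    annual_data_cost: what the vendor charges per year ($)
    """
    gross_benefit = premium_volume * loss_ratio_improvement
    net_benefit = gross_benefit - annual_data_cost
    roi = net_benefit / annual_data_cost
    return net_benefit, roi

# Example: $200M book, 0.1-point loss-ratio improvement, $100k/yr data cost
net, roi = data_source_roi(200_000_000, 0.001, 100_000)
```

The fuzziness the speaker describes lives in `loss_ratio_improvement`: the model's offline gain only translates into realized dollars after the actuarial and product adjustments, so treating that number as a range rather than a point estimate is usually more honest.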
No, no, I think a lot of decisions get made on overly simple metrics just because it's so overwhelming. Not all customers produce the same amount of value, but so many business decisions are based exclusively on conversion and retention, as though every customer were equally valuable. But you're going to overwhelm your stakeholders if you're simultaneously introducing a new justification, like, oh, deduct these costs, add these lifetime-value benefits, et cetera. Exactly. Those have to become separate issues. Then you just have this reliable lifetime-value metric you can pull off the shelf, and you can ask, hey, what if we made decisions based on three-year revenue per customer rather than raw conversion, which treats everyone as equally valuable? How much more money would we save? How much more money would we make? Those are all ways to shape ROI by redefining the question.
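The gap between the two metrics is easy to show concretely. A minimal sketch with invented segments and revenue figures, just to illustrate how conversion rate and revenue per customer can rank the same segments differently:

```python
# Toy comparison of two customer metrics: raw conversion rate vs.
# three-year revenue per customer. Segments and figures are invented.

customers = [
    # (segment, converted, three_year_revenue)
    ("A", True,  120.0),
    ("A", True,  100.0),
    ("B", True,  900.0),
    ("B", False,   0.0),
]

def conversion_rate(segment):
    """Fraction of customers in the segment who converted."""
    rows = [c for c in customers if c[0] == segment]
    return sum(c[1] for c in rows) / len(rows)

def revenue_per_customer(segment):
    """Average three-year revenue per customer in the segment."""
    rows = [c for c in customers if c[0] == segment]
    return sum(c[2] for c in rows) / len(rows)

# Segment A converts better (100% vs 50%), but segment B is worth far
# more per customer ($450 vs $110), so a conversion-only view would
# steer investment toward the less valuable segment.
```

This is the "redefine the question" move: same customers, same data, but swapping the metric reverses which segment looks like the better investment.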
I don't know, maybe that was kind of the theme today. You are in a very powerful position to redefine the question when you're doing an ROI investigation, so don't take that lightly. But keep in mind that people only have so much attention, so you've got to modularize things so they can stay on board. I see so many data science projects completely overwhelm their stakeholders, so avoid that, and, you know, you can get a lot of influence from that.
Gerard, I know you had your hand raised, and I know we're right at time here, but I just wanted to see if you had time to ask your question. I just wanted to make a comment about these moonshot ideas and these R&D ideas. At the beginning of the year, we usually just agree on an amount of time that you have for research, or for projects that don't necessarily have an immediate impact. Then throughout the year, if you come across a really cool new thing that you want to work on, the question immediately bounces back to you: is this worth stopping the research you're doing right now and switching to the new project? Can you wrap it up? Can you switch? Because you only have so much time, and then there isn't even a need to estimate a return on that investment. Usually I can answer the question myself of whether I want to switch, because I either don't want to give up on the project I'm working on, or I'm done with the previous project.
Well, I don't want to keep you, Derek and Joe, too long, so thank you so much for sharing your insights and experience here. I just want to see if you had anything else you wanted to add to the conversation before we go. Yeah, just give it 10 years, and it'll be a no-brainer. No, this was great. Great participation. Thanks, everybody. It was good.
Yeah, this was awesome. I would love to keep having these conversations, and I will definitely stick around here, but I understand people have to go to other meetings and can only spend so much time with us. Is this something we'd like to keep talking about, or are there other specific topics we want to do a deep dive on? Please let me know, whether in the chat or, if you have an idea later, by messaging me on LinkedIn or email.
I know there were a few great questions that we didn't get to cover, so maybe I'll go and collect some of those, but Derek, if you're able to stay on, I saw one of the questions was about infrastructure costs, I believe: how do production resource costs, like cloud computing, factor into your ROI calculation? Here's an easy answer on the cloud computing: given that we're in a hospital, and in Canada with very particular regulations, we don't use the cloud. So the infrastructure stuff becomes part of the operational budgets that we talk about at the beginning of the year with everyone else, including IT and other folks. Yeah, if you want these projects to keep going, this is where you get to point to your previous ROI: do you want projects that will continue to bring down costs, bring down readmissions, and increase patient flow? This is the infrastructure we need. These are the subscriptions we need. Fortunately or unfortunately, we're entirely on prem, so I don't need to talk about cloud.
Awesome. Well, thank you all so much for joining today. This was a really fun, lively conversation. I really appreciate it. Thanks again. Have a great rest of the day, everybody. I'll see you later. Bye.
