Resources

Data Science Hangout | Daren Eiri, Arrowhead General Insurance | Building a DS Playbook

video
May 3, 2022
1:09:56

Transcript

This transcript was generated automatically and may contain errors.

Hi everybody, welcome to the Data Science Hangout. If you're joining for the first time, I'm Rachel. It's great to meet you. If this is your first Hangout, this is an open space for the whole data science community to connect and chat about data science leadership, questions you're facing, and what's going on in the world of data science. We want this to be a space where everybody can participate and we can hear from everyone. There are three ways you can ask questions. You can jump in live, and raising your hand on Zoom works well for this. You can put questions in the Zoom chat; just put a little star next to your question if you want me to read it, or else I'll call on you to jump into the conversation. And we also have a Slido link that Tyler will share in just a second, where you can ask questions anonymously too.

Just want to reiterate, we love to hear from everyone, no matter your level of experience or area of work. And with all that, I'm so happy to be joined by my co-host for today, Daren Eiri. Daren is Director of Data Science at Arrowhead General Insurance Agency. And Daren, I'd love to kick things off with having you introduce yourself and maybe share a bit about your role and the work that you do.

Sure. So thanks for having me today. I lead the data science team at Arrowhead General Insurance. Arrowhead is one of several subsidiaries within our parent organization, Brown & Brown Insurance, and we're set up as a shared service within the organization. We're very decentralized, and not every subsidiary within Brown & Brown has its own data science team looking at its data. Sometimes they haven't even looked at their data, right? So our team is there to provide support for those kinds of needs. It's not always about building a predictive model. Most of the time, it's just trying to find value in the data they already have and understand what that data actually looks like.

Sometimes we'll try to find business value in that data, whether that's improving their underwriting functions or looking for ways to price insurance products. In other cases, it's just looking at the data and trying to understand how to affect the bottom line. So that's the high-level view.

Engaging with internal clients

That's great. How does that engagement usually work? Or how do you get in touch with the teams or work together with them?

Yeah, so everything is budgeted, of course. When we're planning our budget for the next year, which is usually in late summer, we put together an email list of anyone who's interested in doing things related to data science and analytics. And we have a menu, so to speak: here's what we've done, and here's what we're capable of. Are these things you're interested in doing? That gets sent out to the other subsidiaries within our organization, and they'll reach out to us. Then we'll have initial calls with them and try to understand what their actual problem is.

Sometimes they jump straight to: I want a machine learning model, I want deep learning to do something cool, right? And that's not always the right call. Sometimes you just start simple, looking at the data itself, trying to understand what the problem is, and building a solution for that problem rather than the shiny object they think they want but really don't.

I imagine that takes some good communication skills, too.

Yeah, and sometimes it's setting expectations. You always want to be clear about what whatever you're building is going to be capable of doing and what its limitations are. I think we're pretty lucky within our organization. Most of the people I work with have a pretty good idea of what we're capable of and what our limitations are, and for the most part they're pretty invested in the work that we do. So having that stakeholder investment, I think we have a much easier time than other organizations, at least from my experience.

Opportunities in insurance data science

While we wait for questions to come in from everyone here, I'd love to ask you, what is something that you're really excited about in data science right now or are thinking about the year ahead?

For me personally, I think there is a lot of opportunity within the insurance industry to make a difference. You have all these insurtech startups trying to disrupt the industry, right? We are a much more traditional insurance company, so I think we're catching up to using data more efficiently and effectively within the organization. From my perspective, there are a lot of opportunities for data scientists to build meaningful products, whatever those might end up being, whether it's a dashboard, a report that provides insights with business value, or a predictive model. We have a lot of opportunity to make a difference.

Whereas in other spaces, I think it will get more competitive. And the leadership in our organization makes it a big priority for us to leverage data and analytics to make sure that we don't fall behind.

Are you hiring a lot of people right now? Or what does the team look like? How many people are on it?

Yeah, so we're a pretty small team. There are other teams within the larger organization that do data analytics, and they might be doing something different. One team is very focused on Bayesian statistics, for example, and they have a different client focus. Again, we're kind of like internal consultants, so we're helping the businesses internally. There are some other teams that are focused on helping the business with external customers.

Yeah, so we're a pretty small team. Right now we have two other data scientists on the team, and we're looking at hiring another data analyst. We also have an intern starting in the summer. We're hoping that person grows within our team and that the position eventually becomes a full-time role. So hopefully by the end of this year, we'll have four other people.

Balancing management and technical work

Awesome. I see there are a few questions coming in from the chat. Andreas, would you want to jump in and ask that question?

Sure. I'm always interested in someone who has a title that looks like they're a bit managerial. How much of it is managerial? How much can you still do hands on technical stuff? And is it something you can decide for yourself? Or is it sort of dictated to you?

Mm hmm. Yeah, that's always, I think, a hard thing to balance when you're in a position where you're managing a team and you're in a technical space. I think other people have mentioned that you don't want to lose that technical skill set if you can help it. I don't do the day-to-day modeling or analytics and looking at the data specifically; that's what the rest of the team does. A lot of my time is spent doing project management and product ownership, trying to remove any roadblocks, and working with the stakeholders to make sure that we have a path forward.

I was talking to a friend of mine about this yesterday, and he's spending 50% of his time managing and 50% doing technical work. At that point, he doesn't have time to put on that project management hat and move things forward, because he's still involved in the technical side of things. So I think you do kind of let go of that technical role, so to speak. My time, when it is technical, is mostly spent making little tools here and there to make us more efficient as a team and to monitor things.

I have kind of a unique perspective, because I have an IT background; I used to do systems administration. We use RStudio products: we have RStudio Workbench, RStudio Connect, and Package Manager. I manage all that infrastructure and the servers that host those products, making sure everything stays available. One of our goals as a team is making sure that the APIs we have released to production stay operating during business hours. So that's a challenge I have to balance, and sometimes it ends up with me working more than I should, especially during off hours when I have to do the IT maintenance.

But it's a need that our team has. And I think that's one of the things that gets brought up: how do you work with IT as a data science team? I take on that responsibility so that we don't have to rely so heavily on an IT team to do what we need to do technically. It makes us more agile, so things work the way we want them to work. I kind of went off on a tangent there. But in short, I don't do as much coding as I'd like to. When I do, it's just making little tools here and there to make us more efficient as a team and to monitor things. The actual data analytics and building models, I haven't been doing that as much.
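
Daren doesn't go into detail about those monitoring tools, so the snippet below is only a minimal sketch of the kind of availability check he describes, written in Python for illustration; the endpoint names and URL are hypothetical, and the team's actual stack is built on RStudio products.

```python
# Minimal sketch of a production-API availability check of the kind Daren
# describes. The endpoint names and URLs below are hypothetical.
import requests

ENDPOINTS = {
    "submission-grader": "https://connect.example.com/submission-grader/health",
}

def check_endpoints() -> dict:
    """Ping each production API and record whether it answered with HTTP 200."""
    status = {}
    for name, url in ENDPOINTS.items():
        try:
            resp = requests.get(url, timeout=10)
            status[name] = resp.status_code == 200
        except requests.RequestException:
            status[name] = False
    return status

if __name__ == "__main__":
    for name, healthy in check_endpoints().items():
        print(f"{name}: {'up' if healthy else 'DOWN, investigate'}")
```

A script like this could be scheduled during business hours so an unavailable API gets flagged before an underwriter notices.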

That's interesting that you say you wear two hats, obviously, or many hats, but are doing the IT administration as well. Is there a point where you think that would get handed over to IT?

Um, I mean, I think that would always be nice. Fortunately, we have a close relationship with the people in IT to make significant changes if we need to, but once it's in place, there's not much else that needs to be done. The things I can handle are just small day-to-day things that don't require a lot of effort. So once everything's working, it's not too much of a responsibility. But if something isn't working, then of course everything else gets put off to the side, and that's when you have to work with IT to understand what the problem is, like whether it's a networking issue.

Deep learning in insurance

I see Ishwar, you just put a question to the chat a bit earlier, too. Do you want to jump in and ask that live?

Yep, absolutely. Hey, everyone. I'm Ishwar. I'm a data scientist at Nissan, at the supply chain and manufacturing assembly plant in Tennessee. As you mentioned in the discussion, there's the deep learning and neural net stuff. I was just curious whether you or your team has ever experimented with or tried to work on deep learning or neural nets, if not at the level of computer vision. My understanding from working on several projects in the insurance industry is that it doesn't often need very high-capacity deep learning or neural net models. So I was just curious whether you see any potential opportunities to apply those deep learning models in the insurance industry.

Yeah, that's always a fun question to think about. The short answer is our team does not do any deep learning. In the insurance industry as a whole, I think there are opportunities, although I could be wrong; I'm not sure how other insurance companies are thinking about it. But there's a lot of investment involved, right? You're going into more of an R&D focus versus developing a product that could actually be used, with a high likelihood of the business using it. So there are certain risks involved with building products that leverage deep learning. But I think it depends on the application.

From a very simple perspective, just being able to read handwritten documents: there are a lot of scanned documents still being used within our industry. Usually it's typed out on some form, but there are always going to be some handwritten forms. Being able to extract that information, leverage handwriting recognition to obtain that text, and put it into a structured format is always going to be a common application that's needed in the insurance industry.

Although I think there's so much of that available now, in any industry, that you can just buy something off the shelf, and you don't have to have a data science team dedicated to solving that problem. Same thing with images. If you're trying to process claims much more quickly and efficiently, say you take pictures of your car because you got into an accident, you want to estimate the damage of that claim sooner rather than later, right? You don't have to send a field agent out to inspect the car; you can just take pictures. So that's something that could also be done.

Have you thought about NLP on the team? I don't know if you have any NLP models running for, as you mentioned, text analytics or understanding the claims and so on.

Yeah, so we haven't done anything related to computer vision, but we have done projects related to NLP. We've used spaCy, for example, to extract more meaningful information from bodies of text so we can make more efficient decisions. But I wouldn't call it deep learning; it's still within that machine learning space.
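
The team's actual spaCy pipeline isn't described here, so the following is only a generic illustration of the kind of entity extraction Daren mentions; the model name and the sample text are assumptions.

```python
# Generic spaCy entity extraction, illustrative only, not the team's pipeline.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

text = ("Acme Hardware, founded in 2015 in San Diego, is requesting "
        "a general liability quote effective January 1.")
doc = nlp(text)

# Pull named entities (organizations, dates, places) into structured records.
entities = [{"text": ent.text, "label": ent.label_} for ent in doc.ents]
print(entities)
```

Structured fields like these can then feed whatever downstream decisions the business needs to make.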

APIs in production

But I wanted to ask you, Daren, when you were mentioning APIs and using APIs in production, would you be able to share a bit more about what that looks like?

Yeah, so one of the common problems for the businesses we work with, from an underwriting function, is this: you might have 30 insurance applications to go through, and as an underwriter you have to decide which ones to work. If you only have time to go through 10 of them, how are you going to prioritize? So we have built an API that predicts the likelihood of a submission winning with our company, and we give out a grade for that submission. If you have a list of 30 submissions, each one has a grade (A, B, C, D, F, or whatever), and the underwriter can rank them from A's to B's and focus on those first. Our API basically takes the core data inputs of that submission, feeds them into our server, and we send a response back with the predicted grade. That's a common example we like to use when we're talking about APIs and how we use them within our team.
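
Daren doesn't share implementation details, and the team deploys on RStudio Connect, so the sketch below is just an illustration of the request and response shape he describes, using Python and FastAPI; the input fields, the placeholder scoring logic, and the grade cutoffs are all hypothetical.

```python
# Illustrative sketch of a submission-grading API like the one Daren describes.
# Field names, the stand-in scoring logic, and grade cutoffs are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Submission(BaseModel):
    line_of_business: str   # hypothetical core inputs of a submission
    annual_revenue: float
    prior_claims: int

def predict_win_probability(sub: Submission) -> float:
    """Stand-in for a fitted model: return a probability of winning in [0, 1]."""
    base = 0.6 - 0.05 * sub.prior_claims  # placeholder, not a real model
    return max(0.0, min(1.0, base))

@app.post("/grade")
def grade(sub: Submission) -> dict:
    p = predict_win_probability(sub)
    # Map the predicted win probability to a letter grade underwriters can sort by.
    if p >= 0.8:
        letter = "A"
    elif p >= 0.6:
        letter = "B"
    elif p >= 0.4:
        letter = "C"
    elif p >= 0.2:
        letter = "D"
    else:
        letter = "F"
    return {"win_probability": p, "grade": letter}
```

The underwriter-facing system would post a submission's core fields to an endpoint like this and get the grade back to sort the queue.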

Insurance innovation and the industry

I see there's a question that just came through on Slido. And it was, do you think that insurance companies are underrated or misunderstood regarding how innovative they are?

I think so, yeah. When I joined this company two years ago, I was just looking for a new opportunity and I was working with a recruiter, and he told me about this position. I was like, oh, insurance, that sounds kind of boring. All I knew about insurance at the time was that I was paying for it and it was costing me a lot of money. Now I really understand the business need for insurance and how it works, and also how wide-ranging insurance is when you think about it. Everything in business needs insurance, right? Because the business wants to minimize its risk and hand it off to somebody else by paying small amounts on a monthly or annual basis.

So the opportunities that I think we have on our team are significant, because our company touches a wide range of products. When you think of insurance, it's auto, health, home insurance, just standard stuff. But there are insurance products for liability for dentists or lawyers; bookstores have their own special insurance, right? Even within professional liability insurance, there's different liability insurance for different professions, and how you apply data science and solve their problems with their data is so unique. So yeah, I think it's pretty underrated just because it sounds boring. But when you go into it and delve into the problems a bit more, they're all very unique, and they all have different problems that they're trying to solve.

What's the most exciting opportunity you think or a problem that you're helping the team solve that you're most excited about?

I don't know if there's anything specific. We've had a lot of cool projects that we've worked on. Part of that is related to using NLP; it was the first time we were able to apply that type of modeling to solve a problem. A lot of our problems are focused more on basic statistical methods. We don't do anything really complex because the problems don't require more complex methods. We do look at more advanced machine learning models when we're assessing the predictive ability of our work, but they don't compare to the explainability and the ease of use and maintenance of regular regressions. When we compare those regression models to something more machine learning focused, they perform just as well.
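
Daren doesn't name the exact models, so here is only a hedged sketch of the comparison he describes: benchmark a plain logistic regression against a more complex learner, and keep the simpler, more explainable one if the scores are close. The data below is synthetic.

```python
# Benchmark a simple, explainable model against a more complex one on the same
# task; if performance is comparable, prefer the regression. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}

for name, model in candidates.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```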

Data science vs. actuarial science

Well, thank you. I see Alex, you just asked a question in the chat. Do you want to jump in?

I was always curious about this. In insurance, it seems like data science and actuarial science could be like doing almost similar things. And so I'm kind of curious what it's like at an actual insurance company. So I was wondering if you could comment on that.

Yeah, so our team is organized so that we have a data science side and an actuarial science side. Both teams report to the same person, but they're separate groups and they work on different problems. The actuarial team looks more specifically at the insurance products and whether they're meeting their profitability targets, and whether they need to make changes in the rate they charge for the product to make sure they meet their profitability goals. So they're looking more specifically at claims, what types of claims are leading to large losses, and how that affects the pricing of those lines of business, so they can make appropriate changes to the final price.

We don't really delve too deeply into the insurance-specific problem set. It's more about operating as a business: how can we make better business decisions? We don't know enough about insurance from an actuarial perspective to make those types of decisions. So that's how we make that distinction in our organization.

If I can ask another question, I'm kind of curious: do you see any actuaries who want to do more data science work, or data scientists who want to do actuarial work, anecdotally?

Yeah, I think that depends on the individual person and what they've been exposed to. I've had a data scientist position open where people with some actuarial experience applied, wanting to change from actuarial work to a data scientist position because they care more about the data and the modeling versus, I guess, the specific problem space of insurance and pricing. And they also really like the coding aspect of it. That's not to say that as an actuary you can't still code, right? I think you're just more into it when you're in the data science space, especially if you're building out APIs and pipelines to handle data.

I think there's that love of just working with data that attracts some people to the data science side versus the actuarial side, where you're really focused on just insurance. The other way around, where someone interested in data science moves to the actuarial side, I haven't seen as much. I think the actuaries also have other things they have to be mindful of, right? They have exams that they have to take while they're working, which takes several years. And from a professional development standpoint, if you haven't done any programming, that takes a lot of significant effort initially, especially if you're coming from a traditional Excel background.

Model interpretability and ethics

Thanks, Daren. I see, Ethan, you put a question into the chat as well, if you want to jump in.

Yeah. Hi, Daren. Nice to meet you. I just want to check, because we talked about machine learning models and selecting which models to use. I was wondering if there's a factor of interpretability or explainability you need to consider when choosing a model when you're working in insurance. I know, for example, the financial industry is quite heavily regulated in terms of needing to explain what the model is doing, so you can't just throw down a black box model when it comes to deciding what premium people are paying or what interest rate they need to be paying. And probably not just regulation, but also from an ethics standpoint: do you need to understand whether any particular subgroups are being priced or predicted differently than others?

Yeah, so those are two really good questions. With interpretability and how it relates to regulation, that hasn't been something we've run into. When I think of interpretability, it's more on the business stakeholder side, having them understand how the models that we're building work. If we're taking in four inputs for the model that we're building to make a decision on something, how does a change in one of those inputs affect the output? That's really straightforward when it's some type of regression versus a more machine learning approach. Although there are approaches now within the machine learning space where you can make things a bit more interpretable, I think it's just easier for us to focus on the interpretability of those regression models. So yeah, it's really just about being able to communicate to the stakeholders how those models work, what they can expect, and what the limitations of the model are, and being able to do that effectively.
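
To make the "change one input, see the effect" idea concrete, here is a small, hypothetical sketch that reads a logistic regression's coefficients as odds ratios; the feature names and data are invented and are not the team's actual inputs.

```python
# Explain a logistic regression to stakeholders via odds ratios, e.g.
# "one extra prior claim multiplies the odds of winning by X."
# Feature names and data below are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "annual_revenue_m": rng.gamma(2.0, 2.0, 1000),
    "prior_claims": rng.poisson(1.0, 1000),
    "years_in_business": rng.integers(1, 40, 1000),
})
# Synthetic outcome loosely tied to the inputs so the fit has something to find.
logit = 0.3 * X["annual_revenue_m"] - 0.8 * X["prior_claims"] + 0.02 * X["years_in_business"] - 1.0
y = (rng.random(1000) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
odds_ratios = pd.Series(np.exp(model.coef_[0]), index=X.columns)
print(odds_ratios.round(2))
```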

With the ethics side of things, that's something that's always been on my mind. Unfortunately, I don't think we've run into that type of problem. Making sure that we use data responsibly, that whatever data that we're using for the model isn't going to create some sort of bias against any type of groups, or applying data features to something that might inadvertently cause some problems down the road. That's something that you always want to try to avoid. So yeah, from an ethics standpoint, that hasn't been something we've explored too deeply, but it's something that's always on my mind when we're building something.

The data science playbook

You mentioned working with internal stakeholders and how you highlight interpretability as one of those, as a factor, I guess, in deciding the types of solutions you and the team build. I had a general question about what else do you do to help ensure that your ideas or MVPs or a particular solution you're working on are well received by your internal customers? And then is there a particular framework you follow? For example, I've started to read about product lifecycle management, but applied to data investments or like design principles, but for data products. So just curious to see what you're doing.

Yeah, that's definitely something that's always on my mind: how to communicate to stakeholders more effectively and make sure that they're aware of what we're building and completely invested in it. So, kind of in that agile mindset, being able to frequently meet with those business stakeholders is important, and whenever you're making major decisions or have questions about something, making sure that they're available to answer those questions. We have a data science playbook for when we are building out models. It generally follows a path: an initial kickoff, trying to understand what the business problem is, and if we do build out a solution, what is it going to look like.

Sometimes they're like, okay, an API, yeah. But they don't realize what it takes to integrate an API, because you're going to need software engineers to implement it and to process and display that information, however they're going to use it. We also make sure it's part of their business workflow: if they're adding this new feature, how is the business going to use it and continue using it, so it's not just something they implement that gets left in the dark? So we do try to follow this playbook to make sure we follow the steps that make us as successful as we can be. That doesn't always happen, but we try our best.

In terms of making sure that they're on board: before we can start building the model, we just talk about their data and what it looks like, because we have to do that anyway. It's kind of a service to them, because they probably haven't looked at the data at that grand a scale; they're seeing things quarter to quarter, year to year. We'll take a whole look at the data and share what we're seeing and the perspective we're seeing it from, especially because we don't have that expertise in their business, and get their input. We always ask for their hypotheses about what they think will lead to whatever we're trying to predict.

That helps us understand what their problem is and include their own ideas in the project as well. I think that results in some investment and some ownership on their side too, because they're partnered with us to build whatever we're building for them. And when we have the model built out and we evaluate it and see the accuracy, we provide them with several examples and the variation in that data to show the limitations of what the final product will look like. Usually they get a sense of, okay, that makes sense, I understand why you would predict that based on these inputs. I think that helps out a lot too, when you're spending the time to walk them through that process and they see it working.

Yeah, that was really great. It seems obvious in hindsight, but having a playbook seems tremendously useful, so you're not ad hoc, like the wild west, doing something different every time.

Yeah. And sometimes you might forget small little details, so at least you have something to reference when you have a playbook that works.

It sounds like you are so thoughtful in your approach to working with the business as well. Like, having the data science menu and then the playbook. And I'm curious where you learned that? Or how we all can do that as well?

Well, to be fair, that data science menu is actually from my boss; he came up with that idea. The playbook came out of necessity, because I forget things. As I mentioned, you want to make sure that when you do certain projects, you don't forget things, and I tend to forget things, so I always write things down. Having this playbook also provides a visual: here's what we're going to do with you, and this is what the steps look like. It lays out the land for them when we work with them. It's basically a flowchart, just so they can see the complete picture, versus just talking about it and being kind of vague. And at least you have some documentation that way, too.

So I think a lot of it just comes from joining this team, finding the problems that you encounter, and finding ways to solve those problems as they come up. You want to be reactive to those problems, right, and find solutions so that the next time it happens, it works better. So I'm very mindful that if something's not working, how are we going to fix it so it doesn't happen again? Our team does retros every two weeks, just like a software engineering team, for when things don't work well and for ways we can improve the team. We address those pretty frequently, and there's a dedicated time and space for that to happen, too, versus just handling things as they come up. I think having that dedicated time really helps.

Working across teams and data engineering

Hi. So when you mentioned the actuarial team and your team sort of, you know, being on two sides of a house with a similar supervisor, it reminded me a bit of my team, where we're a data science team within a public health department. So, you know, obviously, epidemiologists model and write code. We also can model and write code. And so carving out the space of our work and the space of their work can sometimes be tricky. So I'm curious, do you work with that team and build products for them? Are you, you know, leveraging similar infrastructure? Like, how do you manage the relationship there when, like, often you could end up working on similar bodies of work?

You know, from my perspective, we end up doing a lot of technical stuff, more of the software engineering type things. And I'm curious if there's a similar kind of relationship there, and any takeaways from that relationship, sort of best practices for working well together?

I'm not sure if I can say anything about best practices, but it's similar to your approach, Robert, where our team focuses more on the technical, statistical, software engineering type approach, whereas the actuaries, again, are really focused on the insurance and the insurance products.

So there's a lot of things in insurance that I don't know. So many things that are very technical in the insurance side that our team, you know, isn't as familiar with. So we lean on them for, you know, understanding the data sometimes. Like, you know, what does this data point mean? Like, what does this line of business mean? And can you provide some business context on what this data looks like?

But for the actuaries, they don't always have the data that they need to work efficiently and effectively. So sometimes our team does step in to help clean that data when it's coming from different data sources. That's a big challenge for them right now, where they're getting data from the insurance carriers in these Excel files. And when you're doing VLOOKUPs and pivot tables, you can only do that so much before it gets a little too much to handle in Excel. So you need to provide some better framework and some infrastructure for that, and our team does help out with that effort too. We're also trying to get them on board with using our tools, which can be a challenge.
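
As a hypothetical illustration of the kind of data engineering help Daren mentions, a short script can pull a folder of carrier workbooks into one tidy table instead of chaining VLOOKUPs; the folder name, file format, and output name here are assumptions.

```python
# Illustrative only: combine carrier Excel files into one clean dataset.
# The folder name and output file are hypothetical. Requires pandas + openpyxl.
from pathlib import Path
import pandas as pd

frames = []
for path in Path("carrier_files").glob("*.xlsx"):
    df = pd.read_excel(path)          # one carrier workbook per file
    df["source_file"] = path.name     # keep track of where each row came from
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)
combined.to_csv("carrier_data_combined.csv", index=False)  # hand off one clean file
```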

I think it helps, if you can, to have someone in that support role for them, where their role specifically is to support them, so you can focus on the things that you want to focus on. That's where we're at as well: we don't want the data scientists on our team spending their time cleaning data for the actuaries. So we have a role whose specific responsibility is to provide that data engineering function, basically. In some organizations, you have data engineers who do all the data cleaning and preparation before the data scientists actually do the work. So from that example, I'm not sure if it would help to open up another position to provide that support, which is a hard thing to ask for, I think, right?

Dataset sizes and insurance data

There's another question that came in, and it was: Daren, what is the size of the datasets that insurance companies analyze and collect? Descriptors are okay too. Big, small, very big, perfectly fine. Just curious.

In terms of the datasets we work with, we can handle everything in memory on our servers. We don't have to use Spark or some online Jupyter notebook with lots of compute resources, so our problems aren't big data problems. That's not to say big data doesn't exist within the insurance industry, because it does. It's just that the data we work with is pretty structured, and we haven't had the need to leverage those types of tools. But everything we're doing is in RStudio Workbench in the cloud, so it's not like it's on our laptops with 16 gigabytes of memory. We do have some additional capability in the cloud to handle everything in memory.

Daren, I'm just going to chime in here as the other insurance data scientist, so I can give an alternative view. Like Daren said, you could have extremely small insurance datasets that you can work with on your laptop without a server, or you could have very detailed telematics data, where you're talking about tens of billions of records, right? So our data can get quite large, depending on the type of company and the business that you're operating in.

Behavioral pricing and geospatial data

Yeah, thanks. Over, say, a decade, there have been trends related to changing premiums based on behaviors that the insured can control. Two examples: one would be smoking versus non-smoking. If you don't smoke, at least in the United States, you can pay a lower insurance premium, so that's a motivator for people to change their behavior. The other one is safe driving: there are these little monitors you can put in your car, and in exchange for giving up a little bit of privacy, you might be able to get lower premiums. Two others that I'm particularly interested in are related to the environment. One is the idea that if you clear some space around your home in a western forest, for example, you might be able to pay lower fire insurance premiums. Or if you live near a coast and there's an increasing likelihood of storms, say in the southeast of the United States, you might be able to pay less for flood insurance if you build your house on stilts or something like that.

Yeah, that's a really interesting question. I don't have a lot of thoughts on that; maybe Russ does, if he works with anything on his end that comes to mind. But I like the idea that if you're managing your yard and clearing out that area so there's less vegetation around your property, then that should theoretically reduce the risk of fire. I think the problem with that is you have to make sure you get recent satellite imagery, so that's a technology hurdle, but maybe that's something that will come over time.

But Russ, did you want to take over?

Yeah. I mean, Serge, you nailed a couple of them, right? Although I would highly doubt any of those characteristics really affect how much you're paying; it's more on the eligibility side. I've worked in the past on a lot of computer vision models where you're using satellite and drone data, primarily for commercial properties, but even in residential, your roof condition and the type of roof you put on your house are going to be very important too. Those are used in pricing personal insurance for your homeowners, right? I built a rubbish model, essentially looking at what's surrounding a business: if there's a bunch of trash, maybe it's not a risk we'd like to insure.

Brush, like you said, and tree overhang are really important, right? You could have trees fall on your house, but it's also a bigger fire risk, like you mentioned. Something you can't really control is how close you are to other buildings and structures, right? If an adjacent structure catches fire, you're at a larger fire risk. Rachel, not exactly what you mentioned earlier, but density of other autos, right? You can generally look at traffic patterns around a particular ZIP or sub-ZIP area, and I'm sure Progressive and Allstate have baked that into their geospatial pricing; they're pretty slick these days.

But in terms of other behavioral characteristics, you should be aware that insurance companies are getting very slick about how you're performing on the other lines of business that you may have with them, right? So if you're getting a homeowner's quote and you have an auto policy with them, they can use the characteristics from your prior auto experience to price your homeowners. Rachel, unfortunately, if your car is getting backed into a lot, that could signify you're not as good of a homeowner's or renter's risk, right? So just be wary of that kind of stuff. People are starting to locate their customers across the different lines of business and marry up that data.

Post-deployment follow-up and keeping models in use

I see Ishwar had a great question. I think he had to drop at the top of the hour, so I just want to read it. It was: I have seen a lot of times that models are built and deployed, but mostly they go unused by the business customers. How do you handle that situation, training customers or creating the business use case for the model, not only at the beginning of the project but after deployment?

Yeah, I think follow-up is really important, and so is setting that expectation ahead of time. I think we're not alone in that challenge. I definitely set time aside intentionally and say, okay, we have this release, so I will follow up with you a month from now, and then make those intervals a little longer over time, just to make sure that they're getting value out of it. Sometimes they end up having a different priority and they can lose focus on it. So we make sure we keep in touch with the stakeholders and just chat with them about how things are going. Sometimes that leads to other projects that they want to work on, and that leads to new things our team can help out with. So staying in touch with the business side and communicating with them throughout the whole process, and that whole process includes post-deployment, I think is really critical.

One last question I want to ask you, Daren, is whether there are specific resources, or maybe books or podcasts, that you'd recommend to us all?

Oh, I don't have time for podcasts, and I don't commute anymore, so I haven't been listening to anything. For me, a recent book I read was The Making of a Manager by Julie Zhuo. She worked at Facebook as a designer really early on in Facebook's history, and then she was put in the position of becoming a manager: how do you lead a team when you haven't done that before? I read it once a long time ago, and I'm going through it again just to remind myself of things I should be aware of. It's good to review every now and then, just to look for new ideas and stuff.

I think for me, being a leader of a team, you know, it's always like, what does that look like? And what kind of leader do you want to be? And how do you keep people engaged and keep talent too, especially right now, right? And how to attract talent. Those are all kind of new things for me and things I think about a lot. So that's been relevant.

That's great. Thank you. I just put the link to that book in the chat, and I'll include it with the summary on LinkedIn as well. Thank you so much, Daren, for sharing your experience with us and answering all the questions. And thank you all for the great questions as well.

Happy to be here. Thank you so much.