Data Science Hangout | Patrick Tennant, MMHPI | Welcoming People into Conversations to Make Change
Transcript
This transcript was generated automatically and may contain errors.
Hi friends, welcome back to the Data Science Hangout. If we haven't met yet, it's great to meet you. I'm Rachel, if you want to say hi in the chat, please do. The Data Science Hangout is an open space for the whole data science community to connect and chat about data science leadership, questions you're facing and what's going on in the world of data science. The sessions are recorded and they're shared to the RStudio YouTube.
We also have a LinkedIn group for the Hangout too, if this also helps you connect with each other. Together, we're all dedicated to making this a welcoming environment for everybody. So we love when everyone can participate and we can hear from you all, no matter your level of experience or area of work.
So there are three ways that you can ask questions today. You can jump in by raising your hand on Zoom. You could put questions in the Zoom chat. Or I could also call on you to introduce yourself. We also have a Slido link where you can ask questions anonymously.
One other quick note: if you are currently hiring, feel free to share your open roles in the chat as well. Thank you all so much for joining us here today. I'm so excited to be joined by my co-host, Patrick Tennant. Patrick, I see your title changed in the past month. You are now Senior Director of Data Science and Analytics at the Meadows Mental Health Policy Institute, so congratulations on the new role as well. I'd love to have you introduce yourself. Tell us a little bit about your role, the organization, and maybe something you do in your free time outside of work.
Sure. Thank you, Rachel, for the kind introduction and for the congratulations. Yeah, that is brand new. As of this week, that just changed. I wasn't even going to mention it, but I appreciate you noticing and bringing it up. So I am Patrick Tennant. As Rachel said, the Senior Director of Data Science and Analytics for the Meadows Mental Health Policy Institute. And let me read the mission statement of the Institute because the bosses will want me to get this one right.
So the Meadows Mental Health Policy Institute provides independent, nonpartisan, data-driven, and trusted policy and program guidance that creates equitable systemic change so all Texans can obtain effective, efficient behavioral health care when and where they need it. So that is a mouthful, but we, in effect, are a think tank that makes policy recommendations and helps localities implement policy changes around their mental health care system.
We are Texas-focused. We, for most of the history of the Institute, have been exclusively Texas-focused, but in the past few years have become more national in scope and even international a little bit in that we are taking things that we see that are working here and adding them to national conversations.
Really, the organization's only been around for 10 years or so, a little less than that technically. So it's not an old group, but it's been very productive in Texas and made a lot of changes, and things in the mental health system here are really getting better, in large part, not exclusively, but in large part, due to the work of the Meadows Institute. At around 130 people, the organization is now the largest it's ever been. We continue to grow and have grown pretty rapidly in the past few years, but the data team is small: there are six of us who are full-time, plus a research assistant. So we're a pretty small chunk of the organization, but we work across the organization and touch a whole lot of different things.
Basically, anything that has a quantitative flair to it, they're going to ask the data team to weigh in on. We on the data team are primarily researchers and academics by background. My boss, the Vice President of Population Health, is an epidemiologist. She was a tenured professor before she joined Meadows a couple of years ago, and she has built out the data team, myself included; I joined about a year and a half ago. And we've been working with the data of the Institute for that period of time. Prior to that, a consulting firm on retainer did all of the Institute's data work. So we've been transitioning all of that in-house and building structures and processes around it, because it's a little bit different when you own it all yourself versus getting discrete answers from an external partner.
Rachel, your last question was about me personally. Yeah, just curious to get to know you a little bit. What's something fun you like to do outside of work? Sure. So I have a 15-month-old daughter, so outside of work, most of my time goes to her. We're also expecting another little one in January, so we're a little busy around here. But yes, it is lots of fun. There's honestly not much free time, but as much joy as I can imagine.
Building the in-house data team
It sounds like the data team is growing quite a bit. And with your new role, I'm curious, what does the year ahead look like? Or what's something you're most excited about in the next year? So we are growing. We don't have a position posted currently, but I could certainly see one coming down the pipe. We are really restructuring how we do all of our work, and I'm excited that we're going to have a more robust set of workflows and organizational processes for how people request data, answers, or analyses from us.
Like I said, we built from the ground up a little bit. We've only recently started to really fully utilize Git for version control. That's where we started from: we were in Dropbox, sharing scripts with each other via email when I first joined. So we're just building from the ground up. But I think those steps are going to make us more effective at the work we have been doing, and also allow us to expand and do some more exploratory and, I don't want to call it cutting edge, but closer-to-the-cutting-edge kind of work.
That's great. I see there's a few questions starting to come in on Slido and someone had a great question. It said, super interesting to hear about the transition from using consultants to an in-house team. I'd love to hear about the infrastructure build and how you did it.
Sure. That could easily take an hour itself. When the in-house data team started proper, like I said, when my boss joined a couple of years ago, there were a few pieces that the external consultants owned and managed for the Institute. One was a big hospital database: all admissions and all encounters at ERs and hospitals in the state of Texas. The external team owned and managed this and kept all of it on their side, but they would provide discrete answers out of it to the content teams at Meadows.
So we took all of that, brought it over, put it into AWS, and built the structure for our in-house analysis totally differently from how they had done it. The other piece was the prevalence estimation that we do. If you want to know how many people in County X are dealing with major depression, for example, we have a proprietary estimate that we developed, and that was owned by the consulting firm. We have now brought that in-house as well.
I think if there's a lesson learned I could take from that, it would be that we have a good ongoing relationship with that external consulting firm, and that has been completely critical. We honestly couldn't have done it without that. If we had had a firm cutoff or end date with them, after which they were going to answer no more questions, we would be up a creek. So I hope that answers the question: basically, if you can maintain that ongoing relationship in some way, even if it's just a small retainer for a few hours a month, I think it's probably worthwhile.
Public sentiment, policy, and data
Yeah, that's really helpful. I see somebody else asked, knowing how much public sentiment affects what issues get attention, do you end up allying with news organizations to share data and compelling story ideas? Yes. I don't know that allying is the right word exactly, and to be clear, that's not my end of the operation. So I can't speak to it perfectly, but yes, we are certainly aware that we need public sentiment to be with us on these important issues. And so we have press releases and we work on stories and we put out information in a variety of ways, and I will say a growing variety of ways.
Are you able to give us an example of one of the projects that your team's working on? Sure. At the institute level, one policy, one specific mental health intervention that we really believe works, and that the research supports incredibly strongly, is called collaborative care. In very brief, it's the idea that your PCP, your family doctor, your primary care physician, is your main point of contact for most of your health care. They track things like your blood pressure, not because they're necessarily the one who would do open heart surgery on you, but because they're the ones who keep track of all of this information. Collaborative care is just the idea that we can bring mental health into that same treatment space.
So they'll track your blood pressure, but they'll also track your depression and anxiety symptoms. And then collaborative care says we build the system out from there. So the referral network starts from the PCP. They are getting supported by a psychiatrist. So they know when is the appropriate time to refer to a psychiatrist, when is the appropriate time to refer to a social worker or a counselor. And then they also have somebody in house who's helping with those things and can do some of the sort of more straightforward behavioral interventions that are possible in like an outpatient PCP setting.
The Institute as a whole encourages this through, we're trying to get and have gotten Medicare and Medicaid uptake of it, as well as private funders to pay for these things, because in health care, that's what you have to do. You have to move the funders to move the treatment. And then all the way down to the data team specifically, the data team is there to help say, you know, a given county in Texas says, okay, we see this, we understand, we want this model to be implemented in our county clinics. But we don't know how many people are going to need each tier of care under collaborative care.
So then the data team will come in and work with the data that county has from their health care provision over the past X number of years, and work out estimates using that data, our foundational prevalence estimates that I talked about a minute ago, and data from comparable counties. We'll do some matching on that and say, in other counties that have implemented this, here's how you might see it go. So we're bringing together lots of data sources and analyzing them with the goal of getting to the answer of: okay, here's how you could plan staffing, here's what you might need in terms of funding for your local hospital to be able to implement this well.
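The planning logic described here, apply a prevalence estimate to a county's population, then split the resulting caseload across care tiers using shares observed in comparable counties, can be sketched in a few lines. This is a hypothetical illustration in Python (the team itself works in R), and every number below is a made-up placeholder standing in for the Institute's proprietary prevalence estimates and matched-county data:

```python
# Hypothetical sketch: estimating how many people in a county might need
# each tier of collaborative care. All rates and shares are illustrative
# placeholders, not the Institute's actual estimates.

def estimate_tier_counts(population, prevalence, tier_shares):
    """Apply a prevalence rate to a county population, then split the
    resulting caseload across care tiers using shares borrowed from
    comparable (matched) counties."""
    estimated_cases = population * prevalence
    return {tier: round(estimated_cases * share)
            for tier, share in tier_shares.items()}

# Illustrative inputs: a county of 200,000 people, an assumed 7% prevalence
# of major depression, and tier shares from hypothetical matched counties.
tier_shares = {"pcp_managed": 0.60, "counselor": 0.30, "psychiatrist": 0.10}
counts = estimate_tier_counts(200_000, 0.07, tier_shares)
# counts -> {'pcp_managed': 8400, 'counselor': 4200, 'psychiatrist': 1400}
```

The real analysis layers in trends over the county's own historical utilization data, but the staffing answer ultimately reduces to caseload-by-tier figures like these.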
Prioritization and team workflow
Thank you. And thinking of, like, all the different problems that you're working on, how does a team prioritize? Do you have a process for that? Yeah, no, it's a challenge. So we do, we get a variety of requests. Some are contract level, that's kind of what I was just talking about. If we're in an agreement with a particular health care system or a state or local government, some of them come from legislators, right? If a legislator wants to know something, or our CEO thinks the legislator wants to know something, we're going to generate that estimate too, right? Those two things typically bubble to the top. We have to meet our deliverables, and if there's a request that's going to go to some policymaker somewhere, we need to push on that, right?
I think if there is a thing that drops to the bottom that does not get priority, it's our exploratory work. And that's, you know, that stinks. I'd love to spend more time doing that. We've got a really impressive team of people, and I think we could really move the needle in terms of methods and analytic answers. But we unfortunately don't have all the time that we want to do those things. So the priorities, to more directly answer the question, the priority list goes policymakers, contracts, everything else.
988, mobile crisis teams, and telehealth
Another Slido question was, how has the focus on 988, mobile crisis teams or community-based stabilization, for example, impacted your organization and or your work on the data team? 988 is a national number that is now up and operational that is sort of equivalent to 911, but specific for mental health needs, suicidality specifically. So if somebody is feeling suicidal and needs help immediately, they can call 988 instead of 911 and get that help.
It is all complicated, right? The different communities that we work with, and again, we're primarily Texas-focused, so the different communities have different levels of sort of uptake and buy-in of these things and of mobile crisis outreach teams, or MCOTs is what we call them. And we believe in the methods, the approach, the intervention I think is strong, but you can't force communities to buy into these things, right? So it's about providing the information, providing the guidance when there is interest, and supporting the implementation when the time comes. But in terms of how we work with that, it is about collecting and curating the data to show what we think it will continue to show, which is that these are positive programs. These programs work well and support people, and making a clear and compelling empirical case for that is sort of our task on the data team.
That return on investment question comes up quite a bit, so I'm just curious, how do you actually calculate that? It is complicated. We have some economists around who help with these things. I'm a psychologist by background, so that is not my area. I actually trained as a family therapist originally, so I started a long way away from the data and have slowly made my way over.
But our economists bring in as many factors as we can come up with. There's going to be a ramp-up cost: building the program is typically going to be really expensive, and you're not going to see the return immediately, so you're most likely going to be at a loss in the early days. But building that over time, in terms of both population growth and changes in prevalence and need, once you factor all these things in and take it out five or ten years, you can start to see the actual return on investment through reduced emergency room use, because emergency room use is really, really expensive. There's also an improved taxpayer base: more people are able to contribute to their communities if they're not in prison or in a psychiatric bed. And there are more people available to support the system itself, so as that flywheel picks up speed, peer support can jump in and take a lot of people some of the way.
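The shape of that calculation, a large up-front cost followed by savings that grow as the program matures, can be shown with a toy model. All dollar figures and the growth rate below are invented for illustration; a real analysis would discount future dollars and draw its inputs from actual ER-utilization and program-cost data:

```python
# Toy sketch of the ROI trajectory described above: big ramp-up cost,
# then annual savings (e.g., reduced ER use) that grow with maturity.
# All figures are illustrative placeholders.

def cumulative_net_return(rampup_cost, annual_savings, annual_cost,
                          years, growth=0.0):
    """Year-by-year cumulative net return: each year adds savings minus
    operating cost, with savings growing as the program matures."""
    net = [-rampup_cost]          # year 0: the ramp-up investment
    savings = annual_savings
    for _ in range(years):
        net.append(net[-1] + savings - annual_cost)
        savings *= (1 + growth)   # savings compound as uptake grows
    return net

# Illustrative: $2M ramp-up, $500k/yr operating cost, savings starting
# at $400k/yr and growing 15%/yr. A loss in early years, positive later.
trajectory = cumulative_net_return(2_000_000, 400_000, 500_000, 10,
                                   growth=0.15)
break_even_year = next((i for i, v in enumerate(trajectory) if v > 0), None)
```

With these made-up numbers the program runs deeper into the red for the first couple of years and only crosses into positive cumulative return late in the window, which is exactly the "you're at a loss in the early days" dynamic Patrick describes.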
Patrick's journey into data science
I love getting to hear when people come from different backgrounds and end up in this data and analytics space. I'm just curious, what was your journey getting here? Yeah, it's a funny one. So, I started as a family therapist, like I said, practicing clinically, and my idea was that I was going to do that for the rest of my career. When I started graduate school, that was the thought. I realized pretty quickly that it is really challenging work. I was working in low-income clinics and working with child abuse cases and things like that, and it's really hard work, and my heart goes out to anybody who does it full-time.
But I thought I'd like to have some diversity to my experience, right? I don't want to just be practicing clinically. I want to do other things, so I went to a PhD program in human development. It was sort of like a social psych-developmental psych hybrid, is how you could think about it, and I was in that program, still practicing clinically, but getting research training and getting data training. I started sort of learning at my advisor's knee on SAS, doing data work, and I was just sort of enthralled by it. So, I taught myself some SAS, and then I learned about R, and I picked it up, and I started running with it because it was more fun than SAS to me.
So, then my post-graduate school career, I worked in research and evaluation roles in academia, but I was always kind of the most data-focused person on the team, and then you start to polarize, right? You, as the most data-focused person, you get the data asks, and then you learn that, how to do that particular thing, and you polarize a little further, and then you're more data-focused than you were the last time, right? So, as projects come in, you just kind of polarize to one side, or that's what happened to me anyway.
And at this point, my clinical experience is still useful. I work for a mental health policy institute, so it's valuable to have a clinically licensed person working with the data. I can speak to what it is like to be in the room, to sit with someone who is suicidally depressed, or with a family that is really, really struggling, and so that is valuable. But I don't practice clinically anymore. I just bring that experience in through the data, through how I read the data, right? The numbers mean something a little bit different to me, I think, because of that experience.
The future of mental health and data science
Caitlin, you put a question in the Zoom chat. Do you want to jump in? Sure, I hope you guys can hear me okay. I was recently wondering what you envision for the future of mental health care and data science. I always compare this to bioinformatics: there's a whole discipline for biology and data science, and then human genomics. So I wonder what you envision for the future of mental health and data science, and whether you think the behavioral sciences will ever catch up to genomics and the more physical life sciences?
Thank you, Caitlin. That's an awesome question. I like that a lot, and I am hopeful that we're on that same trajectory. I don't know about catching up, just because they've got a head start, right, but I think there is definitely growth in that area for behavioral science. We're pretty broad in our focus here; we generally focus on populations rather than individuals. But at the individual level, I'll say one thing that I think is coming and that will change mental health and behavioral science broadly: computer adaptive testing for mental health concerns.
If you've taken the GRE, and I'm sure lots of other standardized tests do it too, as you answer each question, the computer adapts your next question based on whether you got the last one right or wrong. So it's targeting toward your ability, your level of knowledge on whatever that test covers. And in mental health, we ask lots of survey questions, right? The PHQ-9 is the standard measure of depression in most populations, and it has nine questions, but right now they're static: everybody gets the same nine questions every single time they respond to it, and the questions don't vary at all according to how you answered questions one or two or three.
If you bring computer adaptive testing into that space, you can, one, reduce participant burden enormously, because you don't have to ask as many questions. You can use AI to generate models that inform, based on how somebody answers question one, whether they need eight more questions or really only two more. So it reduces participant burden. And two, it makes the output more precise: we don't have just a single number, but a single number for all the people who fit into the really precise category of answering those questions in that particular pattern of responses.
And then, once you integrate computer adaptive testing, you're inherently integrating it into your data world, into your data system. A lot of PHQ-9s are done on paper, right? You walk into your counselor's or your psychiatrist's office, and you might fill out a form on paper and hand it back to them. But once you go to computer adaptive testing, you have made the leap into digital collection of that information, which allows us to connect it to everything else: the ability to connect those responses over time within that person.
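Full computer adaptive testing selects each next item from an item response theory model of the respondent's latent severity. The Python sketch below shows only the simplest form of the burden-reduction idea: a short screener that gates whether the full item set is administered (the common PHQ-2 to PHQ-9 pattern). The cutoff of 3 is the conventional PHQ-2 screening threshold, but treat all of this as an illustration, not clinical guidance:

```python
# Minimal sketch of adaptive burden reduction: a 2-item screener decides
# whether a respondent needs the remaining items at all, instead of
# everyone answering the full static questionnaire.

# PHQ items are scored 0-3 ("not at all" to "nearly every day").
PHQ2_ITEMS = ["little interest or pleasure", "feeling down or hopeless"]
FOLLOWUP_ITEMS = 7  # remaining PHQ-9 items, asked only on a positive screen

def items_to_administer(phq2_scores, cutoff=3):
    """Return the total number of items this respondent answers:
    just the 2 screener items if the screen is negative,
    all 9 if the screener triggers the full set."""
    screen_positive = sum(phq2_scores) >= cutoff
    return len(PHQ2_ITEMS) + (FOLLOWUP_ITEMS if screen_positive else 0)

print(items_to_administer([0, 1]))  # negative screen: 2 items
print(items_to_administer([2, 3]))  # positive screen: all 9 items
```

A true CAT goes further, re-estimating severity after every response and choosing the single most informative next item, but even this crude gate shows how adaptivity cuts the number of questions most respondents ever see.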
Sorry, I'm a little passionate about computer adaptive testing. I don't know if you can tell, but I just, I think there's a whole lot of potential there. Behavioral science is unfortunately behind on those things, but I do hope we're going in that direction.
So, on that topic where you said behavioral science is behind on those things, in general, why do you think certain industries can take such a long time to get into data and analytics? That is interesting. I'll speak to healthcare because it's a good example of an industry that's lagged on that. In healthcare, the systems are largely outdated and kind of archaic, and they're slow to pick up new things. Part of the issue, in my view, is the regulation around them. There's a ton of regulation, HIPAA and HITECH and otherwise, around how these systems have to be built and maintained.
I used to work with an attorney who is a national expert in HIPAA, and he would say it ends up most of the time just being an excuse, right? It's not that the law actually forbids this thing from happening, but it is a very, very good excuse to hide behind to avoid pushing the boundaries or extending into new areas. And I think that is probably right. That has been my experience: people will just say, we can't do that because of HIPAA, because of HITECH. Working toward a place where we are not afraid to make changes, respecting the data as we need to while also pushing boundaries and being willing to take some chances with new technologies: it seems like that's a balance we have not quite struck yet.
Rural health and telehealth
I see, Logan, you asked a question in the chat. Do you want to jump in? Sure. Hi, nice to meet you. I'm Logan. I was just curious whether Meadows works with rural health as well, because you talk a lot about populations, but I know Texas is huge, so I don't know the dynamic there. And also just your thoughts on how to better improve data science for rural population health, because data collection looks a lot different for those communities.
It certainly does. Yeah. Thank you, that's a great question, Logan, and it's good to meet you too. I think the biggest thing that comes to mind for me is the telehealth transition. Telehealth as of two and a half years ago existed but was not widely utilized. Now it's not only still utilized, it hasn't gone away since COVID, it's become instantiated, right? It's built into policies. People have procedures around it. People have specific rooms for providing telehealth treatment.
And we have good evidence, and we're getting better and better evidence, that telehealth is effective and works in behavioral healthcare. There's still some hesitation. And again, I'm a therapist; I do understand that hesitation about not being in the room with the person. I don't think that hesitation is unreasonable, but I also don't think the evidence supports it. The evidence we have to date says telehealth works really well and that people are able to get good treatment remotely.
In Texas, we certainly have some vast rural areas. We worked with one school-district-level system that was providing services for, I think it was, 14,000 square miles out in West Texas. Just massive areas where commuting isn't going to get it done. They're really sparsely populated, so you need to bring in telehealth. And similar to what I was just saying about computer adaptive testing, telehealth kind of brings you along, right? It spurs data collection, it spurs the digitization of a lot of information in a way that maybe hadn't been done before. So I think it creates opportunities.
Data sources and surveillance
Sam said: when I worked in public health practice, we wished there were more public mental health surveillance data sets to use for our analysis or report development. What are your primary data sources, and what do you wish was available? We have lots of wishes. We are working right now primarily with the NESARC, which, if you worked in public health and mental health, you will recognize that acronym, I think, but I am not remembering what it stands for off the top of my head right now, forgive me. It's a large-scale psychiatric epidemiological study of a representative sample of people across the country, and it asks all of the mental health questions you can think to ask.
The NCS-R and the ECA, the National Comorbidity Survey Replication and the Epidemiologic Catchment Area study, are two others that we've used a lot. And then, like I said, we have this really, really rich data set in Texas. It's Texas-specific, but I think other states probably have similar things. It covers all hospital admissions and encounters for the state. It's not free and publicly available, but you can pay to get it, or if you work with a state university system, you can get it for free. That information provides us a real treasure trove of data on the treatments that are happening, and it also lets us track people over time: there's an identifier in there that allows us to connect people across encounters and say something about trends in their treatment and where they're going.
Computer adaptive testing and outcome data
Marika, I see you had a follow-up. Do you want to jump in? Sure. I'm just thinking a lot about the relationship between computer adaptive testing and how it relates to machine learning studies or recommendation systems. Given the goal of creating really appropriate and accurate recommendations and question flows in the PHQ-9 or some other test, you probably need a lot of data, and not just any kind of data, but true outcome data, which I think is pretty rare in the mental healthcare space. So, any recommendations on how to get that, on minimum viability, things like that?
Yeah. I've not really done all of that myself. I have thoughts about it, but I don't want to speak as if I've been down that road. I think you're absolutely right. And if we go toward more open and less fearful EHR usage by healthcare systems, I think we have the opportunity to get that: connecting the social worker who's talking to that person about their housing status and their food stability, connecting those kinds of interviews with their physical health follow-ups with their PCP, as well as changes in their psychiatric medication regimens. That's all happening within the hospital system, so it doesn't seem impossible that we get toward better outcome data.
I will say there are companies out there working on that cross-system integration of data, and that is their whole mission: integrating the social service and city- and county-level data for an individual with their healthcare data and with their individual self-report on how things are going, which you might get from, say, a PHQ-9 or a GAD-7, so that you can tie all of those things together into a single system. That system is accessible to the person, so the individual has the ability to track all of their data from all these places, and it's accessible to the healthcare system and the other providers as well.
Language, culture, and the consulting process
Being located in Texas, does your team do any research on the impact of language and/or cultural discordance in mental health treatment? So I have not personally done this, but yes, for our team, language justice is the phrase that gets used, and that is a big focus for us, though it's more programmatic than research for us. Again, I think the easiest way to conceptualize the work we do is as consultants: we're hired a lot of times for specific projects or to answer specific questions. I was in academia, and my boss was in academia, so we have that kind of lens to us, but we don't get to just go out and explore a question because it sounds interesting to us, or write a grant that would let us do that.
I know that you said the team is internal now, but you still act as consultants. So I'm curious, what does that process look like, being the consultant when a request comes in to you? Sure, sure. So typically what we'll have is a contract development process with either a locality, a healthcare system, or sometimes a foundation, and there's a scope of work developed well in advance of any actual work happening on our end. That lays out, okay, here are the three or five or seven questions that we're going to answer for you. It's not detailed to the point of the specifics of the data set or the analysis, but it does provide: here are the things that we're going to tell you, and we're going to give you an answer to this particular question in the best and most rigorous way that we can.
And once that comes down to us, and that is over my head, the executive team works out these contracts, but as those come down and we see the scope of work, we then divvy it up within the team: okay, who has time, and who knows about each of these particular areas? And a thing we struggle with is the tension between rigor and actually providing an answer, between rigor and actually getting something done, right? We very much have the inclination to push everything and make it all as airtight as we possibly can, but we also have deadlines, and we also have to provide answers.
Even if the information is imperfect, I think it's necessary for us to provide something. It reminds me of the open source ethos, which is that we're very transparent about all of the analyses we do. We want to be as open as we possibly can, because we understand that, one, perfection is never going to be possible, and two, we do have to get to some end state. We do have to make a recommendation. We might reanalyze that same data next year and get to a different answer, but we need some answer for that policymaker or for that client. So we do the best we can to make it as strong and rigorous as we can, and the backbone of that is that we're transparent about it. We're going to say all the things that we know, all the ways we did this, and the reasons we did those things. And long method sections make some people's eyes glaze over, but it feels important to me that we provide that, that we're clear with whoever we're providing that answer to about the way we got there, and that we do it in a way that is reproducible and shareable.
Inference challenges in mental health data
Jordan, I think you had a question a bit earlier that I missed in the chat. Do you want to jump in? Sure. It's probably less of a question and more of a comment on computerized adaptive testing. A lot of the time, the data I look at are actually outcome measures from clinical trials. But there's growing interest, I think, specifically within CNS psychiatry and other mental health fields, in developing what we call patient-reported outcomes rather than clinician-reported outcomes, where the scoring process isn't burdened by an individual and their own clinical experience, but is a bit more standardized across patients and patient populations.
So, you know, increased research into and understanding of things like computerized adaptive testing is really beneficial for making sure that when we're looking at data sets that contain these sorts of outcome measures, they're less influenced and less biased by who completed the assessment, and the data have a bit more integrity between themselves. So if I want to turn that into a question: a lot of the data you're looking at with some of these outcome measures, do they come from a good variety of sources? And is there anything you have to do to aggregate some of those sources together?
Yeah, Jordan, thank you. That is an awesome question. And it sounds like we should be friends. I think generally what we're trying to do is to triangulate, right? To bring it all together. So alongside the problems you were talking about with a clinician-reported scale, there are parallel, though not identical, problems with a patient-reported outcome scale, right? You get that sort of inherent bias. And so really what we'd prefer, I think, would be to have both of those things available to us at any given moment, or for any particular analysis. So we think of that as triangulating: bringing together the data sources for a particular question, getting as many sources of information as we can.
But the other piece is that when we have say a static estimate of depression in some population, generating appropriate confidence intervals for that and how to extrapolate that to a new population is something we're thinking about right now. And we're playing with some bootstrapping methods that I think are going to really help us do that better. Because you can't calculate a standard error in the same way if your sample is too small. And so we work with some small and pretty specific communities and subpopulations. And so to be able to get to an appropriate estimate for populations like that in other places, I think we're going to explore the bootstrap option.
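Patrick doesn't spell out the team's method on the call, but the percentile bootstrap he alludes to can be sketched in a few lines. The screening scores below are invented for illustration, and `bootstrap_ci` is a hypothetical helper, not the Institute's actual code:

```python
import random
import statistics

def bootstrap_ci(sample, stat=statistics.mean, n_resamples=5000, alpha=0.05, seed=42):
    """Percentile bootstrap: resample with replacement, compute the statistic
    each time, and take the middle (1 - alpha) share of the estimates."""
    rng = random.Random(seed)
    estimates = sorted(
        stat([rng.choice(sample) for _ in sample]) for _ in range(n_resamples)
    )
    lo = estimates[int((alpha / 2) * n_resamples)]
    hi = estimates[int((1 - alpha / 2) * n_resamples) - 1]
    return stat(sample), (lo, hi)

# Hypothetical screening scores from a small community sample
scores = [4, 7, 2, 9, 5, 6, 3, 8, 5, 4]
point, (lo, hi) = bootstrap_ci(scores)
```

Because the interval comes from resampling the observed data rather than from a normal approximation to the standard error, it degrades more gracefully for the small, specific subpopulations described here.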
Welcoming people into change
I see Colter was resonating with what you said about archaic and outdated systems in the government sector, and in public health as well. So I'm just wondering, for people who are feeling that in their own organizations, what's the best first step toward getting out of some of these archaic and outdated systems? I was thinking about this particular question at a broader scale: how do we get society out of archaic, outdated systems, and how do we move people forward? And this is going to sound super cheesy, but I really mean it. Rachel, I think it's this. I think it's what you're doing here. I think it is the inclusivity and the welcomingness of this that is the answer to that.
And so to bring that down to, you know, a sub-society level: I think in your organization, you want the people who are running those archaic and outdated systems welcomed into the conversation. Not brought in as in, we're going to knock your system out, do something way better, and sort of send you away, but welcomed along with the people who want to make the change, and along with the people who don't understand the conversation at all. Get everybody there, feeling like they can contribute and have enough understanding to really be a part of it. That is a way, way more efficient and effective way to make change, in my view, than just trying to convince a single top-level leader, or trying to convince the gatekeeper of the archaic system that it needs to change. I think you want everyone there, everyone a part of it, or else there are going to be a lot of barriers and a lot of roadblocks that will hinder progress.
Exploratory work and building internal community
Catherine, you had asked a question a bit earlier about some of the exploratory work. Do you want to jump in? Sure. You had mentioned earlier maybe wanting some time to do more exploratory work. And I'm just curious if you can elaborate: if your team was not weighed down with all the other priorities, what would you focus on? Thank you, Catherine. I love that question. We have a few irons in the fire, I guess you could call them, around that. One is predictive modeling at the population or ecological level. So where are needs going to be? Being able to see into the future a little bit, about what a community's needs will be, would be enormously valuable in terms of planning for services and building systems and structures. It's never going to be precise down to an individual person, or for an individual person over time. But if we could get some foresight into what community X will need next year or in the next three years, that could be incredibly valuable information.
We are not a team of machine learning experts, but we're pushing at that and trying to train ourselves up on it so that we can hopefully provide some useful information with some foresight. The other one I'll mention is related to the bootstrapping thing: we have other projects around simulation, sort of like Monte Carlo models, using distributions to try to better quantify the risk associated with certain things. We are often tasked with providing a single number, and we are inclined to provide ranges, but making those ranges as strong and as well grounded as we can is a task that we're working on. And so if we can use some simulation models, or plug in, you know, reasonable distributions to generate ranges for our estimates, I think we're in a much better place.
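The transcript doesn't describe the team's actual simulation models, but the idea of plugging distributions into a Monte Carlo run to turn a single number into a range can be sketched as follows; the prevalence and population figures are made up for illustration:

```python
import random

def simulate_need(n_draws=10_000, seed=1):
    """Draw uncertain inputs (prevalence, community size) from plausible
    distributions and report a 90% range for the number of people in need,
    instead of a single point estimate."""
    rng = random.Random(seed)
    totals = sorted(
        rng.betavariate(20, 180) * rng.gauss(50_000, 2_000)  # ~10% of ~50k people
        for _ in range(n_draws)
    )
    return totals[int(0.05 * n_draws)], totals[int(0.95 * n_draws)]

low, high = simulate_need()
```

The width of the resulting range makes the input uncertainty visible to the policymaker, which is the point of providing ranges rather than a single number.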
I always frame this as questions, but I'm always interested in learning what other teams are doing, or what challenges they're facing in this area. And I'm just curious what this looks like from your team's perspective: do you have this internal data community? Yes, we do. We can certainly do it better, and that's a priority of mine, to always be improving around that. We have our core data team. We also have people working with data who are embedded within the content teams, the subject matter expert teams. And so there is a core group, but then also some tentacles (I'm thinking of an octopus) out through the organization, of other data-related folks.
But really the main goal for us is to just have people feel confident, right? When we work with the subject matter expert teams, we want them to have the confidence in that conversation to push back on us if we say something that doesn't make sense, and to understand that they have expertise that is absolutely essential to this. And this is the really weird part: it seems like they're coming to us and they're dependent on us, but actually that is the inverse. We can't do anything useful without the subject matter expert teams. And so building confidence is actually my number one priority. Welcoming and inclusion are part of that, but so is getting people to see real-world examples of where the numbers might say this, but because you know about X, Y, and Z, you can tell us what it actually means.
That's extremely helpful. Thank you so much. Thank you for sharing all of your insights and experience. It's been a pleasure. I know we just hit the top of the hour here. Thank you for joining us, and thank you all for the great questions as well. Yeah, thank you guys very much. It has been fun. And like I said, I'm really happy to chat. Rachel, again, this is an awesome community, so thank you for doing this.
