Resources

Data Science Hangout | Lindsey Dietz, Federal Reserve Bank | Focus on the impact of the output

video
Aug 11, 2022
1:09:17


Transcript

This transcript was generated automatically and may contain errors.

Hi everybody, welcome back to the Data Science Hangout. It's so nice to see everybody again. I'd love to hear what everybody was up to all of July, but it was so awesome to be able to meet so many of you at the RStudio conference last week too, and to be able to put a face to the name. I'm Rachel, and it's great to meet you. If this is your first Hangout, this is an open space for the whole data science community to connect and chat about data science leadership, questions you're facing, and what's really going on in the world of data science.

And the sessions are all recorded and shared to the RStudio YouTube as well as the Data Science Hangout site, so you can always go back and re-watch or find helpful resources there too. I'll add up here first that we do also have a LinkedIn group for the Hangout, so if you do ever want to continue a conversation, you can go there as well. Together, we're all here and dedicated to providing an inclusive and open environment for everyone.

So there's three ways that you can ask questions today. You can jump in by raising your hand on Zoom. You can put questions in the Zoom chat, and just put a little star next to your question if you want me to read it out loud instead. And then third, we also have a Slido link where you can ask questions anonymously too. Just like to reiterate, we love to hear from everyone, no matter your level of experience or industry that you work in as well.

But today, I'm so excited to be back with you all and to be joined by my co-host, who I just got to meet in person last week, Lindsey Dietz, Stress Testing Production Function Lead at the Federal Reserve Bank of Minneapolis. And Lindsey, I'd love to turn it over to you and have you introduce yourself and maybe share a little bit about the work that you do.

Lindsey's background and role

Sure. Thanks, Rachel. Rachel, I think we actually maybe met in 2020, but I know no one remembers the time before the pandemic. In San Fran, possibly. So I'm Lindsey Dietz. I am currently essentially a data science quant manager within the Federal Reserve Bank of Minneapolis. For those of you who aren't familiar with it, the Federal Reserve System is basically 13 separate entities. There's a Board of Governors in Washington, DC, and then there are 12 separate individual reserve banks. I'm in the one in Minneapolis, which covers the Ninth District.

I work on what's called the large bank stress test, which essentially is a way to see how banks would do under a severe recessionary situation. So we have a set of internal models that we use to sort of assess their performance, and the outcomes of that test are published on an annual basis, except for 2020, when we got to do it twice.

So my background is that I have a sort of math-stats upbringing in undergrad and a little bit of grad school, and then I started work in finance and decided that I wanted to be a professor. So I went back for a PhD in statistics. About halfway through, I realized, you know, what I was really enjoying the most was sort of the consulting aspects of the work, and that I get to apply all these statistical concepts in different fields. And for me the most interesting space was finance and economics at the time. And so I had some connections with the Federal Reserve Bank of Minneapolis, which was a really great fit as sort of a research institution, but also a place where I could do this kind of applied statistics work that I was interested in.

So I had joined a group that does something called validation, which is basically looking at financial models the way a journal reviewer would look at a submitted journal article. Like, you know, have they met the assumptions of the methods that they're using? Have they implemented things correctly? Are there, you know, holes in the logic? So I worked in that space for about three years. And then about three years ago, I switched over to this implementation and analytics work, which essentially looks like putting our models into code, production-quality code, and then developing analytics to tell stories around the outcomes or the data, so that our policymakers can make good decisions and have, you know, useful economic stories to tell around the outcomes of the stress testing exercise.

Right now I'm sort of jointly managing a team of about 12 quant data scientists that work across all of these different financial models. Our program is about 150 people in total, so I get to work very cross-functionally with a lot of different people with different skill sets. It's definitely very interesting and in line with the idea of the consulting work that I really set out to do.

Excitement about explainable ML

That's great. Thanks, Lindsey. As we wait for questions to come in from everyone, I'm curious, what's something that you're most excited about right now with regards to data science? Yeah, so as a statistician by training, we can be a little bit down on the machine learning community, because sometimes we can say, hey, they can't explain their results, or their results might be biased in some way. So I'm really very interested in the movement towards explainable ML. While it's not really something I work with on a daily basis, I think it's really going to change, at least, the rigor that we think about going into the data processing, and how these, you know, seemingly kind of innocuous algorithms impact people in disparate ways.

And so I have some colleagues from grad school who do a lot of work in this space and very interested to see where that part of the field goes. I think there are a lot of, you know, useful crossovers for the statistician and machine learning community and don't want to be a gatekeeper. So I think I'm very interested to see where that goes and learn more about it.

How stress testing models work

Someone had asked, am I understanding correctly that models are used to forecast unique situations, like a severe recession? And if these are so uncommon, how is that done? Sure. So essentially, the Federal Reserve has a group that will develop these scenarios, which basically consist of several variables. So you can think of things like the unemployment rate, or Treasury yield curves, being part of that set of variables. And they project paths for those that sort of emulate what a severe recession would look like. And then the idea is that we publish those, typically around February each year. And then we run our models using those as sort of the predictor variables that relate to these different financial profiles. And banks do the same; they have their own internal sets of models that they use as well.
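As a rough illustration of that flow, a toy model might take published scenario paths as inputs and produce a projected loss path. The variable names, coefficients, and model form below are made up for the sketch and bear no relation to the actual supervisory models, which are far more complex:

```python
# Entirely hypothetical sketch of "scenario paths in, projected outcomes out".

# A published severely adverse scenario: projected quarterly paths for a few
# macro variables (unemployment rising sharply, Treasury yields falling).
scenario = {
    "unemployment_rate": [4.0, 6.5, 9.0, 10.0, 9.5],   # percent
    "treasury_10y_yield": [2.5, 1.5, 0.8, 0.7, 0.9],   # percent
}

def project_loss_rate(unemployment, yield_10y):
    """Toy loss-rate model: losses rise with unemployment, fall with yields."""
    return 0.5 + 0.3 * unemployment - 0.2 * yield_10y

# Run the "model" along the scenario path, quarter by quarter, so the stress
# on the portfolio can be read off as a trajectory rather than a single point.
loss_path = [
    round(project_loss_rate(u, y), 2)
    for u, y in zip(scenario["unemployment_rate"], scenario["treasury_10y_yield"])
]
print(loss_path)
```

The point of the sketch is just the shape of the exercise: the scenario variables play the role of predictors, and the same published paths can be fed through many different models, at the Fed and at the banks themselves.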

Communicating technical results to non-technical audiences

I see somebody else had asked, Lindsey, do you have any tips for communicating technical results to non-technical audiences? Yeah. So I would say it's probably key to think of pretty much every audience as a non-technical audience. Even when you have people who are technical, it's best not to assume that they know the things that you know, because it's unlikely that anyone will be as deeply immersed in a problem as you are. And in some sense, it can almost be worse if you have a technical person and you're like, well, why don't you understand this? Because then they may, you know, feel a little bit hurt by that.

But I would say it can be hard. But the biggest thing is focusing on the impact of the output first. So I like to tell my team that what you might think is the smallest piece of work is something that someone else is going to think is sort of magic. Like, I was just telling someone earlier today, you know, they were talking about all the coding stuff they had done. And I'm like, you know, the thing that's going to really impress people is that you just generated this automated piece of documentation from your code, basically for free; it took you five minutes. But that's the magic piece. I remember someone at the RStudio conference talking about minimizing time to magic. And that kind of thing is what really is the impact driver for most people. So really focusing on the impact in regular words, removing the jargon, that kind of thing.

The biggest thing is focusing on the impact of the output first. I remember someone at the RStudio conference talking about minimizing time to magic. And that kind of thing is what really is the impact driver for most people.

And then, you know, trusting yourself that you've done all the checking, the assumptions, the really technical details, and writing that down for yourself and having, you know, maybe a good appendix or technical paper to go with your work. But really, that is the key, I think, for the non-technical audiences. And then sometimes people that are non-technical will ask you questions that you just didn't even think about, which are really very helpful in sort of being proactive in future analysis. So I would say, look at it as an opportunity to practice your translation skills. But especially for decision makers and managers, you know, bottom lines are important, right? How does this save us time? How does this save us money? Those are kind of key concepts that I would keep in mind for non-technical spaces.

Impact of high inflation on the work

Someone else asked on Slido, how has your work been impacted by this period of high inflation? Yeah, it's a good question. I will say that my work is not really in the sort of inflation and interest rate space, but we're certainly all thinking about it in the background, about how we should be incorporating this, or thinking about the data that we get in from banks when we look at that. Certainly it will change things about the portfolios that the banks have. You know, they may be taking certain actions because of high inflation. So that's definitely something that we're considering, but again, I don't work specifically on inflation. I think we're trusting that the Federal Open Market Committee and their tool set will hopefully be able to help us in the long run.

Getting buy-in for R and RStudio at the Fed

Hi, Lindsey. Lindsey and I are just across the river from each other. So yeah, I'm just curious how you got buy-in at the Fed for using R and RStudio, if you were the one to push for those, or if they were already there. And then if you've had any luck getting some, and I don't even like this term, less technical people, maybe people who in the past have used a lot of super complicated Excel spreadsheets, for example, to try using R.

Yeah, I think, luckily, I didn't have to do too much of the hard work up front. I think there were a lot of key decision makers at the beginning who had some affinity for, actually, maybe S+, so they weren't necessarily using R, but, you know, they knew that R was sort of blooming a little bit at the time. I should have mentioned, I've been an R user technically for probably 20 years, but really only in the last five do I feel like I fully understand, or not even fully understand, but am fully taking advantage of some of the capabilities of R.

I think also it was pretty cool to be doing a PhD at the time when Hadley and Yihui were coming out with all of these inventions, you know, sort of like the tidyr framework and the reshape2-to-dplyr kind of framework. But I think, you know, what was nice about the place that we were in was it wasn't necessarily people using Excel, but more just people using other statistical software packages. And I think for our work, we needed to really centralize and unify, so that we could standardize code and we could all run each other's code. So we couldn't choose multiple different languages to do that.

And I think, you know, we had some smart leaders who sort of saw that open source was a good way to move. So they chose R, and then we also use some Python in our work as well. Certainly there is still like continuous buy-in work to be done. I think people are very beholden to the software that they used in grad school or undergrad, or what they've been using in prior jobs. So convincing people who are using more of the proprietary software can be hard, especially in economics, where there are really specific software packages that people enjoy using. But, you know, R is great in that it's very approachable and fairly easy to Google.

And I think just having all these communities, like I said, a lot of my team was able to go to the RStudio conference. You know, there's not necessarily equivalent stuff for some of the other software packages. And then I think just being open to really teaching people the ropes; it's kind of like being a professor, where you just have to stay a little bit ahead of the students. So like, you know, some of the work to try to turn things into packages within our framework, I'll go run and do a little bit of it. And then I've had a lot of success getting our interns to sort of front-run some of this, so that we have a framework to give our team, which has been very helpful.

Hi, everyone. My name is Zach Garland. I work with the Economics Institute, so it's very related in ways to the things that you focus on. I was curious, what are some of the most interesting data trends that are new, or anomalies that happened after COVID? Sure. So I should have said this at the beginning, but I'll say it now: these are all my views and do not reflect the views of the Federal Reserve Bank of Minneapolis or the Federal Reserve System. But I think definitely one thing that has been very interesting about this time, and that makes it sort of different from the 2008 to 2010 financial crisis, which a lot of people probably remember, was that there was just an unprecedented amount of government support in this COVID timeframe.

And so I think, you know, there's kind of a debate about, you know, how the financial industry sort of maybe benefited from that. Would they have been in the same position without that sort of government support, even though it went directly to consumers, or, you know, individual Americans? You know, a lot of the behavior of people during that time was, you know, either they had to use it right away, or they just put it back in their deposit account, basically. And so that's been very interesting. People have referred to the COVID timeframe in some ways as like a K-shaped recession, where some people really weren't that affected and others were just really drastically affected.

And also just, you know, maybe the co-movement of different macroeconomic variables that had been observed in more traditional recessions was just not necessarily the same during this COVID timeframe. So I think, you know, there are a lot of supply chain issues that are affecting things and maybe changing the price structures in ways that we wouldn't have seen in other kinds of recessions. So for example, people may have noticed the used car market has been kind of insane. I think I could sell my car back to the dealer six years later, and they would give me maybe more than I paid for it. You know, partially just because supplies have been so short.

Using Shiny at the Fed

I see Sam asked the question, is your team using Shiny today? We do use Shiny. So actually, I have an interesting story about how we started using Shiny. So I think in about 2020, when we were all working from home, we were sort of running a crazy amount of stuff, just to better understand, you know, the early impacts of the COVID pandemic, basically, because our timeframe, when we really run the stress test, is basically April through June. And so, you know, the lockdown period really occurred essentially in March.

We knew that we'd need to maybe do more analysis than normal, given the situation. And there were tools, in this case in Excel and in other spaces, for these sort of key analytics. And a group of us were just like, we just can't deal with this. We can't make constant updates to these Excel spreadsheets. And so basically, we got a little bit of white space for a couple weeks, and I think three or four of us just sat down and said, let's just try to build these things out as Shiny tools. We had some access to deployment in RStudio Connect, so we were like, let's just try this out. And also, we built out some frameworks using just HTML reporting from R Markdown. And, you know, we put together what's probably the basis of the set of analytics that we still utilize. And so some of those are Shiny, and some of those are still in HTML from R Markdown files.

And then the next step, converting something like that to Shiny, is very easy once you sort of have the framework laid out in R Markdown. It's actually something we have an intern working on right now, converting some things that are sort of HTML into Shiny. So we are able to use Shiny, but we're not able to convince everyone to use Shiny, so we're supportive of other tools, Tableau, etc., that people want to use for these sort of analytics. But I think our team, especially those who went to the RStudio conference, are very excited about Shiny. And even those that are outside our team, we told them about Shiny for Python, and they're pretty jacked about that, too.

Interns and their skills

Hi, yeah. Nice to see you. I'm Mike Smith at Pfizer. I was just interested to know, how are you seeing interns at the minute? You know, how are they coming in? What are their levels of knowledge on Python and R and data science tooling? I guess my feeling is that they'd be a whole lot smarter than they were five years ago, or 10 years ago, maybe.

Yeah, so we have had some great luck with interns. We've had a program start up in the last couple years in Minneapolis, where we take graduate students; we call it a data science internship. And their skills have been pretty fantastic. I mean, in some cases, you know, they're even more cutting edge with these R tools, or Python tools, than we are ourselves, as folks who maybe haven't been in school for a little while, or haven't had as much of the dedicated time to study. So like I said, it's been very easy to set them off doing things like, hey, go work on building the Shiny app, or, hey, why don't we figure out how to turn this into a package?

So they've been great. And I mean, we do, in our interview process, have sort of a code sample, just to make sure that it does appear that there is some skill in that area. But we're also pretty open, because the idea is for them to learn on the job as well. But I think all of the interns that I've worked with have had very good R skills, and at least some Python skills too; we've just been maybe prioritizing the R skills a little bit more in our pool.

Thanks. That seems to mirror what I'm seeing. It's almost like the interns could teach us a lot, you know. Yes. No, we do. We make them do that, actually, too.

Working across multiple programming languages

There was an anonymous question at the beginning, which actually ties in to this as well. But when you have people using multiple programming languages, so R, Python, SAS, how do you structure the work, and who does what? And how do they work together? Sure. So I think we have sort of specific tools that we like to use for specific purposes. So I'll say we use R as probably our main language, but the developers that we work with might use any tool. So they might use SAS, or Stata, or MATLAB, or some other tool, or even R, basically.

And so, you know, a lot of us have sort of a multilingual background, where maybe we weren't coders in SAS, or Stata, or something else, but in the process of helping to translate that code into sort of a production environment, we definitely learn on the fly, you know, how to Google the right things for SAS or Stata, or how to interpret what things are happening, because sometimes the algorithms aren't the same. When you have to translate something, you have to kind of determine how much of that you need to dig into, versus just saying, conceptually these are both the same thing, but SAS does it differently than R, something like that.

I think we more or less match people up based on maybe their financial background versus necessarily their software background, just because we're all translating to R, for the most part, and the Python pieces that we have are a little more general. And so we just have a couple folks on our team who are a bit more expert in Python, but they are also still expert in R as well.

Ethics and diversity in the work

There was an anonymous question earlier that was, how does your team think about ethics in your work when it comes to things like race or ethnicity? Yeah, that's a good question. You know, I will say, maybe I'll separate ethics from sort of the diversity, equity, and inclusion piece. One of our main goals, and especially for me as a hiring manager, is to really have a pool, every time that we have a posting, that meets some specific diversity criteria, but also maybe meets some criteria like diversity of thought or diversity of background, versus just typical metrics. And so that means that our hiring process can sometimes take a little bit longer. But it also means we don't just hire the first maybe qualified person that we come across in our pool.

And I think internally, once we have people in the organization, we really want to retain them. And so a lot of the focus within our organization is to have employee resource networks that cover, you know, different intersectional parts of our personalities and work lives. And then within my team, it's actually great, we sort of do an internal team meeting where we'll usually discuss topics that overlap in this DEI space, a lot of times from a research perspective. So we've covered things like, we looked at the Iceland four-day work week; they had some studies on that not too long ago.

So I think, you know, the goal is that it's just part of the work, and that it's not like a separate thing. It's not like we're doing DEI now, and then we go back to our day job. No, it's meant to be integrated within our day job, just to help everyone realize that these things are always there. We've also had some really fantastic anti-racism training and those kinds of things within our organization.

But then I think when it comes to approaching the work, like on the modeling side, definitely we're considering, you know, what's in the data and what's missing from the data, right? No data is perfect. You know, we're relying on banks to provide data that we asked them for based on certain instructions. And so understanding what's missing from that data is really critical to thinking about, you know, sort of these DEI issues. And then, from an ethics standpoint, the Fed has a very high code of ethics. There are some very specific things that we're required to avoid or comply with.

Learning R and favorite resources

Jonathan had asked, what platforms and fun data set projects did you use to learn or enhance your R programming skills in your free time? Yeah, so I think there was a conference that I attended probably five years ago, a Women in Statistics and Data Science conference. And that was the first place I'd learned about R-Ladies. And one other person from the Minneapolis area who attended with me was really inspired and decided to take it on and start up our local chapter again, which had been a little bit defunct for a while. And I think just going to some of those meetups is where I was really inspired to learn the tidyverse. You know, it's a very safe space, and I think we had some great contributors who were able to give really good beginner overviews of some of these topics.

You know, I've also enjoyed the Twitter community around #rstats. I feel like I kind of know what's going on, even if I don't always dive into something; like the emergence of this penguins data set in every example, I've been sort of following that recently. And it was even a highlight of the OpenScapes Quarto talk, which I thought was very cool. And I think just, you know, trying to immerse myself a little bit more into the community. But sometimes, if I just want to learn a package or something like that, I'll just dive into the package documentation. Usually people put pretty good little data examples in there. I've also had some good luck taking Coursera courses that have good R examples as the basis for learning a new topic. But yeah, it's just great that there's so much free material; you know, the bookdown package has made it possible that almost everyone publishes their books openly now, basically, if you don't want to get the hard copy. And so I think, you know, copying and pasting from Stack Overflow can sometimes be the first step in any good learning data analysis project.

Bayesian modeling and getting buy-in

Hi, Lindsey. I'm at the data science consulting firm Ketchbrook Analytics, and we work with a ton of banks and lending institutions, so this is really right up my alley. My question for you was about stress testing in particular, which is an area that we work in quite a bit, developing those models. We've been having some trouble lately with the sort of new approaches and new tools that we have for Bayesian modeling, which allow us to give more of a probabilistic output. That sort of feels more like a stress-testing-y type place to have those models, as opposed to deterministic models that just say, hey, this is exactly what's going to happen next year, here's your point estimate. I've had mixed experience sort of getting buy-in for Bayesian models. I don't know if you use any Bayesian methods in your work at all, and if so, do you have any words of wisdom for getting buy-in on those?

Yeah, so I would say I don't use any specifically right now, but what I like to say maybe as a selling point for Bayesian models, that maybe people don't think about because a lot of people are used to frequentist or likelihood-based methods, or just maybe haven't had a course in Bayesian approaches, and this is something that my partner told me once, is that Bayesians are just much more explicit about the assumptions that are present in their models. So sometimes I think we think we're getting things for free when we fit frequentist-style models or likelihood-based methods, and people can be uncomfortable with the idea of, oh, the prior changed and therefore the output changed. And I actually think, you know, credible intervals being an actual probability, having an actual probabilistic interpretation, is much stronger than sort of the idea of confidence intervals, but a lot of people just don't understand maybe the subtle differences.

Bayesians are just much more explicit about the assumptions that are present in their models.

So I think, you know, what people also can be afraid of in Bayesian models sometimes is, you know, how long it takes to actually run this kind of thing, and I think there are a lot of tools now that can allow you to take much shorter paths to get to an answer. But I would just say that it can be great in certain cases, like where you have very little data and you really have to make an assumption, right? So I like to think about maybe HR cases, where you're trying to determine, are we meeting sort of our diversity goals, but there are only like five people in this group, right? Can we really make a good assessment using a frequentist analysis to say, well, one out of the five people, so we're at 20%? In that case, it could be useful to think about, well, what's a Bayesian approach to this? Which is that maybe we make some assumption that we're either hitting our criteria or not hitting our criteria, and then let the data do a little bit of informing, you know, updating of that assumption.
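That small-sample example can be sketched as a beta-binomial update. The flat Beta(1, 1) prior here is just one illustrative choice; a stronger prior could encode, say, an assumed 50% target rate:

```python
from fractions import Fraction

# Hypothetical HR example from the discussion: 1 "success" observed out of
# n = 5 people. A frequentist point estimate is just 1/5 = 20%, which is
# shaky with so little data.
successes, n = 1, 5

# Bayesian sketch: put a Beta(a, b) prior on the underlying rate and let the
# data update it. With a binomial likelihood, the posterior is conjugate:
# Beta(a + successes, b + failures).
a, b = Fraction(1), Fraction(1)                      # flat Beta(1, 1) prior
post_a, post_b = a + successes, b + (n - successes)  # Beta(2, 5) posterior

posterior_mean = post_a / (post_a + post_b)  # (a + k) / (a + b + n)
print(posterior_mean)  # 2/7, about 0.286, pulled toward the prior mean of 1/2
```

The posterior mean of 2/7 sits between the raw 20% estimate and the prior's 50%, which is exactly the "let the data do a little bit of informing" behavior described above, and the full Beta(2, 5) posterior gives a whole distribution rather than a single point.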

You know, I think if you have a huge amount of data, theory says that basically the Bayesian approach and the likelihood approach are essentially the same in those cases. So that's maybe not the case where you'd want to make your sort of argument. But certainly in the small data cases, or, I think there are definitely tools for different kinds of time series, I've been reading a little bit about this Google time series method that relies on Bayesian stuff, or Bayesian model selection criteria, there are definitely ways to make arguments about perhaps how much better those methods do just from a prediction performance standpoint. So maybe it's also about what metrics people really care about, and can you hit those better with a Bayesian model than perhaps a frequentist model?
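The large-data equivalence being referenced is, loosely, the Bernstein–von Mises theorem: under the usual regularity conditions, the posterior for a parameter looks approximately normal, centered at the maximum likelihood estimate, with the prior's influence washing out as the sample size grows:

```latex
\pi(\theta \mid x_{1:n}) \;\approx\; \mathcal{N}\!\left( \hat{\theta}_{\mathrm{MLE}},\; \frac{1}{n\, I(\hat{\theta}_{\mathrm{MLE}})} \right)
```

where $I(\cdot)$ is the Fisher information. So with enough data, Bayesian credible intervals and frequentist confidence intervals essentially coincide, which is why the small-data regime is where the Bayesian argument has the most leverage.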

Favorite R packages

Yeah, so favorite packages. I said this in a Twitter thread, so I'll repeat it here: stats, obviously, which is part of the base load of R, very important. But kidding aside, dplyr is very important, because I think a lot of the work we're doing is data processing before we get to the modeling part. So dplyr. For visualization, ggplot2, and then sometimes adding plotly on top of ggplot2, or I've tried out highcharter a little bit, which is pretty neat as well for sort of the interactive visualization. The dtplyr package I really liked as having sort of a dplyr-style interface to the data.table package, because I have a hard time with the data.table syntax just on my own. But I liked having that interface and have used it before as well, just to make handling some of the big data a little bit easier. And certainly Shiny and R Markdown for some of the reporting tools.

I mean, I think after the RStudio conference, everyone really wants to try out Quarto. We're trying to get some installs of the newer version of RStudio to try some of that stuff out. You know, it's a little bit challenging sometimes when you don't have admin rights to your computer to do some of these things, but I think a lot of us have downloaded it on our home computers and tried it out. Yeah, I would say those are sort of boring answers, and I don't have anything maybe revolutionary, because probably a lot of people on this call use some of those same packages in their everyday life. But I mean, dplyr is just sort of game changing. You know, I don't really know SQL very well, because I just select all and then do the processing in R, basically, when I have access to a package like dplyr. So that's been really helpful.

The value of a PhD for industry careers

It was beneficial, but not in the way that I had intended, which, like I said at the very beginning, was that I wanted to be a professor and thought a PhD was the only way to do that. I think there are probably other ways I could have gotten to this stage in my career, and I would say grad school is definitely not for the faint of heart, and it's not the thing to do when you don't know what to do; you should be confident that you want to do it. But the things that have really benefited me, especially from doing a PhD, are learning how to think and learning how to connect really high-level concepts. I don't know every tool in the econometrics toolkit, but having learned some very general ways to think about modeling and statistics has helped me zoom in much quicker when I need to learn those kinds of things.

And there are a lot of opportunities to talk about research both in a very technical way and in a pretty non-technical way, and to work on your presentation skills that way. I will say I didn't learn that much about being a manager or a leader in grad school; those are things I've had to rely on other life experiences to teach me and to work on directly, so I don't want to give the impression that if you go get a PhD you become a manager. Those are separate skill sets for the most part. But working in real life outside of grad school, a lot of the skills have been transferable; just learning how to solve problems in general is a very transferable skill.

And I think sometimes maybe I have too logical a view of the world because of grad school. Luckily I have a very nerdy partner, and we can exchange statistics jokes or talk about probability spaces with measure zero and understand what each of us means. But I wouldn't have skipped it in the counterfactual case, even though there are multiple ways to achieve this path, and not going to grad school at all works for a lot of people, just because of all the tools we have available now.

Tidymodels and Bayesian frameworks

Yeah, thanks, and I joined late, so apologies if you already covered this, Lindsey, but I used to work with a lot of PhD statisticians and actuaries who were very fond of the core Stan packages for Bayesian modeling, and I know that over the last few years tidymodels has really allowed you to tap into those engines. I'm just curious whether, in practice, you have started to take on more of the tidymodels approach, tapping into those Bayesian engines and algorithms, or whether you're still using a standalone framework like rstan or rstanarm?

Yeah, so we don't have Bayesian frameworks in the space that I'm in. People are learning about tidymodels, though, and how we can apply it; certainly people were inspired by the idea of recipes and those kinds of things that come with the tidymodels framework. I used rstan a little bit in my dissertation work, but I would be very interested to learn that there are newer, better frameworks than that older stuff, because it's definitely an intellectual curiosity for me to work in that newer Stan ecosystem. I don't know if my professors would be okay with it; sometimes they're a little bit purist about classical MCMC methods versus the Stan approach, but some of us have to get the work done in a timely fashion. So I appreciate you bringing it up. Thank you.
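For anyone curious about the question itself, a rough sketch of how tidymodels routes a model to a Bayesian engine (this is my illustration, not a workflow used by the speakers): parsnip declares the model once, and `set_engine("stan")` hands estimation to rstanarm, with engine arguments passed through as priors.

```r
library(parsnip)

# Declare a linear regression, then pick the estimation engine.
# set_engine("lm") would give the frequentist fit; "stan" hands the
# same spec to rstanarm, with extra arguments passed through as priors.
bayes_spec <- linear_reg() |>
  set_engine("stan",
             prior_intercept = rstanarm::normal(0, 5),  # weakly informative priors
             prior           = rstanarm::normal(0, 2))

# fit(bayes_spec, mpg ~ wt + hp, data = mtcars) then runs MCMC under the
# hood, so the tidymodels code path is the same for both kinds of fit.
```

The design point is that swapping one `set_engine()` call moves you between the frequentist and Bayesian worlds without rewriting the rest of the modeling pipeline.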

Awesome. Well, thank you so much, Lindsey, for joining us today. I don't want to keep you over too long; I really appreciate your time and you sharing your insights with us. If people do want to hang out, we'll be here and staying on for a little bit longer, but thank you so much, Lindsey.