Matt Iorio and Rumil Legaspi @ Neuros Medical | Data Science Hangout
Transcript
This transcript was generated automatically and may contain errors.
Hi everybody, welcome back to the Data Science Hangout. I'm Rachel Dempsey, I lead Customer Marketing at Posit. Posit is the open source data science company building tools for the individual, team, and enterprise. I'm so happy to see all of you here today with us. The Hangout is our open space to hear what's going on in the world of data across different industries, chat about data science leadership, and connect with others facing similar challenges as you.
We get together here every Thursday at the same time, same place, unless it's a holiday. If you're watching this as a recording in the future, you can add it to your own calendar if you want to join us live; just make sure it's set for 12 p.m. Eastern Time. We know people really enjoy connecting with other attendees in the chat too, so if you are interested in connecting with others, I want to encourage you to say hello in the chat and briefly introduce yourself, your role, and where you're based.
Also, when you're introducing yourselves in the chat, I'd love to hear if you'll be at Posit Conference in person. I want to start saying this in the weeks leading up to conference because one of my favorite things last year was seeing how many people from the Hangout actually knew each other already when they got to the conference. We're all dedicated to keeping this a friendly and welcoming space for everyone and love to hear from you no matter your years of experience, titles, industry, or languages that you work in.
I'll add real quickly here, also, if you're hiring for any roles, feel free to share those in the chat. Not spammy at all to us. Also, 100% okay if you want to just listen in today or just chat with people in the Zoom chat. There are also three ways you can jump in and ask questions or provide your own perspective. First, raise your hand on Zoom; if you don't know where that is, there's a little reactions button in the Zoom bar below. Second, you can put questions in the Zoom chat, and just put a little star or asterisk next to anything you want me to read. Third, we have a Slido link where you can ask questions anonymously, too.
I'm so excited to be joined by actually two co-hosts today, Matt and Rumil, focused on clinical affairs and data analytics at Neuros Medical. Matt and Rumil, to get us started, can I have you both introduce yourselves and share a little bit about each of your roles?
Yeah. So my name is Matthew Iorio. I'm the senior manager of clinical affairs and data analytics at Neuros Medical. I joined the company in 2014 as an intern and now help to oversee all patient-reported outcome data collection as well as clinical data science activities within the company. What will be great to talk about with everybody today is the evolution of this department as it's grown in parallel with our recently completed Quest IDE trial, and the different kinds of data analytics and data science activities we have going on in a small neuromodulation startup.
And Rumil, you want to introduce yourself, too? Yeah. So my name is Rumil. I do data analysis here at Neuros and help build tools and data pipelines to really support our clinical operations team.
Founding the clinical data department
Thank you both for joining us. And so I guess to kick things off here, I know you just mentioned building this team, Matt. Can you tell me a little bit about founding the company's clinical data department from the ground up?
Yeah. So I joined in 2014 as an intern right when our pivotal phase three trial, the Quest study, started. And the foundation there was the way we were collecting data at the time: essentially, we had patients reporting their pain levels and medication use on what we had designed as an e-diary, right? So they had a cell phone and an app on that phone that would collect data. And typically the two ways of data collection were from that, as well as paper case report forms once we started the trial.
One other thing to give people some background if they're not familiar with Neuros, just to lay that groundwork: we're a clinical-stage neuromodulation company. We're developing the Altius system to help patients with chronic lower-limb post-amputation pain. The patients enrolled and implanted in our trial were those who did not receive adequate pain relief from any other standard of care, be it opioids or gabapentinoids or other kinds of surgeries. And in the US alone, we have around 185,000 amputations every year, many of those due to cardiovascular disease and diabetes, as well as trauma from various kinds of accidents and things like that.
So there's a high demand, and high numbers of these patients have chronic pain post-amputation. The Altius system is our technology to ideally relieve their pain and restore their lives, as we say at Neuros. And when patients would use the device, we would have them report this information in their e-diaries. So they would report multiple times a day, every single day. And then they would also report pain levels at the end of the day, as well as their medication use.
And so when we started this trial in 2014, we collected a lot of the data on the e-diary with one of our vendors at the time, as well as paper case report forms at the sites for those particular visits. As we evolved over time and as more and more patients enrolled, we started to see the need also internally to have more of an operational focus around the data as well. What I mean by that is, as probably you can imagine, as patients get implanted, we have more and more patients going on. We have more and more study visits happening.
And at the time, probably around 2020, so right around like the height of the pandemic, the Quest study was really taking off. We had a lot of implanted subjects and we had a lot of need to coordinate data information and coordinate resources at the sites because we needed field staff to be supporting the sites. And so at that point, the clinical team, myself and other members of the team, were getting a lot of outside requests from other departments because we needed to coordinate to make sure that we could support those visits.
At that time, we did not have the foundation to really process these requests from an operational point of view, traditionally using certain tools like Excel, for instance, right? My background is in biomedical engineering, so I have programming experience. So in 2020, I decided, well, I got to start programming again so that we can possibly, you know, kind of keep up with these requests so we can make sure that our patients are well supported and so are our sites and that we have all the adequate staff there. So I randomly picked up Python in 2020.
And then as we went along and got closer to finishing Quest, we knew the other side of the trial would be, how do we start to analyze this data along with our vendors, right, to prepare for submission to FDA? We were going to need some more horsepower, more people that could help crunch these numbers, support the trial as it came to a close, and process this information. So in 2022, we hired two clinical data employees, both with programming experience in R. Rumil was one of them. Rumil was our first fully focused data hire at Neuros.
And the one thing that I realized at that point was that these guys can really program in R. And I said, well, I'm going to move over and start to do that too. So 2022 onward, our role has really shifted to not only supporting the close of the Quest study, but also supporting the validation efforts of our tables and figures and listings for the FDA submission, as well as post hoc analyses as we move ahead.
So currently, today at Neuros, we're a pretty small company in general, but we have two full-time clinical data employees with some consultants and vendors as well. We have certain people focused on the safety side and mainly us on the effectiveness side, as well as, you know, obviously covering all of that data. And we're also responsible now for helping to generate data for publications and presentations, investor meetings, and things like that. So we kind of do it all now. And the goal is to be as good of a partner as we can cross-functionally and always have the mission in mind that our patients come first as we develop these tools and pipelines and efficiencies so we can move the company forward.
Joining as a first data hire
Thank you for that background. And Rumil, I'm curious to hear from you too about your experience joining a company as one of the first data science hires.
Yeah, so it's funny, because I was doing my graduate schooling in applied stats and looking for an internship. Fortunately, Matt was able to pick me up full-time and I was able to do both concurrently. And I will say, you know, prior to being here, I thought a lot of it was ML and AI. And really, it's the complete opposite. I think, especially at a startup company, there's a lot of just data cleaning and some things that, you know, you don't need ML and AI for. And apart from that, really learning about the rigor in clinical trials and building a data team from the ground up. So being a part of that has been, for myself, a huge learning experience. And yeah, I continue to learn every day.
I think that's so important to stress, that it's not all about AI and ML. Sometimes people can look at even some of these big companies who are presenting here and assume they're not doing those data cleaning activities too.
Hiring for data science roles
I have a few questions, but I want to make sure I'm jumping into questions that you all want to ask here too. I wanted to continue on this starting-as-a-first-data-scientist track and ask you, Matt, what were you looking for when you were hiring, and what makes a good data science hire, for those of us who are starting to hire too?
Yeah, I think that's a really great question. So I alluded to it first: I knew the need that we had. For me, I have a pretty long history of programming, but we needed more people that could handle that, right? To continue to increase and maintain the quality that we had, make sure that we were assisting our clinical sites, make sure that we were helping our cross-functional partners within the company, make sure that we could execute the trial at an extremely high level, right? And always be there for our patients.
The other thing that was important with this was that I really wanted somebody, personally, that could push me to be better. And Rumil was that. Rumil also had a great passion for using data science and analytics and his background in statistics to help human health. So one of the things that intrigued me about Rumil in his interview process was that on his GitHub page, you know, he has family members that have various health conditions, and he was trying to solve data issues around that on his own initiative as well.
And, you know, at Neuros, top to bottom, side to side, the mission is clear about relieving pain and restoring the lives of these patients, and looking for someone who has that passion is really critical. I think, as well, that if you have a good foundation in programming or some of these tools, you may not necessarily need to know the exact tool to be successful at a company, because it's just like speaking another language. So the ability to learn and the desire to learn, that attention to detail, and being patient-focused were really critical for us.
The FDA submission process
So I know you mentioned FDA submissions in the beginning, and we have talked a lot about FDA submissions in the Hangout. And I know there's so many different kinds as well. Can you just talk a little bit more about that process?
Yeah, so I guess at a high level, Neuros' Altius device is a class three medical device, the highest-level classification, because of the way it's implanted in the body. To give everybody a picture, the Altius system is essentially a generator that's implanted in the body, similar to a cardiac pacemaker. And then there's an electrode that goes down and attaches to the nerve that was amputated in the lower limb. And the device delivers high-frequency electrical stimulation to the nerve.
And the idea is that that signal will dampen the pain signals coming from the nerve to the central nervous system, so the patient would feel less pain. So this kind of device is classified as class three, and it's never been on the market before. Because of that, and the invasiveness of implanting the device, we had to go through the PMA (premarket approval) process for FDA submission.
So that consists of, obviously, early on, the inventors, Drs. Kevin Kilgore and Niloy Bhadra, developing the device in the lab and doing animal testing on it. Then you move to a small patient pilot, like a first-in-human study, so a handful of patients. We used that data to then catapult us to about a 10-patient pilot study. The device got really good results. That information allowed the FDA to say, yeah, go ahead with Quest, right? So then we implanted 180 patients in that study.
And now, the process after the study closed was that we had to pull all of the data together per a statistical analysis plan agreed upon with the FDA. So sponsor and FDA go back and forth on what to include, what not to include, and why, and then you have to stick to that. So to Rumil's point, a lot of that can be your standard statistical analyses, such as logistic regression, standard t-tests, or ANOVAs, depending on what's laid out in the plan about which analyses occur and in what order, those kinds of things.
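To make that concrete, here is a minimal sketch in R of the kinds of prespecified analyses a statistical analysis plan might call for. The data are simulated and all column names are hypothetical, not from the Quest study.

```r
# Hedged sketch: SAP-style prespecified analyses on simulated data.
set.seed(42)
dat <- data.frame(
  arm       = factor(rep(c("active", "control"), each = 50)),
  responder = rbinom(100, 1, 0.4),             # 1 = met the responder definition
  baseline  = rnorm(100, mean = 7, sd = 1.5),  # baseline pain score (0-10 NRS)
  month3    = rnorm(100, mean = 5, sd = 2)     # month-3 pain score
)

# Logistic regression of responder status on treatment arm, adjusted for baseline
fit_logit <- glm(responder ~ arm + baseline, data = dat, family = binomial)
summary(fit_logit)

# Paired t-test of baseline vs. month-3 pain scores
t.test(dat$baseline, dat$month3, paired = TRUE)

# One-way ANOVA of month-3 pain by arm
summary(aov(month3 ~ arm, data = dat))
```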
So then our work shifted to working with our vendor at the time to break down and analyze all of the tables, figures, and listings that we were going to submit. To give everybody an idea of scale, we had nearly 235 tables, figures, and listings included in our clinical study report. We had another 120 or so that were not included in the report, but that we still submitted as a supplement. You're looking at combing through about 1.8 million data points on the diary side, 600,000 data points on the study visit side, and then, from the device logs and patient uses, around 19 million data points as well.
So we worked hand-in-hand with our vendors to go through that information and make sure that the tables, figures, and listings matched on both sides, that we agreed, based on the statistical analysis plan, that these were the numbers we should get. Interesting point here: our vendors used SAS. We did it entirely in R. So we were able to validate it, in a way, as we did the analysis.
So as everybody can imagine, you have all those tables, figures, and listings. You have one big clinical submission along with the other modules that are put together for, like, manufacturing and biocompatibility and all that stuff, and then it goes to FDA, and then you enter their review process, which is outlined in the FDA compliance program. So typically what that looks like is, you know, going back and forth, they have questions, they conduct audits per their compliance program, they have questions about data, you have regular meetings with them, and then eventually they come to an approval or no approval outcome. So we're in the process of doing that.
SAS vs. R for FDA submissions
Have you used SAS in FDA submissions, and did you feel pressure from the FDA to use SAS? Can you talk a little bit about the submission and language use?
Yeah. You know, the vendor that we partnered with, mainly the one responsible for the submission, uses SAS as their standard operating language. Our expertise was in R at the time and continues to be. So we took a very collaborative, hands-on approach with them to basically say, okay, every table, figure, and listing that you generate that we agree on, we're going to also validate in R, just to make sure that all the numbers align with what we want. And not only align with what we want, but align based on the understanding of the statistical analysis plan, based on the assumptions that are in there, based on the way that things were laid out, and based on all the different cuts you were going to give the FDA.
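As an illustration of that double-programming idea, here is a minimal R sketch comparing a vendor's SAS-generated table against an independently computed R version, to a chosen tolerance. The file and column names are hypothetical.

```r
# Hedged sketch: compare vendor (SAS) table values against independent R values.
library(dplyr)

sas_tbl <- read.csv("vendor_table_14_2_01.csv")  # vendor output, exported from SAS
r_tbl   <- read.csv("r_table_14_2_01.csv")       # our independent R computation

comparison <- sas_tbl |>
  inner_join(r_tbl, by = c("parameter", "visit"), suffix = c("_sas", "_r")) |>
  mutate(
    diff  = abs(estimate_sas - estimate_r),
    agree = diff < 0.01  # agreement out to the hundredths place
  )

# Any rows where the two programs disagree beyond tolerance get reviewed
filter(comparison, !agree)
```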
So this is a huge thing. What I just heard was that a smallish company, with their first product for the FDA, went through all the processes, the statistical analysis plan, talked to people, and at no time was SAS even a factor except as a vendor choice. So it sounds like you could have done the whole thing in R and had somebody else validate it in SAS. I'm just on a soapbox now. This is a huge thing for many people in large and small companies.
Yeah. Yeah, and, you know, I think that vendor is very comfortable with using SAS, and that's kind of their go-to. So we knew on the back end, as they ultimately went through their submission work, that we could just QC that and validate it on the back end with R as well. But yes, I believe it's possible that if you were to do it all in R, you could do the submission. There was a big push for that, for sure.
Data validation and working with vendors
Did I hear correctly the data was validated? If so, what was that process like?
Yeah. So when we talk validation, what we're talking about is clinical validation. A lot of the data from our patients is patient-reported, so the effectiveness data and medication use. What is reported is reported, and we capture that in the diary. But what else got validated through that process was really the matching of the tables, figures, and listings based on our combined understanding of the study, the stat plan, and all of that.
So you can imagine, if you're trying to generate a table as part of your FDA submission, and let's say you're trying to look at quality-of-life metrics, you want to make sure, based on the assumptions you know to be true, that our numbers and our vendor's numbers align. And I think what's important to point out here as well is that, as the sponsor, we know the data better than the vendor does, because we're on the ground; we're interacting with the patients, our particular designated staff are, and we're interacting with our clinical sites on a daily basis.
So we also bring that deeper understanding. But the vendor was with us for years as well, so they get that outsider perspective, and they bring their own statistical expertise to the table. And I think what that does for us, and I think Rumil can add to this too, is that it gives us more fruitful and meaningful conversations when we talk with them, as we help to manage that process and, obviously, submit data for the FDA submission.
It sounds like there's a lot of great lessons in communication and collaboration here through your work with the vendors. And I was wondering, have any of those like lessons carried over into the way that you work with stakeholders across the company?
Yeah, and I think that's one of the big things that we always stress, and it's always been stressed across our organization: be a good partner. I think a data team can really be at the forefront of helping to make really important strategic decisions, whether it be in clinical trial execution or helping on the operational side. So whether you're building dashboards to help monitor adverse events in real time, for instance, or protocol deviations, or monitoring your sites and saying, listen, there might be a challenge at that one site.
So you have that analytical capability. And I think, for clinical data science in general, you have multiple parts to that. You have the data management piece, which is super critical: making sure that your clinical data that's stored is what it should be, that it matches source documentation, that it can be verified that way. Then you have your data analytics piece, where the data team can be, and should be, very involved in creating additional metrics to help monitor your study sites and your subject performance and make decisions faster. And that can be in a dashboard, it can be a Shiny app, it can be whatever tool fits the bill for that.
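For a flavor of what a monitoring tool like that might look like, here is a minimal Shiny sketch that counts events by site. The data here are simulated; a real app would read from the study database instead.

```r
# Hedged sketch: a tiny site-monitoring Shiny app on simulated data.
library(shiny)

set.seed(1)
events <- data.frame(
  site = sample(paste("Site", 1:8), 200, replace = TRUE),
  type = sample(c("adverse event", "protocol deviation"), 200, replace = TRUE)
)

ui <- fluidPage(
  titlePanel("Site monitoring (simulated data)"),
  selectInput("type", "Event type", choices = unique(events$type)),
  plotOutput("by_site")
)

server <- function(input, output) {
  output$by_site <- renderPlot({
    counts <- table(subset(events, type == input$type)$site)
    barplot(counts, las = 2, main = paste("Count of", input$type, "by site"))
  })
}

shinyApp(ui, server)
```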
And then the third thing is really the data science and the analysis on the back end, where this team takes that expertise of the study and really kind of generates the outcomes based on what the statistical analysis plan says you have to do. But then in addition to that, the data science piece that we talk about with clinical trials and things like that is what new insights can you gain from that data as well that you might not have planned for? What post-hoc analyses can you do to kind of help inform company decisions as well?
Data pipeline and tools in R
Yeah, so R has so many touch points in our company, and it's awesome because it really has a ton of utility in what we do. We get source data from our EDC and pipe that into Azure databases, and from there we port it into R. Then we run analyses, whether in R scripts or R Markdown, use it to create some Shiny apps, and, you know, use it to validate as well. And from there you can port it over into Excel, or you can also bring it over to Power BI. So that's kind of how our pipeline is.
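Here is a minimal sketch of the pull-into-R step of a pipeline like that, assuming an Azure SQL database reachable over ODBC. The server, database, and table names are hypothetical.

```r
# Hedged sketch: read study data from an Azure SQL database into R.
library(DBI)
library(odbc)

con <- dbConnect(
  odbc(),
  Driver   = "ODBC Driver 18 for SQL Server",
  Server   = "example-clinical.database.windows.net",  # hypothetical server
  Database = "clinical_db",                            # hypothetical database
  UID      = Sys.getenv("DB_UID"),  # credentials kept out of the script
  PWD      = Sys.getenv("DB_PWD"),
  Port     = 1433
)

visits <- dbGetQuery(con, "SELECT * FROM study_visits")
dbDisconnect(con)

# From here the data can feed R scripts, R Markdown reports, or Shiny apps,
# and be exported to Excel or Power BI as needed.
```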
The second part of the question is how big is our data on average? So it's around 20 million data points. I know in some companies it's much larger but for ourselves that's kind of the size of the data we work with.
Yeah. Oh, sorry. Go ahead, Rachel. No, you go ahead. I was going to say, I was just going to ask you if you had any follow-up or anything else you wanted to ask there too. I guess when I think of productionizing a model, I sort of, I don't know, I'm kind of tunnel-visioned into thinking of creating a script that gets automated and run on batch either, whether it be daily, weekly, or whatever. Do you have anything like that at the company right now?
You know, that's a good question, Jacob. A lot of it now is not really automated like that. It could be during clinical trials; that would be something we'd certainly like to implement. Right now, the way that we run analyses is by working with our other department partners and looking at particular analyses, so there may be new scripts written for a particular analysis. But if it were to ever turn into something that needed to be repeated over and over again, that's probably how we would approach it. A lot of this now is more post hoc, on an as-needed basis, as we look to scale and move ahead.
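If a script like that ever did need to run on a schedule, one hedged option on a Linux server is the cronR package; the script path and timing here are made up for illustration. (On Windows, the taskscheduleR package plays a similar role.)

```r
# Hedged sketch: schedule an R script to run every morning via cron.
library(cronR)

cmd <- cron_rscript("/home/analyst/weekly_enrollment_report.R")  # hypothetical path
cron_add(
  cmd,
  frequency   = "daily",
  at          = "7AM",
  id          = "enrollment_report",
  description = "Refresh enrollment metrics every morning"
)
```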
Yeah, and I appreciate the discussion on production as well, because I know when we use the word production, it means so many different things to so many people and so many companies. And some people are doing like amazing work in their companies, and it never technically is called production internally. So it's interesting to just hear how everybody describes it too.
SAS vs. R accuracy and stability
Oh, yeah. I did use SAS and R and Python, trying to see the accuracy and stability across the different languages for an insurance model. I'm just curious, because I think pharmaceutical or clinical might have much stricter requirements for model stability or accuracy. So, through your comparison, is R comparable to SAS in terms of accuracy or stability for the model?
Yeah. So I can speak on that. For those listening that don't know what SAS is, it's statistical software that statisticians use. The benefit of using SAS is that it's a validated system, and it's not open source; R is. And at least in our company, when we validated our tables, it was very close to the figures that our CRO, our vendors, were able to get. To the decimal point, still very close. So I can say that, at least there, choosing R as a programming language to validate SAS didn't seem like an issue.
Yeah, we didn't end up using that at the time. And more importantly for the data, we wanted to ensure that we felt good about what was put into the tables, figures, and listings. You're kind of out to the tenths or the hundredths place, essentially. And I know I saw instances where R and SAS were generating numbers past those decimal places exactly the same. But in our case, what mattered more was that directionally the numbers were in line; we weren't necessarily looking to be accurate to the thousandths or ten-thousandths place, for instance.
Advice for those starting in data science
Rumil, I know when we were chatting a bit beforehand, we were talking about people who are just starting to get into data science, and thinking back to when you first started. Is there any advice that you'd give to somebody who's just getting started?
Yeah. So, I will say that the language of choice doesn't really matter. And as Matt knows, as soon as you pick up one language, it's much easier to pick up another. I chose R because I was doing my master's in stats, and that just seemed like the path for me. And also because, honestly, I just like the letter R, since it's in my name. So that was another factor in why I chose it.
And yeah, there's a ton of resources online, especially for R, which I love. The community is awesome. I used to go to the R user meetups and watch things like this. And it's just crazy that it's coming full circle, that I'm on the other side now. So, go out there and connect and continue learning. DataCamp is also an awesome tool that you can use to self-teach. There's going to be a lot of self-teaching involved in data science.
Google and Stack Overflow are going to be your best friends. And if I were to guide my past self, I'd probably say, I don't think ML and AI are a huge component of data science. And I guess it depends, too, on what industry you're in. But I think if you can analyze data just fine, and you can program here and there, you can really learn a lot of the other things, the domain knowledge, at your job. So, yeah, there's really data science everywhere. So choose your domain, get good at it, and you'll figure it out.
How did you go about learning the domain knowledge?
Yeah, so my background, my undergrad, was in public health. I thought I knew clinical research back then. I didn't. The biostats surrounding public health was something that interested me, and I just enjoyed those classes. So after that, it's trying to immerse yourself in articles, in the science. My interest was in diabetes, since my mom is diabetic. And so, to go back to the question, the domain knowledge is something you'll get more familiar with, I think, depending on where you go. And from there, just apply whatever skill set you have.
From paper to electronic data capture
Matt, I know in the beginning, when you were kind of covering the journey for your team, you mentioned how things were so manual before, and it was like, actually, written down on paper. How has that shifted now? Like, can you kind of hold those up side by side, or give us an example?
Yeah. So, when we started the trial in 2014, you had two parallel data paths. One was our patient-reported outcome piece, the e-diary, where patients were reporting on their pain levels, pain medication use, and prosthetic use if they had a prosthetic, like how many minutes in a day they would use it. And then, at the same time, as you can imagine, these patients would come in for regular study visits. When they would come in for study visits, the information would also be written down; our site coordinators would write it down on paper case report forms.
And then, when we got to around 2018 or so, we brought on our current vendor on the electronic data capture side, and we changed everything over from paper to an electronic database on the study visit side, while we still had the e-diary also working. So, you have two parallel data paths. And one thing that could be helpful for the group: we were actually fortunate enough to get the methodology of this whole data capture and analysis paradigm published recently in Discover Health Systems. So I'll send a link to everybody. You can take a look and see how that works; it's got the diagrams. Rumil made all the diagrams.
So, Rachel, to kind of go back to that, 2018 comes along, and we have a parallel path of an electronic database now, where site coordinators are entering data in the computer into a database, and the patients were still reporting all their pain information, their prosthetic use, their medication use on the e-diary every single day. So, that's going on at the same time patients are coming in for visits.
And then that continued through 2021, when our last patient crossed their month-three visit, and then 2022, when our last patient had their last year visit. And so, then the challenge was, you've got those two data sources, and then you've got the data that the patients were recording on the device as well, because there are these IPG logs, implantable pulse generator logs: every time they use the device, information is logged there automatically. How do we combine all of that to then do the analysis that's allotted in the statistical analysis plan?
And basically, it was a lot of working with our CRO and developing a lot of rules and things within the stat plan to then kind of combine that together. So, then that framework, when you have all of that data, you then have a need to really automate and have more processing power than just simple Excel. And don't get me wrong, I love Excel. But as we all know, probably there's only so much that Excel can do in certain instances. So, with that being said, we realize that we have a lot of this data.
Then you have that instance where you have all of this data, and if you need to cut the data, do different kinds of slices and all that kind of stuff, it's much easier to have it stored in a database together, organized, with different cuts and slices, and be able to pull that into R and perform the analysis. So it kind of became, Rachel, a necessity-is-the-mother-of-invention kind of thing for us, where we had to move that way to be, again, effective for our patients and effective for the company.
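As a minimal sketch of what combining those three streams might look like in R, here is a join of e-diary entries, EDC study-visit data, and IPG device logs on subject and date. All file and column names are hypothetical.

```r
# Hedged sketch: merge the three data streams described above.
library(dplyr)

diary  <- read.csv("ediary.csv")      # subject_id, date, pain_score, med_use
visits <- read.csv("edc_visits.csv")  # subject_id, date, visit_name, ...
ipg    <- read.csv("ipg_logs.csv")    # subject_id, date, sessions, minutes_on

combined <- diary |>
  left_join(visits, by = c("subject_id", "date")) |>
  left_join(ipg,    by = c("subject_id", "date"))

# With everything organized in one frame (or database table), the cuts and
# slices called for by the statistical analysis plan become straightforward.
```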
Low-code, no-code tools and meeting people where they are
Hey, Matt, I'm having ongoing discussions with some internal folks and other people in the industry who are looking to the low-code, no-code solutions that are becoming more advanced in their applications. So, I'm curious, as you looked at it more recently as building that data team, what would be your response to those who say you can run the entire workflow for data and the analysis in those low-code, no-code solutions? It's easier to get more talent that way because they're drag and drop. They don't have to know SQL or code. Just curious what your thoughts would be on that.
So, I think there's benefit to it, but you get a couple of things with people that have programmed. One is something that's really important to us as a data team: having an in-depth understanding of how things work. The other thing that's important, I think, and Rumil can attest to this too, is that knowing how to program has helped me to better understand data analysis and better understand the steps an analysis needs to take to ensure that it's accurate, that I've covered all of my bases. Because when I program, I'm thinking about the steps logically, in order, and about how that has to work. And so, I feel that that is a benefit.
But I also think, Ben, to your point that, you know, low-code, no-code tools can be very helpful. I haven't personally looked into them a ton, I know a little bit, but I think they can certainly be helpful. But as long as they're implemented correctly and the foundations are there and the analysis is sound, I think that they could be of help, for sure.
Yeah, when I think of low-code, no-code, my mind immediately goes to Excel. I guess you could argue whether there's any coding there, but it's almost kind of like drag and drop, I guess. And I think the benefit in that is you have a common language that you can use to talk with folks who are more non-technical. Excel is really used in, I think, most companies, so you can maybe generate your outputs with R scripts and then use Excel as a way to share data. And Excel is also awesome. So, yeah, shout out to Excel.
Ben, to your question, Rumil made a really good point. One thing that we find at Neuros is that it's also important to meet the customer where they are, and I've heard many data science people come on and talk about that. We know, too, that in our company there are a lot of people who love working in Excel and want to get their hands on the data. So we'll provide data to them, as well as figures, so they can play with the data and look at things. Because we work with a lot of engineers, for instance, in R&D. They may not be, you know, big in R, but they know how to work in Excel.
Measuring pain data and working with subjective outcomes
I was just curious. I work for the National Health Service in the UK, so I'm very familiar with working with different clinical measures of how well someone is doing. And, you know, you get things like your classic biomarkers, like body mass index, blood pressure, blood cholesterol, tumour size, et cetera. But then when it comes to measures of how much pain people feel, it's a lot less clear-cut how you measure that. And, you know, you might say it's a lot more subjective. And I was curious as to whether that introduced any challenges in working with the data or whether it added extra uncertainty to the model or maybe different types of uncertainty that you had to deal with when you were creating your model.
Yeah. So typically for reporting pain, you have a handful of scales that you can do it on. There are things like the visual analogue scale, where patients have a scale from zero to 100 millimetres or so, and they'll actually mark their pain on that millimetre scale and say how high it is. We use the NRS, the numeric rating scale, from zero to 10. And our goal with our patients was to really understand how they perceive their pain, and also to work with them to help them understand how those scales work, to help them report more accurately. Ultimately, though, what the patients report is what we received.
So that's really what it is: you get their pain scores and you try to work with them to help understand their pain, because somebody's seven may be somebody else's three, for instance. So it's really about working with the patient and really understanding. I think our field staff did a really great job of this. We heard anecdotally about them understanding the patients and saying, okay, you're saying your pain might be this level, for instance, but look at how much more you're doing today. Is it really there, right? And working with the human factor to ensure that we were getting the most accurate reporting we could.
In terms of challenges, I think when we're working with pain, it is subjective in nature. One person's seven might be another person's three. One of the challenges is that since we're working with pre and post measures, dropping from a 10 to a seven might not be equivalent to dropping from a three to a zero. So that also is kind of an issue. I think one solution to that is collecting a lot of data around it, whether that's the BPI, the Brief Pain Inventory, or quality-of-life questionnaires, to help support whether a decrease or increase in pain is real. That's good supplemental information apart from just your pain score, basically.
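To illustrate that pre/post point with a quick sketch in base R: the same absolute drop means very different things at different baselines, which is one reason percent change, often with a responder threshold, is a common normalization. The 50% threshold and the numbers here are illustrative.

```r
# Hedged sketch: percent pain reduction and a responder flag on toy data.
pain <- data.frame(
  subject_id = 1:4,
  baseline   = c(10, 7, 3, 8),
  month3     = c(7, 3, 0, 8)
)

pain$pct_change <- 100 * (pain$baseline - pain$month3) / pain$baseline
pain$responder  <- pain$pct_change >= 50  # illustrative threshold

pain
# Note: dropping 10 -> 7 is a 30% reduction, while 3 -> 0 is 100%,
# even though both are the same three-point absolute change.
```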
Looking ahead
Really interesting. Thank you all for the great questions today. And as we get to the last three minutes here, I was wondering if we could end on this: what is something that you're most excited about, both Matt and Rumil, as you think about the year ahead?
I'm excited about the potential to really get this device into the hands of the patients who need it the most, and to really help, again, restore their lives, relieve their pain, and really make a difference. For me, on a personal note, I got into Neuros because my mother had suffered from chronic pain for 35 years. So this was one way that I felt I could use some of my technical expertise to help people as well. So I'm excited about that potential. And I'm excited to continue to see our patients who still have the device doing well, and to keep moving that forward.
For myself, apart from Neuros and our device being in the hands of an underserved population, I'm excited to see how this data team will grow. This is my first gig, and seeing it from the start, seeing this data team grow and myself grow, I think that's something that's always exciting.
Thank you both so much for taking the time to join today. I also love that this kind of came out of your team joining the Hangouts as well and then reaching out to be a featured leader, and I love to see that full circle there, too.
Well, we appreciate this a lot. And there isn't one right way, per se, to do it, but this is one way, and we hope it's helpful for people trying to navigate their way through clinical data science and their own situations, and what can be done with this. So we're just happy to be here.
Well, thank you so much for sharing your experience with us, and thank you for all that you do. Have a great rest of the day, everybody.
