Episode Transcript
[00:00:00] Speaker A: The best-designed clinical trial, the best study, if it's not executed correctly, is useless.
[00:00:23] Speaker B: Mike, thanks so much for joining me on the show.
[00:00:26] Speaker A: Pleasure. Brandon, thank you for inviting me to be here.
[00:00:29] Speaker B: Yeah, of course. I've been so looking forward to this. For the audience: Mike is currently the Chief R&D Officer over at Compass Pathways, the player furthest along, and the largest, in the really exciting psychedelic space. But before that he's had a storied career in research and R&D. So Mike, maybe just give us a quick background as to your career and how you got here.
[00:00:53] Speaker C: Sure.
[00:00:54] Speaker A: I'm tempted to start with "Once upon a time there was..." But, you know, I've been in this industry now almost 30 years, which is hard to believe. I have been in both large and small companies, behemoths like J&J and GSK, and I've also been in tiny biotech companies with fewer than 20 people. So pretty much any neuroscience indication that you can think of, I've been involved in it one way or another.
Drugs, drug-device combinations, biologicals, diagnostics. So in that space as well. And from an industry perspective, almost any stage of development, from first-in-human through multiple international studies, registration, approvals, et cetera. So pretty much any aspect of the business that you can think of in the clinical development universe, I've actually worked in.
[00:01:46] Speaker B: Yeah.
Now the kind of theme of today is a bit of a Clinops masterclass. I'd love to maybe just have you tee up for me.
How do you think about the kind of core responsibilities for doing a great job at clinical operations?
[00:02:01] Speaker A: Yeah, no, I love the question, but let me then preface it by saying that I am not naturally a Clinops guy.
I am a physician first and last. I'm a clinical development guy.
I've had to back into operations because it's a necessary part of any good clinical development organization.
And so I think the question being, you know, what the excellence part of it is.
[00:02:29] Speaker C: Right.
[00:02:31] Speaker A: The best-designed clinical trial, the best study, if it's not executed correctly, is useless.
[00:02:40] Speaker C: Right.
[00:02:40] Speaker A: So it doesn't matter if you are a genius and you design the best study possible and it's the killer experiment. If it's not run correctly, if the data are not of the appropriate quality, if you're not watching what's going on while you're making the soup, then what you end up with more often than not is low-quality data, which leads to uninterpretable or difficult-to-interpret results.
And at the end of the day, to me, it's fundamentally disrespectful of the patients and the participants that come into our clinical trials.
So the operational aspect to me is part of that contract with patients, with participants, because in some cases the participants are healthy volunteers, not patients, but let's just say any, any participant in the clinical trial. For me, it's part of that social contract that says, you know, you're putting your life in our hands, we're going to stick needles in you, we're going to confine you, we're going to ask you all sorts of invasive questions, we're going to take time out of your life.
And the operational aspect to it is: please, let's make sure that whatever data we're collecting, we're doing it right, and that we're watching over the safety of participants in a clinical trial. So that's the lens through which I see clinical operations. I'm not so much in the details about the particular CRF, et cetera, though we can get there.
But as you ask me the question, what comes back, in the back of my head, is always integrity. The integrity of the clinical trial. Is it delivering data that allows us to answer a question in an unimpeachable manner?
[00:04:12] Speaker C: Yeah.
[00:04:12] Speaker B: You know, canonically, people will say that there's a trade off between speed, cost and, and quality. You get to pick two of three. How, how do you think about that trade off?
[00:04:23] Speaker A: Yeah. So, I mean, look, I, I was very fortunate to have a very, very good mentor early on in my career.
And he made it very clear. Quality, time, cost, in that rank order.
If you have mistakes with quality, you can't really undo them with either time or money; the other things you can play with, but that's the rank order. So I'm kind of a stickler. I don't like or endorse any real compromises on quality, because sometimes, in fact, you don't catch the mistake until very late, and then there's no amount of time or money to make up for that.
And then there's, I think, sometimes a misconception that if you throw enough resources in terms of people or money, you can make up for shortcomings in quality and time. And that's not my experience.
So I'm going to repeat it in rank order. Quality, time, money.
[00:05:18] Speaker B: Now, I feel like being a little challenging here: that's easy to say when you're running a program at AbbVie, but a little harder to say when you're at a smaller biotech.
[00:05:29] Speaker A: Yeah, you know, I'll take the challenge and I'll push back. It's even more critical at a small biotech, because you don't have the resources to make up for mistakes, to do it again. Right. So there are companies out there, and we know some of them — some of them are beloved in the neuroscience field — where they ran a study, it didn't work out, they tweaked it, fixed it, went back at it again.
Biotech doesn't have that luxury. So I actually take the position that an emphasis on quality is probably even more critical for biotech. So we get it right the first time.
[00:06:05] Speaker B: So it sounds to me like maybe an overarching principle is measure twice, cut once.
[00:06:14] Speaker A: Yeah, it's, it's one way to put it. I mean, look, you know, you can, you can do the kind of, you know, carpenter, surgeon thing which is measure twice, cut once.
There are other, you know, it's a, it's about pressure testing your ideas. And again, you know, all of us who've been working in clinical development, I think the one thing we can agree on is we've never seen a perfect study. Every study, every study is a compromise between what you would like to do, what you can do, what you can afford to do, what you have the time to do. So there are always compromises.
And so, you know, you safeguard as much as you can in terms of the quality. But the things that we can do across the board is to actually pressure test our ideas, either with colleagues, friends, you know, external experts, consultants, etc. Etc.
You know, it's not skin off anybody's back, or a challenge to anybody's ego, to put your ideas to the test and make sure that what you're thinking about really stands up to the pressure and being pushed and prodded and.
[00:07:17] Speaker B: Examined. So is the ultimate safeguard to quality here, I guess, really pressure testing ideas? Is that how it comes through for you?
[00:07:28] Speaker A: It's, it's two things. One, one is pressure testing ideas and making sure that you understand what you're getting into and where you are making trade offs.
[00:07:37] Speaker C: Right.
[00:07:38] Speaker A: And the other part about it is, you know, vigilance over the data as you're actually running the study. And these are both clinical but also operational data. Are you in your operational envelope? Right — you had a model for recruitment, and you're deviating. How early in that course are you actually willing to start taking action? Because one of the things that I see a lot is: let's just wait a little bit longer, let's just wait a little bit longer. Right? In some cases, the answer is we are not using data, we are not using analytics, we're not using other experience to sit there and say, I don't have to wait to see the full disaster. I can start to pick up signals much earlier.
[00:08:17] Speaker C: Right.
[00:08:19] Speaker A: So I would say, you know, almost obsessive vigilance over your data to make sure that you're picking up, you know, signals very early on.
And that's in parallel with making sure that you are pressure testing before you put things into play.
[00:08:36] Speaker B: Yeah, I'd love to maybe, like, go down both of those angles. So there's a design question. Are you making the right design decisions? And then it sounds like there's an operational question around using data to properly make sure you're steering the ship.
Maybe let's start with design first. But I am a data guy, so I want to get there. From the design standpoint, what are the most important design decisions that you have to make?
[00:09:04] Speaker A: Well, my dear padawan learner, there are so many. I'm teasing, right? I'm just teasing.
[00:09:09] Speaker C: No, no, no.
[00:09:09] Speaker B: Tell me how this is.
[00:09:11] Speaker A: Well, I mean, look, so. So you sit there and you ask, what. What is the question that I'm most interested in answering?
[00:09:18] Speaker C: Right?
[00:09:20] Speaker A: And so then you think about which one of a thousand different designs allow you to get to your question most cleanly and most efficiently, right?
It comes down to, okay, so what's the patient population?
What are you measuring? How are you going to measure it?
When are you going to measure it? How often do you want to measure it?
[00:09:45] Speaker C: Right?
[00:09:45] Speaker A: What is the control?
How do you actually manage heterogeneity in a population? Do you really need to stratify by 15 different things? And how strict are your inclusion/exclusion criteria? So there are a dozen different axes that you are taking into consideration almost simultaneously to figure out what the design, what the envelope of that design, looks like, right? So you start to think like an engineer, right? If you're designing a rocket, there is a design envelope that you're thinking about — and I'm using my hands to illustrate it.
[00:10:25] Speaker C: Right? Yeah.
[00:10:26] Speaker A: X, Y, Z, time, et cetera, et cetera. But then the nice thing is that within the clinical trial universe, there are different ways of approaching it. And so you start to think about what a family of designs looks like.
Do you really need an active control? Do you need placebo?
Can you do an adaptive design? Can you do single blind, double blind? So this is where being able to really put people together and, with experience and knowledge, bring a whole bunch of options to the table matters. Then you can fairly quickly start to winnow them down and say: okay, this is really interesting, but this family of study designs doesn't work, that family of study designs doesn't work, this one is starting to zero in. And so this is part of the creative but iterative process where you start to zero in, or narrow down your options, to where you think you have something that allows you to go after that question. And I'm deliberately avoiding the issue of hypothesis testing versus hypothesis confirming; it's a discussion for maybe later in the conversation.
But you're asking: what is the research question, right? Is there a signal for efficacy? How robust is that signal? How early can I pick it up? In what domains does that signal exist? How important is it for me to show an exposure-response relationship? Then it all gets into: what is the mechanism? What is the target?
You know, so you ask a great question. But this is truly something where you're operating in parallel across many dimensions at the same time, right?
[00:12:16] Speaker C: Yeah.
[00:12:17] Speaker A: And then you ask about the operational part of it. Then I start to sit there, okay, if I have an idea of what that study looks like, then I have to think about, is it feasible? Can I find these people? How fast can I find them? Where do I find them?
[00:12:29] Speaker C: Right?
[00:12:30] Speaker A: Are there, are there, is there a universe of reasonable clinical research sites or investigators that understand what we're trying to do?
[00:12:38] Speaker C: Right?
[00:12:39] Speaker A: Or have experience of what we're trying to do?
Do the right instruments exist? Can I access them?
So that starts to peel back other layers about the tactical implementation of the study, as opposed to, you know, here are the design parameters. Does that kind of help, Brandon?
[00:12:57] Speaker B: Yeah, yeah.
I feel like as you outline this, it feels like a dozen plus different design decisions that are all confounded.
They all influence one another. It's tempting to say, oh, let's run an objective function, an optimization function, on here and just spit out an ideal design. But I suspect that it's not that easy.
[00:13:20] Speaker A: Well, I have to say, if you were doing a me too drug for something where there were 50 prior clinical trials. Yeah, maybe.
[00:13:27] Speaker C: Right. Yeah.
[00:13:27] Speaker A: When you're doing a first in class like we are, there's no precedent for what we're doing.
And so, so the value of that kind of optimization is limited. That said, you know, we are, we have to be students of history and look at prior studies in related fields to figure out where some of these things worked or did not work. Now this is one of the places where I think industry has done itself a disservice.
We don't publish a lot of operational data.
Well, the, the, the organizations that have a lot of this is actually CROs, but this is how they make their money.
[00:14:00] Speaker C: Right.
[00:14:00] Speaker A: If you wanted to understand what the recruitment challenges or the retention challenges or quality challenges in a study were, you're not going to find that in publications. And so that actually creates a problem for us in terms of figuring out the balance between a design and its implementability, its implementation.
[00:14:21] Speaker C: Yeah.
[00:14:22] Speaker B: So, so then naturally this is where the pressure testing comes into play.
How does one pressure test, once they have maybe two or three candidate designs that they're excited about?
[00:14:32] Speaker C: Sure.
[00:14:33] Speaker A: So, I mean, what do you do when you want to pressure test something? You go and you share it with your colleagues in the universe, or you share it with consultants out there, or friends of friends, or sometimes even not friends — people who are going to challenge you — and you put it out there and say, poke holes in it. Tell me what's wrong with it.
[00:14:52] Speaker C: Right.
[00:14:53] Speaker A: Why is this going to fail? I mean, there are some people out there that I know I can ask: tell me how this is going to fail. Yeah. And they will, without any shame or hesitation, tell me just how bad my design is and how many different ways it's going to fail. Which is fine. It's what I want. I get to know.
[00:15:08] Speaker C: Yeah.
[00:15:09] Speaker B: Better to, better to know what the blind spots are while you're, while you're heading down the path.
[00:15:15] Speaker A: Right. Yeah. I mean, look, it's a lot easier to fix it up front and, you know, be prepared.
Even if you agree that you're going to take on a design that has risks, you at least know. Now you're not walking in blind. You can be forewarned and say, here are the watch-outs.
[00:15:33] Speaker C: Right, Here we go. Yeah.
[00:15:36] Speaker B: And then I suppose this dovetails nicely into part two, which is, um, I, I like the term you use, like your, your operational envelope and being vigilant about the data. Um, how do you think about that?
[00:15:49] Speaker A: Well, I mean, look, this is where a really, really good collaboration between clinical scientists and study physicians and clinical operations groups is critical.
[00:16:05] Speaker C: Right?
[00:16:06] Speaker A: The flow of data between those groups has to be, you know, absolutely seamless.
[00:16:10] Speaker C: Right?
[00:16:13] Speaker A: I'll give you an example from a study that I ran many years ago at a different company. We were running a study that had a time-to-event endpoint, and we were not seeing enough events. It was just — we're not seeing events.
And after that study had been running for well over three years, we finally said, well, damn, it's futile, let's just stop, right? We obviously are missing something here. It turned out that as soon as we stopped the study, there were a whole bunch of events sitting in investigators' documents that had not been put into the system.
Oh, Lord. Well, in fact, there were plenty of events, and it ended up favoring the intervention. So we got very lucky. But it drove home a message: if you're blind to the data, or the data are not in your system and nobody's looking at them, you're operating completely blindfolded.
And so for me, this is one of — you know, if you're tracking or you're collecting particular data: the rate at which it's coming in, the metadata around it, the quality of the data, the variability around it, how long does it take for some piece of information from a clinical trial to make it into your EDC?
Those are kind of indicators of quality. You know, if it's taking sites forever to screen and enroll the first participant, you should be asking questions, what's going on here? Why is it so difficult for you?
[00:17:45] Speaker C: Right?
[00:17:47] Speaker A: There are other things that you can look at at a site level that can give you pause, right? I mean, if there's a lot of turnover at a site, you have to ask why. What's going on?
[00:17:57] Speaker C: Right?
[00:17:58] Speaker A: So those are the kind of things that any operational team should be paying attention to and can flag or raise a flag up and say, hey, there are issues with this study.
You know, studies are sometimes designed with a sort of canonical patient population in mind, by people who are very smart and well-intentioned.
When you get out there, you can't find a participant. Right, right.
[00:18:22] Speaker C: Well.
[00:18:26] Speaker B: You know, it strikes me as you describe this that there's a stream of data coming in that maybe you're checking weekly.
It's a kind of sanity check in a dashboard of some sort.
[00:18:38] Speaker A: I mean, ideally it's real time. It's not even weekly, it's real time. In organizations where I was head of R&D, whenever we had dashboards up, I could see exactly who was in the screening queue, who was in randomization. I mean, I knew exactly, in real time. I had a pulse on the flow of participants in and out of our studies.
[00:19:00] Speaker C: Right.
[00:19:00] Speaker B: If you could design with me for a second your dream dashboard, like what are, what are all the indicators in real time that you would want to be able to see?
[00:19:07] Speaker A: Yeah, yeah, I love that. Okay, so we're going to design this one, and then you can put it in.
[00:19:13] Speaker B: Good for you.
[00:19:13] Speaker A: You're going to build it for us.
[00:19:14] Speaker C: Thank you.
[00:19:16] Speaker A: So one aspect of that is the pipeline, the inflow into the study, you know, pre screening, screening, randomization.
What's the flow in?
[00:19:27] Speaker C: Right.
[00:19:29] Speaker A: What does it look like against some particular model? Whatever you're doing — some sort of ARIMA, or some sort of model in the back of your head about what your recruitment should look like.
[00:19:41] Speaker C: Right.
[00:19:41] Speaker A: How's that fitting with the model?
I'm also interested in the back end of it, which is: how are we doing on retention of subjects? Now, I have to say, some organizations are myopic. They will focus on recruitment but not pay attention to retention. And at the end of the day, first patient in is important, but the really important thing is last patient out.
And if you can keep people in your study and reduce attrition, you can reduce missing data. Well, the data geek in me feels much better, because I have less of a missing data problem. So: patient inflow, patient outflow.
[00:20:17] Speaker C: Right.
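A minimal sketch of the kind of early-warning check being described — observed cumulative enrollment compared against a projected curve, flagging a sustained shortfall early rather than waiting to see the full disaster. It assumes a simple linear projection instead of the ARIMA-style model mentioned, and every number and threshold here is illustrative.

```python
# Illustrative early-warning check: observed vs. projected cumulative enrollment.
# The projection, tolerance, and data are invented for this sketch.

def enrollment_alert(observed, projected, tolerance=0.15, consecutive=3):
    """Return the first week a fractional shortfall > tolerance has persisted
    for `consecutive` reporting periods, or None if enrollment stays on track."""
    streak = 0
    for week, (obs, proj) in enumerate(zip(observed, projected), start=1):
        shortfall = (proj - obs) / proj if proj else 0.0
        streak = streak + 1 if shortfall > tolerance else 0
        if streak >= consecutive:
            return week
    return None

# Example: the model projects ~8 randomizations/week; reality lags from week 4 on.
projected = [8 * w for w in range(1, 13)]
observed = [8, 15, 23, 27, 30, 33, 36, 40, 44, 47, 50, 53]
week = enrollment_alert(observed, projected)
print(f"Sustained enrollment shortfall confirmed at week {week}" if week
      else "Enrollment tracking within tolerance")
```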
[00:20:19] Speaker A: And then there's a question about the data that we're collecting. A dashboard for me would tell me where are we in terms of missing data?
You know, how many primary outcome measures are we missing? How's that looking over time?
What is the pattern of missingness to the extent you can infer that.
[00:20:36] Speaker C: Right.
[00:20:36] Speaker A: Completely, assuming that we're in a blinded environment.
And then what I would be looking for are diagnostics.
Am I seeing excess variance? Am I seeing too many outliers?
Is there a pattern, you know, in the data related to either geography or site or some other variable? Right. So I'm looking for some pattern recognition or indicator that there's something more systemic going on.
And then, lastly, you think about the quality of the data.
[00:21:05] Speaker C: Right.
[00:21:06] Speaker A: Queries, deviations, protocol deviations, safety issues. So I'd be tracking AEs, TEAEs, SAEs over time to get a sense of where my data are going.
[00:21:21] Speaker C: Right.
[00:21:21] Speaker A: So that to me is a nice dashboard, one that mixes clinical-level insight with data-quality-level insight and gives me a sense of what my operational envelope is looking like.
[00:21:34] Speaker C: Right.
[00:21:34] Speaker A: You know, how are my sites activating in time? Time from site activation to first patient in, and after that.
[00:21:42] Speaker C: Right.
[00:21:42] Speaker A: I want to know, are they actually on track?
[00:21:47] Speaker C: Right.
[00:21:47] Speaker A: Things like that.
[00:21:49] Speaker C: Yeah.
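One way to picture the blinded, site-level diagnostics in that dashboard — pooled missingness and outcome variability per site, flagged against study-wide norms. This is a rough sketch only; the field names, thresholds, and data are hypothetical.

```python
# Illustrative blinded site-level diagnostics: missingness and outcome
# variability per site, flagged against study-wide norms. Data are hypothetical.
from collections import defaultdict
from statistics import stdev

visits = [
    # (site, primary_outcome) -- None means the primary outcome is missing
    ("site_01", 12.0), ("site_01", 14.5), ("site_01", None), ("site_01", 13.2),
    ("site_02", 11.8), ("site_02", 12.4), ("site_02", 12.9), ("site_02", 13.0),
    ("site_03", 2.0),  ("site_03", 29.5), ("site_03", None), ("site_03", None),
]

by_site = defaultdict(list)
for site, value in visits:
    by_site[site].append(value)

study_sd = stdev([v for _, v in visits if v is not None])

for site, values in sorted(by_site.items()):
    observed = [v for v in values if v is not None]
    missing_rate = 1 - len(observed) / len(values)
    site_sd = stdev(observed) if len(observed) > 1 else 0.0
    flags = []
    if missing_rate > 0.20:       # arbitrary illustrative threshold
        flags.append("high missingness")
    if site_sd > 1.5 * study_sd:  # excess variability vs. the study as a whole
        flags.append("excess variability")
    note = " <-- " + ", ".join(flags) if flags else ""
    print(f"{site}: missing={missing_rate:.0%}, sd={site_sd:.1f}{note}")
```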
[00:21:50] Speaker B: And as you describe this kind of dream world, how much of it do you have today? It feels like it's kind of hard to get.
[00:21:57] Speaker A: I think we have a lot of it, but I would say it's not frictionless to actually access it. Right, sure.
[00:22:06] Speaker C: Right.
[00:22:07] Speaker A: And I think part of the issue is that we work with a CRO that works with multiple vendors. Sites have their own particular platforms, or they're academic institutions. So interoperability is a real problem in terms of getting the data to flow easily. Now layer on top of that, you know, data privacy issues.
And of course, as the sponsor, I need to be at arm's length from any individual patient data. So we need to make sure that the right data protection mechanisms are in place.
You know, ideally in that dashboard I should be able to drill down, or click down, on some anomalous data and get to where it is really coming from.
[00:22:51] Speaker C: Right.
[00:22:51] Speaker A: I've seen that that's doable, absolutely doable. I've seen it done.
It's just that, to me, the friction to actually make that sort of thing best practice — it's taken longer than I had hoped. Yeah.
[00:23:08] Speaker B: I've heard you describe running a clinical trial as part art and part science.
[00:23:13] Speaker A: Oh, yeah?
[00:23:15] Speaker B: What do you mean by that?
[00:23:17] Speaker A: Well, I mean, the science part of it is, I think, pretty clear. You know, there's a hypothesis, you have a kind of an experiment you're going to conduct.
You have an idea of how you're going to measure the outcome. You've got some quantitative parameters: an effect size and variance and sample size and power and alpha. So there's a kind of science in terms of how you conduct an experiment.
[00:23:39] Speaker C: Right.
[00:23:40] Speaker A: And a clinical trial is an experiment. No if, ands or buts about it.
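Those quantitative parameters — effect size, variance, alpha, power — feed the standard sample-size arithmetic. A minimal sketch of the usual two-sample, normal-approximation formula follows; the numbers are purely illustrative and not drawn from any study discussed here.

```python
# Standard two-sample size approximation, just to illustrate how effect size,
# alpha, and power trade off. Illustrative numbers only.
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_size, alpha=0.05, power=0.80):
    """Participants per arm to detect a standardized effect (Cohen's d)
    with a two-sided test at the given alpha and power."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2)

print(n_per_arm(effect_size=0.5))  # ~63 per arm for a medium effect
print(n_per_arm(effect_size=0.3))  # ~175 per arm for a smaller effect
```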
The art part of it is that these are not lab animals. This is not a widget. These are human beings.
And stuff happens.
People get into car accidents, there are family issues.
You know, people can't make it to a session.
People take stuff they're not supposed to. Somebody's husband or wife yells at them. So there are all sorts of things that interfere with the quote-unquote clean conduct of a trial. I mean, look, let's say you're running a preclinical experiment, right? Things happen fairly easily in terms of schedule.
[00:24:23] Speaker C: Right.
[00:24:24] Speaker A: Missing data for a preclinical experiment shouldn't really happen.
[00:24:28] Speaker C: But.
[00:24:31] Speaker A: Sorry, I'm getting over a cold. Missing data in clinical trials is part and parcel of it. So the art of it is: how do you make a trial the least burdensome on participants without compromising the quality of the data that you want to collect?
[00:24:46] Speaker C: Right.
[00:24:47] Speaker A: How far can you flex? So we have, for example, treatment windows. We say you need to come in on day 14 plus or minus something.
The something — the plus or minus — that's art. Is plus or minus 2 good enough? Can we make it plus or minus 3? When does plus or minus... You know, so that's where I can't tell you there's hard science around some of the things that we do to make the clinical trial operational or viable.
[00:25:14] Speaker C: Right.
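The treatment-window idea can be made concrete with a toy check like the one below. The day-14 target and the plus-or-minus 2 or 3 days are just the example from the conversation; the dates and function names are invented for illustration.

```python
# Toy protocol visit-window check: target day 14 after randomization, with a
# plus/minus allowance. The width of that window is exactly the "art" discussed.
from datetime import date, timedelta

def in_window(visit: date, randomization: date, target_day=14, window_days=2) -> bool:
    target = randomization + timedelta(days=target_day)
    return abs((visit - target).days) <= window_days

rand_date = date(2025, 3, 3)                                   # day 0
print(in_window(date(2025, 3, 18), rand_date))                 # day 15: True with +/-2
print(in_window(date(2025, 3, 20), rand_date))                 # day 17: False with +/-2
print(in_window(date(2025, 3, 20), rand_date, window_days=3))  # day 17: True with +/-3
```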
[00:25:16] Speaker A: You know, in some cases we say, you know, you're not allowed to be on some particular set of medications.
Well, you know, if you're working with people who have chronic medical illnesses, you're going to have to allow concomitant medications. And sometimes you may not be happy about what you have to allow, but that's the reality of the universe you're operating in.
[00:25:36] Speaker C: Right.
[00:25:37] Speaker A: Particularly in psychiatry.
[00:25:38] Speaker C: Right.
[00:25:39] Speaker A: If you're going to work in PTSD, or you're going to work in alcohol abuse, there are compromises that you make, but those turn out to be in the service of something: the results are more generalizable.
[00:25:47] Speaker C: Right.
[00:25:48] Speaker A: So that's the art part. How far do you go? You know, how far do you bend?
You know, people take inclusion/exclusion criteria, and there's this particular practice where you just copy and paste from the prior trial to the next one. Now, I understand the efficiency of that, but it's somewhat mindless, in the sense that you need to really think carefully: do I need all of these? Right.
Which ones are absolute criteria, which ones are nice to have?
And even then it's an art.
[00:26:18] Speaker C: Yeah.
[00:26:19] Speaker B: I'd love to talk about the design of protocols and schedules of events and IE criteria a little bit here. I know that.
I think I saw that you chaired GSK's Protocol Review Committee or something like.
[00:26:33] Speaker A: That, for a while. Yeah, yeah, yeah. Also DMCs. Yeah.
[00:26:36] Speaker C: Right. Yeah.
[00:26:39] Speaker B: What do you think most people get wrong when taking their first swing at a protocol?
[00:26:45] Speaker A: Most people get wrong.
I think most people overload trials with too much data collection.
Over the years, I've become more and more a fan of pragmatism in trials.
[00:27:04] Speaker C: Right.
[00:27:06] Speaker A: So it's the KISS principle. Keep it simple.
Stupid.
Sure. So I think people are asking too much. And there's always this kind of sense of, well, but it's just another scale, it's another five minutes. And like, I get it, but it's more data that we're collecting. And so I keep going back — and this is what we did at the protocol review committees — to: what are you going to use these data for?
[00:27:32] Speaker C: Right.
[00:27:33] Speaker A: And if it's a nice-to-have, get rid of it. If you must have it —
[00:27:38] Speaker C: Right.
[00:27:38] Speaker A: Because then you're more vested in actually making sure, as you're collecting it, that it's of high quality.
[00:27:43] Speaker C: Right.
[00:27:44] Speaker A: You're going to do it. You're prepared to stand behind the analysis of those kinds of data. And I've seen a large number of studies where it's just a lot of "wouldn't it be interesting to know?"
[00:27:59] Speaker B: Easy to miss the forest for the trees.
[00:28:00] Speaker A: Right.
[00:28:01] Speaker B: I guess.
[00:28:01] Speaker A: Um, well, I mean, look — and I mean it with all the love in my heart — I get where that comes from.
[00:28:08] Speaker C: Right.
[00:28:09] Speaker A: You're working in a rare indication. It's going to be, you know, a once-in-a-lifetime study and you want to learn as much as you can.
[00:28:17] Speaker C: Right.
[00:28:17] Speaker A: So let's just pile it on. Or it's going to be a difficult study to recruit for — let's learn as much as we can. I get that. I understand it.
But you want to focus on the nature of the experiment. What do you really want to learn?
[00:28:32] Speaker C: Right.
[00:28:34] Speaker A: And every time that you keep on piling things on for participants in these studies, you know, they get frustrated, they're fatigued.
[00:28:41] Speaker C: Right?
[00:28:42] Speaker A: It's like, it's not really critical. Move on with it.
[00:28:46] Speaker B: Well, Mike, now you're calling my baby ugly. I just brought you a protocol, and you told me that it's all wrong.
What techniques have you found for having those kinds of conversations?
[00:28:56] Speaker A: I mean, so. So I think what.
[00:28:58] Speaker C: What, what.
[00:28:58] Speaker A: What we learned was to actually force the dialogue between the various stakeholders and the teams that are driving the design. You know, what do you really want? What do you really need? What's critical?
What is... You know, can you answer some of these questions outside of a clinical trial?
Right. So I'll. I'll give you. I'll give you an example, because I think it's. It's helpful.
There are now online patient communities that you can talk to about certain aspects of the disease journey that don't require getting that information in a clinical trial.
[00:29:32] Speaker C: Right.
[00:29:35] Speaker A: You can sit there and say, okay, you know, here's a list of five different outcome instruments that are all kind of designed to collect something similar. You know, there's a 50% or 70% overlap between these five instruments. Can you all get together, knock your heads together, go have dinner, come back and say, of all the instruments out there, pick the one you want to go with? Can you drive to consensus, sit there and say, okay, is it good enough?
[00:30:04] Speaker C: Right.
[00:30:05] Speaker A: And again, what I learned chairing those committees, what I've learned reviewing dozens if not hundreds of protocols, is that there's often a lot of space — you can compress, and you can find common ground to sit there and say, look, what's good enough? It doesn't have to be perfect, but what's good enough? And is it good enough for the two of us? Can we then simplify a study? Right. That's right.
Sure.
[00:30:30] Speaker B: So you mentioned, you know, status quo of maybe taking a prior protocol, copying it over, and using that as a starting point. And you suggested maybe that's not the ideal practice. So what. What is?
[00:30:43] Speaker A: I mean, I think — again, you can start with what was done in the past, because I think there's value in all the work that was put in by predecessors to get to where they took us. Then you look at it and ask: does it make sense for what I want to do now?
[00:30:59] Speaker C: Okay.
[00:31:02] Speaker A: Should I stick with exactly the recipe that these people came up with 10 or 15 years ago?
[00:31:08] Speaker C: Okay.
[00:31:10] Speaker A: Again, as a teaching moment, right? So you look at PTSD studies, where we're involved right now. Historically, many sponsors excluded any form of alcohol or substance abuse because they wanted to focus on the pure PTSD.
Well, shit, a whole bunch of those studies just couldn't enroll, because that patient population may not really exist. There's such high comorbidity, right? So we've been looking at prior designs and saying, hey, look, let's operate with reality in front of us, right? Let's deal with the real world as we know it now.
And we decided to actually make some allowance for that comorbidity and allow it in our clinical trials. We didn't go full force and say we're going to allow everything in, but we made the decision to allow some mild to moderate abuse in our clinical trial. We think that brings us one step closer to a more generalizable population or more representative population and hopefully allows us to talk about the effect of COMP360 in that patient population. But the thought behind it was don't just mindlessly replicate what somebody's done before. And the reason I've seen that used over and over is, well, that's what regulators want.
That's what regulators are used to.
I don't, I don't, I don't buy that excuse whatsoever.
[00:32:32] Speaker C: Right.
[00:32:35] Speaker A: I don't find those are actually —
[00:32:36] Speaker B: Have a more favorable view of, of the regulators than.
[00:32:41] Speaker A: There's something to be said about certain elements of the design being consistent.
[00:32:47] Speaker C: Right?
[00:32:48] Speaker A: There are diagnostic criteria.
Until you show me a better one, we're going to stick with what people know. There are known psychometric parameters for that instrument, right? Sensitivity, specificity, positive predictive value. We know what we're going to get and we know the limitations of the instrument.
But there's an element of sponsor-side thinking and risk where I think it's all too easy sometimes to just sit there and say, well, we abdicate in favor of what's been done 15 times before us.
Not sure that's the way to go, right?
I mean, in particular, not if you're testing a new pharmacology, where you may actually have an advantage, or you can pick up signals in other areas. I mean, if I'm going to do yet another triptan for migraine, I'm going to go back to a design that's worked every time it has been done before.
[00:33:42] Speaker C: Right.
[00:33:43] Speaker A: In that case, there's no reason for me to innovate on the design.
[00:33:47] Speaker C: Right.
[00:33:47] Speaker A: I'm taking a novel pharmacology into a patient population. I have every reason to give that compound every chance to show me what it can do, in spaces that may be new.
[00:33:59] Speaker C: Yeah.
[00:33:59] Speaker B: Okay, so then let's, let's layer on maybe the wrinkle of new technology.
So you've got a new compound. You're leading the charge on this. At what point do you think, yeah, maybe it is time to try a new wearable, a DCT?
Some of these newer techniques — given that the compound itself is new?
[00:34:22] Speaker A: Yeah. I mean, look, I'm a fan, right? I'm a fan because I think that this technology has come a very long way to allow you to measure ecologically valid behaviors in a way that is reliable enough for use in clinical trials.
[00:34:37] Speaker C: Right.
[00:34:38] Speaker A: So you think about where things started, for example, in Parkinson's disease — the ability to measure tremor with actigraphy.
[00:34:49] Speaker C: Right.
[00:34:50] Speaker A: That seemed like a low entry point, but a very valuable entry point, to sit there and say, right, we can objectify something that was subjective. Now, people had objectified it, or made it measurable — you could video somebody walking or doing an Archimedes spiral and things like that. Then we kind of digitized that and said, okay, instead of doing an analog-to-digital version of the Archimedes spiral, there's now a digital version of the tool, and now we have other ways: we can just measure amplitude and frequency of the tremor in multiple dimensions.
You can see that kind of generalizing — the tremor, the rigidity. Now it goes to gait analysis. So we have ways to figure out what's really going wrong in people with Parkinson's disease and their lack of balance, et cetera. So that was, to me, a great example of where technology really enabled smarter, faster, more efficient clinical trials.
[00:35:43] Speaker C: Right.
[00:35:44] Speaker A: And then you sit there and say it went into other domains.
Motor behavior, cognition, sleep, speech — you know, ways to actually infer mood. Although if you just ask, how are you feeling today, I'm not sure I need a device to do that.
[00:36:02] Speaker C: Right.
[00:36:02] Speaker A: But — so I think I'm a fan of that. And I think part of the reason that I was a very early adopter, particularly at AbbVie, right,
is that it's this kind of repeated measures model. I can now collect data from your behavior continuously, every day. As much data as you want to give me, I will take, because in some ways that's kind of a regression to the mean — now I really am going to see trends over time.
And one of the things that we learned, sadly, in studies where we do a visit at baseline and one at week 6 or week 12, is that if you have a really bad day at the end of the study — you're sick, or something really bad happened — the entire benefit of the therapy could be masked by that one really bad day. And so I think the value of those technologies is that they minimize your vulnerability to within-subject variation.
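A toy simulation of that point about within-subject variability: a single endpoint visit that happens to land on a bad day can swamp a true improvement, whereas averaging a week of frequent (for example, wearable-derived) measurements is far more stable. The improvement, noise level, and bad-day effect are all invented for illustration.

```python
# Toy simulation: one bad day at the endpoint visit vs. averaging a week of
# frequent measurements. All magnitudes are invented for illustration.
import random
from statistics import mean

random.seed(0)
TRUE_IMPROVEMENT = 5.0   # true change in some symptom score by the endpoint
DAY_NOISE = 3.0          # day-to-day within-subject variability
BAD_DAY_HIT = -8.0       # effect of one rough day near the endpoint

def daily_scores(n_days=7, bad_day=3):
    scores = [TRUE_IMPROVEMENT + random.gauss(0, DAY_NOISE) for _ in range(n_days)]
    scores[bad_day] += BAD_DAY_HIT
    return scores

week = daily_scores()
single_visit = week[3]   # the one clinic visit happens to land on the bad day
averaged = mean(week)    # averaging a week of passive measurements

print(f"single endpoint visit: {single_visit:+.1f}")
print(f"7-day average        : {averaged:+.1f}  (true change {TRUE_IMPROVEMENT:+.1f})")
```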
Right now you probably are thinking, well, when do you use that technology?
Going from a pilot study to a pivotal trial.
[00:37:09] Speaker C: Right.
[00:37:10] Speaker A: I think we're starting to see more use of those in pivotal trials. When does it become the primary registrational outcome? That's a good question, right?
[00:37:19] Speaker B: Yeah, yeah, certainly there has to be enough precedent data, I guess.
[00:37:25] Speaker C: Yeah.
[00:37:26] Speaker A: And I mean, I think the value is that you can get beyond — we can measure symptom domains now, but we can also measure how a person is functioning.
[00:37:35] Speaker C: Right.
[00:37:35] Speaker A: Hypothetically, if I can see how you're driving your car, or I can see how you're operating at work, or I can get a better sense of your interpersonal relations — I can see you get out of the house more, or the inflection in your voice when you talk to friends, whatever it is, indicates an improvement in mood — I can start to make arguments that go beyond just measuring simple symptom domains.
[00:38:02] Speaker C: Yeah.
[00:38:03] Speaker B: Now, Mike, you started this conversation by saying, let's slash as much of the data collection requests as we can. And now you're saying, well, actually, I love this data stream. Which one is it?
[00:38:14] Speaker A: Well, it's both, in the sense that the thing I like about, for example, wearables and sensors is that they're unobtrusive. They don't require people to show up in clinic.
You do it at home. You do it in a way that doesn't interfere with quality of life. I mean, I think I shared this experience with you.
We were working with a device called the MC10 patch, which is a flexible patch that you put on your skin. We were using it in a study of multiple sclerosis looking at ambulation, and we had greater than 100% compliance with the use of the device — we didn't have to push participants to put the patch on. So when you have a technology that people in a study have no problem using and in fact find unobtrusive —
[00:39:00] Speaker C: Right.
[00:39:01] Speaker A: I don't think there's anything wrong with that. My objection is to needing data that requires more clinic visits. It's more burden on the patient.
There are more missing data issues. That's when I start to question its utility.
[00:39:18] Speaker B: Yeah, yeah, there's this, there's this principle around fitting it into the natural pathways of, of the individual's life.
[00:39:26] Speaker A: Right, Exactly. I mean, yeah, you know, it's like, is this drug helping you to live your life better?
[00:39:33] Speaker C: Right.
[00:39:34] Speaker A: Ask the question, how is this helping you live your life better?
Let's figure a way to measure that.
[00:39:39] Speaker C: Yeah.
[00:39:40] Speaker B: A topic that comes up a lot when we speak to folks about operations and clinops excellence is working with sites.
[00:39:47] Speaker A: Yeah.
[00:39:48] Speaker B: I'd love to hear how you think about building, designing, cultivating site relationships and ultimately motivating and like maintaining great relationships even when things, things aren't going well.
[00:40:02] Speaker A: No, so look, I was an investigator for a bunch of years before I left academia to go to industry. So I have nothing but the utmost respect for people who are in that space, whether they're commercial research centers or academic research centers. It's a tough business. It's a really, really tough business.
My sense is that working out and making sure that there is a very healthy relationship between every site and the sponsor is critical.
And by healthy, I mean a relationship that can withstand difficult conversations.
What's going on at your site? You know, we're not seeing enough recruitment.
We're seeing a lot of staff turnover. We're seeing some glitches with the quality of the data.
You know, we're worried about, you know, PI supervision.
My experience is that the vast majority of sites and investigators are in it for the right reasons. They're good people. They're really trying to do the right thing. Stuff happens.
[00:41:11] Speaker C: Right.
[00:41:11] Speaker A: And then I think our job is to ask: what's going on? How can we help you?
So, you know, part of, part of what I encourage in my organization is very close dialogue with the sites.
What I don't push for is trying to close a sale — you know, Dr. So-and-so, will you commit to recruiting X number of people by next week? We're not in the business of closing sales.
What we're in the business of is figuring out, with sites, how we can help them get to where they want to be in terms of the numbers that they've committed to us.
[00:41:45] Speaker C: Right.
[00:41:46] Speaker A: What do you need? Do you need help with recruitment? Do you need more money? Do you need some staff support?
Where can I block and tackle for you?
[00:41:53] Speaker C: Right.
[00:41:54] Speaker A: Are you having issues with the CRO? Are there problems with the vendor? So one of the principles that I kind of live by is that I'm in this, in the service of patients. I do this because I care about getting new drugs to patients to treat serious illnesses.
From an operational perspective, we're in the service of sites.
[00:42:14] Speaker C: Right?
[00:42:15] Speaker A: In some ways, they are the customer. If they're not happy, if they don't have what they need, you're not going to recruit the right patients. And it's not about the volume of patients. I want them to recruit the right patients, high-quality patients, collect high-quality data, and make sure that the patients are safeguarded.
[00:42:32] Speaker C: Right.
[00:42:32] Speaker A: I don't need any disasters and I don't need to be in any headlines that says, you know, sponsor X, Y or Z lost control of the study.
[00:42:38] Speaker C: Right.
[00:42:40] Speaker A: That to me comes from having a really healthy relationship with them. But it's also the notion that it's not a one-off, purely transactional thing.
[00:42:48] Speaker C: Right.
[00:42:50] Speaker A: You know, we want to be with you on a journey, because after this study will come another study, and there may be a third one.
[00:42:55] Speaker C: Right.
[00:42:57] Speaker A: And so that's, that's, I think a lot of the value of, of a long term relationship with the sites.
[00:43:03] Speaker C: Right.
[00:43:03] Speaker A: We get to know them, they get to know us.
[00:43:08] Speaker B: I will say that we get to observe sponsor-site relationships all the time in the course of the work we do. And it is remarkable how much a trust-based, collaborative relationship really moves the needle in terms of motivation and overall study performance.
[00:43:37] Speaker A: No, I mean, look, I've been in situations where I've had to shut down a site, and my conversation is: look, it's not you as an individual, as a human being, a parent, whatever it is — this study is just not the right study for your site at this time. Let's part as friends, and we'll come back to you when we have something. You know, and there are other occasions where it's: we have to part ways because there's a problem here.
Fix this stuff, come talk to us, let us know what you did, and we'll re-engage.
[00:44:03] Speaker C: Right.
[00:44:03] Speaker A: I mean sometimes doors close and open and things like that.
But yeah, I think, you know, this is an interesting conversation, because sometimes CROs will complain: well, the sponsor is meddling, there are too many people talking to the site, there's confusion. And I think there's some truth to that, and we have to work that one out.
But, you know, having been at both the sponsor level and the CRO level — as a sponsor, I'm not willing to abrogate my primary relationship with the site.
[00:44:35] Speaker C: Right.
[00:44:36] Speaker A: I, you know, it's, it's my compound, my study.
Thank you. We'll work together.
[00:44:43] Speaker C: Right.
[00:44:43] Speaker B: I was going to go there. Having done a ton of work on the sponsor side, and sponsor services work on the CRO side — how has that combination of perspectives changed how you approach things today?
[00:45:01] Speaker A: I mean, look, you may laugh at this, but I actually think we need to talk to people like adults.
[00:45:06] Speaker C: Right.
[00:45:07] Speaker A: And treat them like adults. Okay.
[00:45:09] Speaker C: Right.
[00:45:10] Speaker A: I mean, here's where it is. I'm happy to have the conversation with a CRO and say guys, we love you, but we're going to maintain that close relationship with the sites. We're not going to abandon that.
We can figure out a way how we can coexist in that space. So we're not talking across purposes.
[00:45:26] Speaker C: Right.
[00:45:27] Speaker A: I've had that conversation with our CRO here. I will have a similar conversation with my team, which is: guys —
[00:45:33] Speaker C: Right.
[00:45:34] Speaker A: You've got to figure out a way not to be stepping all over the CRO — let them do their job. Now, they've got to talk to each other. How do we work it out?
[00:45:42] Speaker C: Right.
[00:45:42] Speaker A: And that's where I think, you know, adults need to be in the room to have an adult conversation about how we work it out, what the rules of engagement are, where the no-go places are.
[00:45:51] Speaker C: Right.
[00:45:52] Speaker A: Who do we need to talk to? Who are the points of accountability?
[00:45:57] Speaker C: Right.
[00:45:57] Speaker A: If we hit a wall, how do we ask? I mean this is basic stuff.
[00:46:01] Speaker C: Right.
[00:46:02] Speaker A: But fundamentally it starts from forcing the grown up conversation and not sort of allowing people to devolve into their particular organizational silos.
[00:46:13] Speaker B: Yeah, yeah. It's almost a cliché, but it really is one team.
[00:46:21] Speaker A: Well, I mean, it was kind of interesting. We had a local meetup here in Chicago, and there was a gentleman there who's starting a CRO, and he was asking us: you know, how do you envision this working? How would you make it work?
And it's interesting, because you know that when you're working with a CRO, you'll get a team initially, and that team will change over time — some people will be experienced, some people are junior.
What he said was that somebody told him: I'm happy to work with you. I will take the B and C players, I will whip them into shape, but I will keep them on my team. You don't get to move them around.
[00:47:03] Speaker C: Right.
[00:47:04] Speaker A: So it's that kind of one-team idea that you were talking about, right?
[00:47:06] Speaker C: Yeah.
[00:47:07] Speaker A: So this person is willing to work with an average, inexperienced CRO team and whip them into shape. But his contractual ask is: you don't get to play around and move people around on my team. As I bring them up to speed and turn them into really first-class drug developers, you don't get to abandon me.
[00:47:27] Speaker C: Yeah.
[00:47:28] Speaker B: If I'm, if I'm going to invest the time to train your team, I would like for you to treat them as my team.
[00:47:34] Speaker A: Yeah, I mean, I had the experience running a study when I was CMO of a small company. We had a hundred percent turnover in our monitors every six months.
My sites were up in arms: we've got to train yet another CRA, because the CRO can't keep these people, and as soon as they learn their stuff they go off to do other things. So I finally had one of those come-to-Jesus moments with the CRO and said, you cannot do this. I mean, contractually I'm going to stop you from doing this, because essentially you're learning at my expense and I'm not getting high-quality data.
[00:48:09] Speaker B: Yeah, well, that, that's a hot tip for anybody listening in terms of a negotiating ask if you're not already making it.
[00:48:17] Speaker A: Yeah, no, I mean, I don't think there's anything wrong with sponsors sitting there and saying, look, I accept the reality of the kind of team that I'm going to get, but stability in whatever team you get — really getting the CRO to commit to stabilizing a team — is really, really important. That monitor or study coordinator is going to have to work with the site too. Their reputation is in it as well. So, you know, you need to figure out how to make it a win-win.
[00:48:45] Speaker B: Yeah, of course. Mike. In closing, I've got a few rapid fire questions for you if you don't mind.
[00:48:49] Speaker C: Sure.
[00:48:50] Speaker B: How are you most excited to see the nature of operations and studies change over the next 10 years?
[00:48:57] Speaker A: Specifically within, within the psychedelic space or in general?
[00:49:00] Speaker B: Uh, let's go neuroscience.
[00:49:02] Speaker A: Okay.
Um, I mean, I think part of it is that, at least in some areas, the revolution in biomarker development has been remarkable. Right. So if you think about Alzheimer's disease studies and the availability of blood-based biomarkers.
[00:49:21] Speaker C: Yeah.
[00:49:22] Speaker A: Digital cognitive tools, these non-invasive things. So I think the ability to implement studies in a more real-world setting is really, really impressive.
I think advancements in more advanced imaging — certain PET ligands, the use of MEG, fMRI, et cetera — are really helping to drive better design of early translational proof-of-mechanism studies. So I think that's helping. And then I think, on the back end, there's the ability to collect a lot of this stuff digitally at home.
[00:50:00] Speaker C: Right.
[00:50:01] Speaker A: Minimizing the number of clinic visits, minimizing sort of treatment effects or placebo effects. That, that, that I think is making a lot of progress. Um, I think what I'm also seeing is a lot more regulatory curiosity and openness.
[00:50:16] Speaker B: Sure.
[00:50:17] Speaker A: To new types of instruments.
[00:50:18] Speaker C: Right.
[00:50:19] Speaker A: So that's helpful.
[00:50:21] Speaker B: What do you think is holding back change?
[00:50:25] Speaker A: I mean, I think right now in the US it's wholesale uncertainty about what that environment's going to look like. We have no idea what the FDA is going to look like or what HHS is going to look like. We don't know what the next term is going to bring. So there's just a lot of anxiety about where it's going to go.
[00:50:41] Speaker C: Right.
[00:50:42] Speaker A: I think, you know, I hear certain things that I like. The national priority voucher, I think, was a great idea, but how it's implemented is a different story. I think what's also holding folks back is that there was a period of serial successes in neuroscience, and then there have been some pretty notable rocket crashes lately.
And so the pendulum kind of swings a little bit, and I think that's also true in neuroscience — there is this kind of pendular swing in terms of optimism.
[00:51:22] Speaker C: Right.
[00:51:23] Speaker A: Everybody was very happy when lecanemab and donanemab got approved — the first, you know, disease modifiers, anti-amyloid therapies.
And then there was this kind of, well, what does it really mean and what is it really yielding, et cetera, et cetera. So real-world application, real-world benefit is becoming a little bit more difficult to demonstrate. So I think enthusiasm kind of peaks and then wanes.
But I think in general what I'm sensing is a recognition that we don't have a choice but to be engaged in neuroscience-related clinical development. I mean, the medical need —
The misery that is out there that remains unaddressed compels us morally to actually keep on going at it.
Forget about the financial part of it — if you have a good drug, et cetera, it'll work. But I'm speaking morally.
[00:52:14] Speaker C: Yeah, yeah.
[00:52:16] Speaker B: And certainly, more and more, the world is fighting with these things as well.
[00:52:23] Speaker A: Yeah, yeah, yeah. I mean, I completely agree. If you look at, you know, mental health or neurodegeneration, it's not just a US and Western Europe problem anymore.
[00:52:34] Speaker B: It's not, it's.
[00:52:34] Speaker A: Hasn't been for a while.
[00:52:36] Speaker C: Yeah.
[00:52:36] Speaker B: Okay, so you've mentioned some things that you're excited about. What is overrated?
[00:52:43] Speaker A: What is overrated?
Well, I'm going to pose it as something maybe rhetorical: what is overrated is people's fear of the FDA.
Okay, okay. And I think people need to get over that. Really, this is a group of really hardworking, dedicated folks who are committed to helping us advance public health in this country, and also in Europe and Japan, et cetera, et cetera. So I think what's overrated is fear of engaging with the regulators. I can't wait to have my conversations with these folks and get the best of their advice. Keep in mind that they probably have the broadest view of what's out there and what's being developed — new methods, new trials. So if there's an answer out there, or some oracle that can share wisdom, it's the regulatory bodies. And so I think what's overrated is people's fear of them, or unwillingness to engage, or being overly cautious. Like, you know, just ask. God, just ask.
[00:53:56] Speaker B: Yeah, absolutely. I love that — they're not the boogeyman that they're made out to be.
[00:54:02] Speaker C: No, no.
[00:54:03] Speaker A: I mean. And I think it's. You know, I think part of it is.
[00:54:06] Speaker C: But be prepared. Right.
[00:54:07] Speaker A: You're a data guy. Show up with data. Yeah, yeah. Show up having thought it through. Right, of course.
[00:54:14] Speaker B: Well, Mike, I have so enjoyed this conversation. Any last comments? Anything that I didn't ask that I should have?
[00:54:21] Speaker A: I mean, no — this has been thoroughly enjoyable. Just to kind of reiterate: you asked me about the clinical science and the operational part of it. If I were to close with a thought, it's that it's all about doing it for the patients. Right. Making sure that we do the right studies — the design is important, but the execution is just as important — to make sure that we're getting a clean answer. And at the end of the day, the study may not work, or the drug fails; that's all well and good, but everybody can walk away with a clean conscience knowing that you've answered that question. You're done, move on.
[00:54:54] Speaker C: Right.
[00:54:55] Speaker A: That's sort of the most important thing for me.
Yeah, yeah.
[00:54:59] Speaker B: I love the way you kind of framed it at the beginning, which is just to paraphrase, we have almost a moral obligation in research to do good research on behalf of the people who decide to participate.
[00:55:10] Speaker C: Right.
[00:55:11] Speaker A: I mean, imagine that you're putting a family member — a child or a spouse or a significant other — into a clinical trial, and you ask, oh, you want to be in a clinical trial? What would you want?
[00:55:21] Speaker C: Right.
[00:55:22] Speaker A: Well, I wanted to make sure it's run really, really well. I don't want the contribution to be wasted.
[00:55:27] Speaker B: Absolutely, absolutely. Powerful stuff, Mike.
[00:55:31] Speaker A: All right, my pleasure.
[00:55:32] Speaker B: Thanks again for joining me here.
[00:55:34] Speaker A: Anytime.