NB Hot Topics Podcast

S5 E3: "Three Coats" song; interview with Dr Jessica Watson on BJGP "WHY Test" study; rosuva vs atorvastatin; DNA cancer blood tests

NB Medical Education Season 5 Episode 3

Welcome to the Hot Topics podcast from NB Medical with Dr Neal Tucker. In this episode we talk to Dr Jessica Watson, lead author of the WHY Test study, published in the BJGP, exploring the use of blood tests in general practice and how often they make a difference, positive or negative.

In other research we look at a BMJ paper comparing rosuvastatin and atorvastatin for secondary prevention, and in the Lancet, the PATHFINDER study, exploring the role of multi-cancer early detection blood tests - will this be useful for diagnosing cancers earlier?

Plus the usual news, views, and a song about vaccinations.

www.nbmedical.com/podcast

References

BJGP Why Test Study
PACT - primary care academic collaborative website
BMJ Rosuva vs Atorva for secondary prevention
Lancet PATHFINDER MCED study
Lancet Editorial
RCGP Manifesto Seven Steps To Rebuild General Practice and Save the NHS


Speaker 1:

The vaccines. Got to get them done by October for a £5 premium. We've got to get through as many as we can, got to make some money, so we've got a plan. This isn't the first time that we've done this. We've had three years of regular practice, and you've come for your jab. I thought you were a pro.

Speaker 1:

There's just one answer that I've got to know: why three coats? Three coats, three coats? That's a lot of zips. Yeah, won't be quick. Three coats, three coats. Why three coats? It's cold out there. You're old, so you feel it, and you wrap up warm. Winter, you can beat it.

Speaker 1:

And you turned up early, pretending to be helpful, but it's really just the highlight of your social schedule. Well, you're too busy chatting to take off the layers, and now that you're here it's going to take ages. And I've only got three minutes to give two jabs. The queue's getting longer, so I just got to ask: why three coats? Three coats, three coats? Aren't you overheating? I'm surprised you're still breathing. And three coats, three coats, three coats. The heat is on, for you can't be cold. Can I give you a hand with that? No, you want to show you're independent. Finally, all the jackets are gone. There's a jumper and a long-sleeved shirt and dozens of buttons, and we've only got three minutes to give two jabs. The queue's getting long, so I just got to ask: why three coats? Three coats, three coats, that's a lot of zips. Yeah, won't be quick. Three coats, why three coats? Why so many clothes? Three coats. Are you sure I can't help you with that?

Speaker 2:

No, OK, why don't I just go and make us both a cup of tea? Back in a minute. It's Friday, the 20th of October, and this is the Hot Topics podcast. Welcome back to the Hot Topics podcast from NB Medical. I am Neal Tucker and, as usual, I'm here to take you through what's going on in the world of general practice over the next few minutes. In research, we're going to be having a look at a paper in the BMJ comparing the benefits of rosuvastatin against atorvastatin. We're going to be looking at the PATHFINDER study, published in the Lancet last week; this is looking at the MCED multi-cancer early detection blood test, to see if this is the future of screening. And I'm very excited to have done an interview with Dr Jessica Watson, who is lead author on a new paper in the BJGP examining why we do blood tests in general practice and the impact these tests have.

Speaker 2:

Firstly, some NB Medical news. There is a new part of your nbmedical.com dashboard called NB On Call. This is a secure discussion forum integrated into the website, created by GPs for GPs. It's a great place for picking the brains of our GP collective of thousands of registered users, on clinical and non-clinical topics. You can use it towards your appraisal as well, it's really easy to use and it's completely free. We do need to register you: the office just needs to validate who you are, which is very straightforward, and you'll have all that information. It just maintains the security of the forum. Come and join the conversation.

Speaker 2:

We've also got upcoming courses. On Tuesday the 7th of November, in the evening, we've got our next free Hot Topics Clinic. This is going to be on dermoscopy, run by Philippa Davies, who's our dermatology lead with NB Medical. Then on Thursday the 9th we've got our next live Hot Topics Update webinar, and on the 11th we've got a live abnormal blood tests course. There's loads more going on, so do check it out, and remember, of course, that everything else is also on demand.

Speaker 2:

What's going on in the news? Well, this week is the RCGP conference, so Kamila Hawthorne gave her first keynote speech as chair, and what was impressive about her speech was how real it was. Perhaps, if you've spent most of your career working in the Welsh Valleys, you can be nothing but real, but it did acknowledge the challenges facing us in general practice and the wider NHS, and called for the destruction of general practice and the demonisation of GPs to stop. The college set out a seven-point manifesto, which has the bold title of Seven Steps to Rebuild General Practice and Save the NHS. As they point out, without a functioning general practice, the NHS will crumble. There's already been some pushback from politicians. Wes Streeting from the Labour Party has already been critical of this approach and of this manifesto. He has other ideas about how to save the NHS, and continuity and innovation are top of his wish list. Well, both of those are much easier if you (a) have adequate funding and (b) adequate staffing, which is helped by (c) stopping the relentless tide of negative press stories about GPs in the media.

Speaker 2:

While my mind was on politics, the other story that caught my eye was in the BMJ, and this was about the push for menopause health checks for all women over the age of 40 in general practice. This is being pushed by an all-party parliamentary group. The RCGP has rejected this idea as unworkable in the current state of general practice, and as risking the over-medicalisation of a stage of life that many women successfully manage with lifestyle changes alone. Discuss. I'm sure you will all have your own opinions on this. I think there is no doubt that many women are significantly affected by the menopause and would benefit from a medical review and potentially some treatment. I also think there's no doubt that there's huge variation in healthcare practitioner knowledge about both identifying and managing menopausal symptoms. But screening everyone from the age of 40? That seems kind of bonkers. Could there be anything that's influencing these recommendations coming out from the MPs on this committee? Well, what's interesting is that this all-party parliamentary group is sponsored by two pharmaceutical companies that sell menopause treatments, and the secretariat, so whoever's doing the writing down, is provided by a lobbying company. Does anyone smell something fishy going on?

Speaker 2:

I was having a drink last night with a Greek friend of mine, and she'd recently flown back to Greece to vote in her hometown's local elections. She was explaining that some of the candidates had paid locals for votes. I was really shocked by this, but then she pointed out that in Greece this is called corruption; in the UK we call it lobbying. Discuss amongst yourselves. Right, on to the research, and we're going to start off with this BMJ paper.

Speaker 2:

The title is Rosuvastatin versus Atorvastatin Treatment in Adults with Coronary Artery Disease. The reason this caught my eye was that we were doing a Hot Topics course this weekend and, of course, we were talking about NICE guidance and lipid modification, and someone asked why aren't we using rosuvastatin? NICE, of course, recommends atorvastatin. Well, I thought that maybe it was a cost thing, but now rosuvastatin is off-patent and really very cheap. In my mind, rosuvastatin had always been sold as a little bit more effective at lowering cholesterol than atorvastatin, but we didn't prescribe it because it wasn't cost-effective, because it was still on patent and very expensive. So is it a better drug, and should we be changing practice? Well, this was a randomised, open-label, multi-centre trial.

Speaker 2:

It was conducted in South Korea between 2016 and 2019. 4,400 adults were recruited; they all had coronary artery disease, and they were given either rosuvastatin or atorvastatin. They were followed up for three years, and the primary outcome was a composite of all-cause death, myocardial infarction, stroke or any coronary revascularisation. It's worth just knowing the demographics of this study. On average, the participants were 65 years old, just over a quarter were women, and the mean body mass index was just under 25. That's pretty low, right? We typically associate cardiovascular disease with people who are more overweight, at least in Western countries.

Speaker 2:

13% were current smokers, and they had a range of comorbidities: two thirds had hypertension, a third had diabetes and 7% had CKD. This, of course, is a secondary prevention trial, so they'd all had some kind of coronary artery disease, whether MI, unstable angina or revascularisation. Just under 20% had asymptomatic coronary artery disease found via screening. This meant that the majority of them, 84%, were already on a statin prior to entry into the trial, most commonly a moderate-intensity statin. Funny how our perspectives become a bit skewed, because they class that as rosuvastatin 10 milligrams or atorvastatin 20 milligrams, and I now consider that a low dose. But that, of course, is substantially more efficacious than a lower dose of your old simvastatin or pravastatin.

Speaker 2:

They then went through a slightly complicated randomisation process, at least complicated to explain on a podcast. The bottom line is that if they were statin-naive, they probably ended up on rosuvastatin 10 or atorvastatin 20, and if they were already on statins and their cholesterol wasn't very well managed on their current dose, they went on rosuvastatin 20 or atorvastatin 40. All of the patients then had their cholesterol levels measured, aiming for a target LDL of less than 1.8. If they weren't achieving that, they up-titrated the statin. Interestingly, if they got to an LDL of less than 1.3, they down-titrated the statin.

Speaker 2:

Down-titration is a concept that we are certainly not familiar with in this country, and I don't think there's an evidence-based rationale for it. But that's probably based on more recent research; when they started this trial back in the mid-2010s, there was a lot more uncertainty about whether there is a level of cholesterol that is too low and might be dangerous. What they didn't do, if patients failed to meet their targets on statins, was add in a second agent like ezetimibe, because it would make it very difficult to interpret the findings. Discuss the ethics of that amongst yourselves. But I guess this just reflects our understanding of the issues at the time versus our understanding now, when we feel that intensifying treatment is appropriate. Gosh, that is a really long explanation of the methodology, Neal. Hopefully you're all still sticking with me, but I think it's probably important so that we can have confidence in the results. The most useful thing to know is that they were treating to a target, and those targets are the same targets that you and I would be using today.
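
To make that treat-to-target logic concrete, here is a minimal sketch in Python. It is illustrative only: the function name and dose-step structure are my own, not taken from the trial protocol, and the thresholds are simply the LDL figures quoted above (mmol/L).

# Illustrative sketch of the treat-to-target titration rule described above.
# Not the trial's actual protocol code; thresholds are the LDL values quoted in the episode.
def adjust_statin(ldl_mmol_per_l: float, current_step: int, max_step: int) -> int:
    """Return the new dose step for the next visit."""
    if ldl_mmol_per_l >= 1.8 and current_step < max_step:
        return current_step + 1   # up-titrate towards the LDL < 1.8 target
    if ldl_mmol_per_l < 1.3 and current_step > 0:
        return current_step - 1   # down-titrate, as the protocol allowed
    return current_step           # otherwise stay on the current dose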

Speaker 2:

So, for all that excessive explanation, what did the results show? No difference in the primary outcome between rosuvastatin and atorvastatin at three years. Both agents were effective at lowering cholesterol, and at the end of three years 55% of people in the atorvastatin group had got their LDL down below 1.8, while more in the rosuvastatin group achieved that, at 62.5%. So what's really interesting is that the greater reduction in LDL didn't translate into a greater reduction in cardiovascular events or mortality. Also, unfortunately for rosuvastatin, it came out slightly less well on the safety profile: it had a higher incidence of new-onset diabetes compared with atorvastatin, with 7.2% of participants versus 5.3% requiring the initiation of anti-diabetic medications, and there was also a slightly higher rate of cataract surgery, around 1% higher in the rosuvastatin group. What can we take home from this study, then? Well, in our patients who have established cardiac disease, rosuvastatin and atorvastatin are both effective options for secondary prevention. However, atorvastatin may be a slightly safer option, and I guess this supports the current NICE guidance, where we're using atorvastatin as first line.
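
As a rough sense of scale for that diabetes difference, here is a back-of-the-envelope calculation in Python using only the percentages quoted above; it is not an analysis from the paper itself, and the rounding is approximate.

# Back-of-envelope arithmetic on the figures quoted in the episode (not from the paper).
rosu_diabetes = 0.072    # 7.2% started anti-diabetic medication on rosuvastatin
atorva_diabetes = 0.053  # 5.3% on atorvastatin

absolute_risk_difference = rosu_diabetes - atorva_diabetes   # ~0.019, i.e. ~1.9%
number_needed_to_harm = 1 / absolute_risk_difference         # ~53 over three years

print(f"Absolute risk difference: {absolute_risk_difference:.1%}")
print(f"Approximate NNH over the trial period: {number_needed_to_harm:.0f}")

In other words, on these quoted figures, roughly one extra person per fifty or so treated with rosuvastatin for three years started anti-diabetic medication.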

Speaker 2:

Okay, next paper. This is in the Lancet, and it's the PATHFINDER study: blood-based tests for multi-cancer early detection, a prospective cohort study. We've been talking on the most recent Hot Topics course a little bit about genetic testing, in the context of checking for clopidogrel resistance after TIA and stroke. Well, here's another window into the future. There have been lots of advances in genomics over the last few years, and in this context it is possible to check for fragments of DNA circulating in the bloodstream to see if they share a genetic profile for one of 50 different cancers. The concept, of course, is very compelling.

Speaker 2:

Current screening programmes only cover a handful of cancers, each checked for separately, often at great time and expense, and we already have some data published on these multi-cancer early detection, or MCED, tests. The SYMPLIFY trial, which was a UK-based trial, published some data earlier in the year which its authors concluded showed positive outcomes of the test for cancer detection and its feasibility as a possible screening tool. This new study was conducted in America. They recruited 6,662 participants aged over 50, with or without signs or symptoms of cancer, from oncology and primary care outpatient clinics, and they were then tested with this MCED multi-cancer early detection test. Interestingly, the primary outcome was time to, and extent of, diagnostic testing required to confirm the presence or absence of cancer. I can't help but think that the primary outcome should actually be whether this test accurately identifies cancer or not.

Speaker 2:

But I guess we all appreciate that any test may have false positives and may require further investigation to clarify what on earth is going on. So, the results showed that 1.4% of the participants had a cancer signal identified. Now, that doesn't mean that they actually had cancer: 38% of them were ultimately diagnosed with cancer, so were true positives, and 62% ended up having no cancer diagnosis, so were false positives. The median time to confirmation of cancer in the true positive group was 57 days, and 162 days in the false positive group. I've made an assumption here that this was conducted in the private healthcare system of the US, so I'm pretty surprised that it took quite so long to actually get an accurate diagnosis of cancer in these patients. For all our current woes, maybe the NHS isn't that bad, although I suppose you could also argue that the US is not the healthcare provider that you wish to be benchmarking yourself against. Discuss amongst yourselves.

Speaker 2:

The conclusion of the authors is that this study supports the feasibility of multi-cancer early detection screening for cancer, possibly a slightly surprisingly upbeat conclusion, although maybe not surprising given that it is written by the authors of the study and the study is funded by the makers of the genetic test. The linked editorial, one of whose authors is from the UK and part of the National Institute for Health and Care Research, and so arguably very independent, is less upbeat. They make a number of observations. On the positive side, many of the patients were diagnosed with early-stage cancers. The specificity of the test is high at 99%, with a high negative predictive value of 98.6%. However, the sensitivity is poor, identifying only 28% of cancers, and the number needed to screen is 189. Now, while that would be considered pretty reasonable in terms of screening programmes, it doesn't suggest that it is going to be the miracle replacement for those programmes.
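
As a quick illustrative cross-check, not a calculation taken from the paper or the editorial, the headline figures quoted in this episode roughly hang together. Here is a short Python sketch using only the numbers mentioned above, with rounding.

# Rough consistency check using only the figures quoted in the episode (all rounded).
participants = 6662
signal_rate = 0.014   # 1.4% had a cancer signal detected
ppv = 0.38            # 38% of those signals turned out to be true positives

signal_positive = participants * signal_rate        # ~93 people with a positive signal
true_positives = signal_positive * ppv              # ~35 cancers picked up via the test
number_needed_to_screen = participants / true_positives

print(f"Signal positives: ~{signal_positive:.0f}")
print(f"True positives:   ~{true_positives:.0f}")
print(f"Number needed to screen: ~{number_needed_to_screen:.0f}")

That works out at a number needed to screen of roughly 188, close to the 189 quoted by the editorial; the small gap is just rounding of the percentages.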

Speaker 2:

Ultimately, the editorial calls for more research in this area, and those trials are underway, but I think it's going to be many, many years before we have a greater understanding of the true accuracy of this new technology and where it fits into modern medicine. Okay, moving on to our final bit of research today, we're going to be looking at a new BJGP article, the WHY Test study: an exploration of reasons for primary care testing, a UK-wide audit using the Primary Care Academic Collaborative. I'm lucky enough to have been joined for an interview by the lead author of the paper, Dr Jessica Watson, who is a GP and an academic researcher as well, and who has kindly shared some of the insights from the paper. Jess, maybe you could tell us just a little bit of background about yourself and the paper.

Speaker 3:

Thanks, Neal. Hi everyone, my name is Jess Watson. As you said, I split my time between clinical and academic work. I'm a practising GP in Bristol, or just on the outskirts of Bristol in a commuter town called Yate, for one to two days a week, and currently an academic clinical lecturer at the University of Bristol, which means I do a mixture of research and teaching work. My main research interest is around the use of tests, in particular blood tests, in primary care.

Speaker 2:

Given that those are your interests, why did you choose to do this specific piece of research?

Speaker 3:

Well, I think primarily it was inspired by my day-to-day clinical practice. Like most GPs, I often found myself at the end of a long clinical day opening up my pathology inbox, finding a heap of blood tests there and often looking at them and thinking, why was this test done in the first place? What are we looking for here? So there was that sort of frustration, and sometimes looking at tests done by colleagues where I thought, I'm not sure I would have done that test. So that was one part of it.

Speaker 3:

And then, a few years back now, in 2018, a colleague, Jack O'Sullivan, published an article in the BMJ that really caught my eye. He looked at rates of pathology testing in primary care in the UK and showed that between 2000 and 2015 there had been a three-fold increase in the number of tests. These are huge increases which clearly haven't been matched by a three-fold increase in the rates of the diseases that we're picking up. So something is happening and, taking into account the huge workload and workforce challenges we're facing in primary care, it felt like a really important thing to look at how we can optimise the use of blood tests. And the first thing you need to do, if you want to work out how to improve the blood tests we're doing, is to work out why we do those tests in the first place. So that's really where this idea came from.

Speaker 2:

So tell me a little bit about how you and the other researchers then went about trying to understand this a bit better.

Speaker 3:

The interesting thing here is that we've got really good data from databases such as CPRD, which looks at anonymised electronic health records, and we can look at things like the number of tests and those test results, as Jack O'Sullivan did. But the problem is that the reason for testing is very rarely coded. We might code the clinical problem, but we wouldn't code what the reason for testing was. So this is information that can only be extracted by getting clinicians to review those medical records and understand the decision-making that's going on. That's where this new academic collaborative, the Primary Care Academic Collaborative, or PACT, came in.

Speaker 3:

A colleague of mine, Polly Duncan, had set up PACT in 2019 and spoke to me about it and said, we're looking for studies that we could do using the PACT network, and it seemed to me like an ideal opportunity to explore this question.

Speaker 3:

The idea of PACT is that it's modelled very much on collaboratives that exist across other specialties; the surgical specialties have got lots of these academic collaboratives. The idea is to involve grassroots GPs and allied health professionals who aren't on traditional academic career trajectories, but who might be looking to work out how they can diversify or develop a portfolio career, because it can be very difficult to take that first step into academic primary care. So it was about allowing clinicians a way to dip a toe into research by taking part in a PACT project. And so, with the WHY Test study, what we did was ask clinicians working in their own practice to basically audit and review the notes of 50 patients who'd had recent blood tests and help us to answer this question of why the tests are being done.

Speaker 2:

When you want to find answers like this, you really need to go down to patient-level data, don't you? You actually need to be sifting through the notes. So tell me, what were the key findings from the study?

Speaker 3:

We got PACT members to look at the reasons for testing, and we also asked them to code other things: who had done the test in the first place, why those tests were done, and then what the actions were following on from testing. In terms of the headline figures on why tests were done, firstly, we had data altogether from over 2,500 patients, so although it was quite laborious we did manage to get a really good sample overall. When we looked at that overall, the commonest reason for testing was symptoms, in around 43% of patients. Within that category, non-specific symptoms such as tiredness were the commonest, but there was a range of other types of symptoms.

Speaker 3:

But, interestingly, monitoring was also a really big category: 30% of tests were for monitoring existing disease, and a further 10% for monitoring medications, so overall almost 40% were for monitoring, versus 43% for symptomatic testing. Then there were the other, smaller categories. Just under 7% of tests were done as a follow-on from a previous abnormal result, so testing begets more testing; it's sometimes called the cascade effect. And patient-requested testing was pretty rare, actually: only 1.5% of our overall sample of tests were done directly at patient request. Now, of course, that may underestimate the proportion where it was a contributory factor, but in UK primary care it's not the primary reason for testing for the most part.

Speaker 2:

Is it helpful, then? Is our strategy useful? So what did you reveal?

Speaker 3:

Yeah, so when we look at the outcomes of testing, that helps us to unpick that a little bit. I must emphasise this is exploratory work, and I hope it's going to be a starting point for lots more research and for more conversation amongst GPs and between GPs and patients. But when we looked at the outcomes of testing, 6.2% of tests overall led to a new diagnosis or confirmation of a diagnosis, and even when we looked at the patients who were having symptomatic testing, that 43% category, it was around 10% of them who then subsequently had a diagnosis. I guess for GPs that's probably not that surprising; we know that often we're ruling things out rather than ruling things in. But I suspect if you spoke to patients about that, they might be surprised, because certainly some of my qualitative work with patients suggests that patients have quite high expectations and are really invested in getting answers from their tests.

Speaker 2:

I think we often use tests as a mechanism of reassurance, don't we, for us and maybe for the patients themselves, but perhaps that suggests that it's not such a great tool in that situation.

Speaker 3:

Yeah, well, I think it's an interesting one. This study probably can't clearly answer the question of whether tests are worthwhile for the purpose of reassurance, but certainly there have been systematic reviews that I'm aware of that have looked at whether normal test results provide reassurance, and those reviews have concluded that no, a normal test result on its own isn't reassuring. I suspect it's the explanation and the GP's discussion that are important to provide the reassurance there. And when we looked overall at whether these tests were leading to a change in management, around half, or 48.8% to be precise, didn't lead to any kind of change in outcomes: they didn't lead to a diagnosis, a medication change, a referral or a follow-on test; there was no measurable change in outcomes. Again, I think for GPs that's not surprising at all, but I suspect the implication here is how do we share that with patients when we embark on testing, to ensure that they don't then feel let down, invalidated or frustrated when they receive the results?

Speaker 2:

I'm sure a lot of GPs at home will be thinking, well, sometimes I just do a blood test to finish the consultation, to get the patient out of the room, because we've reached some impasse in our consultation.

Speaker 3:

Tests have lots of different purposes. There are all the medical reasons that I've explored here, but there are non-medical reasons too. I did a qualitative study on what tests do for doctors, as we called it, and doctors did talk about using a test as a gift, as a way of showing the patient that they're taking them seriously, and I'm not saying that that can't be an important tool as well. But, like I say, particularly with the workload challenges that we're facing now and these massive increases in testing, I think we need to be stopping to think a little bit more before we click all those buttons.

Speaker 2:

Yeah, I mean, I see one of the other figures from your research was that around a quarter of the blood tests that we do were either partially or fully unnecessary. And yes, I guess on the one hand we've got those other non-clinical reasons why we might end up doing a test. On the other hand, we've got this reality that maybe what we're doing is just deferring pain. Maybe you need to do that sometimes, but actually, when we think about workforce pressures, we're probably just contributing to them a bit. And then, of course, you've already mentioned this cascade effect, where the more tests you do, the more likely you are to find minor abnormalities, incidental findings that are not actually clinically relevant. But of course you don't know that, and then you have to go on and do more testing to try and figure out what on earth is going on.

Speaker 3:

You know, another thing that was highlighted here was that only around a quarter of the tests were completely normal, as in every single test in the panel was within the reference range. Now, as clinicians we would be fairly confident with these borderline results and would expect them to come back to normal, but looking forwards, very soon all our patients will be looking at their test results online. There's a real risk that if we're doing tests, we're generating more of these borderline results, and patients looking at those results themselves can again contribute more and more to the workload problems that we're already really struggling with.

Speaker 2:

That's a really, really important point. As patients get more and more access to their records, I'm sure they're going to be picking us up on what we write in the consultation. They're becoming more and more savvy at looking at their blood results and questioning the results, aren't they? So yeah, that could be the next big explosion in our workload, and that may be another big driver for us to tighten up on how we're doing this. Jess, what next, then? As you said, this is an exploratory piece of research. What are the other areas of uncertainty that you think we need to look into, and maybe what might you and your team be exploring?

Speaker 3:

It's opening up lots of avenues, really, and I think one of the big things to highlight is that there's a real lack of research to guide us. We've talked about whether tests are necessary or not; in reality it's going to be shades of grey, isn't it, more or less necessary tests. But often we aren't able to work clearly to guidelines and we aren't working within an evidence base here, whereas when it comes to prescribing there are always trials and guidelines to tell us who should and shouldn't receive medications. With blood tests the evidence is often much more thin on the ground. So one of the things I'm keen to do is something similar to the work on potentially inappropriate prescribing, where we've got lots of research looking at markers of inappropriate prescribing and guidelines like the STOPP/START criteria. We don't have anything like that for testing, so I want to look at how we can define what a potentially inappropriate test is, how we can measure that, and how we can feed that information back to practices so that they can help to optimise their test use.

Speaker 3:

I think another big aspect, like you just highlighted now, is how we communicate test results, and particularly how we do that within the changing landscape. We've got text messages, we've got online access, we've got the NHS app, and how can we make sure that we communicate tests safely to patients and reduce the risks of generating anxiety and further workload?

Speaker 3:

And then I guess the other big area that I'm really interested in is PACT and the whole idea of using a collaborative approach to research like this. This was the first study using the PACT network; we've shown that it can work and we've developed a bit of a blueprint, and we're really keen to use that network for future research studies, to explore other important questions for primary care and to do it in a different way, in a way that engages GPs and gives back to those GPs who participated. One of the things we did with the WHY Test study was to give all participating practices a practice report, which gave them their own results benchmarked against every other practice, and we got really positive feedback that practices had used those reports: they discussed them in practice meetings and used them for quality improvement or teaching activities, engaging with trainees, and looking at protocols to standardise testing or to standardise how they communicated tests. We're really keen to use a similar model going forwards in other areas.

Speaker 2:

And if clinicians were interested in joining PACT and trying to help do some of this research, is there a way that they can do that?

Speaker 3:

Yeah, yeah, it's really simple. We've got a new website, which is gppact.org. You can go onto the website and in the top right corner there's a little button, Join PACT. If you click on that and join PACT, you'll receive a monthly newsletter and be kept up to date about all the future projects. And I can give you a sneak peek into the next plan: we're planning early next year to launch a new study which will be looking at the hot topic of hidden workload. So I hope that's something that most GPs will be interested in; it'll be something that's fairly easy to take part in and hopefully will give us some really important results as well. So do click on our website and get involved.

Speaker 2:

I'm sure there will be a lot of interest in looking into that, so I will put a link to the PACT website in the podcast description. If you are interested in taking a look at that, please do, and get involved. It's been fascinating talking to you, Jess. Thank you so much for coming on the podcast today. I look forward to hearing about your next piece of research.

Speaker 3:

Thanks so much, Neal. Really great to join you.

Speaker 2:

That's a wrap, everyone. Thanks for joining us on the Hot Topics podcast once again. We'll be back in three weeks with more news, research and maybe some more interviews. And don't forget, in the meantime, check out the nbmedical.com website for our upcoming courses, and check out NB On Call so that you can carry on the conversation. Most importantly, just make sure you have a little bit of time to have some fun for yourself. See you later. Bye-bye.