Okay. Well, I’m very glad to be able to participate in this webinar. What I’m going to do is talk about science issues in drug repositioning, and I’m going to put them in the context of the problems we have nowadays in drug discovery, because there are really major flaws in our biology knowledge. We know that 92% of drugs fail in clinical studies; that’s the most recent BIO report. And as shown just below the bullet, I don’t have to repeat this: this is simply economically unsustainable. Over the last 30 years we’ve had this predominant drug discovery model, the mechanism approach: identifying a target mechanism and then a single compound superbly selective for that target. And it’s that approach, in general, that is giving us this very, very poor success rate in the clinic.
Now the question in drug repositioning becomes: if you use the same target mechanism approach, what’s the efficacy problem going to be like? I would guess that, to some degree, going after a target mechanism with a single compound gives you the same problem in drug repositioning that you have in de novo drug discovery. So that’s a caveat to put out there: if you’re using this target mechanism approach, drug repositioning doesn’t get you away from the same problem we’ve had for the last 30 years.
Now, when you discover a new use for a compound, it’s an interesting and important question whether the new mechanism is on target or off target; that is, is it related to the original mechanism of the compound, or is it perhaps related to something new? Why would that be important? Because if it’s off target, the compound may not be optimized for that mechanism, so you might have to do some additional chemistry, whereas if it’s on target you might not.
Now, it’s really important to appreciate that few, if any, approved drugs have a single target. To describe a compound as acting through a single target is a tremendous simplification; in fact, I would say almost no clinically useful drug has a single target, and the truth is probably that if we looked hard enough, we would find multiple mechanisms for that clinically useful drug. That has fed into this whole rise of polypharmacology that’s now hitting the medicinal chemistry community and the basic biology community in terms of screening. That is to say: if we know that so many clinically useful drugs attack more than one target, why not do this deliberately and screen for multiple activities? And that can be done in a whole variety of ways.
Now, when you’re probing for a new medical use for a compound, what’s the proportion of on target and off target? Melior Discovery, the company I’m associated with, has pretty good data here, in the sense that its model for discovering a new use for a drug is totally mechanistically unbiased. You run the compound through phenotypic screens, you see what happens, and only afterwards do you ask: is there, based on the literature, any possible link between the new phenotypic observation and the original mechanism? First of all, the success rate in this sort of phenotypic screening is about 30%; that is, about 30% of clinical-phase drugs turn out to have a new use. And using those literature observations linking the phenotype to the original mechanism, about 90% of the new uses are on target. That’s based on a fairly large database Melior Discovery has built from studies of drugs from the companies Melior is collaborating with.
Okay, so 30% of clinical-phase drugs have a new use. Combined with the fact that 90% of those new uses are on target, what that says is that there are really major flaws in our biology knowledge. We take a compound forward with, let’s say, a primary mechanism, we predict that it does one thing, but when we test clinically we find it does other, unexpected things, because we didn’t realize that mechanism was linked to other potentially useful effects. So although we like to think we’re smart and understand what we’re doing, there are really major flaws in our biology knowledge.
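Taken at face value, the two Melior percentages combine into a striking figure. This is a one-line illustrative calculation using only the round numbers quoted in the talk, not precise data:

```python
# Combining the two rates cited above (illustrative round numbers only).
new_use_rate = 0.30        # ~30% of clinical-phase drugs show a new use in phenotypic screens
on_target_fraction = 0.90  # ~90% of those new uses relate to the original mechanism

on_target_new_use = new_use_rate * on_target_fraction
print(f"{on_target_new_use:.0%} of screened drugs show an unanticipated on-target use")
# → 27%
```

In other words, roughly a quarter of clinical-phase compounds have an on-target effect that the original biology failed to predict, which is the quantitative heart of the "flaws in our biology knowledge" point.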
Now, this slide has to do with network biology challenges. The first bullet really blew me away when I first learned about it: 85% of mechanistic blocks do nothing. For example, in the yeast literature, if you take a signaling pathway and completely block one step, 85% of the time you can’t observe anything happen. Why is that? Because of network robustness: when you block a single step in a signaling pathway, it is easy for the signaling network to bypass the block. But if you block at multiple sites, then even with a more modest block at each site, the efficacy, the ability to observe something, goes up tremendously. Again, this contributes to the rise of polypharmacology, because blocking at two spots, two loci, in a signaling pathway works better than blocking at one. And this whole issue is not theoretical; it really plays out. A good example from the last year or two is the whole story of p38 alpha blockers, kinase blockers intended for the treatment of inflammation.
Now, there have been about 20 clinical studies here, and the efficacy is either transient or non-existent, while the toxicity, which was hoped to be low, is significant; it would have been acceptable had the compounds been efficacious. So here’s an example of a kinase signaling network where people logically said, “Look, let’s go really far downstream; if we go downstream with a very selective compound, we minimize the probability of off-target effects and maximize the chance the compound won’t have toxicity.” And it just didn’t work. The current explanation being bandied around is that because the block was so far downstream, the signaling network can bypass it, and that to get efficacy on inflammation endpoints you may need to block higher up in the kinase signaling cascade. So again, people went in in good faith but with a lack of biology knowledge. It is easy now to say, “Yes, this is what should have been done,” but there have been almost 20 clinical trials.
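The bypass logic behind network robustness can be sketched as a toy graph traversal. This is a minimal, purely illustrative model; the node names and topology are invented for the sketch, not any real pathway:

```python
# Toy illustration of network robustness: redundant routes from a
# receptor to a downstream effect mean a single block can be bypassed.
# Topology and node names are hypothetical.

def effect_reachable(edges, blocked):
    """Depth-first search: can a signal still travel from 'receptor'
    to 'effect' when the nodes in `blocked` are fully inhibited?"""
    stack, seen = ["receptor"], set()
    while stack:
        node = stack.pop()
        if node == "effect":
            return True
        if node in seen or node in blocked:
            continue
        seen.add(node)
        stack.extend(edges.get(node, []))
    return False

# Two parallel kinase branches converge on the same downstream effect.
edges = {
    "receptor": ["kinaseA", "kinaseB"],
    "kinaseA": ["effect"],
    "kinaseB": ["effect"],
}

print(effect_reachable(edges, blocked=set()))                    # True: baseline signaling
print(effect_reachable(edges, blocked={"kinaseA"}))              # True: block bypassed via kinaseB
print(effect_reachable(edges, blocked={"kinaseA", "kinaseB"}))   # False: blocking two loci works
```

Blocking one branch changes nothing observable, which is the 85% case; only the dual block severs all routes, which is the intuition behind deliberate polypharmacology.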
So network biology is really in its infancy. Given this horrible efficacy failure rate, how do you get around it? Given the infancy of network biology, you go to a commonsense position: you’re most likely to get a validated target, and clinical efficacy, in a rich biology scenario. I think that’s a big factor in the rise of academic collaborations, because where are the bulk of the basic biologists supported by NIH R01 grants? They’re in academia. So if you want to get into a rich biology scenario, you really have to have collaborations with academics.
Now, this slide, or the next one, illustrates this lack of biology knowledge, and what tremendously frustrates basic biology researchers and medicinal chemists like myself. You have here a signaling cascade, and maybe two ligands, D1 and D2, and it looks like maybe you can develop some antagonists; things look very simple, and this is what the project looks like when you start out. And this is what it looks like 10-15 years later: incredibly more complex, far more signaling routes, a whole variety of bypass routes you never even thought of, and interactions, all of which bedevil clinical efficacy and make it so hard to predict from the basic biology whether you will actually have an effective compound in the clinic.
So what are the actual drug repositioning advantages? Well, if you develop a known drug for a new use, one of the most obvious is that you bypass the approximately 80% preclinical failure rate. We talked a few slides back about the fact that only about 8% of drugs in the clinic pass; but what about preclinically, before compounds get into the clinic? Generally speaking, a 75-80% preclinical failure rate is about par for the course. So if you start with something that has already survived the preclinical phase and got into, say, phase two, you’re largely bypassing that 80% failure rate. And because it’s a compound that’s been taken forward, you’re going to bypass much, not all, but much, of the clinical toxicity failure rate. But will you bypass some of the efficacy failure rate? That’s debatable. In many ways I think you won’t, except that the mechanism approach has seemed to fail more and more as time goes on; that is, we have this low-hanging-fruit hypothesis. To the extent that there is low-hanging fruit in orphan diseases or neglected diseases, maybe it’s not going to be so bad. But if the targets become more like those we’ve had in the last ten years, then in drug repurposing, if you’re starting from a basic biology viewpoint, you’re going to have the same failure rate we’re getting in the clinic today.
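The back-of-the-envelope arithmetic behind this advantage can be written out explicitly. These rates are the approximate round numbers from the talk, not precise industry figures:

```python
# Illustrative success-rate arithmetic using the talk's round numbers.
preclinical_success = 0.20   # ~75-80% of compounds fail preclinically
clinical_success    = 0.08   # ~92% of drugs fail in clinical studies

# De novo discovery must survive both stages in sequence.
de_novo = preclinical_success * clinical_success
print(f"de novo overall success: {de_novo:.1%}")   # 1.6%

# A repositioned clinical-phase compound has already survived the
# preclinical stage (and much of the clinical toxicity risk), so roughly:
repositioned = clinical_success
print(f"repositioned success: {repositioned:.1%}")  # 8.0%

# Improvement factor just from bypassing the preclinical stage:
print(f"improvement: {repositioned / de_novo:.0f}x")  # 5x
```

Even this optimistic sketch leaves the efficacy risk untouched, which is exactly the caveat in the paragraph above: repositioning de-risks the preclinical and toxicity stages, not the biology.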
Now, another advantage of drug repurposing: a drug that does something means a perturbable signaling pathway. If you start with something that did something in clinical trials, it means it perturbed a signaling pathway in humans enough to give an observable effect, which means something else may also be observable. Also, in repurposing you need to avoid the signaling extremes. The isolated nodes, you definitely don’t want those; and the very dense nodes, you also potentially want to stay away from, for most indications. There’s literature from Schreiber showing that natural products, for example, tend to attack dense nodes in signaling pathways. That might be quite okay for a cytotoxicity application, something in cancer perhaps, but may not be so useful in the treatment of a chronic, non-life-threatening disease.
And as alluded to in Barry’s talk, this is the era in drug repurposing of a rise of portals and collaborations. We have portals to drug company compounds and data, for example the CTSA portal, a potential mechanism for sharing published-on compounds from industry collaborations with major pharmaceutical companies like Pfizer, GSK, and Novartis. We have portals to literature structures and data, for example the EMBL databases out of Cambridge, UK, and most recently, from the NIH, the NCGC (NIH Chemical Genomics Center) NPC Browser, a compilation of a variety of drug classes, including approved drugs, with public domain chemistry data. It’s a great start, but it shares the problem of all public chemical databases, which is curation: there are always structure errors in public data, so it’s a work in progress. And finally, we have collaborations between academia, pharma, and government. Barry talked about CDD, and we have the new NCATS, the NIH translational center, which is also heavily involved in drug repurposing and drugs for neglected diseases.
Now, I just want to talk briefly about phenotypic screening advantages. This is a very nice graphic from this month’s Nature Reviews Drug Discovery, a paper by Swinney and Anthony, which looks at first-in-class drugs and follower drugs and how they were discovered. You’ll see that for first-in-class drugs, in this tan column, phenotypic screening works really well, actually better than target-based screening, even though this is a period in which target-based screening has tremendously predominated. Once you discover the first-in-class drug, then, yes, you go to target-based, mechanism-based screening. So this is something to remember in an era when many people are still fascinated with target-based screening and think it’s the only way to discover a drug: the historical record says phenotypic screening works really well for discovering really new types of compounds.
Now, Barry mentioned the Sean Ekins publication on tuberculosis, and I just want to mention that even in tuberculosis there are limitations. If you want to reposition a drug for TB treatment, as an example: bacteria have evolved to be impermeable to drugs. Gram-positive bacteria are more permeable than gram-negative, and mycobacteria, the agents of tuberculosis, are the least permeable. And target-based antibacterial screening is an absolute disaster; it’s a poster child for failure, and this is published work from both GSK and Pfizer.
Now, what’s the problem? Mycobacteria have all kinds of efflux pumps, so you might think about trying to block those efflux pumps. But again, we know from at least 20-25 years of work that pump inhibitors, which were tried in the cancer area as a way of improving the sensitivity of tumors to chemotherapy, are a clinical disaster. Two compounds did get into phase three, but nothing ever came of it. So the guess would be that this would be very, very difficult to do in tuberculosis. And there are absolutely no predictors for mycobacterial penetration. What this suggests is a limited impact of data-driven approaches on real TB drug discovery, and it says you need to know the scope and limitations of predictive methods and their relative applicability domains. So I think drug repurposing is probably easiest on the front end, at very early discovery, and the further you go in vivo, the further you go into the clinic, and the further you try to round up money to get compounds into clinical trials, the harder it gets.
And so my last slide, the summary slide: drug repositioning is a very fast-growing area with a lot of potential, as you’ve heard yesterday and in the talks today, but you need to be realistic, realistic and a bit optimistic. You don’t want to expect miracles. And thank you very much.