CGT Potency Assays: When Will We Wake Up From The Nightmare?
By Anna Rose Welch, Director, Cell & Gene Collaborative
“Ask not what your potency assay can do for you, but what you can do for your potency assay.”
I’ll admit, my version of this quote doesn’t quite have the same amount of patriotic verve as the original. But this was the phrase that came to mind as I listened to an incredibly thorough potency assay working session put on by the Alliance for Regenerative Medicine (ARM) & American Society for Gene & Cell Therapy (ASGCT) for FDA and ATMP industry members. As you’ll note, the agenda for this workshop encompassed several critical arenas of discussion, including — but not limited to — the perceived and/or lived experiences in the development of potency assays and what the industry and agency both need to do more or less of in ironing out this challenging analytical hurdle.
For those of you who may have missed the live stream, never fear: the insights shared during the five hours will be captured in a forthcoming ARM/ASGCT white paper. In fact, ARM and ASGCT have already compiled a three-page summary capturing some of the most meaningful thoughts/quotes.
As a (hopefully) complementary effort, I also offer below my biggest takeaways from this deep dive into one of the most popular topics to gossip about both in and outside the analytical labs. In part one of this two-part article, I start by unpacking what I see as the most prominent knowledge gaps and questions underpinning the industry’s potency assay challenge. Part 2 will share some of the best practices for tackling these knowledge gaps.
Why Potency Assays Remain The Industry’s “Monster Under The Bed”
We hear and read regularly that potency assays in the ATMP space are “hard.” They’ve been cited so regularly as a source of industry distress that I’d argue this assay/matrix has become the dreaded “monster under the bed” of the ATMP space.
We’re all familiar with the nightmarish tales. If you don’t have a potency assay in the works early in development, you’re in big trouble. But even if you do have a single potency assay or the makings of a potency assay matrix early enough in development, legend has it that one or multiple things will inevitably happen down the line. The FDA will decide it doesn’t like what you’ve chosen, and your clinical trials will be delayed, your BLA stalled, and/or your executives forced to resign.
As much as I’m being facetious, each of the scenarios mentioned above has happened at least once in the ATMP space thanks to potency assay shortfalls. But outside of blanket statements about the fickleness of biology or vague press releases alluding to project-halting regulatory disagreements, it’s rare we learn why establishing a potency assay or matrix has been so difficult.
So, for the sake of clarity, I’d like to start by spelling out the big picture behind why we’re so challenged to quantitatively demonstrate the potency of our products today. Getting these factors out in the open first helps us understand why the phrase “it’s only temporary” can be applied to our current potency woes.
The root of all our problems comes down to one critical fact: We don’t know (yet) what matters structurally and if/how it influences our treatment’s function. In turn, it’s infinitely more difficult to know what to measure and which of our measurements are most meaningful/indicative of our products’ function(s).
I dare say everyone is familiar with this famous Far Side cartoon: Two mathematicians are discussing a complicated 3-step mathematical equation in which step two is, simply, “Then a miracle happens.” In a lot of ways, figuring out our therapies’ mechanisms of action from the “black box” of our molecules has left R&D and analytical teams believing in miracles, much like our Far Side mathematical friends. We may know what we want our products to do in vivo, but how our products effect that change remains much less clear-cut.
Overall, I’d argue that understanding our therapies’ structure and function — including their potency — requires mastery of two equally important skill sets. While these skills are developed in parallel, I chose to separate them here for maximum clarity:
- First, we must have the analytical proficiency to grasp the if, why, and how of establishing and (eventually) narrowing down a potency assay matrix.
- Second, we need to become fluent in understanding the intricate system of biological cause and effect that occurs in vivo when we administer our therapies. In turn, this will enable us to better manage and mature the analytical “logistics” over the long term.
From “Nice-To-Know” To “Must-Know”: Establishing & Evolving An Assay Matrix
There are three big “tools” we rely upon to become analytically and biologically smarter, though using these tools sufficiently requires making friends with time, which is often our and our patients’ greatest enemy. These tools are extensive analytical characterization, short-term clinical data from trials, and long-term clinical data post-approval — the last two of which are currently limited in scope and scale.
That leaves us with analytical characterization, both prior to and during clinical development. It’s commonly emphasized that ATMP companies need to be more gung-ho about deep characterization of their products from the start. As one SME explained in this previous article, we often overestimate just how well we analytically understand our products. It’s not unusual for our knowledge to fall victim to steep time and resource constraints. We may also lack access to next-gen analytical tools, or fear being “locked” into using them long-term by regulators. (FYI, I’ve heard many a time that this fear is predominantly irrational. In fact, the FDA even acknowledges in its potency assay guidance that not all tests will be practical for release, a fact which at least suggests that industry and agency are currently living in the same universe.)
It goes without saying that there are a lot of merits to undertaking a more earnest analytical exploration of a molecule early on. Of course, it helps you start figuring out what you’re working with from an identity, purity, and potency standpoint prior to and throughout early clinical development. But it also educates regulators (which is essential for more practical, straightforward regulations moving forward) and can help you justify future development-streamlining proposals to the agency.
However, as we also know well, ATMP products are biologically complex, multicomponent products. Behind each gene therapy, ex vivo or in vivo gene-editing therapy, or cell therapy, there is a combination of critically important drug substances (e.g., mRNA, viral vectors, nonviral vectors, etc.) and critical raw materials, and the agencies’ quality expectations for many of these components are still not well understood or harmonized. In turn, being “gung-ho” about deep characterization early can also leave us up a river with too many paddles and no sense — or a misguided sense — of which will get us home the most efficiently and safely.
Put another way, just because we’ve learned to measure something reproducibly doesn’t mean it is important to our product’s clinical performance or that it’s demonstrative of the overall quality of our molecule/process.
Such proactive analytical efforts also give rise to another important question: How many assays should or will be expected as release assays for our drug product (as opposed to remaining characterization assays)? Since potency is our topic du jour, this question becomes a bit more specific: How many and which assays should be expected to be part of a potency matrix for release? In fact, I’ll even pose this delightfully controversial question: Is the potency assay matrix here to stay in the long run, or will it eventually go the way of the dinosaurs as we get smarter biologically?
These were just some of the many analytical questions posed and debated (but never fully answered) during the ARM/ASGCT potency assay working session. As these questions above also remind us, our analytical paradigms evolve over time. As we learn more, we can eliminate or demote less meaningful assays in favor of performing, qualifying, and eventually validating only the most important. For a good visual of this process, I’d refer you to this slide from an FDA presentation depicting the FDA’s definition of a poorly designed & a well-designed incremental potency assay framework.
FDA visuals aside, however, prioritizing assays to validate is much easier said than done today. As numerous comments throughout the ARM/ASGCT working session revealed, there is no consistent definition of a potency assay matrix, nor is there a clear-cut framework for “pruning” a large collection of exploratory assays down to a single assay or to a smaller set of the most meaningful assays. Multiple commenters accordingly emphasized the need for a more straightforward definition of the potency assay matrix. We could also use more guidance spelling out how companies can rank their assays in terms of criticality for all CQAs, potency included. Such guidance would help the industry separate the must-have assays for lot release from the helpful-but-not-necessarily-essential characterization assays. It could also clarify the presumed-to-be laborious process required to remove certain assays from the list. (As this session also revealed, the pharma industry is not immune from regulatory gossip.)
It’s A Bird! It’s A Plane! It’s… The Most Meaningful Step In The Biological Cascade?
While I tried to prioritize some of the questions around the practical “logistics” of establishing a meaningful analytical platform, we can’t divorce the analytical from our therapeutics’ biological/clinical performance.
The ARM/ASGCT working session made it clear that many in the industry remain overwhelmed biologically, and, in turn, analytically today. However, the discussion did home in on one milestone that could be revolutionary for streamlining analytical development — particularly the demonstration of potency — in the future. It all comes down to greater understanding of each therapy’s “biological cascade.” In particular, we aspire to identify which single step within that cascade is the most important for determining the potency of that product. In turn, we could potentially narrow down which aspects need to be demonstrated analytically to support potency for release (e.g., infectivity, expression, function). This could also lend itself to a more platform-centric — and, in turn, a more precise, concise, and accurate — approach to potency testing for each type of product, as opposed to recreating the analytical wheel for each individual ATMP. For now, however, the questions and opinions around how our products function in vivo and how we can/should identify and measure their potency are numerous (and, yes, modality-, product-, and indication-specific).
One of the most complicating factors for the potency-related analytical tasks at hand is, arguably, the FDA’s definition of an ATMP’s MOA, which is ultimately what our potency assay is supposed to represent (page 6/17). To the FDA, MOA = “relevant therapeutic activity” or “intended biological effect.” But as we also know, there often can be multiple “measurable” steps contributing to that final therapeutic effect (e.g., gene transfer, protein expression, the activity of that expressed protein).
This raises some key questions: Are all of these steps created equal? Do they all actually need to be measured as a function of potency? In particular, when functional activity can be demonstrated quantitatively, do infectivity and expression also need to be measured? Though the FDA would ideally like to see a functional activity assay demonstrating potency for release, is this an appropriate “blanket” expectation? In which situations might a functional activity assay not be needed or appropriate — and why? Could expression ever be sufficient for release? (The mRNA nerd in me loves this question.)
Some attendees of the ARM/ASGCT workshop complicated matters further by pointing to a seeming lack of harmonization between EMA and FDA guidance as it relates to potency expectations. For example, the EMA emphasizes the importance of measuring infectivity and expression for potency, accompanied by functional activity where possible. In fact, Pfizer itself was no stranger to the differences in global regulatory opinion that can arise over potency assay development.
Receiving a straightforward answer to any of these (or other) questions could have revolutionary implications for our analytical development moving forward. The ability to align around a single potency assay — when possible — could promote much more reliable, reproducible, and focused analytical work for companies in the long run. However, we also cannot forget that straightforward answers typically come after a lot of exploration and experience. Straightforward answers are also dependent upon the totality of the data presented and the strength of a company’s justifications. We as companies — and we alone — are the experts on our individual products, not the FDA.
As the previous paragraph should have hinted, there are several things we as an industry need to do — or get better at doing — to improve our analytical and biological understanding of our ATMP products. In part 2, I share four of my biggest takeaways from the ARM/ASGCT potency assay working session.
If you liked what you read, sign up to receive my bi-weekly newsletter — ARW’s C&G+RNA Manufacturing Must-Reads. This newsletter features a variety of content — my own included — that best portrays the challenges and evolutions impacting the ATMP manufacturing space.