One of the best and toughest parts of being a science writer is acting as a kind of jargon liaison. Weird, obscure, aggressively multisyllabic words appear in scientific discourse; I, wielding nothing but a Google Doc, a cellphone, and the Powers of the Internet™, wrest these terms from their academic hidey-holes and try to pin them down with some endearing yet accurate analogy. If I do my job well, sometimes readers never even need to see the original word, because there’s a more approachable way to describe it. In a lot of cases, that’s how these words move—from academic to journalist to reader. (Hi there.) But sometimes the words leapfrog me. And that’s when I panic. I have panicked a lot in this way during the pandemic. The coronavirus has prompted a huge shift in the ways we talk with one another, and about one another. That’s what people do in crisis: We borrow, massage, and invent words to make sense of what’s happening around us. But this most recent go-round has involved a lot of linguistic “leakage,” the linguist Elena Semino told me last month. “All of a sudden, something for a professional community is being used for everyone.” We’ve had to assimilate a whole slew of terms from public health, immunology, and medicine, some of them totally foreign (cytokines, positive predictive value, R-naught), others more familiar but with colloquial and academic meanings that at least partially conflict (bubbles, breakthroughs, boosters). The transition doesn’t always go smoothly, and confusions and misunderstandings, much like contagion, are very hard to rein in once they’ve started to spread. By now a lot of our pandemic verbiage has been misconstrued. Last week, I asked experts, friends, family, and colleagues what field-hopping terms or phrases had been causing the biggest headaches this past year; the recommendations came pouring in. 
What follows is by no means comprehensive, and probably represents a futile exercise in refining and redefining: The horses have left the barn, the ships have sailed from the harbor, the words have already slipped through my fingers like so much semantic sand. But I suppose I will continue to grasp at them, until they have escaped me entirely. Let’s start with asymptomatic, which scientists use to denote infections that never make people feel sick. Seems simple enough. But many who start off their infection symptomless might not stay that way, and until someone is rid of the coronavirus, it’s impossible to say whether they’re asymptomatic or presymptomatic. The boundary between no symptoms and symptoms is also surprisingly fuzzy. COVID-19 symptoms vary enormously from person to person, and are somewhat subjective: A headache two days after a positive coronavirus test could be a COVID symptom or an ill-timed hangover. Truly silent cases, though, are detectable only through a test that hunts for bits of the coronavirus. These infections don’t count as COVID-19, a term that’s supposed to be reserved for a documentable, symptomatic disease that unspools from a subset of SARS-CoV-2 infections. The virus, SARS-CoV-2, is what actually infects us, what actually transmits, what tests actually detect. Not COVID. (I am screaming into a void here, but that also means there’s no such thing as a COVID test, and there’s no such thing as asymptomatic COVID.) Okay, fine. Say you do test positive for SARS-CoV-2, and you lose your sense of smell, and your nose is kind of running a bit—you have straight-up symptomatic COVID. Maybe the person you mingled with unmasked a few nights ago does too, but they’ve got chills, nausea, and a high fever that will wreck them for weeks. Surprise! Both of you have mild COVID-19, a euphemistic term that’s still commonly used to describe all cases too “inconsequential” to land someone in the hospital. 
(At that point, a case is “severe.”) Mild might be useful for collecting population-level data, but a lot of experts dislike the adjective because it elides the debilitating and sometimes very lengthy illnesses that can unfurl from a SARS-CoV-2 infection, including long COVID. From the beginning, it’s been clear that “there’s mild, moderate, and severe, even for outpatients,” Sri Edupuganti, an infectious-disease physician and vaccinologist at Emory University, told me. Whichever direction the pendulum swings, for the first few days after your symptoms start, you’re going to be in … quarantine, right? Sadly, no. Two years into our run with COVID, that’s still one of the terms we most commonly mess up. Correctly used, quarantine describes the period of time when people who think they’ve been exposed to SARS-CoV-2 are supposed to cloister themselves—a precaution in case an infection manifests. If you know you’re infected, thanks to, say, a positive test or legit COVID symptoms, you’re going into full-blown isolation. (Unless you’re in the United Kingdom, where they apparently play it pretty fast and loose with these terms and “use them interchangeably,” Saskia Popescu, an infection-prevention expert at George Mason University, told me. Woof.) To confuse matters further, we have also adopted quarantine as a catchall moniker for somewhat sheltered pandemic life, or lockdown-lite. (Just check Google for 8 trillion listicles on quarantine cats, quarantine TV shows, quarantine meals, quarantine quarantinis …) Part of this obsession is probably cultural baggage: If Americans heard quarantine before the pandemic, it was usually in foreboding contexts—outbreak-centric history texts, or the plot twists of Contagion-esque sci-fi thrillers. (We have, after all, been using the term for centuries, since at least the time when ships arriving from plague-stricken countries were cordoned off for 40 days before docking—hence the quar- prefix.) 
Isolation is a much more well-worn term, something we’ve all gotten at least a taste of; it lacks that only-in-crisis allure. Quarantine—quarantine!—sounds way worse. We’ve struggled with cheerier words, too. The prospect of being fully vaccinated, for instance, is pretty appealing. Our COVID shots substantially reduce the risk of getting infected or seriously sick with SARS-CoV-2, and slash the chances that the virus will be passed on to others. But oh boy, is fully vaccinated also a nightmare to define. For starters, being fully dosed isn’t the same as being fully immunized, since it takes a couple of weeks for immune cells to learn the contents of a shot and react. (Even the professionals use this one in a confusing way: The CDC counts people as fully vaccinated the day they receive their second dose of Moderna’s or Pfizer’s vaccine or their first of Johnson & Johnson’s, but says they aren’t “considered” fully vaccinated until two weeks after that.) The rise of third doses and booster shots has also made the concept of full vaccination quite a bit squishier. If these additional shots are meant to build iteratively on prior defenses, does that take us to … fuller vaccination? Super vaccination? Or did we at some point get less full? (For now, at least, you don’t need a third dose or a booster to be considered fully vaccinated.) Fully also implies completeness, even invulnerability, when no vaccine in existence can ever confer such a thing. That sounds like a bummer, but SARS-CoV-2 infections among the vaccinated are entirely expected—especially because our shots were designed to help us stamp out disease, not eradicate all positive test results. It’s unfortunate, then, that we’ve spent months wringing our hands over breakthroughs of all severities. The term breakthrough has an established history in vaccinology—counting up these events is necessary to know how well inoculations are working in and out of trials.
But because of our fuzzy understanding of vaccine effectiveness, the word’s use in pandemic times has become much more doom and gloom, with some reports even equating breakthroughs with vaccine failures. That’s absolutely not the case. Consider the CDC’s definition for a SARS-CoV-2 breakthrough: any test-based detection of the virus in someone who’s been fully vaccinated against the coronavirus. This dumps an enormous range of postinoculation outcomes into the same category, everything from exceedingly rare hospitalizations and deaths to totally silent infections that would’ve gone unnoticed if not for that choicely timed test. Simply receiving a positive test result does not guarantee that a person will experience disease or spread the virus to someone else. For these reasons, a lot of experts have sworn off using the term breakthrough—and wince noticeably when it comes up in conversation. (Many prefer post-vaccination infection.) If the terminology of breakthroughs has been exaggerated toward the negative, the discourse around natural immunity might be its overhyped foil. Natural immunity is another foster-phrase; long before the pandemic started, scientists used it to describe the protection left behind after an infection by a bona fide pathogen. But in the age of COVID, the phrase has become weaponized into a false binary: If infection-induced immunity is natural, some have argued, immunity obtained through different means must be unnatural—artificial, undesirable, a dangerous hoax, or even, in some cases, a moral failure, the religious-studies expert Alan Levinovitz recently explained in The Washington Post. But that dichotomy is scientifically nonexistent. Inoculations are designed to mimic the microbes that cause infections, and often end up tickling pretty similar responses out of immune cells. The main difference is that vaccines deliver their defensive lessons safely, without risking disease.
As a nod to this, the immunologist John Wherry and others prefer using terms such as infection-acquired and vaccine-acquired immunity. They’ve even started using another phrase—hybrid immunity—to refer to the heightened protection that’s afforded when people with a prior SARS-CoV-2 infection get vaccinated. If the worry truly is that vaccines are a technological unknown, there’s at least one other way to look at this. Vaccines, like many other human inventions, are body-inspired. They leverage and build upon our inborn defenses, in much the same way that glasses can enhance vision and good running shoes can speed up a person’s pace. They’re not an indictment of the immune system and its numerous powers, but a tribute to them. In a pandemic, vaccines, in protecting both the people who receive them and the people they interact with, really do accomplish what no other tool can—and that, if anything, is worth saying over and over and over again.
Two years into the pandemic, we’ve gotten a lot better at tackling the coronavirus at the extremes of infection. We have preventives—including masks, distancing, ventilation, and our MVP vaccines—that can be deployed in advance of a viral encounter. We have regimens of last resort: drugs, such as dexamethasone, that do their best, lifesaving work in hospitals with trained health-care workers, in patients whose disease has already turned severe. But in the chasm that sits in between—the hazy period after infection and before severe illness—decent tools that can derail COVID’s progression have been sparse. We now have a new candidate aiming to fill that crucial niche: the experimental antiviral molnupiravir, developed by Merck and Ridgeback, which comes in an easy-to-swallow pill. According to a company press release posted this past Friday, the drug can halve rates of hospitalization among people recently diagnosed with mild or moderate COVID-19. Molnupiravir hasn’t yet been given emergency clearance by the FDA, and won’t be available for at least a few months, but Merck and outside experts have said they expect a formal green light soon. With the Delta variant still ravaging the world’s unvaccinated, a pill such as this one could ease the burden on overtaxed health-care systems—which most other COVID treatments have struggled to do. “To have something to take by mouth the minute you’re diagnosed, that reduces your chances of getting severely sick … that’s kind of the dream,” Nahid Bhadelia, the founding director of Boston University’s Center for Emerging Infectious Diseases Policy and Research, told me. But in that middling stretch of the COVID timeline, molnupiravir might be able to stake out only limited territory. The drug is meant to be taken within the first five or so days of illness, “the earlier, the better,” George Painter, a pharmacologist at Emory University and one of molnupiravir’s early developers, told me. 
That’s a punishingly tight window, especially in nations short on diagnostics to detect the virus—as well as access to health workers and infrastructure to prescribe and provide the drug. “Rolling out an oral medication is hugely important,” Erin McCreary, a clinical pharmacist and COVID-treatment expert at the University of Pittsburgh, told me. But a pill, she said, has to be “paired with access”—of which a drug itself is no guarantee. Despite its experimental status, molnupiravir is a pretty familiar face to the antiviral-research community. In the pre-COVID era, the drug generated some buzz when scientists found that it could stamp out a menagerie of viruses, including influenza. Its modus operandi is pretty similar to that of remdesivir, the only COVID-19 drug with full FDA approval. Both mimic building blocks of SARS-CoV-2’s genetic code, allowing them to mess with the fastidious self-xeroxing process that the virus uses to generate copies of itself inside human cells. The two antivirals are slightly different agents of chaos, though. To make more of itself, SARS-CoV-2 deploys a scribe-like enzyme called a polymerase to scan and duplicate its genome letter by letter. When the polymerase spots a stray remdesivir molecule, it stumbles, as if flustered by a bad typo. Molnupiravir is more insidious still. It’s such a good mime of the letters in the viral alphabet that the polymerase often overlooks the interloper, making genome copies riddled with mistakes. “An analogy might be gross misspellings,” Painter said. The drug’s sabotage is so extensive that experts call it an “error catastrophe”: Dangerous viral particles have essentially no shot of emerging out the other end. Molnupiravir’s packaging might give it another leg up. Researchers have long known that a bad case of COVID-19 tends to unfurl in two stages—one dominated by the virus, and a second by the immune system’s overzealous reaction. 
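The “error catastrophe” idea is easiest to see with a toy model. The sketch below is a caricature under made-up numbers, not a simulation of molnupiravir’s actual chemistry: each replication copies a stand-in genome letter by letter, the drug raises the per-letter error rate, and any copy carrying more than a couple of mutations is arbitrarily counted as nonviable.

```python
import random

random.seed(0)
GENOME = "ACGU" * 25  # stand-in 100-letter viral genome (not real SARS-CoV-2 sequence)

def copy_genome(genome, error_rate):
    """Copy letter by letter; each position may be swapped for a random base."""
    return "".join(
        random.choice("ACGU") if random.random() < error_rate else base
        for base in genome
    )

def viable_fraction(error_rate, trials=1000, max_mutations=2):
    """Fraction of copies with few enough mutations to count as 'viable'."""
    viable = 0
    for _ in range(trials):
        child = copy_genome(GENOME, error_rate)
        mutations = sum(a != b for a, b in zip(GENOME, child))
        viable += mutations <= max_mutations
    return viable / trials

print(viable_fraction(0.001))  # ordinary low error rate: nearly all copies pass
print(viable_fraction(0.1))    # drug-elevated error rate: almost none do
```

At the ordinary error rate almost every copy survives the cutoff; at the elevated rate, nearly none do, mirroring the idea that mistake-riddled genomes rarely yield dangerous viral particles.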
The point of antivirals is to act early, and fast—to nip a growing virus population in the bud, before it can wreak havoc on our tissues, or trip too many of the body’s hypersensitive alarms. These drugs are largely useless once people have descended into the second phase. Remdesivir has to be delivered intravenously, over several days—usually in a hospital, after most patients are pretty sick. (This might explain why remdesivir studies in these settings have produced mixed or underwhelming results.) Molnupiravir, meanwhile, was designed as a pill so it could be “easily administered in the outpatient setting,” Daria Hazuda, Merck’s vice president of infectious-disease discovery, told me. The drug is easily shipped and stored, and can be taken pretty much anywhere. Merck’s recent trial, which has yet to be documented in a peer-reviewed scientific study, used the drug in people who had at least one risk factor for developing severe COVID-19 and had just begun to feel ill. Only 7 percent of them ended up getting hospitalized, compared with 14 percent in a placebo group, and none of them died. “That’s hugely clinically significant,” Ilan Schwartz, an infectious-disease physician at the University of Alberta, who wasn’t involved in the drug’s development, told me. The pill also, so far, appears to be playing nice with human cells, dealing its deathly blows only to viruses—no serious side effects have been reported yet, though Merck’s final data are expected to provide more details upon publication. And there’s been little sign that SARS-CoV-2 can evolve to skirt molnupiravir’s effects, which should make the drug relatively variant-proof. The trial’s results were so promising that an independent panel of experts evaluating the data decided to halt the study early so the company could move forward with its product.
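For concreteness, those press-release numbers can be restated in the standard risk measures epidemiologists use. This is ordinary arithmetic on the reported 7 and 14 percent hospitalization rates, not additional trial data, and the number-needed-to-treat figure is only as reliable as those headline rates.

```python
treated_rate = 0.07  # hospitalization rate with molnupiravir (per press release)
placebo_rate = 0.14  # hospitalization rate with placebo (per press release)

relative_risk = treated_rate / placebo_rate            # 0.5
relative_risk_reduction = 1 - relative_risk            # 0.5, i.e. risk "halved"
absolute_risk_reduction = placebo_rate - treated_rate  # 0.07, i.e. 7 points

# Roughly how many high-risk patients would need early treatment,
# at these rates, to prevent one hospitalization:
number_needed_to_treat = 1 / absolute_risk_reduction

print(relative_risk_reduction)        # 0.5
print(round(number_needed_to_treat))  # 14
```

Note how a "50 percent reduction" in relative terms corresponds to a 7-point absolute drop here; both framings describe the same result, but they read very differently.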
Realistically, molnupiravir might be better compared to monoclonal antibodies—the only treatments for COVID’s early-infection phase that have gotten emergency authorization from the FDA so far. Across trials, monoclonals have proved highly effective at stopping mild and moderate cases of COVID-19 from ballooning into serious ones; one formulation has even been okayed for use in people who have recently been exposed to SARS-CoV-2 but haven’t yet developed symptoms. But monoclonals have weaknesses, too: They still need to be infused or injected by professionals, viruses can adapt to resist them, and skyrocketing demand has seriously strained supply. Molnupiravir, if it pans out, could expand the therapeutic options for this stage of disease. In a best-case scenario, the people who take it would be able to stop themselves from getting seriously sick, while also shortening the length of time the virus lingers in their body—potentially making them less of an infectious threat. Treated people could end their disease earlier in the COVID timeline. Molnupiravir’s name, however tough to pronounce, has a story behind it. The drug’s been packing such a punch in trials, Emory’s Painter said, that it inspired him and his collaborators to name it after Mjölnir, the mythical hammer of the Norse god (and Marvel Avenger) Thor. “All we wanted was something that carried the idea of potency,” he told me, referencing Arthur C. Clarke’s The Hammer of God, a novel about a human mission to deflect an asteroid on course to collide with Earth. “That it can stop something.” The Mjölnir reference might work in another way too. Wielding a hammer effectively requires impeccable timing. A powerful tool still needs to hit its mark. Treatments are, by definition, reactive; a drug, no matter how early it’s dosed, can’t undo an infection, or a prior transmission event. It can only contain the fallout. 
The 50 percent reduction in hospitalizations noted in Merck’s press release is stellar, but some participants “still did get hospitalized,” Bhadelia pointed out, and without public data, outside researchers can’t yet identify who benefited most, or least, from the pills. Drugs such as this one might not block other outcomes, including long COVID. And Merck has yet to test the pill in pregnant people and kids. Experts also pointed out the paucity of data on the drug’s performance in vaccinated people, most of whom remain at very low risk of severe disease but could still benefit from early treatment, especially if they’re in high-risk groups. Molnupiravir won’t ever replace tools that can exert their effects before the virus even shows up. “I’m really hoping people don’t look at this as a reason to not get a vaccine,” Elizabeth Campbell, an expert in COVID antivirals at the Rockefeller University, told me. Also, molnupiravir is going to be used by humans, not gods. Which means it’s going to be subject to some very human limitations. For the pill to work, people will need to realize they’re sick and confirm that with a test; they will need to seek care from a health-care provider and successfully nab a prescription; they will need to access the drug and have the means to obtain it. Then they will need to take the drug successfully, which, according to Merck, means swallowing four capsules twice a day for five days—a total of 40 pills. Molnupiravir’s been billed as a cheaper alternative to remdesivir and monoclonal antibodies, which can carry price tags of up to about $3,000 and $2,000, respectively, for the drugs alone. But at a projected $700 per course of treatment, molnupiravir still “isn’t very affordable,” Bhadelia said, especially in lower-income countries, where vaccination rates have been low and drugs like these are desperately needed.
Merck has pledged to set up tiered pricing that could cut the pill’s cost abroad, and has partnered with several other manufacturers in other parts of the world to speed the timeline of availability “in maybe 100-plus countries,” Hazuda, of Merck, told me. Even if pills were free and abundant, their effects could still be constrained by a diagnostic bottleneck. Since the pandemic’s early days, access to timely, accurate testing has been woefully inadequate, an issue that’s been exacerbated by the structural barriers faced by communities of color, Utibe Essien, a health-equity researcher at the University of Pittsburgh, told me. If a result comes too late, or a test seems out of reach, then the sick person can easily miss that crucial early-infection window—a big loss, considering that molnupiravir has essentially “no effect on patients once they’re in hospitals,” Campbell told me. “If treatment is contingent on diagnosis, we need to make sure testing is more readily available,” Essien said, or risk widening equity gaps. In this arena, in particular, molnupiravir might stand to be a bit less like its namesake: accessible only to those deemed worthy enough to wield it.

On the surface, the September 24 announcement from the head of the CDC outlining who, exactly, would be eligible for COVID-19 booster shots seemed like a clarifying moment. But even as the agency’s leader, Rochelle Walensky, declared the need to make “concrete recommendations that optimize health,” the new guidance was hard to parse. It said, for instance, that people as young as age 18 who received the Pfizer vaccine may get a third shot as long as they have any of a list of “certain medical conditions” that might put them at “high risk” or “highest risk” for developing complications from severe COVID. What are those high-risk conditions?
The CDC has compiled a vague and partial list, presented in alphabetical order, that includes cancer, diabetes, liver disease, and smoking among 29 named conditions in all, divided into 17 categories. One of those categories—“immunocompromised state”—is itself a Russian doll of health disorders such as rheumatoid arthritis, lupus, and HIV; transient conditions such as pregnancy; and various treatment-induced vulnerabilities. Even the group most at risk from COVID—elderly people—can be said to be in an “immunocompromised state.” Researchers have termed the gradual weakening of bodily defenses as a person ages “immunosenescence,” and there’s evidence that an older immune system may also get stuck in an inflammatory state, a problem called “inflammaging.” So when we say that someone 85 years or older has 570 times the chance of dying from COVID as a young adult, we’re really using age as a stand-in for some invisible, underlying immune state. Just as immune function—and its associated protection against severe COVID—tends to wane across our life span, it also varies along a continuum from person to person as a product of genetics. One evidently healthy 30-year-old, for example, could be more predisposed to getting very sick with COVID than another, even if they had the same set of medical conditions as listed on the CDC website. Scientists have been working out the details of these individual differences in immune function, but their findings haven’t yet been brought to bear on the pandemic in any widespread way, let alone considered guidance for the use of booster shots. Instead, as we struggle to set up rival groups of high- versus low-risk people, or immunocompromised versus immunocompetent, we tend to ignore all the gradations of vulnerability that might lie in between. 
Those who don’t clearly fit into the CDC’s official categories are left to guess at their personal levels of risk, counting their COVID antibodies “like calories” or grabbing booster shots of their own accord. In the future, we may have more precise ways of gauging our individual vulnerability to COVID, or indeed to all infections. To some extent, we already do. When I asked Harry Malech, the chief of the Genetic Immunotherapy Section at the National Institute of Allergy and Infectious Diseases, about natural differences in people’s ability to fight off disease, he recalled a case he took on in 2008. A young man with a strange constellation of symptoms had been referred to him for help with a diagnosis. The patient was in his late teens and had endured recurrent infections throughout his life. The roof of his mouth had become soft and was beginning to melt away, which was distressing. The patient’s sister had died in early childhood, so doctors wondered if a genetic factor was to blame. At that point, scientists had decades of experience uncovering specific mutations that made certain patients unusually vulnerable to infections. The first inherited immune-system deficiency was described in 1950, when the Swedish pediatrician and army doctor Rolf Kostmann published a report of congenital neutropenia, in which babies are born with a shortage of white blood cells known as neutrophils. In the 1970s, a boy named David Vetter, who lived his life separated from the world by a plastic barrier, opened the public’s eyes to other inborn conditions that disable the immune system. Yet almost 25 years after Vetter’s death, at age 12, clinicians like Malech still didn’t have a cheap, fast way to sequence and analyze the DNA of patients like the ailing teenage boy with mouth ulcers. Instead, they’d sequence tiny chunks of a patient’s genome after developing a hunch of where to look. 
By luck, one of Malech’s colleagues came across what was then a brand-new report in The New England Journal of Medicine mentioning a rare immune disorder in a few young girls who also had ulcers. “She said, ‘Maybe he’s got this. Let’s sequence this gene.’ And by golly, he had [the] deficiency,” Malech recalled. The teen received a bone-marrow transplant to reset his immune system, and he got better. That’s how things used to go, Malech told me: It would take “a sage diagnostician, coupled with a bit of serendipity and a whole lot of immune tests, to get at the heart of things.” Now everything is different. “In the last seven to eight years, the entire field has changed,” Malech explained. “The ability to do rapid, cost-effective, high-throughput sequencing of people’s genomes has turned the whole process on its head.” You can easily obtain a patient’s “exome,” which tells you the code for all the proteins in their body. From there, faster and more powerful computers facilitate careful searches through those genetic sequences for mutations of interest. Thanks to all these changes, scientists are finding DNA mutations with subtler—but still important—effects on the immune system. Today, more than 400 different chronic immunodeficiencies caused by genetic variations have been identified, according to the Immune Deficiency Foundation. Very few of these would land a person in a plastic bubble from infancy, but many could make someone more prone to repeat visits to the doctor’s office for infections as an adult—and perhaps more vulnerable to COVID. Jean-Laurent Casanova, of Rockefeller University in New York City, co-leads a consortium called the COVID Human Genetic Effort.
A year ago, he and his collaborators made headlines with a study in the journal Science that described how mutations affecting certain genes were more common in a group of more than 650 individuals with life-threatening COVID pneumonia than in their control counterparts, who were infected but asymptomatic. Specifically, the scientists found mutations that could disrupt immunity controlled by molecules known as type I interferons. Other scientists have not yet been able to replicate all of the same findings, but they haven’t discounted the importance of interferon in protecting people from severe disease. More recently, Casanova and his teammates tried to get a sense of the scope of immune deficiencies in relatively young people who get very ill with COVID. In a paper published in August, they offered evidence suggesting that about 1 percent of men under 60 years old who developed life-threatening COVID have a mutation on the X chromosome that affects a receptor known as TLR7, which sits on the surface of immune cells and carries signals about microbe invaders. The scientists found this mutation in 16 of the more than 1,200 people with unexplained critical COVID in the study, but it was totally absent in more than 300 people who either had mild illness or were asymptomatic. The connection between TLR7 and severe COVID has been found by other groups as well. “Given that there have been many studies demonstrating the impact of TLR7 variants on COVID-19 severity, we believe this is likely a true signal,” says Tomoko Nakanishi, a respirologist at McGill University, in Montreal, who was in one such group. An international collaboration has also uncovered genetic variants associated with severe illness from the coronavirus. Some are thought to diminish levels of an enzyme called oligoadenylate synthase, which normally helps chew up viruses. 
A paper published in just the past few days joined others in finding that a variant affecting one form of that enzyme is also associated with worse COVID outcomes; its authors note that this variant is common in all people, although less so among those with African ancestry. Yet another common variant—this one found in as many as 15 percent of individuals of European descent—could increase the risk of severe COVID by 70 percent, and by 170 percent in people less than 60 years old (that is, 1.7 and 2.7 times the baseline risk, respectively), according to a paper this month from Nakanishi and colleagues. Not every immune-system glitch is necessarily predetermined at birth. Another study from a large, international group—this one including Casanova—skipped over the genome and looked for subtle immune disorders that people might acquire over time. In particular, the group found signs of antibodies that had gone rogue and were attacking the patients’ own immune molecules in about one-fifth of the people in its sample who had died of COVID. It also looked at the immune systems of healthy individuals and found that older people were far more likely to have these same autoantibodies—a fact that could help explain why increasing age has such a strong association with COVID disease risk. Casanova told me that the progress made in understanding immune-system vulnerabilities to COVID already outpaces what he’s seen for other illnesses. For two decades, he tried to uncover underlying predispositions for tuberculosis, and while he did succeed in finding a seemingly relevant mutation in an immune-system enzyme called TYK2, it could account for only about 1 percent of cases among Europeans. Now scientists around the globe are making much faster headway with COVID, Casanova said. “I still can’t believe it.” Given the lure of personalized genetic testing, some companies have started offering 23andMe-style diagnostics with a pandemic spin.
Earlier this year, an Australian firm called Genetic Technologies collaborated with distributors to release an individualized, $175 assessment in the U.S. called the “COVID-19 Risk Test,” based on age; body mass index; preexisting conditions, such as diabetes; and seven DNA markers. Another testing company, Nutrigenomix, which has focused on nutritional genetic testing in the past, began offering an add-on feature to its 70-gene test this summer: For an additional $79, it tells you the status of your TAS2R38 gene, which codes for a receptor involved in bitter taste that has also been linked to poor COVID outcomes. Some researchers see promise in personalized testing, reported Jocelyn Kaiser in Science this past June, but many believe that the underlying science remains too murky to sustain widespread genetic testing for COVID risk. Indeed, despite the run of notable findings in the research literature, there hasn’t been much push for understanding how to test and segment the U.S. population according to people’s genetic susceptibility to COVID. Some wariness may be left over from a confusing episode at the beginning of the pandemic. In early June 2020, a large team of scientists in Europe put forth preliminary data suggesting that, according to a genome-wide analysis of nearly 2,000 COVID patients in Italy and Spain, people with blood type A had a 50 percent increased risk of experiencing respiratory failure, while those with blood type O were somewhat protected. “A genetic test and a person’s blood type might provide useful tools for identifying those who may be at greater risk of serious illness,” wrote NIH Director Francis Collins at the time. 23andMe quickly followed with its own preliminary data suggesting that people with blood type O were more impervious to COVID. (A full study of 23andMe’s massive dataset, published in April, confirmed this protective effect.)
But just a month after the initial raft of news about COVID blood types, reports emerged that two similar studies had failed to find a strong connection. People with blood type A were not more prone to falling severely ill from COVID, and any protective effect from blood type O was so small that scientists said it was basically useless. An author of one of these follow-up studies told The New York Times that the case for identifying genetic susceptibilities via simple blood-typing was closed. “I wouldn’t even bring it up,” she said. A year later, is it any wonder that COVID risk tests aren’t getting that much press? According to Julien Textoris, a vice president of global medical affairs at the diagnostics company BioMérieux, we have a long way to go before people can make a doctor appointment to know if their immune system is up to snuff. Beyond the exceptional cases, such as inborn immune-system mutations or immunosuppression following an organ transplant, “there is no operational definition” of what it means to be immunosuppressed, he told me. For his part, Casanova believes that anyone who gets severely ill or dies from COVID is “immunodeficient” by definition—even if there is no current explanation for why they fared so poorly. Researchers say that continued advances in genetic sequencing will help unravel some of that mystery. “As to who counts as immunocompromised? I think it will be easier in the future to answer this question than it is today,” Wayne Koff, president and CEO of the Human Vaccines Project, told me. [Read: The uncertain future of genetic testing] Even if the science were all worked out, it would be an “expensive proposition” to do comprehensive sequencing of people’s coding DNA at a population level, Jeffrey Townsend, a biostatistician at the Yale School of Public Health who has studied COVID, told me. 
At the moment, we’re in a chicken-and-egg situation in that more genetic sequencing is needed for scientists to sort out which bits of DNA have the greatest influence on our immune function, which would in turn justify the cost of sequencing more people’s DNA. Complicating matters is the fact that our immune system derives from a vast network of genes. The genetic subtleties of COVID risk are intriguing, Townsend said, but we can already try to gauge people’s vulnerability to disease by means of simple SARS-CoV-2 antibody tests. “The evidence so far seems to indicate that antibody level is a major predictor of your level of defense against COVID-19 infection,” he said. In the meantime, the further development and deployment of genetic methods for determining each person’s individual COVID risk may bring along some dangers of its own. In an essay published in June 2020, the sociologist Richard Milne warned that although such tools “may have potential value,” they could also lead to discrimination. People deemed to be particularly susceptible to COVID because of their DNA “may be advised to continue shielding or self-isolation measures long after the rest of the population,” he wrote, leading to significant psychological and financial hardships. Even well-established markers for risk could be misleading when taken out of context. Researchers who were looking at the 23andMe data found that having blood type O offered slight protection against developing COVID—but the company’s previous research suggested that the same blood type was also a possible risk factor for seasonal flu. Or consider the discovery in the mid-1990s that a mutation to a certain immune-cell receptor could protect people from getting infected with HIV. A decade later, scientists learned that the same genetic quirk also puts people at higher risk of falling ill from West Nile virus. “One of the things that people don’t appreciate is that the human immune system is a compromise,” Malech said. 
“If you do better at A, you may be less good at B. There’s no free lunch here.”

The word booster kicked off the pandemic benign and simple, a chipper concept most people linked to things such as morale and rockets. Then, at the start of 2021, the word began to undergo a renaissance. By summer’s end, booster was a common fixture of headlines and Twitter trends; it was suddenly tethered tightly to words such as shot, vaccine, and immunity online, as experts and nonexperts alike clamored for the more, more, more promise of extra protection against SARS-CoV-2. According to Elena Semino, a linguist at Lancaster University, in the United Kingdom, English-language news reports now deploy the word booster about 20 times more often than they did in pre-COVID times. The pandemic has, in effect, boosted boosters into the public sphere. And yet, we are still really bad at talking about them. In the top echelons of the CDC, in the back alleys of Twitter, no one can seem to agree on who needs boosters, or when or why, or what that term truly, technically means—even as additional shots that officials are calling boosters continue to enter arms. Some experts insist that boosters are necessary; others vehemently disagree; a few have insisted that we shouldn’t be using the B-word at all. Discussions among the rest of us have been no less chaotic. A September poll from the Kaiser Family Foundation shows that more than a third of respondents find information on boosters to be confusing instead of helpful. Last week, my own mother, a retired medical technologist, asked me whether she should get a booster. “What do you think the booster is for?” I asked her. She paused. “Well,” she said, “I don’t know.” The battle over boosters is about more than semantic precision. Without properly defining what these additional injections are, and what they’re intended to accomplish, experts can’t demarcate success. 
Defining the goals of boosters now would help us figure out who needs them now, who might need them eventually, and even how often we’ll all need them in the future, if we need them at all. To fully capture what boosters can and should do, though, we may need to reframe what that word means to us—or, as some have argued, dispense with it entirely. Booster isn’t new to the vaccine lexicon; American adults, for instance, are asked to tangle with the term every 10 years or so to maintain their defenses against tetanus. But the word sprouted independent of immunization, as the linguist Ben Zimmer recently wrote. Its roots date back to at least 1801, though it’s hard to pinpoint when, or from where, it actually arose. The term has since gained a pretty straightforward connotation—“upward movement.” A boost is a lift, a push, an increase, the ability to take us “to new heights, further than we could otherwise go,” Neil Lewis Jr., a communications and social-behavior expert at Cornell University, told me. We use boosters to raise up children sitting in cars, and to launch rockets into the beyond; boosters naturally evoke ideas of support or benefit, which makes them a PR windfall. By the 1940s or so, perhaps earlier, booster had entered the immunizer’s lexicon, and might have made additional doses of tetanus, diphtheria, pertussis, and polio vaccines more palatable to the public. It almost certainly helped “put a positive spin on the need for extra shots” of the inactivated polio vaccine in the latter half of the 20th century, Elena Conis, a vaccine historian at UC Berkeley, told me. But this perky portrait of boosters might obscure why we need them at all. There’s more than one reason to administer an additional dose of the same vaccine. 
Many immunologists and vaccinologists draw a distinction between doses in the primary series, which create immune protection in a person who’s never been inoculated before, and boosters, which replace those defenses when they’ve started to fade. The primary series can comprise a single dose or, more commonly, several, as with two-dose MMR shots, or three-dose hepatitis B vaccines. The aim of a primary series is to reach and maintain a protective threshold, with each dose building iteratively on the quantity, quality, and durability of that defense, and a person can’t be considered fully vaccinated without finishing those initial shots. But once they do, they might never need another injection again. Primary-series doses, in other words, are generative. Boosters are the optional second chapter in this story. They’re not necessary for all vaccines—just the ones whose protection appears to ebb, usually over the course of years, à la the once-per-decade tetanus touch-up. Boosters are restorative, meant to put back something that was once there, but has since been at least partially lost. An added shot “gets you back up to some threshold we know is important,” Rishi Goel, an immunologist at the University of Pennsylvania, told me. (Not every shot administered at regular intervals is a booster: The annual flu shot, whose ingredients change every year, is issued less because our bodies are forgetting a specific strain, and more because the many viruses we encounter change so rapidly.) [Read: We’re asking the impossible of vaccines.] What we now refer to as boosters, then, might be better described as refresher, refill, or reminder shots—something that signals not just growth, but growth from a place of temporary loss. This mirrors the way several Romance languages describe booster shots: Spanish speakers say refuerzo, a term that signifies reinforcement, while Italians say richiamo, and the French say rappel—both words that signify recollection. 
For COVID-19 vaccines, booster is already a popular term, but it’s not obvious how restorative the additional shots are, in terms of guarding against the coronavirus. In one group, at least, third shots are generative: people who are moderately or severely immunocompromised, and may not have marshaled a sufficient immune response to their initial vaccine doses. “In this population, that’s really clear,” Grace Lee, a pediatrician at Stanford University and the chair of the CDC’s Advisory Committee on Immunization Practices, told me. (There is still, frustratingly, little data on the one-dose Johnson & Johnson vaccine, though several experts have told me in recent weeks that J&J’s regimen may become a two-shot primary series for everyone, based on the company’s recent findings.) When it comes to the rest of us, especially people who are younger and healthier, experts remain divided on how to categorize third shots. Anthony Fauci told me recently that he’s very much in the generative camp: “I bet you any amount of whatever that when we finally look back on it,” he said, three doses is going to be “the standard regimen for an mRNA vaccine.” (Still, even Fauci’s been blurring the semantic boundaries. In a recent interview with my colleague Ed Yong at The Atlantic Festival, he alternately described the shot as a “third dose,” a “third-shot booster,” and a “third booster shot” in a five-minute span.) If that turns out to be the case, experts would first need to show that what the first two doses gave us wasn’t good enough, opening up the opportunity for a third jab to make our defenses “more durable, and much more able to protect us” than they were with two shots alone, Paul Offit, a vaccine expert at the Children’s Hospital of Philadelphia, told me. But so far, there’s really no clear evidence to suggest that a third shot elevates us into a new tier of protection, especially against the worst COVID-19 outcomes. 
The two-dose mRNA vaccines are still blocking hospitalizations and deaths to an extraordinary degree. “If the goal is to prevent serious illness, it does that,” said Offit, a member of the FDA’s vaccine advisory committee. Data from Goel and others back this up on a molecular level. Even several months after getting their second primary doses, vaccinated people (with the possible exception of some folks who are older or not in great health) appear to retain massive legions of immune cells that remember SARS-CoV-2 well enough to thwart it. Some of these defensive populations even seem to be refining themselves into larger and more sophisticated pools of assassins over time, long after the vaccine itself is gone. [Read: You might want to wait to get a booster shot.] So maybe these third injections are restorative, meant to replace a defense that has withered over time. The burden of proof for that would be twofold: identifying some sort of waning, as well as evidence that an extra shot reverses the ebb. Inklings of the former have, arguably, started to appear. Vaccines still reduce the chances of getting infected; experiencing nasty, lingering symptoms; and passing the virus on to others. But since the spring, mild-to-moderate sicknesses have become a bit more common among the inoculated. Though some of that’s definitely attributable to the rise of the super-contagious Delta variant, this trend also likely reflects the decline in antibody levels that happens after all vaccinations, as the body, freshly roused by the shot’s contents, starts to return to a peacetime state. That leaves the actual restoration bit. In recent presentations to expert committees that advise the FDA and CDC, Pfizer executives crowed about sky-high antibody levels appearing after vaccine recipients got a third shot—evidence, they said, that the injections were bringing the body’s frontline defenses back up to snuff. 
That could make it easier for people to fight off infections early, before they turn symptomatic, or spread to someone else. But again, antibody levels always drop. (If the body kept pumping out antibodies ad infinitum, it would drive itself into the ground—and rapidly thicken its own circulatory system into a protein-packed sludge.) That raises the possibility that post-booster bumps in protection, too, might be only temporary. “That’s where I get tripped up,” Stanford’s Lee told me. “If we’re boosting to boost antibodies, will we need another dose six months from now?” Some researchers (and Pfizer’s CEO) think we might need annual, even twice-annual, COVID shots for as long as the virus is with us. That prospect can feel demoralizing, and experts worry about the message it sends to the unvaccinated. “I hear the skepticism,” Lewis, of Cornell, said. “‘Well, if this stuff is just going to keep fading away, what’s the point?’” Another sector of the population doesn’t mind the threat of repetitive boosting—“the more protection, the better” has become a common refrain, as some seek out fourth, fifth, even sixth shots. Cloaked in this behavior is another downside of using booster as our linguistic crutch: its near-unilateral promise of more and more benefit, as if shots can be stockpiled like so many rolls of toilet paper. Some Americans have clearly been clamoring for spare shots since at least the spring, among them booster bandits who wriggled through loopholes to nab their jabs ahead of schedule. “With boosters, you’re getting more, and as consumers, we like more,” Stacy Wood, a marketing expert who studies public perception of vaccines at North Carolina State University, told me. It’s a natural response in times of crisis, she said, to “buffer against a lack of future supply.” [Read: Booster bandits are walking a fine line.] Vaccines, unfortunately, don’t work like that. 
Boosting too early and too often can be counterproductive, for the same reasons that cramming the night before a big exam is: Immune cells, being the students of microbiology that they are, can’t internalize all that information at once; there’s little point in foisting a second lesson on them when they’re still frantically trying to take notes on the first. Immune responses also have ceilings, and administering shot after shot after shot, even somewhat spaced out, could eventually drag the body toward the point of diminishing returns. “That’s a waste of a vaccine,” Lauren Rodda, an immunologist at the University of Washington, told me. After about half a dozen tetanus boosters, for example, “no matter how many more you give, you can’t get any higher antibody response,” Mark Slifka, a vaccinologist at Oregon Health & Science University, told me. We actually used to boost more often against tetanus. But countries loosened their requirements after realizing there was no point. Shots also come with side effects, including a small number that, though quite rare, can be dangerous, Slifka said. Data on the safety of third COVID-19 shots are still being gathered, and although the expectation is that they should be very well tolerated, all this is uncharted territory. Such complex calculus is tough to encapsulate with a term like booster. This, perhaps, is part of the fallout when technical, hyper-specific terms “leak into other communities,” Semino, the linguist, told me. “All of a sudden, something for a professional community is being used for everyone.” Pre-pandemic, most of us didn’t automatically tie boost to vaccines. Now we’re being asked to. And it’s very difficult to know how much our booster preconceptions are coloring our attitudes around extra shots—when to get them, how often to get them, when to stop. 
Calling them reminder shots—vaccines that offer a richiamo or rappel—skirts some of those issues, capturing dimensions of immunity that booster does not: that there is loss; that there is, sometimes, a replenishing; that protection is not linear, and can shift up or down over time. This framing could also be a more clear-eyed way to assess global equity. Boosters, by default, top off resources that have already been given. If the goal is truly to tamp down transmission, infection, and disease on a wide scale, generative shots—especially first doses—will go much further than restorative ones. “Public health is a collective phenomenon,” Martha Lincoln, a medical anthropologist at San Francisco State University, told me. “We can’t pass the buck to individual immune systems.” Boosting and primary-dosing aren’t mutually exclusive goals. But they draw resources from the same, finite pool. And Lee worries that our third-dose mania might be a bit myopic, especially with so many still unvaccinated here in the United States, and around the globe. “In a highly vaccinated population, boosters can really put you over the edge, and reduce overall circulation,” she told me. Eventually, that will be a priority—tailoring our vaccine rollouts to ensure that we’re cutting down on all kinds of infections, to the extent that we can. Right now, though, with Delta still erupting throughout unimmunized communities, and the health-care system unbearably overstretched in many parts of the country, “we’re not even close to where boosters are going to do anything [other] than provide some individual level benefit.” Our own bodies, after all, seem to be remembering SARS-CoV-2 just fine. It’s everyone else we can’t afford to forget.

Every time I leave my apartment, I grab a mask from the stack by the door. 
After all these months of pandemic life, I’ve amassed a pretty big collection: Some are embroidered, while others bear the faded logos of the New York Public Library or the TV show Nailed It. What all of them have in common is that they’re made of cloth. At this point, cloth masks are so ubiquitous in the United States that it can be easy to forget that they were originally supposed to be a stopgap measure. In April 2020, when surgical masks and highly coveted N95s were first in short supply, the CDC released its initial mask guidance and said cloth masks were the way to go for most people—noting that they could be sewn at home from old T-shirts. Even at that point, when the pandemic was full of unknowns, we knew that cloth masks, although far better than going maskless, weren’t as protective as other types. A growing body of research supports the idea that our masking norms don’t make much sense: A recent study in Bangladesh, which has yet to be peer-reviewed but is considered one of the most rigorous to date to tackle masking, linked wearing surgical masks with an 11.2 percent decrease in COVID-19 symptoms and antibodies, while cloth masks were associated with only a 5 percent decrease. It’s no wonder that many other countries, including France, Austria, and Germany, shifted their mask guidance away from cloth masks toward those offering higher protection a long time ago. We might have once hoped that vaccines would entirely obviate masking, but unfortunately, masks seem poised to stick around for quite some time. And yet, even as much of our approach to the pandemic has changed in the past 18 months, our approach to masking largely has not. So why are we still strapping pieces of fabric to our face? [Read: The masks were working all along] Unless you work in health care, the CDC still recommends masks made with at least two layers of washable, breathable fabric. 
A big reason for this is that, yes, surgical masks are still in limited supply, according to the FDA, and so they must be prioritized for health-care workers. Though the shortage appeared to relent this summer, when widespread vaccination led to a dip in demand for both surgical and cloth masks, the rise of the Delta variant precipitated another major mask crunch. But that’s not the only reason masking habits haven’t shifted. Part of the problem is that the enduring mask wars have helped frame mask wearing as a simple binary. “Unfortunately there’s been so much misinformation that’s come out about masking that it’s become so polarized,” Michael Osterholm, an epidemiologist at the University of Minnesota, told me. “People are just divided into either you’re masked or you’re not. And that would be like saying everything that has wheels”—including a tricycle and a jetliner—“is the same.” Faced with this binary, Americans generally don’t pay enough attention to the quality of a mask and how it’s worn. As the Harvard epidemiologist Bill Hanage told me in an email, we’re still wearing cloth masks because they’re “expected to still be better than nothing.” And they really are far better than nothing: He likened surgical masks to a sturdy, well-made umbrella and cloth masks to the cheap kind that inverts. “Both are better than a plastic bag held over your head, which is itself better than nothing,” he said. [Read: Why aren’t we wearing better masks?] But America’s complacency about masks is not simply the result of individual decisions. Public-health agencies could have prioritized using government resources to remedy the mask shortage, as well as simply mailing all Americans more-protective masks. 
“I can’t speak for the CDC,” Hanage said, “but I would hope that they would be able to convey the message that all masks are not alike, just like all umbrellas are not alike.” A spokesperson for the CDC told me that although the agency believes that N95 masks are “better at protecting the wearer, and if available should be worn,” cloth masks have been shown to be an “effective method of source control,” according to CDC research, and are still recommended when N95s aren’t available. (The spokesperson did not mention surgical masks, and did not respond to a follow-up question.) Many less scientific reasons also play a role in our continued obsession with cloth masks. Even if you’re not making cloth masks at home, they’re generally more affordable than surgical masks because they are meant to be reused. (That being said, the Bangladesh study found that even a surgical mask that had been washed 10 times was more effective at filtering particles than a cloth one.) A 24-pack of cloth masks costs $9 on Amazon—about 37 cents apiece—while single-use surgical masks are about 30 cents each and N95s are upwards of 63 cents. For the same reason, cloth masks are considered more eco-friendly—a nontrivial consideration, given mounting concerns about the waste generated during the pandemic. And for all the companies now offering fabric masks, selling them is a brisk business that, by one estimate, was worth $19.2 billion in 2020. Like T-shirts and baseball caps, cloth masks have become a way to encourage that most American of pastimes: pledging one’s allegiance to sports teams, colleges, and political causes. For the more luxury-inclined, Fendi offers a logo-embroidered silk version for $590. [Read: Vaccines are great. Masks make them even better.] Ultimately, while masking is important, it’s not the most important thing we should be doing to protect ourselves from the coronavirus. 
Although Osterholm makes it clear that he’s very pro-masking, “it’s really all about the whole hierarchy of environmental control,” he said, referring to the various methods for reducing risk within a space, a key concept in occupational safety. Vaccination is by far the most protective measure a person can take. Second is ensuring proper ventilation—replacing the air in a room at least five to six times an hour, he explained. Next is social distancing. And then there’s masking: “You keep going down in that order, and finally the lowest thing in terms of overall prevention potential is individual respiratory protection,” he said. And there is still much to learn about the effectiveness of masking. Even the most rigorous studies on masking have limitations, said Osterholm, largely because of shortcomings in their methodology. Cloth masks are less protective than surgical masks, but exactly how much less remains uncertain. Roger Chou, an epidemiologist at Oregon Health & Science University who tracks mask studies, told me in an email that he “really has not found much evidence” on the effectiveness of cloth versus surgical masks in stopping the spread of COVID-19 in communities, even though he said that plenty of other data back up their effectiveness. The most important thing, Chou said, “is to wear a mask, whether it is a surgical mask or cloth mask.” Even if a pivot toward surgical masks wouldn’t be some pandemic panacea, America’s mask inertia is in many ways a symptom of the nation’s single-pronged pandemic response. The country has collectively banked on vaccination to end the pandemic, and one consequence is that attention to other protective measures has lagged. Our vaccines are terrific, but it’s now clear that our best way out of the pandemic does not rely on shots alone. “If you have enough pieces of Swiss cheese, you can cover every hole, and you can’t see the table,” Osterholm said. 
“If you put [one slice] by itself on the table, I promise you, you’re going to see the table.” If masks are slivers of Swiss, cloth ones have more holes than the surgical kind. As long as America is stuck with masks, we might as well make the switch to a less permeable slice.

The Delta variant’s arrival this summer delivered a blow to the nation’s entire coronavirus arsenal, but its impact on the champion of last year’s vaccine race—Pfizer—has been particularly humbling. Compared with Moderna’s competing shot, Pfizer’s vaccine seems to induce half the amount of virus-fighting antibodies, and is associated with nearly twice as many breakthrough infections, according to two recent studies. Pfizer’s shots remain highly protective against hospitalization, but the latest numbers from the U.S. Centers for Disease Control and Prevention suggest that their effectiveness has dropped from 87 percent to 80 percent during the Delta wave, while that of Moderna’s shots remains in the 90s. Although Pfizer has now sold authorities around the world on the imminent need for third shots to combat waning immunity, the company doesn’t believe that its vaccine, worth more than $30 billion to its bottom line, is inferior in any way to competitors. Recipients of Moderna’s shots, after all, may also need a booster eventually. “All of the real-world evidence you have to take with caution,” Pfizer’s chief scientific officer, Mikael Dolsten, told me recently. “It’s very hard to compare two very effective interventions.” Other experts see the evidence of a difference, however slight, starting to grow. Shane Crotty, a researcher at the La Jolla Institute for Immunology, told me that after looking at some of the recent data, he went to double-check his own vaccination record and was pleased to find Moderna listed on it. 
Is it possible that Pfizer, in its all-out sprint to bring the first-ever human mRNA vaccine to market, ended up delivering the second-best product? In reporting my forthcoming book on the COVID-19 vaccine race, I never got the sense that Pfizer had cut any unnecessary corners, but I knew that the story for all the companies had been one of compromise, of making the least-bad decisions in the shortest time possible. Pfizer’s first decision, in early 2020, was to sit things out. In January of that year, the company turned down a chance to help its German partner, BioNTech, develop an mRNA vaccine for the emerging coronavirus disease, figuring that the outbreak would burn out on its own, as many such outbreaks do. By the time the two companies joined forces in March, a rival product, developed by Moderna and the National Institutes of Health, had already been given to the first participants in a Phase 1 safety trial. Operation Warp Speed, a joint effort of the Department of Health and Human Services and the Department of Defense, began to come together the following month and promised pharmaceutical companies billions of dollars to fund the manufacture of vaccines before any had even proved effective in large-scale Phase 3 trials. As a term of these investments, any company taking Warp Speed money would have to plan its Phase 3 clinical trials with the input of scientists from the NIH and other agencies. I quickly learned that these negotiations were often contentious and sometimes protracted. Moderna was forced to push back the start date of its Phase 3 trial by several weeks, from July 9 to July 27, because of protocol changes demanded by the Warp Speed team. Pfizer, by contrast, was playing catch-up and decided that it didn’t want to be hamstrung by government bureaucracy. Instead, it sank $2 billion into its own development efforts and refused Warp Speed handouts. 
[Read: How mRNA technology could change the world] Moving fast meant navigating significant uncertainties. Dosing was a particularly fraught issue, and the prospects for producing a successful mRNA drug or vaccine hinged on getting it right. A smaller dose would be easier to manufacture and less likely to produce side effects. At the same time, previous experimental mRNA vaccines had not been shown to induce the kind of long-lasting cellular immunity one could get from, say, an adenovirus vector vaccine, such as Johnson & Johnson’s. Back in 2019, Moderna published data from a Phase 1 trial of two mRNA-based bird-flu vaccines: The results had looked solid in the first month or two, but antibody levels dropped back toward baseline by month six. The two doses of those vaccines had been spaced just three weeks apart, which may have limited the body’s immune memory. John Mascola, the head of the Vaccine Research Center at the NIH, told me that durability was going to be a big unknown with all of the COVID-19 vaccines, and the Moderna team “wanted to be conservative” in selecting sufficiently large doses and spacing those doses at four weeks. They knew from early-stage trials that with just 25 micrograms, the immune response declines by one-fourth after a month. A 250-microgram dose, conversely, was clearly too high. In the end, they settled on 100 micrograms. In the meantime, Pfizer and BioNTech were still scrambling to choose among four possible mRNA-vaccine candidates. At first, the internal favorite of the scientific team was one named BNT162b1, which consisted of just a fragment of the coronavirus spike protein, known as the receptor-binding domain. (Moderna was using the full spike for its vaccine.) As was the case for Moderna, the Pfizer-BioNTech team had to figure out the right dose. 
Across Phase 1 trials in Germany and the U.S., the companies had tested that candidate at doses of 10, 20, 30, and 100 micrograms, injected into volunteers just three weeks apart, compared with Moderna’s four. The highest dose produced such severe side effects, including fever and chills, that it was dropped from the trial. That’s what vaccine makers call a “hot” reaction, and it’s something that Pfizer’s chief scientific officer, Mikael Dolsten, and his team wanted to steer well clear of. Then Pfizer and BioNTech tested their own version of the full-length spike vaccine, BNT162b2—this time going up to only 30 micrograms. Because the full-length spike’s gene sequence was about five times as long as the fragment’s, each microgram of vaccine contained one-fifth the number of copies. It was immediately obvious that the side effects were less intense as a result, but the antibody response might end up being smaller too. That would take several weeks to assess—and the clock was ticking. Dolsten said that during a virtual meeting on July 24, 2020, he told the team that it was time to make a final decision on the candidate and the dose if they were to have any hope of rolling out a vaccine in the fall, when COVID cases were expected to rise. Days earlier, Pfizer had finalized a purchase agreement with Operation Warp Speed. If its vaccine received emergency authorization from the Food and Drug Administration, the U.S. government would pay the company about $20 for each of 100 million doses. And if Pfizer got to the finish line before its Warp Speed competitors delivered their promised doses, the government would be more likely to exercise a purchase option, locking in up to 500 million more doses. Moderna had now published its antibody data, and the company’s trial with a 100-microgram dose was scheduled to start any day. According to Dolsten, the dilemma for Pfizer’s scientists was that they still had more human data on their first candidate (the fragment) than on their second (the full-length spike). 
The two candidates looked comparable, but the team still didn’t know how much of an immune response the full-length spike would produce among the most vulnerable, elderly subjects. Those first data points wouldn’t reveal the durability of the response, simply whether it was on par with the one seen among people who had recovered from COVID. Waiting a few weeks for those data (and potentially adding a higher dose or changing the dose spacing), as one might do during a more relaxed vaccine-development process, was out of the question. Although Dolsten told me that Pfizer wasn’t necessarily looking over its shoulder, such a delay would certainly have set its Phase 3 efficacy trial back on a timeline akin to AstraZeneca’s or Johnson & Johnson’s. By the end of the meeting, Dolsten and Pfizer’s CEO, Albert Bourla, had persuaded the vaccine team to follow Moderna’s lead and advance the full-length sequence, but at Pfizer’s lower dose of 30 micrograms. That would likely give Pfizer’s vaccine a safer profile in terms of side effects. It would also be cheaper and easier to manufacture, though Dolsten said the scientific team didn’t weigh that in its decision. Two months into Pfizer’s efficacy trial, in early October, the team was still poring over the antibody data that had come in from the elderly subjects in the dosing study. Measuring the vaccine’s efficacy would involve comparing the number of symptomatic infections in the vaccinated group with those in the placebo group. Moderna and the other companies in Warp Speed gave their own vaccines two full weeks to protect a person following the second dose before any breakthrough infections would be counted against them. Under Pfizer’s more aggressively paced protocol, however, breakthroughs would be tallied at just seven days after the second dose. 
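(For readers curious about the arithmetic behind that comparison: trial efficacy is simply one minus the ratio of attack rates in the vaccinated and placebo arms. A minimal sketch, using hypothetical case counts rather than either company’s actual trial numbers:)

```python
# Illustrative only: the standard vaccine-efficacy calculation used in
# placebo-controlled trials. All case counts below are hypothetical.

def vaccine_efficacy(cases_vaccinated, n_vaccinated, cases_placebo, n_placebo):
    """Efficacy = 1 - (attack rate in vaccinated arm / attack rate in placebo arm)."""
    attack_rate_vaccinated = cases_vaccinated / n_vaccinated
    attack_rate_placebo = cases_placebo / n_placebo
    return 1 - attack_rate_vaccinated / attack_rate_placebo

# With equal-sized arms of 20,000 people, 8 symptomatic infections among
# the vaccinated versus 86 on placebo works out to roughly 91 percent.
print(round(vaccine_efficacy(8, 20_000, 86, 20_000) * 100))  # prints 91
```

Because only symptomatic cases counted after a fixed post-dose window, the choice of window (seven days for Pfizer, fourteen for the others) changed which infections entered the numerator, not the formula itself.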
It now seemed likely that the antibody response in the elderly would not have reached its peak by that point, though no one knew for certain how high it needed to be to protect against infection. At Pfizer’s request, the FDA made a rare allowance, agreeing to let the company add a fallback measure to its statistical analysis plan, scoring the vaccine at 14 days, like the other companies. In the end, that last-minute change didn’t affect the final result, as the vaccine proved to have more than 90 percent efficacy at preventing symptomatic COVID infections just 11 days after the first dose. The FDA granted it emergency authorization on December 11, 2020. As the first COVID-19 vaccine on the market, Pfizer’s vaccine was deployed to hard-hit nursing homes and senior-living facilities, both in the U.S. and around the world. Pfizer soon became the “hot-person vaccine,” grabbing the largest market share in countries that could afford it. [Read: Pfizer gang is pfinished] What no one could have predicted at the time was just how fleeting Pfizer’s status at the top of the pack would prove. Phase 3 trials of Moderna’s mRNA vaccine produced very similar efficacy numbers; the only hint of a difference was that Moderna’s more potent shot produced more complaints of fever and headaches. Indeed, until the latest observational studies came out, most scientists figured that the two vaccines were equivalent in terms of real-world effectiveness. “It’s surprising that this enormous group of patients already needs a new dose of Pfizer,” said Deborah Steensels, a microbiologist at East Limburg Hospital, in Genk, Belgium, who led the recent study comparing antibody levels generated by the two vaccines. 
If seniors who received the Pfizer vaccine had gotten Moderna instead, she noted, “we might have had an impact on the duration of protection.” Shane Crotty, the immunologist, said that Pfizer’s dosing might not have been the optimal choice for the durability of an individual’s immune response, but that doesn’t mean it was the wrong decision for public health. “The decision process was very much about what’s the fastest we could vaccinate people and be successful,” he said. There were clearly benefits to stretching out the limited mRNA supply during a pandemic. Three doses of Pfizer, at 30 micrograms each, still amount to less material than a single, 100-microgram dose of Moderna. That means more lives saved for every droplet of vaccine. Indeed, Moderna may have erred in the other direction: The company has now asked the FDA to consider a half dose for its own potential booster. As for Pfizer’s Dolsten, he has no doubts about the path his company took last July. “Clearly, you can go with a hotter dose,” he said. “You may get a slightly higher immune response, it may be longer-lasting, but that’s not the right thing to do for a medicine or a vaccine.” He insisted that the company was never racing against Moderna; it was just racing against the virus. “If I could relive that moment,” he told me, “I am absolutely certain it was the right decision.”