In 1988, the World Health Assembly announced a very ambitious goal: Polio was to be vanquished by the year 2000. It was a reach, sure, but feasible. Although highly infectious, polioviruses affect only people, and don’t hide out in wild animals; with two extraordinarily effective vaccines in regular use, it should be possible to snuff them out. Thanks to a global inoculation campaign, infections had, for years, been going down, down, down.

But 2000 came and went, as did a second deadline, in 2005, and a third, in 2012, and so on. The world will almost certainly miss an upcoming target at the end of 2023 too. In theory, eradication is still in sight: The virus remains endemic in just two countries—Pakistan and Afghanistan—and two of the three types of wild poliovirus that once troubled humanity are gone. And yet, polio cases are creeping up in several countries that had eliminated them, including the United Kingdom, Israel, and the United States. Earlier this year, New York detected America’s first paralytic polio case in nearly a decade; last week, the governor declared a state of emergency over a fast-ballooning outbreak.

This is the cruel logic of viruses: Give them enough time—leave enough hosts for them to infect—and they will eventually find a way to spread again. “You have to stop transmission everywhere, all at the same time,” says Kimberly Thompson, a health economist and the president of the nonprofit Kid Risk. Which means eradication will demand a near-perfect syncing of vaccine supply, access, equity, political will, public enthusiasm, and more. To beat the virus, population immunity must outlast it.

Right now, though, the world’s immunological shield is too porous to stop polio’s spread. At the center of the new epidemics are vaccine-derived polioviruses that have begun to paralyze unimmunized people in places where immunity is low—a snag in the eradication campaign that also happens to be tightly linked to one of its most essential tools.
Vaccine performance has always depended on both technology and human behavior. But in this case especially, because of the nature of the foe at hand, those twin pillars must line up as precisely as possible or risk a further backslide into a dangerous past.

In the grand plan for eradication, our two primary polio vaccines were always meant to complement each other. One, an ultra-effective oral formulation, is powerful and long-lasting enough to quash wild-poliovirus transmission—the perfect “workhorse” for a global vaccination campaign, says Adam Lauring, an evolutionary virologist at the University of Michigan. The other, a supersafe injectable, sweeps in after its colleague has halted outbreaks one country at a time, maintaining a high level of immunity in post-elimination nations while the rest of the world catches up.

For decades, this shot-chaser approach found remarkable success. In the 1980s, wild poliovirus struck an estimated 300,000 to 400,000 people each year; by 2021, the numbers had plummeted to single digits. But recently, as vaccine coverage in various countries has stalled or slipped, the loopholes in this vaccination tactic have begun to show themselves and grow.

The oral polio vaccine (OPV), delivered as drops in the mouth, is one of the most effective inoculations in the world’s roster. It contains weakened forms of polioviruses that have been altered away from their paralysis-causing forms but still mimic a wild infection so well that they can stop people from spreading wild pathogens for years, even decades. In the weeks after people receive the vaccine, they can also pass the weakened virus to others in the community, helping protect them too. And OPV’s transportability, low price point, and ease of administration make it a “gold standard for outbreak interruption,” says Ananda Bandyopadhyay, the deputy director for the polio team at the Bill & Melinda Gates Foundation.
Since its mid-20th-century debut, OPV has helped dozens of countries—including the U.S.—eliminate the virus. Those nations were then able to phase out OPV and switch to inoculating people with the injected vaccine.

But OPV’s most potent superpower is also its greatest weakness. Given enough time and opportunity to spread and reproduce, the neutered virus within the vaccine can regain the ability to invade the nervous system and cause paralysis in unvaccinated or immunocompromised people (or, in very, very rare cases, the vaccine recipient themselves). Just a small handful of genetic modifications—three or fewer—can spark a reversion, and the mutants, which are “better at replicating” than their kin, can take over fast, says Raul Andino, a virologist at UC San Francisco.

In recent years, a few thousand cases of vaccine-derived polio have been detected around the world, far outstripping the toll of wild viruses; dozens of countries, the U.S. now among them, are battling such outbreaks, and the numbers seem to be only going up. Vaccine-derived polio is still a true rarity: Billions of doses of oral vaccine have been delivered since the global campaign began. But it underscores “the real problem” with OPV, Lauring told me. “You’re fighting fire with fire.”

The injected polio vaccine, or IPV, which contains only chemically inactivated versions of the virus, carries none of that risk. To purge all polio cases, “you have to stop using oral polio vaccine,” Thompson told me, and transition the entire globe to IPV. (Post-eradication, countries would need to keep IPV in their routine immunization schedule for at least 10 years, experts have said.) But the injected vaccine has a different drawback. Although the shot can very effectively stave off paralysis, IPV doesn’t elicit the kind of immunity that stops people from getting infected with polioviruses and then passing them on.
In places that rely on injected vaccines, “even immune individuals can participate in transmission,” Thompson told me. Which opens up a vulnerability when too many people have skipped both types of vaccines: Paralyzing polioviruses erupt out of communities where the oral vaccine is still in use—then can spread in undervaccinated areas.

It might be tempting to blame OPV for our troubles. But that’s not the main threat, Bandyopadhyay told me. “It’s the lack of adequate vaccination.”

As things stand, the goal in the endemic countries of Pakistan and Afghanistan remains achieving sufficiently high vaccine coverage, Bandyopadhyay said. But many of the communities in these nations are rural or nomadic, and tough to reach even with convenient drop-in-the-mouth vaccines. Civil and political unrest, misinformation, natural disasters, and, most recently, the COVID pandemic have raised additional hurdles. So have intermittent bans on house-to-house vaccination in Afghanistan, says John Vertefeuille, the chief of the polio-eradication branch at the CDC.

Cases of wild polio have recently jumped in Pakistan, and the virus has also been imported into the non-endemic countries of Malawi and Mozambique. But the toll of those outbreaks—all featuring type 1 polio—currently pales in comparison with those featuring vaccine-derived type 2. The last case of wild type 2 polio was detected in 1999, but that version of the virus has persisted in its modified form in oral polio vaccines. And when it reverts to its dangerous form, it gains particularly infectious oomph, allowing it to spread unchecked wherever immunity is low. Some 30 countries around the world are battling outbreaks of poliovirus whose origin can be traced back to the oral inoculations; vaccine-derived type 2 is what’s been circulating in Jerusalem, London, and New York, where it ultimately paralyzed an unvaccinated young man.
The extent to which the virus is churning in other parts of the country isn’t fully known; routine immunization has dropped since the COVID pandemic’s start, and the U.S. hasn’t regularly surveyed its wastewater for the pathogen.

The success of these vaccine-derived viruses is largely the result of our own hubris—of a failure, experts told me, to sync the world’s efforts. In 2016, 17 years after the last wild type-2 case had been seen, officials decided to pivot to a new version of OPV that would protect against just types 1 and 3, a sort of trial run for the eventual obsolescence of OPV. But the move may have been premature. The switch wasn’t coordinated enough; in too many pockets of the world, type-2 polio, from the three-part oral vaccine, was still moseying about. The result was disastrous. “We opened up an immunity gap,” Thompson told me. Into it, fast-mutating vaccine-derived type-2 viruses spilled, surging onto a global landscape populated with growing numbers of children who lacked protection against it.

A new oral vaccine, listed for emergency use by the WHO in 2020, could help get the global campaign back on track. The fresh formulation, developed in part by Andino and his colleagues, still relies on the immunity-boosting powers of weakened, replicating polioviruses. But the pathogens within have had their genetic blueprints further tweaked. “We mucked around” with the structure of poliovirus, Andino told me, and figured out a way to make a modified version of type 2 that’s far stabler. It’s much less likely to mutate away from its domesticated, non-paralyzing state, or to swap genes with related viruses that could grant the same gifts.

Technologically, the new oral vaccine, nicknamed nOPV2, seems to be as close to a slam dunk as immunizations can get. “To me, it’s just super cool,” Lauring told me. “You keep all the good things about OPV but mitigate this evolutionary risk.” In the year and a half since the vaccine’s world premiere, some 450 million doses of nOPV2 have found their way into children in 22 countries—and a whopping zero cases of vaccine-derived paralysis have followed.

But nOPV2 is “not a silver bullet,” Andino said. The vaccine covers just one of the three poliovirus types, which means it can’t yet fully replace the original oral recipe. (Trials for type-1 and -3 versions are ongoing, and even after those recipes are ready for prime time, researchers will have to confirm that the vaccine still works as expected when the three are mixed.) The vaccine’s precise clinical costs are also still a shade unclear. nOPV2 is a safer oral polio vaccine, but it’s still an oral polio vaccine, chock-full of active viral particles. “You can think of it as more attenuated,” Thompson said. “But I don’t think anybody expects that it won’t have any potential to evolve.”

And nOPV2’s existence doesn’t change the fact that the world will still have to undergo a total, coordinated switch to IPV before eradication is won. As has been the case with COVID vaccines, and so many others, the primary problem isn’t the technology at all—but how humans have deployed it, or failed to. “Vaccine sitting in a vial, no matter how genetically stable and how effective it is, that’s not going to solve the problem of the outbreaks,” Bandyopadhyay said. “It’s really vaccination and getting to that last child in that last community.”

If dwindling vaccination trends don’t reverse, even our current vaccination strategies could require a rough reboot. In 2013, health officials in Israel—which had, for years prior, run a successful IPV-only campaign for its children—detected wild type-1 virus, imported from abroad, in the country’s sewage, and decided to roll out another round of oral vaccines to kids under 10.
Within a few weeks, nearly 80 percent of the targeted population had gotten a dose. Even “polio-free countries are not polio-risk-free,” Bandyopadhyay told me.

The situation in New York is different, in part because type-1 polio causes paralysis more often than type-2 does. But should circumstances grow more dire—should substantial outbreaks start elsewhere in the country, should the nation fail to bring IPV coverage back to properly protective levels—America, too, “may have to consider adding OPV as a supplement,” says Purvi Parikh, an immunologist and a physician at NYU, “especially in rural areas” where emergency injected-vaccine campaigns may be tough. Such an approach would be a pretty extreme move, and a “very big political undertaking,” Thompson said, requiring a pivot back to a vaccine that was phased out of use decades ago. And even then, there’s no guarantee that Americans would take the offered oral drops.

The CDC, for now, is not eager for such a change. Noting that most people in the U.S. are vaccinated against polio, Katherina Grusich, an agency spokesperson, told me that the CDC has no plans to add OPV or nOPV to the American regimen. “We are a long way from reaching for that,” she said.

But this week, the U.S. joined the WHO’s list of about 30 nations with circulating vaccine-derived-poliovirus outbreaks. The country could have avoided this unfortunate honor had it kept shot uptake more uniformly high. It’s true, as Grusich pointed out, that more than 90 percent of young American children have received IPV. But those vaccinations are not distributed evenly, which opens up vulnerabilities for the virus to exploit. Here, the U.S., in a sense, had one job: maintain its polio-free status while the rest of the world joined in. That it did not is an admonition, and a reminder of how unmerciful the virus can be.
Polio, a fast mutator, preys on human negligence; the vaccines that guard against it contain both a form of protection and a catch, one that reinforces how risky it can be to treat these tools as discretionary.
At a press briefing earlier this month, Ashish Jha, the White House’s COVID czar, laid out some pretty lofty expectations for America’s immunity this fall. “Millions” of Americans, he said, would be flocking to pharmacies for the newest version of the COVID vaccine in September and October, at the same appointment where they’d get their yearly flu shot. “It’s actually a good idea,” he told the press. “I really believe this is why God gave us two arms.”

That’s how I got immunized last week at my local CVS: COVID shot on the left, flu shot on the right. I spent the next day or so nursing not one but two achy upper arms. Reaching high shelves was hard; putting on deodorant was worse. And it did make me wonder what would have happened if I’d ignored Jha’s teleological advice and gotten both jabs in the same arm. Maybe my annoyance would have been lessened. Or perhaps the same-side shots would have made the soreness in my left arm way worse.

When I posed this puzzle to immunologists, vaccinologists, and pharmacists, I got back a lot of hems and haws. For the millions of Americans who will be getting two-shot appointments by fall’s end, they told me, the choice really does come down to personal preference in the absence of clear data: You’ve just gotta pick a side. Or, you know, two.

On the one hand (sorry), there are the vaccine double-downers. Sallie Permar, a pediatrician at Cornell University, and Stephanie Langel, an immunologist at Duke University, both said they’d probably get both shots in the same shoulder; so would Rishi Goel, an immunologist at the University of Pennsylvania. “Personally, I’d rather have one arm that’s slightly uncomfortable than both,” Goel told me.

On the other hand, we’ve got Team Divide-and-Conquer. Several experts said they’d follow the White House protocol of splitting shots left and right. Ali Ellebedy, an immunologist at Washington University in St. Louis, told me he’d prefer to have two slightly sore arms to one totally dead one.
Jacinda Abdul-Mutakabbir, a pharmacist at Loma Linda University, says she generally recommends that her patients get the vaccines on separate sides “for comfort.” Last year, she opted to get the flu shot and a COVID booster within a few inches of each other, and “I wanted to chop my arm off,” she told me. “Never again.”

The deciding logic here should be pretty intuitive, Permar told me. Two shots on one side might be expected to double how sore that arm will get, though the experience of each vaccine recipient will depend on a bevy of factors, including the ingredients in the shots and that person’s infection and vaccination history, as well as their immune-system health. Also, for people like my husband—who’s prone to very heavy vaccine side effects—the choice may not matter at all. He was so knocked out by the fever and chills that came with his COVID-flu-shot combo, he couldn’t have cared less which arms got the shots.

I dug around for studies examining the consequences of the one-versus-two-arm choice and found only one: a Canadian trial from 2003, which vaccinated a few hundred sixth-graders at two dozen middle schools against group C meningitis and hepatitis B at the same time. Roughly half the kids got both shots in the same arm; the others received one on each side. (Some kids in the latter group requested that their shots be administered by a pair of nurses who could plunge both syringes at the same time.)

Among students in the same-arm group, 18 percent ended up with tenderness at the injection site that they rated “moderate or severe.” But those kids fared better than the ones in the two-arm group, 28 percent of whom experienced moderate or severe tenderness in at least one arm, and 8 percent of whom had it in both arms at the same time.
But those results apply only to that group of kids in that setting, with those two specific vaccines; there’s no telling whether the same trends would be seen with flu shots and COVID shots when given to children or adults. Michela Locci, an immunologist at the University of Pennsylvania, told me she suspects that combining flu and COVID inoculations in the same arm could actually drive extra side effects: “The overall inflammation might be higher,” she said.

Many pediatricians, who often have to administer four or five shots to a baby at once, are habitual splitters. “If there’s more than one vaccine syringe to give to a baby, generally, two legs are used,” Permar told me. (Kids usually upgrade to arm shots sometime in toddlerhood—it’s all about finding a muscle that’s big enough for the needle to hit its mark.) Doctors also have a nerdy reason to split shots between arms or legs. “If there’s a local reaction to the vaccine,” Permar said, “you can identify which vaccine it was if you separate them by space.” (For the record, I had a more painful reaction in my left arm, where I got the COVID shot. Others I’ve spoken with have reported the same disparity.)

The CDC advocates for separating vaccination shots by at least one inch of space. Per the agency, if a COVID shot is being given at the same time as a vaccine “that might be more likely to cause a local injection site reaction,” the shots should be dosed into “different limbs, if possible.” Two types of flu shots cleared for use in people 65 years and older—the high-dose vaccine and the adjuvanted one—fall into that category. But the different-limb advice doesn’t seem to apply to other flu shots, including those cleared for use in younger adults and kids.

However someone ends up taking simultaneous flu and COVID shots, the placement is unlikely to affect how much protection the vaccines provide.
There could be an argument for letting “each side focus on its own thing,” says Gabriel Victora, an immunologist at Rockefeller University. “But it probably doesn’t make a whole lot of difference.” Children routinely get combo vaccines, such as DTaP and MMR, each of which combines multiple disease-fighting ingredients in a single syringe. The triple-threat formulas work just as well as injecting their individual parts. The immune system is used to multitasking: It spends all day being bombarded by microbes, so there’s good reason to believe that with vaccines, too, our body will see simultaneous shots “as independent events,” Goel told me.

Which arm gets picked for which shot, though, will affect where the jab’s contents end up. After a vaccine is injected, its immunity-inducing ingredients meander to the nearest lymph node, such as the ones in the armpits. There, hordes of immune cells fight over the vaccine’s bits, and the fittest and fiercest among them are selected to leave the lymph node and fight. Here, again, doubling up on one arm shouldn’t be an issue, Goel said: The immune-cell boot camps in these lymph nodes have “a good amount of real estate.”

It might even be a good idea to stick the same limb—and thereby, the same lymph node—every time you get another dose of a particular vaccine. After immune cells in a lymph node spot a particular bit of pathogen, some of them march off into battle, but others may hang around like reserve troops, mulling over what they’ve learned. A couple of recent studies, one of them in mice, hint that repeated delivery of the same ingredients to those veteran learners could give the body a slight edge—though the extent of that advantage “might be marginal,” Victora told me. Still, Langel, of Duke, told me jokingly that because she usually gets all of her vaccines in her “non-writing” arm, the lymph node beneath it could now be especially superpowered—a “nice bonus” for her defenses on the whole.
That said, no one should stress too much about getting a shot in the “wrong” arm. “It’s not like you’re immune on the left side and not on the right side,” Goel told me. Immune cells travel throughout the body; there is no midline DMZ. Permar even points out that getting the newly formulated COVID vaccine, which includes new ingredients tailored to fight Omicron subvariants, on the opposite side from the previous rounds could help its ingredients reach a fresher slate of cells. “I think you could convince yourself either way,” she told me.

Which, honestly, leaves me totally at peace with my choice. Apart from arm achiness, I had no other side effects—and in a way, I preferred the symmetry of the one-on-each-side injections.

With all that said, it’s worth briefly acknowledging a third option: Splitting the flu and COVID vaccines into separate visits. I was, before my most recent COVID shot, some 10 months out from my previous dose. But it felt awfully early for my flu shot, which might be better timed for peak protection if taken later in the season. Still, the allure of getting it all over with was too tantalizing, especially because I happen to have a lot of travel up ahead. In the grand scheme of things, the bigger, more important choice was opting into the shots at all.

As someone with dog allergies who nevertheless has been around many dogs as a trainer, a fosterer, and an owner, Candice has learned not to trust the promise of a “hypoallergenic” dog. She’s met low-shedding, hypoallergenic poodles and Portuguese water dogs that supposedly shouldn’t trigger her allergies yet very much did. But she has also met fluffy, longhaired breeds such as huskies and spitzes that set off nary a sneeze. “I’ve had more misery with short-haired dogs,” she told me. That includes her own Belgian Malinois, Fiore, with whom her symptoms got so bad that she started allergy shots.
Fiore’s equally furry full sister Fernando, though? Totally fine. No reaction!

Candice—whose last name I’m not using for medical-privacy reasons—is not alone in discerning no rhyme or reason to which dogs she’s allergic to. In studies, scientists have found no difference in how much of the dog allergen Can f 1 is present in homes with hypoallergenic versus non-hypoallergenic breeds. One study found no difference in the amount of allergen on the fur of different dogs either. Another actually found more allergen on the fur of hypoallergenic breeds. Hypoallergenic doesn’t seem to mean much at all.

“There’s really, truly no completely, 100 percent hypoallergenic dog. Even hairless dogs can make the allergen,” says John James, a spokesperson for the Asthma and Allergy Foundation of America. “It’s really a marketing term,” says David Stukus, an allergist at Nationwide Children’s Hospital and a member of AAFA’s Medical Scientific Council. When I asked several allergists around the country whether perplexed owners ever come in allergic to their expensive, supposedly hypoallergenic dog, their answers were unequivocal: “All the time.”

One of the biggest sources of misinformation on this topic is, in fact, a former U.S. president. “When President Obama was in office, they allegedly had a hypoallergenic dog because their daughter had allergies, and that didn’t help matters,” Stukus told me, referring to the Obamas’ first Portuguese water dog, Bo. “Everybody got Portuguese water dogs.” And—surprise—they can still cause allergies.

Technically, hypoallergenic means that a dog is less likely to cause allergies, not that it never causes allergies, though this distinction is often lost in colloquial use. But even then, there is no such thing as a consistently hypoallergenic breed. That’s because, although breeds that shed less fur or hair are commonly considered hypoallergenic, the fur or hair itself is not what causes allergies.
Rather, it is proteins present in a dog’s dander (small flakes of skin) and saliva. All dogs make these proteins, and all dogs have skin and saliva.

It is true, though, that a person might find one dog less allergenic than another. The studies that couldn’t find a clear pattern of lower allergens in hypoallergenic breeds did find differences among individual dogs of the same breed. And a smaller dog is generally going to shed less dander than a big one. On size alone, “it does make sense that a Chihuahua is less problematic than a Great Dane,” says Richard Lockey, an allergist at the University of South Florida.

Dogs also make a whole suite of proteins that can cause allergies. The best known is Can f 1, although there are seven others. Some people might be more allergic to one of these proteins than another; some dogs might make more of one of these proteins than another. Whether a particular human actually ends up allergic to a particular dog depends on these details—and can’t be predicted from the breed alone.

For this reason, doctors recommend that anyone with allergies spend time with a specific dog before taking it home. “I literally say, ‘Have your child hug them, rub their face on them.’ If nothing happens, that’s a good sign,” Stukus said.

People who are allergic can also develop tolerance to a specific dog over time. Candice, for example, eventually developed a tolerance to her German-shepherd mix, Tesla, despite getting all watery-eyed and sneezy at first. In addition, allergy shots, also called immunotherapy, can help people build up tolerance by gradually increasing exposure to an allergen; Candice eventually resorted to them with Fiore. The inverse of this principle explains the Thanksgiving effect, in which people who leave for college come home suddenly allergic to their childhood pet after not being exposed for a long time.
Nasal steroid sprays and antihistamines such as Claritin and Allegra, which are available over the counter, can also be used to manage allergies these days. That wasn’t always the case, recalls Lockey, who began practicing medicine in the 1960s. Back then, there weren’t good medications for controlling allergies, and he would just tell patients to keep their pets outdoors. “That just doesn’t go anymore,” he told me. Now, few dogs are kept exclusively outdoors, especially in cities. They sleep in our homes and even our beds. As dogs have become physically enmeshed in our lives, dog allergies can no longer be as easily ignored as when the animals lived outside.

The myth of an allergy-free dog persists, though, and Stukus often sees this frustration play out in families with allergic kids. “This is the point that I hear all the time from families: It’s the grandparents,” he told me. Parents might quickly discover that their kids are allergic to “hypoallergenic” dogs. But grandparents, eager for their grandkids to visit, push back because their expensive pet is supposed to be hypoallergenic—“The Obamas had the same dog. It’s fine!”—only for the kids to end up coughing and miserable. He keeps hearing the same lament. “They just don’t understand,” the parents tell him, “that there’s no such thing as a hypoallergenic dog.”

A decade into her optometry career, Marina Su began noticing something unusual about the kids in her New York City practice. More of them were requiring glasses, and at younger and younger ages. Many of these kids had parents with perfect vision who were baffled by the decline in their children’s eyesight. Frankly, Su couldn’t explain it either. In optometry school, she had been taught—as American textbooks had been teaching for decades—that nearsightedness, or myopia, is a genetic condition. Having one parent with myopia doubles the odds that a kid will need glasses.
Having two parents with myopia quintuples them. Over the years, she did indeed diagnose lots of nearsighted kids with nearsighted parents. These parents, she told me, would sigh in recognition: Oh no, not them too. But something was changing. A generation of children was suddenly seeing worse than their parents. Su remembers asking herself, as she saw more and more young patients with bad eyesight that seemed to have come out of nowhere: “If it’s only genetics, then why are these kids also getting myopic?”

What she noticed in her New York office a few years ago has in fact been happening around the world. In East and Southeast Asia, where this shift is most dramatic, the proportion of teenagers and young adults with myopia has jumped from roughly a quarter to more than 80 percent in just over half a century. In China, myopia is so prevalent that it has become a national-security concern: The military is worried about recruiting enough sharp-eyed pilots from among the country’s 1.4 billion people. Recent pandemic lockdowns seem to have made eyesight among Chinese children even worse.

For years, many experts dismissed the rising myopia rates in Asia as an aberration. They argued that Asians are genetically predisposed to myopia and nitpicked the methodology of studies conducted there. But eventually the scope of the problem and the speed of change became impossible to deny. In the U.S., 42 percent of 12-to-54-year-olds were nearsighted in the early 2000s—the last time a national survey of myopia was conducted—up from a quarter in the 1970s. Though more recent large-scale surveys are not available, when I asked eye doctors around the U.S. if they were seeing more nearsighted kids, the answers were: “Absolutely.” “Yes.” “No question about it.”

In Europe as well, young adults are now more likely to need glasses for distance vision than their parents or grandparents are. Some of the lowest rates of myopia are in developing countries in Africa and South America.
But where Asia was once seen as an outlier, it’s now considered a harbinger. If current trends continue, one study estimates, half of the world’s population will be myopic by 2050.

The consequences of this trend are more dire than a surge in bespectacled kids. Nearsighted eyes become prone to serious problems such as glaucoma and retinal detachment in middle age, conditions that can in turn cause permanent blindness. The risks start small but rise exponentially with higher prescriptions. The younger myopia starts, the worse the outlook. In 2019, the American Academy of Ophthalmology convened a task force to recognize myopia as an urgent global-health problem. As Michael Repka, an ophthalmology professor at Johns Hopkins University and the AAO’s medical director for government affairs, told me, “You’re trying to head off an epidemic of blindness that’s decades down the road.”

The cause of this remarkable deterioration in our vision may seem obvious: You need only look around to see countless kids absorbed in phones and tablets and laptops. And you wouldn’t be the first to conclude that staring at something inches from your face is bad for distance vision. Four centuries ago, the German astronomer Johannes Kepler blamed his own poor eyesight, in part, on all the hours he spent studying. British doctors historically found myopia to be much more common among Oxford students than among military recruits, and in “more rigorous” town schools than in rural ones. A late-19th-century ophthalmology handbook even suggested treating myopia with a change of air and avoidance of all work with the eyes—“a sea voyage if possible.”

By the early 20th century, experts were coalescing around the idea that myopia was caused by “near work,” which might include reading and writing—or, these days, watching TV and scrolling through Instagram. In China, officials have become so alarmed that they’ve proposed large-scale social changes to curb myopia in children.
Written exams are now limited before third grade, and video games are restricted. One elementary school reportedly installed metal bars on its desks to prevent kids from leaning in too close to their schoolwork. Spend too much time scrutinizing text or images right in front of you, the logic goes, and your eyes become nearsighted. “Long ago, humans were hunters and gatherers,” says Liandra Jung, an optometrist in the Bay Area. We relied on our sharp distance vision to track prey and find ripe fruit. Now our modern lives are close-up and indoors. “To get food, we forage by getting Uber Eats.” This is a pleasingly intuitive explanation, but it has been surprisingly difficult to prove. “For every study that shows an effect of near work on myopia, there’s another study that doesn’t,” says Thomas Aller, an optometrist in San Bruno, California. Adding up the number of hours spent in front of a book or screen does not seem to explain the onset or progression of nearsightedness. A number of theories have rushed to fill this confusing vacuum. Maybe the data in the studies are wrong—participants didn’t record their hours of near work accurately. Maybe the total duration of near work is less important than whether it’s interrupted by short breaks. Maybe it’s not near work itself that ruins eyes but the fact that it deprives kids of time outdoors. Scientists who argue for the importance of the outdoors are further subdivided into two camps: those who believe that bright sunlight promotes proper eye growth versus those who believe that wide-open spaces do. Something about modern life is destroying our ability to see far away, but what? Asking this question will plunge you into a thicket of scientific rivalries—which is what happened when I asked Christine Wildsoet, an optometry professor at UC Berkeley, about the biological plausibility of these myopia theories. Over the course of two hours, she paused repeatedly to note that the next part was contentious. 
“I’m not sure which controversy we’re up to,” she said at one point. (It was No. 4, and there were still three more to come.) But, she also noted, these theories are essentially two sides of the same coin: Anyone who does too much near work is also not spending much time outside. Whichever theory is true, you can draw the same practical conclusion about what’s best for kids’ vision: less time hunched over screens, more time on outdoor activities. By now, scientists have moved past the faulty assumption that myopia is purely genetic. That idea took hold in the ’60s, when studies of twins showed that identical twins had more similar patterns of myopia than fraternal ones, and persisted in the academic world for decades. DNA does indeed play a role in myopia, but the tricky factor here is that identical twins don’t just share the same genes; they’re exposed to many of the same environmental stimuli, too. Glasses, contacts, and laser surgery all help nearsighted people see better. But none of these fixes corrects the underlying anatomical problem of myopia. Whereas a healthy eye is shaped almost like an orb, a nearsighted one is more like an olive. To slow the progression of myopia, we would have to stop the elongation of the eyeball. Which we already know how to do. Treatments to slow the progression of myopia—called “myopia control” or “myopia management”—exist. They’re just not widely known in America. Over the past two decades, eye doctors—mostly in Asia—have discovered that special lenses and eye drops can slow the progression of nearsightedness in children. Maria Liu, a myopia researcher who grew up in Beijing, told me that she first became interested in nearsightedness as a teenager, when she began watching classmates at her school for gifted children get glasses one by one. In this intensely competitive academic environment, she remembers spending the hours of 6:30 a.m. to 10 p.m. on schoolwork, virtually all indoors. 
By the time she finished university, nearly all of her fellow students needed glasses, and she did too. Years later, when she started an ophthalmology residency in China, she met many young patients who wore orthokeratology lenses—also known as OrthoK—a type of overnight contact lens that temporarily alters the way light enters the eye by reshaping the clear front layer of the eyeball, thus improving vision during the day. Liu noticed, anecdotally, that those who wore OrthoK seemed to have better vision down the line than those who wore glasses. Could long-term use of the lenses somehow prevent elongation of the eye, thus impeding myopia’s progression? It turns out that other scientists and doctors across Asia were noticing the same trend. In 2004, a randomized controlled study of OrthoK in Hong Kong confirmed Liu’s hunch. By then, Liu had moved to the U.S., and she soon began a doctoral program in vision science at Berkeley to study myopia. Her classmates, she recalls, were tackling exotic-sounding topics such as gene therapy and retinal transplants and wondered why she was studying “something that’s so boring.” She ended up working in Wildsoet’s lab, researching the development of myopia in young chick eyes. In humans, the majority of babies are born farsighted. Our eyes start slightly too short, and they grow in childhood to the right length, then stop. This process has been finely calibrated over millions of years of evolution. But when the environmental signals don’t match what the eye has evolved to expect—whether that’s due to too much near work, not enough outdoor time, some combination of the two, or another factor—the eye just keeps growing. This process is irreversible. “You can’t make a longer eyeball shorter,” Liu said. But you can interrupt growth by counteracting these faulty signals, which is what myopia control is designed to do.
When Liu became a professor at Berkeley after receiving her Ph.D., she started envisioning a myopia-control clinic—the first of its kind in the U.S.—that could bridge the gap between research and practice. By then, she knew that many doctors in China were already successfully using OrthoK for myopia control. The school administration was skeptical. Liu says that the clinical director didn’t see how the clinic would benefit optometry students, or how it could attract enough patients to be worthwhile financially. But in 2013, Liu started it anyway, as a one-woman operation. She began seeing patients on Sundays in borrowed exam rooms with no extra pay and without relinquishing any of her teaching or clinical duties. Within months, her schedule was full. The Berkeley Myopia Control Clinic now runs four days a week and has 1,000 active patients—some of whom drive hours through Bay Area traffic to get there. Liu was one of the only people at the school who anticipated the clinic’s massive success. Jung, who is also an assistant clinical professor at Berkeley, told me that Liu’s knowledge of the latest myopia-control treatments made it feel like she came “from the future.” When I arrived at the clinic at 8 a.m. on a Saturday this past spring—an hour at which the rest of the campus was still quiet—it was already filling up with optometry students and residents who work there as part of their training. Liu, who is petite with neat, wavy hair, moved through the clinic with frightful efficiency. One moment she was examining eyes, the next talking down a parent whose son’s contact-lens shipment had gone missing, the next warning staffers about a malfunctioning printer. The clinic offers three different treatments: OrthoK, multifocal soft contact lenses, and atropine eye drops. The first two work by tweaking how light enters the eye, producing a signal for the eyeball to stop lengthening.
Atropine, in contrast, is a drug that seems to chemically alter the growth pathway of the eye when used at low doses. (It also dilates the pupil; Cleopatra reportedly used it to make her eyes more beautiful.) These treatments slow myopia progression on average by about 50 percent. The original clinical trials validating them were mostly conducted in Asia starting in the mid-2000s. And the American Optometric Association’s evidence-based committee published a report advising its members on how to use myopia control last year. Until quite recently, though, none of these treatments had been approved by the FDA for myopia control. Any optometrists who wanted to offer them had to go off label. And any patient who wanted to use them had to find the right doctor. It’s not a coincidence that Liu’s clinic found early success in the Bay Area, which has a large Asian population. Eye doctors I spoke with in multiple cities across the U.S. said it was usually Asian parents who came in asking for myopia control. The parents I met at the clinic skewed Asian and, on that Saturday, particularly Chinese—first-generation immigrants who speak Mandarin seek Liu out on the days she is personally in the clinic. Many of them heard about myopia control from fellow immigrants or friends in Asia. George Tsai, whose 8-year-old son was at the clinic for an OrthoK appointment, told me that his wife, who grew up in China, had learned of myopia control through WeChat, the messaging app popular in the country and among the Chinese diaspora. Liu has a second phone, which she uses to manage three WeChat groups full of parents with kids in myopia control across North America. The questions flood in day and night. “First thing in the morning, I look at this WeChat group. Who has lost a lens? Who has red eyes? Who has other problems?” she said. “And again, before I go to bed.” She started the first group with a parent of one of her patients.
When it hit the maximum number of members allowed on WeChat, they created a second, and then a third. The groups now contain a total of 1,500 parents. In general, Liu told me, Asian parents tend to be a lot more motivated because myopia “is much better perceived or accepted as a disease in Asian culture.” I know this firsthand, as the child of Chinese immigrants. Distressed about my worsening vision in elementary school, my mother would regularly admonish me, standing my pencil case upright to measure the distance between my head and my desk. She also made me do eye exercises developed in China, which I was vindicated to finally learn, in the course of reporting this story, do not work. This was the late ’90s, when there really was nothing to be done about myopia progression. But in the parents I met at the Berkeley clinic, I saw the same determination I once saw in my own. They had uprooted their lives and come to a foreign country and now here they were, hoping to bestow upon their kids any advantage, any edge that modern science could give. There is another reason that the Bay Area, with its high median income, has been fertile ground for myopia control: The treatments are expensive. Many of the parents I met at the clinic were engineers or doctors. At Berkeley, OrthoK costs more than $450 for one pair of lenses, plus $1,600 for the initial fitting, not including the fees for several follow-up appointments a year. Soft contact lenses can run from several hundred to more than $1,000 a year. And a year’s supply of atropine eye drops costs hundreds of dollars. Kids are typically in myopia control until their mid-teens to early 20s. Vision insurance does not cover any of these treatments. Multinational eye-care companies now see myopia control as a hot potential market. They’re vying for FDA approval of new lenses and improved formulations of atropine, which can be patented rather than sold as a cheaper generic. 
The business case is obvious: If half of the world is myopic by 2050, that’s a huge pool of would-be customers. “How often do you have an opportunity to have an impact on a condition that will affect one out of two people? There’s nothing else on the planet that I’m aware of,” says Joe Rappon, the former chief medical officer of SightGlass Vision, a small California company whose myopia-control technology was jointly acquired by the eye-care giants CooperVision and Essilor. In November 2019, the FDA green-lighted the first—and currently only—treatment specifically designed to slow the progression of myopia in the U.S., a soft contact lens from CooperVision called MiSight. Many more treatments, though, are in trials in the U.S., including several types of spectacles that tweak the way light enters the eye in order to slow its growth. Some are already on the market in Europe and Canada. Once those glasses get approved in the U.S., “that’s going to open the floodgates of myopia management,” Barry Eiden, an optometrist in Deerfield, Illinois, told me. The earlier you can start slowing myopia progression in kids, the better the outcome, he explained, but parents sometimes balk at the idea of putting drugs or contacts into the eyes of their young children. They don’t have the same problem with glasses. In the future, Liu told me, she hopes FDA approvals will spur vision insurance to cover myopia control at least partially, making the treatments affordable to more parents. Meanwhile, CooperVision has already revved up its MiSight marketing machine. It’s targeting exactly the parents you would expect: In my own Brooklyn neighborhood of Park Slope, where you regularly see toddlers in $1,000-plus Uppababy strollers, an optometry shop recently hung a big banner advertising MiSight with two smiling kids. An optometrist in downtown San Francisco told me that parents who have seen MiSight’s ads are now coming into her office asking for it by name. 
The word-of-mouth era of myopia control is ending; the mass-advertising era is beginning. Within the optometry business, myopia control often gets compared to braces—another treatment for which middle- and upper-class parents who want the best for their kids will dutifully shell out thousands of dollars. This comparison feels apt in a different way, too. Braces are also a modern solution to a relatively modern affliction. The teeth of cavemen, anthropologists have marveled, were incredibly straight. Crooked teeth appear in the archaeological record only when our ancestors transitioned from chewing raw meat and vegetables to eating cooked and processed grains. Our jaws are now smaller and weaker from disuse, our teeth more crowded and crooked. Today, braces are the way we retrofit our ill-adapted bodies for contemporary life. We may not know exactly how ogling screens all day and spending so much time indoors are affecting us, or which is doing more damage, but we do know that myopia is a clear consequence of living at odds with our biology. The optometrists I spoke with all said they try to push better vision habits, such as limiting screen time and playing outside. But this only goes so far. Today, taking a phone away from a teenager may be no more practical than feeding a toddler a raw hunter-gatherer diet. So this is where we’ve ended up, for those of us who can even afford it: adding chemicals and putting pieces of plastic in our eyes every day, in hopes of tricking them back to their natural state. This article appears in the October 2022 print edition with the headline “The Myopia Generation.” On March 25, 2020, Hannah Davis was texting with two friends when she realized that she couldn’t understand one of their messages. In hindsight, that was the first sign that she had COVID-19.
It was also her first experience with the phenomenon known as “brain fog,” and the moment when her old life contracted into her current one. She once worked in artificial intelligence and analyzed complex systems without hesitation, but now “runs into a mental wall” when faced with tasks as simple as filling out forms. Her memory, once vivid, feels frayed and fleeting. Former mundanities—buying food, making meals, cleaning up—can be agonizingly difficult. Her inner world—what she calls “the extras of thinking, like daydreaming, making plans, imagining”—is gone. The fog “is so encompassing,” she told me, “it affects every area of my life.” For more than 900 days, while other long-COVID symptoms have waxed and waned, her brain fog has never really lifted. Of long COVID’s many possible symptoms, brain fog “is by far one of the most disabling and destructive,” Emma Ladds, a primary-care specialist from the University of Oxford, told me. It’s also among the most misunderstood. It wasn’t even included in the list of possible COVID symptoms when the coronavirus pandemic first began. But 20 to 30 percent of patients report brain fog three months after their initial infection, as do 65 to 85 percent of the long-haulers who stay sick for much longer. It can afflict people who were never ill enough to need a ventilator—or any hospital care. And it can affect young people in the prime of their mental lives. Long-haulers with brain fog say that it’s like none of the things that people—including many medical professionals—jeeringly compare it to. It is more profound than the clouded thinking that accompanies hangovers, stress, or fatigue. It is not ADHD, which Davis was once diagnosed with. It is not psychosomatic, and involves real changes to the structure and chemistry of the brain. 
It is not a mood disorder: “If anyone is saying that this is due to depression and anxiety, they have no basis for that, and data suggest it might be the other direction,” Joanna Hellmuth, a neurologist at UC San Francisco, told me. And despite its nebulous name, brain fog is not an umbrella term for every possible mental problem. At its core, Hellmuth said, it is almost always a disorder of “executive function”—the set of mental abilities that includes focusing attention, holding information in mind, and blocking out distractions. These skills are so foundational that when they crumble, much of a person’s cognitive edifice collapses. Anything involving concentration, multitasking, and planning—that is, almost everything important—becomes absurdly arduous. “It raises what are unconscious processes for healthy people to the level of conscious decision making,” Fiona Robertson, a writer based in Aberdeen, Scotland, told me. For example, Robertson’s brain often loses focus mid-sentence, leading to what she jokingly calls “so-yeah syndrome”: “I forget what I’m saying, tail off, and go, ‘So, yeah …’” she said. Brain fog stopped Kristen Tjaden from driving, because she’d forget her destination en route. For more than a year, she couldn’t read, either, because making sense of a series of words had become too difficult. Angela Meriquez Vázquez told me it once took her two hours to schedule a meeting over email: She’d check her calendar, but the information would slip in the second it took to bring up her inbox. At her worst, she couldn’t unload a dishwasher, because identifying an object, remembering where it should go, and putting it there was too complicated. Memory suffers, too, but in a different way from degenerative conditions like Alzheimer’s. The memories are there, but with executive function malfunctioning, the brain neither chooses the important things to store nor retrieves that information efficiently. 
Davis, who is part of the Patient-Led Research Collaborative, can remember facts from scientific papers, but not events. When she thinks of her loved ones, or her old life, they feel distant. “Moments that affected me don’t feel like they’re part of me anymore,” she said. “It feels like I am a void and I’m living in a void.” Most people with brain fog are not so severely affected, and gradually improve with time. But even when people recover enough to work, they can struggle with minds that are less nimble than before. “We’re used to driving a sports car, and now we are left with a jalopy,” Vázquez said. In some professions, a jalopy won’t cut it. “I’ve had surgeons who can’t go back to surgery, because they need their executive function,” Monica Verduzco-Gutierrez, a rehabilitation specialist at UT Health San Antonio, told me. Robertson, meanwhile, was studying theoretical physics in college when she first got sick, and her fog occluded a career path that was once brightly lit. “I used to sparkle, like I could pull these things together and start to see how the universe works,” she told me. “I’ve never been able to access that sensation again, and I miss it, every day, like an ache.” That loss of identity was as disruptive as the physical aspects of the disease, which “I always thought I could deal with … if I could just think properly,” Robertson said. “This is the thing that’s destabilized me most.” In March 2020, Robertson predicted that the pandemic would trigger a wave of cognitive impairment. Her brain fog began two decades earlier, likely with a different viral illness, but she developed the same executive-function impairments that long-haulers experience, which then worsened when she got COVID last year. That specific constellation of problems also befalls many people living with HIV, people with epilepsy after seizures, cancer patients experiencing so-called chemo brain, and people with several complex chronic illnesses such as fibromyalgia.
It’s part of the diagnostic criteria for myalgic encephalomyelitis, also known as chronic fatigue syndrome, or ME/CFS—a condition that Davis and many other long-haulers now have. Brain fog existed well before COVID, affecting many people whose conditions were stigmatized, dismissed, or neglected. “For all of those years, people just treated it like it’s not worth researching,” Robertson told me. “So many of us were told, Oh, it’s just a bit of a depression.” Several clinicians I spoke with argued that the term brain fog makes the condition sound like a temporary inconvenience and deprives patients of the legitimacy that more medicalized language like cognitive impairment would bestow. But Aparna Nair, a historian of disability at the University of Oklahoma, noted that disability communities have used the term for decades, and there are many other reasons behind brain fog’s dismissal beyond terminology. (A surfeit of syllables didn’t stop fibromyalgia and myalgic encephalomyelitis from being trivialized.) For example, Hellmuth noted that in her field of cognitive neurology, “virtually all the infrastructure and teaching” centers on degenerative diseases like Alzheimer’s, in which rogue proteins afflict elderly brains. Few researchers know that viruses can cause cognitive disorders in younger people, so few study their effects. “As a result, no one learns about it in medical school,” Hellmuth said. And because “there’s not a lot of humility in medicine, people end up blaming patients instead of looking for answers,” she said. People with brain fog also excel at hiding it: None of the long-haulers I’ve interviewed sounded cognitively impaired. But at times when her speech is obviously sluggish, “nobody except my husband and mother see me,” Robertson said. 
The stigma that long-haulers experience also motivates them to present as normal in social situations or doctor appointments, which compounds the mistaken sense that they’re less impaired than they claim—and can be debilitatingly draining. “They’ll do what is asked of them when you’re testing them, and your results will say they were normal,” David Putrino, who leads a long-COVID rehabilitation clinic at Mount Sinai, told me. “It’s only if you check in on them two days later that you’ll see you’ve wrecked them for a week.” “We also don’t have the right tools for measuring brain fog,” Putrino said. Doctors often use the Montreal Cognitive Assessment, which was designed to uncover extreme mental problems in elderly people with dementia, and “isn’t validated for anyone under age 55,” Hellmuth told me. Even a person with severe brain fog can ace it. More sophisticated tests exist, but they still compare people with the population average rather than their previous baseline. “A high-functioning person with a decline in their abilities who falls within the normal range is told they don’t have a problem,” Hellmuth said. This pattern exists for many long-COVID symptoms: Doctors order inappropriate or overly simplistic tests, whose negative results are used to discredit patients’ genuine symptoms. It doesn’t help that brain fog (and long COVID more generally) disproportionately affects women, who have a long history of being labeled as emotional or hysterical by the medical establishment. But every patient with brain fog “tells me the exact same story of executive-function symptoms,” Hellmuth said. “If people were making this up, the clinical narrative wouldn’t be the same.” Earlier this year, a team of British researchers rendered the invisible nature of brain fog in the stark black-and-white imagery of MRI scans. 
Gwenaëlle Douaud at the University of Oxford and her colleagues analyzed data from the UK Biobank study, which had regularly scanned the brains of hundreds of volunteers for years prior to the pandemic. When some of those volunteers caught COVID, the team could compare their after scans to the before ones. They found that even mild infections can slightly shrink the brain and reduce the thickness of its neuron-rich gray matter. At their worst, these changes were comparable to a decade of aging. They were especially pronounced in areas such as the parahippocampal gyrus, which is important for encoding and retrieving memories, and the orbitofrontal cortex, which is important for executive function. They were still apparent in people who hadn’t been hospitalized. And they were accompanied by cognitive problems. Although SARS-CoV-2, the coronavirus that causes COVID, can enter and infect the central nervous system, it doesn’t do so efficiently, persistently, or frequently, Michelle Monje, a neuro-oncologist at Stanford, told me. Instead, she thinks that in most cases the virus harms the brain without directly infecting it. She and her colleagues recently showed that when mice experience mild bouts of COVID, inflammatory chemicals can travel from the lungs to the brain, where they disrupt cells called microglia. Normally, microglia act as groundskeepers, supporting neurons by pruning unnecessary connections and cleaning unwanted debris. When inflamed, their efforts become overenthusiastic and destructive. In their presence, the hippocampus—a region crucial for memory—produces fewer fresh neurons, while many existing neurons lose their insulating coats, so electric signals now course along these cells more slowly. These are the same changes that Monje sees in cancer patients with “chemo fog.” And although she and her team did their COVID experiments in mice, they found high levels of the same inflammatory chemicals in long-haulers with brain fog. 
Monje suspects that neuro-inflammation is “probably the most common way” that COVID triggers brain fog, but that there are likely many such routes. COVID could possibly trigger autoimmune problems in which the immune system mistakenly attacks the nervous system, or reactivate dormant viruses such as Epstein-Barr virus, which has been linked to conditions including ME/CFS and multiple sclerosis. By damaging blood vessels and filling them with small clots, COVID also throttles the brain’s blood supply, depriving this most energetically demanding of organs of oxygen and fuel. This oxygen shortfall isn’t stark enough to kill neurons or send people to an ICU, but “the brain isn’t getting what it needs to fire on all cylinders,” Putrino told me. (The severe oxygen deprivation that forces some people with COVID into critical care causes different cognitive problems than what most long-haulers experience.) None of these explanations is set in stone, but they can collectively make sense of brain fog’s features. A lack of oxygen would affect sophisticated and energy-dependent cognitive tasks first, which explains why executive function and language “are the first ones to go,” Putrino said. Without insulating coats, neurons work more slowly, which explains why many long-haulers feel that their processing speed is shot: “You’re losing the thing that facilitates fast neural connection between brain regions,” Monje said. These problems can be exacerbated or mitigated by factors such as sleep and rest, which explains why many people with brain fog have good days and bad days. And although other respiratory viruses can wreak inflammatory havoc on the brain, SARS-CoV-2 does so more potently than, say, influenza, which explains both why people such as Robertson developed brain fog long before the current pandemic and why the symptom is especially prominent among COVID long-haulers.
Perhaps the most important implication of this emerging science is that brain fog is “potentially reversible,” Monje said. If the symptom was the work of a persistent brain infection, or the mass death of neurons following severe oxygen starvation, it would be hard to undo. But neuroinflammation isn’t destiny. Cancer researchers, for example, have developed drugs that can calm berserk microglia in mice and restore their cognitive abilities; some are being tested in early clinical trials. “I’m hopeful that we’ll find the same to be true in COVID,” she said. Biomedical advances might take years to arrive, but long-haulers need help with brain fog now. Absent cures, most approaches to treatment are about helping people manage their symptoms. Sounder sleep, healthy eating, and other generic lifestyle changes can make the condition more tolerable. Breathing and relaxation techniques can help people through bad flare-ups; speech therapy can help those with problems finding words. Some over-the-counter medications such as antihistamines can ease inflammatory symptoms, while stimulants can boost lagging concentration. “Some people spontaneously recover back to baseline,” Hellmuth told me, “but two and a half years on, a lot of patients I see are no better.” And between these extremes lies perhaps the largest group of long-haulers—those whose brain fog has improved but not vanished, and who can “maintain a relatively normal life, but only after making serious accommodations,” Putrino said. Long recovery periods and a slew of lifehacks make regular living possible, but more slowly and at higher cost. Kristen Tjaden can read again, albeit for short bursts followed by long rests, but hasn’t returned to work. Angela Meriquez Vázquez can work but can’t multitask or process meetings in real time. 
Julia Moore Vogel, who helps lead a large biomedical research program, can muster enough executive function for her job, but “almost everything else in my life I’ve cut out to make room for that,” she told me. “I only leave the house or socialize once a week.” And she rarely talks about these problems openly because “in my field, your brain is your currency,” she said. “I know my value in many people’s eyes will be diminished by knowing that I have these cognitive challenges.” Patients struggle to make peace with how much they’ve changed and the stigma associated with it, regardless of where they end up. Their desperation to return to normal can be dangerous, especially when combined with cultural norms around pressing on through challenges and post-exertional malaise—severe crashes in which all symptoms worsen after even minor physical or mental exertion. Many long-haulers try to push themselves back to work and instead “push themselves into a crash,” Robertson told me. When she tried to force her way to normalcy, she became mostly housebound for a year, needing full-time care. Even now, if she tries to concentrate in the middle of a bad day, “I end up with a physical reaction of exhaustion and pain, like I’ve run a marathon,” she said. Post-exertional malaise is so common among long-haulers that “exercise as a treatment is inappropriate for people with long COVID,” Putrino said. Even brain-training games—which have questionable value but are often mentioned as potential treatments for brain fog—must be very carefully rationed because mental exertion is physical exertion. People with ME/CFS learned this lesson the hard way, and fought hard to get exercise therapy, once commonly prescribed for the condition, to be removed from official guidance in the U.S. and U.K. They’ve also learned the value of pacing—carefully sensing and managing their energy levels to avoid crashes. 
Vogel does this with a wearable that tracks her heart rate, sleep, activity, and stress as a proxy for her energy levels; if they feel low, she forces herself to rest, cognitively as well as physically. Checking social media or responding to emails does not count. In those moments, “you have to accept that you have this medical crisis and the best thing you can do is literally nothing,” she said. When stuck in a fog, sometimes the only option is to stand still.

School is in session, pumpkin spice is in season, and Americans are heading to pharmacies for what may soon become another autumn standby: your annual COVID shot. On Tuesday, the White House announced the start of a “new phase” of the pandemic response, one in which “most Americans” will receive a COVID-19 vaccine just “once a year, each fall.” In other words, your pandemic booster is about to become as routine as your physical exam or—more to the point—your flu shot. One more health-related task has been added to your calendar, and it’s likely to remain there for the rest of your life. From a certain standpoint, this regimen makes a lot of sense. The pandemic’s biggest surges so far have come in the winter, and a fall booster could go a long way toward mitigating the next of those surges. What’s more, the new plan greatly simplifies COVID-vaccination regimens, both for the public and for providers. “It has been bewildering in many cases to understand who is eligible for a booster, how many boosters, when, which boosters, how far apart,” Jason Schwartz, a vaccine-policy expert at Yale, told me. “I think that has held down booster uptake in some really discouraging ways.” In a sense, White House COVID-19 Response Coordinator Ashish Jha told me, the new plan just codifies the way things already worked: The last time low-risk Americans became eligible for another shot was last fall.
(The elderly and immunocompromised have operated on a different schedule and will likely continue to do so, Jha said.) Still, some public-health experts worry that the White House is jumping the gun. Back in April, a number of them told Stat News’s Helen Branswell they were concerned that the U.S. would adopt such a policy without the data needed to support it. When the White House made its announcement on Tuesday, many felt their concerns had been vindicated. “We’ve had twists and turns and surprises every single step of the way with COVID, and the idea that we’re going to have one shot and then we’re done is not really consistent with how things have worked in the past,” Walid Gellad, a professor at the University of Pittsburgh School of Medicine, told me. The plan, in his view, glosses over considerable uncertainties. [Read: America’s fall booster plan has a fatal paradox] For one thing, it assumes that the virus will follow an annual schedule with peaks in the fall and winter—not unlikely, but also not a given. For another, we still don’t have a firm grasp on the magnitude or duration of the benefits offered by the new Omicron-specific vaccine. For all we know, Gellad told me, the added protection afforded to someone who gets the shot tomorrow may have largely dissipated by New Year’s Eve. And that’s not to mention the massive uncertainty presented by the specter of future variants. In a briefing Tuesday, Jha acknowledged that “new variant curveballs” could change the government’s plans. But the announcement itself includes no such caveats, which some public-health experts worry could cause problems if course corrections are needed down the line. For all we know, new variants could necessitate more frequent updates, or, if viral mutation slows, we might not even need annual shots, Paul Thomas, an immunologist at St. Jude Children’s Research Hospital, in Tennessee, told me. If the routine the White House describes sounds a lot like flu shots, that’s no accident. 
The announcement explicitly recommends that COVID vaccines be taken between Labor Day and Halloween—“just like your annual flu shot.” That comparison, though, is part of what concerns critics, who worry that the shift into a more flu-like framework will entail the adoption of a vaccines-only approach to COVID prevention. Many of the interventions that have proved so effective over the past two and a half years—masking, distancing, widespread testing—have not traditionally been a major part of our flu-season protocols. If we treat COVID like flu, the thinking goes, such interventions risk falling even further by the wayside. The announcement, which makes no mention of any other prevention tactics, doesn’t offer much reassurance to the contrary. [Read: A simple rule for planning your fall booster shot] But that reading, Jha told me, is “just clearly wrong.” Although vaccines are “the central pillar of our strategy,” he said, testing, masking, and improving indoor air quality are all important as well. But as my colleague Katherine Wu has written, the country has been relying more and more on vaccines—and less and less on the other interventions at our disposal—for some time. Even if you do read the new policy as an abnegation of masking, ventilation, and the like, it may not functionally be much of a departure from the status quo. For now, Thomas said, the White House’s plan makes sense—as long as it stays sensitive to changing circumstances. “We keep learning new things about this virus,” he told me. “The rate of mutation is changing. The spread through the population is changing.” And as such, he said, our response must be flexible. The White House announcement seems like a good-faith attempt to balance competing priorities: on the one hand, the need to communicate uncertainty and acknowledge complexity; on the other, the need to keep the message from getting so complex that it confuses people to the point they tune it out entirely.
In this case, the administration seems to have come down on the side of simplicity. That could be a mistake, Gellad says—one that public-health authorities have made over and over throughout the pandemic. “When you try and make things simple and understandable and present them without sufficient uncertainty,” he told me, “you get into trouble when things change.”

Sometime in the spring of 2020, after centuries, perhaps millennia, of tumultuous coexistence with humans, influenza abruptly went dark. Around the globe, documented cases of the viral infection completely cratered as the world tried to counteract SARS-CoV-2. This time last year, American experts began to fret that the flu’s unprecedented sabbatical was too bizarre to last: Perhaps the group of viruses that cause the disease would be poised for an epic comeback, slamming us with “a little more punch” than usual, Richard Webby, an influenza expert at St. Jude Children’s Research Hospital, in Tennessee, told me at the time. But those fears did not come to pass. Flu’s winter 2021 season in the Southern Hemisphere was once again eerily silent; in the north, cases sneaked up in December—only to peter out before a lackluster reprise in the spring. Now, as the weather once again chills in this hemisphere and the winter holidays loom, experts are nervously looking ahead. After skipping two seasons in the Southern Hemisphere, flu spent 2022 hopping across the planet’s lower half with more fervor than it’s had since the COVID crisis began. And of the three years of the pandemic that have played out so far, this one is previewing the strongest signs yet of a rough flu season ahead.
[Read: The pandemic broke the flu] That does not bode terribly well for those of us up north. The same viruses that seed outbreaks in the south tend to be the ones that sprout epidemics here as the seasons do their annual flip. “I take the south as an indicator,” says Seema Lakdawala, a flu-transmission expert at Emory University. And should flu return here, too, with a vengeance, it will collide with a population that hasn’t seen its likes in years, and is already trying to marshal responses to several dangerous pathogens at once. The worst-case scenario won’t necessarily pan out. What goes on below the equator is never a perfect predictor for what will occur above it: Even during peacetime, “we’re pretty bad in terms of predicting what a flu season is going to look like,” Webby, of St. Jude, told me. COVID, and the world’s responses to it, have put experts’ few forecasting tools further on the fritz. But the south’s experiences can still be telling. In South Africa and Australia, for instance, many COVID-mitigation measures, such as universal masking recommendations and post-travel quarantines, lifted as winter arrived, allowing a glut of respiratory viruses to percolate through the population. The flu flood also began after two essentially flu-less years—which is a good thing at face value, but also represents many months of missed opportunities to refresh people’s anti-flu defenses, leaving them more vulnerable at the season’s start. Some of the same factors are working against those of us north of the equator, perhaps to an even greater degree. Here, too, the population is starting at a lower defensive baseline against flu—especially young children, many of whom have never tussled with the viruses. It’s “very, very likely” that kids may end up disproportionately hit, Webby said, as they appear to have been in Australia—though Subbarao notes that this trend may have been driven by more cautious behaviors among older populations, skewing illness younger. 
Interest in inoculations has also dropped during the pandemic: After more than a year of calls for booster after booster, “people have a lot of fatigue,” says Helen Chu, a physician and flu expert at the University of Washington, and that exhaustion may be driving already low interest in flu shots even further down. (During good years, flu-shot uptake in the U.S. peaks around 50 percent.) And the few protections against viruses that were still in place last winter have now almost entirely vanished. In particular, schools—a fixture of flu transmission—have loosened up enormously since last year. There’s also just “much more flu around,” all over the global map, Webby said. With international travel back in full swing, the viruses will get that many more chances to hopscotch across borders and ignite an outbreak. And should such an epidemic emerge, with its health infrastructure already under strain from simultaneous outbreaks of COVID, monkeypox, and polio, America may not handle another addition well. “Overall,” Chu told me, “we are not well prepared.” [Read: The most important vaccine I’ll get this fall] At the same time, though, countries around the world have taken such different approaches to COVID mitigation that the pandemic may have further uncoupled their flu-season fate. Australia’s experience with the flu, for instance, started, peaked, and ended early this year; the new arrival of more relaxed travel policies likely played a role in the outbreak’s beginning, before a mid-year BA.5 surge potentially hastened the sudden drop. It’s also very unclear whether the U.S. may be better or worse off because its last flu season was wimpy, weirdly shaped, and unusually late. South Africa saw an atypical summer bump in flu activity as well; those infections may have left behind a fresh dusting of immunity and blunted the severity of the following season, Cohen told me. But it’s always hard to tell. 
“I was quite strong in saying that I really believed that South Africa was going to have a severe season,” she said. “And it seems that I was wrong.” The long summer tail of the Northern Hemisphere’s most recent flu season could also exacerbate the intensity of the coming winter season, says John McCauley, the director of the Worldwide Influenza Centre at the Francis Crick Institute, in London. Kept going in their off-season, the viruses may have an easier vantage point from which to reemerge this winter. COVID’s crush has shifted flu dynamics on the whole as well. The pandemic “squeezed out” a lot of diversity from the influenza-virus population, Webby told me; some lineages may have even entirely blipped out. But others could also still be stewing and mutating, potentially in animals or unmonitored pockets of the world. That these strains—which harbor especially large pandemic potential—could emerge into the general population is “my bigger concern,” Lakdawala, of Emory, told me. And although the particular strains of flu that are circulating most avidly seem reasonably well matched to this year’s vaccines, the dominant strains that attack the north could yet shift, says Florian Krammer, a flu virologist at Mount Sinai’s Icahn School of Medicine. Viruses also tend to wobble and hop when they return from long vacations; it may take a season or two before the flu finds its usual rhythm. Another epic SARS-CoV-2 variant could also quash a would-be influenza peak. Flu cases rose at the end of 2021, and the dreaded “twindemic” loomed. But then, Omicron hit—and flu “basically disappeared for one and a half months,” Krammer told me, only tiptoeing back onto the scene after COVID cases dropped. 
Some experts suspect that the immune system may have played a role in this tag-team act: Although co-infections or sequential infections of SARS-CoV-2 and flu viruses are possible, the aggressive spread of a new coronavirus variant may have set people’s defenses on high alert, making it that much harder for another pathogen to gain a foothold. [Read: Don’t worry, it’s not the flu] No matter the odds we enter flu season with, human behavior can still alter winter’s course. One of the main reasons that flu viruses have been so absent the past few years is that mitigation measures have kept them at bay. “People understand transmission more than they ever did before,” Lakdawala told me. Subbarao thinks COVID wisdom is what helped keep Australian flu deaths down, despite the gargantuan swell in cases: Older people took note of the actions that thwarted the coronavirus and applied those same lessons to flu. Perhaps populations across the Northern Hemisphere will act in similar ways. “I would hope that we’ve actually learned how to deal with infectious disease more seriously,” McCauley told me. But Webby isn’t sure that he’s optimistic. “People have had enough hearing about viruses in general,” he told me. Flu, unfortunately, does not feel similarly about us.

When I heard that my patient was back in the ICU, my heart sank. But I wasn’t surprised. Her paycheck usually runs short at the end of the month, so her insulin does too. As she stretches her supply, her blood sugar climbs. Soon the insatiable thirst and constant urination follow. And once her keto acids build up, her stomach pains and vomiting start. She always manages to make it to the hospital before the damage reaches her brain and heart. But we both worry that someday, she won’t. The Inflation Reduction Act, passed last month, aims to help people like her by lowering the cost of insulin across America.
Although efforts to expand protections to privately insured Americans were blocked in the Senate, Democrats succeeded in capping expenses for the drug among Americans on Medicare at $35 a month, offering meaningful savings for our seniors, some of whom will save hundreds of dollars a month thanks to the measure. In theory, the policy (and similar ones at the state level) will help the estimated 25 percent of Americans on insulin who have been forced to ration the drug because of cost, and will prevent some of the 600 annual American deaths from diabetic ketoacidosis, the fate from which I’m trying to save my patient. Indeed, laws capping co-payments for insulin are welcome news both financially and medically to patients who depend on the drug for survival. However, in their current version, such laws might backfire, leading to even more diabetes-related deaths overall. How could that be true? Thanks to the development of new drugs, insulin’s role in diabetes treatment has been declining over the past decade. It remains essential to the small percentage of patients with type 1 diabetes, including my patient. But for the 90 percent of Americans with diabetes who have type 2, it should not routinely be the first-, second-, or even third-line treatment. The reasons for this are many: Of all diabetes medications, insulin carries the highest risk of causing dangerously low blood sugar. The medication most commonly comes in injectable form, so administering it usually means painful needle jabs. All of this effort is rewarded with (usually unwanted) weight gain. Finally, and most important, although insulin is excellent at tamping down high blood sugar—the hallmark of diabetes and the driver of some of its complications—it is not as impressive as other medications at mitigating the most deadly and debilitating consequences of the disease: heart attacks, kidney disease, and heart failure.
[Read: People are clamoring to buy old insulin pumps] Large clinical trials have shown that two newer classes of diabetes medicines, SGLT2 inhibitors and GLP-1 receptor agonists, outperform alternatives (including insulin) in reducing the risk of these disabling or deadly outcomes. Giving patients these drugs instead of older options over a period of three years prevents, on average, one death for about every 100 treated. And SGLT2 inhibitors and GLP-1 receptor agonists pose less risk of causing dangerously low blood sugar, generally do not require frequent injections, and help patients lose weight. Based on these data, the American Diabetes Association now recommends SGLT2 inhibitors and GLP-1 receptor agonists be used before insulin for most patients with type 2 diabetes. When a young person dies from diabetic ketoacidosis because they rationed insulin, the culprit is clear. But when a patient with diabetes dies of a heart attack, the absence of an SGLT2 inhibitor or GLP-1 receptor agonist doesn’t get blamed, because other explanations abound: their uncontrolled blood pressure, the cholesterol medication they didn’t take, the cigarettes they continued to smoke, bad genes, bad luck. But every year, more than 1,000 times more Americans die of heart disease than DKA, and of those 700,000 deaths, a good chunk are diabetes-related. (The exact number remains murky.) Diabetes is a major reason that more than half a million Americans depend on dialysis to manage their end-stage kidney disease, and that about 6 million live with congestive heart failure. The data are clear: SGLT2 inhibitors and GLP-1 receptor agonists could help reduce these numbers. Still, uptake of these lifesaving drugs is sluggish. Only about one in 10 people with type 2 diabetes is taking them (fewer still among patients who are not wealthy or white). The main cause is simple and stupid: American laws prioritize profits and patents over patients.
Because SGLT2 inhibitors and GLP-1 receptor agonists remain under patent protections, drug companies can charge exorbitant rates for them: hundreds if not thousands of dollars a month, sometimes even more than insulin. Doctors spend hours completing arduous paperwork in the hopes of persuading insurers to help our patients, but we’re frequently denied anyway. And even when we do succeed, many patients are left with painful co-payments and deductibles. The most maddening part is that despite their substantial up-front expense, these medications are quite cost-effective in the long run because they prevent pricey complications down the road. [Read: The risks of over-the-counter diabetes treatments] This is where addressing the cost of insulin—and only insulin—becomes problematic. Doctors are forced daily to decide between the best medication for our patients and the medication that our patients can afford. Katie Shaw, a primary-care physician with a bustling practice at Johns Hopkins, where I’m a senior resident, told me that plenty of her patients can’t afford SGLT2 inhibitors and GLP-1 receptor agonists. In such instances, Shaw is forced to use older oral alternatives and occasionally insulin. “They’re better than nothing at all,” she said. If the cost of insulin is capped on its own, insulin will be more likely to jump in front of SGLT2 inhibitors and GLP-1 receptor agonists in treatment plans. That will mean more disease, more disability, and more death from diabetes. Medicare patients might avoid some of these effects thanks to provisions in the IRA allowing Medicare to negotiate drug prices and capping out-of-pocket spending on prescriptions at $2,000 a year. The law also guarantees price negotiations for a handful of medications, but SGLT2 inhibitors and GLP-1 receptor agonists won’t necessarily be on the list. And most Americans are not on Medicare. 
Already, Shaw said, the patients in her practice who tend to be least able to afford SGLT2 inhibitors and GLP-1 receptor agonists are working-class people with private insurance. Some health centers, including the one Shaw and I work at, enjoy access to a federal drug-discount program that can make patent-protected medications, including SGLT2 inhibitors and GLP-1 receptor agonists, more affordable for the uninsured. But most Americans without insurance aren’t so lucky. It would be cruel to choose between a world in which more people with type 2 diabetes are nudged toward a drug that won’t stave off the most dangerous complications, and one in which those with type 1 diabetes are priced out of life. In place of capping the out-of-pocket cost of just insulin, lawmakers should cap the out-of-pocket cost of all diabetes medications. This will both protect Americans dependent on insulin and smooth SGLT2 inhibitors’ and GLP-1 receptor agonists’ path to their revolutionary public-health potential. [Read: Big Pharma’s go-to defense of soaring drug prices doesn’t add up] The argument for lowering the cost of these drugs for patients is the same as the argument for insulin affordability: that it is both foolish and inhumane to make lifesaving diabetes medications unaffordable when their use prevents costly and deadly downstream complications. Patients like mine need affordable access to insulin. But even more need access to SGLT2 inhibitors and GLP-1 receptor agonists. If the laws stop at insulin, many Americans could die unnecessarily—not from inadequate access to insulin, but from preferential access to it.