When Kathleen Walker-Meikle, a historian at the University of Basel, in Switzerland, ponders the Middle Ages, her mind tends to drift not to religious conquest or Viking raids, but to squirrels. Tawny-backed, white-bellied, tufted-eared red squirrels, to be exact. For hundreds of years, society’s elites stitched red-squirrel pelts into luxurious floor-length capes and made the animals pets, cradling them in their laps and commissioning gold collars festooned with pearls. Human lives were so intertwined with those of red squirrels that one of history’s most cursed diseases likely passed repeatedly between our species and theirs, according to new research that Walker-Meikle contributed to.

Uncomfortable questions about medieval squirrels first came up about a decade ago, after another group of researchers stumbled upon three populations of red squirrels—one in Scotland, two on different English islands—with odd-looking features: swollen lips, warty noses, skin on their ears that had grown thick and crusty. A search for microbial DNA in some of those squirrels’ tissues revealed that they had leprosy. “What’s it doing in red squirrels?” John Spencer, a microbiologist at Colorado State University, recalled thinking at the time.

Scientists had long thought that leprosy affected only humans, until the 1970s, when they began to find the bacterium that causes it in armadillos too, Daniel Romero-Alvarez, an infectious-disease ecologist and epidemiologist at Universidad Internacional SEK, in Ecuador, told me. But that was in the Americas; in Europe, dogma held that leprosy had essentially vanished by about the 16th century. The most plausible explanation for the pathogen’s presence in modern squirrels, Spencer told me, was that strains of it had been percolating in the rodents unnoticed for hundreds of years.
Bacterial genomes extracted from several of the infected British squirrels suggested that this was the case: Those sequences bore a strong resemblance to others previously pulled out of medieval human remains. The next step was proving that medieval squirrels carried the bacterium too, Verena Schünemann, a paleogeneticist at the University of Zurich, in Switzerland, and one of the new study’s authors, told me. If those microbes were also genetically similar to ones found in medieval people, they’d show that leprosy had probably regularly jumped between rodents and humans.

Schünemann teamed up with Sarah Inskip, an archaeologist at the University of Leicester, in the U.K., and set out to find an archaeological site in Britain with both human and squirrel remains. They zeroed in on the medieval city of Winchester, once famous for its fur-obsessed market patrons, as well as a large leprosarium. After analyzing dozens of samples from around Winchester, the team was able to extract just four leprosy genomes—three from humans, one from the tiny foot bone of a squirrel. But those turned out to be enough.

All four samples dated to about the High Middle Ages—the oldest detection so far of leprosy in a nonhuman animal, Inskip told me. The genomes also all budded from the same branch of the leprosy family tree, sharing enough genetic similarities to strongly indicate that medieval humans and squirrels were swapping the disease-causing bugs, Schünemann told me. Still, Schünemann wasn’t sure exactly how that would have happened, given that transmitting a leprosy infection generally requires prolonged and close contact. So, hoping to fill in the blanks, she reached out to Walker-Meikle, who has extensively studied medieval pets.
Walker-Meikle already had the exact type of evidence that Schünemann and her colleagues were looking for: medieval artwork depicting people cradling the animals, documents describing women taking them out for walks, financial accounts detailing purchases of flashy, rodent-size accessories and enclosures of the sort people today might buy for pet dogs, Walker-Meikle told me. Squirrels were so popular at the time that she found written references to the woes of a 13th-century archbishop who, despite years of pleading, couldn’t get the nuns in his district to stop doting on the creatures. They were essentially akin, she said, to tiny lapdogs.

Fur processing, too, would have provided ample opportunity for spread. In the High and Late Middle Ages, squirrel fur was the most popular fur used to trim and line garments, and clothes made with it were considered as high-fashion as a Prada bag is now, Schünemann told me. In a single year in the 14th century, the English royal household purchased nearly 80,000 squirrel-belly skins. Contact between squirrels and humans was so intimate that, throughout much of the Middle Ages, leprosy likely ping-ponged back and forth between the two species, Inskip told me.

But the team’s work doesn’t say anything about the origins of leprosy, which began infecting humans at least several thousand years ago. It also can’t prove whether leprosy infiltrated humans or red squirrels first. It does further dispel the notion that leprosy is a problem only for humans, Romero-Alvarez told me. Armadillos may have picked up leprosy from humans relatively recently, after Europeans imported the pathogen to South America. The scaly mammals are now “giving it back to humans,” Spencer told me, especially, it seems, in parts of South America and the southern United States, where some communities hunt and eat the animals or keep them as pets.
Human-to-human transmission still accounts for the majority of leprosy spread, which remains uncommon overall. But Romero-Alvarez pointed out that the mere existence of the bacterium in another species, from which we and other creatures can catch it, makes the disease that much more difficult to control. “Everybody believes that leprosy is gone,” Claudio Guedes Salgado, an immunologist at Pará Federal University, in Brazil, told me. “But we have more leprosy than the world believes.” The barriers between species are porous. And once a pathogen crosses over, that jump is impossible to fully undo.
Earlier this week, news leaked of the biggest change in federal drug policy in more than half a century. The Associated Press reported—and the Department of Justice later confirmed—that the Drug Enforcement Administration plans to recategorize marijuana under the Controlled Substances Act. Since the 1970s, it’s been placed in Schedule I, a highly controlled group that includes drugs like heroin, with a high potential for abuse and no medical use. But cannabis will soon be moved to the much less restrictive Schedule III, which includes prescription drugs such as ketamine and Tylenol with codeine that have a moderate-to-low risk of addiction.

Currently, recreational cannabis is legal for adults over the age of 21 in 24 states, which are home to more than half of the U.S. population. According to a recent Harris poll, about 40 percent of Americans use cannabis, and a quarter do so on at least a weekly basis. And yet, researchers and physicians told me, scientific consensus on the drug’s precise effects—especially on the heart and lungs, mental health, and developing adolescent brains—is still lacking. Rescheduling marijuana will broaden access further still, which makes finding better answers to those questions even more crucial. Conveniently, rescheduling marijuana is also likely to spur in-depth study, in part by expanding research opportunities that were previously limited or nonexistent. Easing restrictions will ultimately mean learning a lot more about the potential harms and benefits of a drug that for decades has been both popular and demonized.

Historically, the scope of cannabis research has been fairly limited. The National Institute on Drug Abuse, a major federal research funder, has a directive to study the harms of cannabis use rather than any potential benefits, says Amanda Reiman, the chief knowledge officer of New Frontier Data. (New Frontier is an analytics firm focused on the legal cannabis industry.)
In 2018, research on the potential harms of cannabis use received more than double the funding that research on its medicinal or therapeutic use did in the U.S., U.K., and Canada. In 2020, a spokesperson for NIDA told Science that although the agency’s traditional focus was on marijuana addiction, it has started exploring the therapeutic potential of compounds in cannabis to treat addiction to other substances.

U.S. policy has also made marijuana research of any sort very difficult. Until recently, scientists had to obtain their supply from NIDA’s high-security Mississippi facility. (Six more sources were approved last year.) Researchers regularly complained that the marijuana was moldy, with less THC and CBD, and far from the quality that regular consumers could purchase legally at their local dispensary.

Most existing research on how cannabis affects our hearts, our brains, and our society at large is based on self-reported survey data, Peter Grinspoon, a physician at Massachusetts General Hospital and a medical-cannabis expert, told me. Such data are “notoriously inaccurate,” he said. But researchers have been forced to rely on these methods because cannabis is a Schedule I drug, so no studies that receive federal funding can simply give marijuana from state-approved dispensaries to people and record what happens. As a result, the field lacks the number of high-quality studies necessary for researchers to agree on their implications, says Nick Cioe, an associate professor at Assumption University in Massachusetts who has studied the effects of marijuana on traumatic brain injuries. Randomized controlled trials are the gold standard for determining a given drug’s harms and benefits, but for weed, they’ve been nearly impossible.
The FDA has approved a handful of cannabis-derived products to treat conditions such as seizures and chemotherapy-induced nausea, but that’s not the same as understanding the effects of recreational weed. After marijuana is officially rescheduled, researchers will have a far easier time studying the drug’s effects. Researching any federally controlled substance is difficult, but obtaining the proper licenses for using Schedule III drugs in the lab is much less arduous than for Schedule I. Scientists will also have far more opportunities to obtain federal grant funding from all sorts of governmental bodies—the National Institutes of Health, the EPA, even the National Highway Traffic Safety Administration—as policy makers rush to understand the implications of legalization.

Human trials won’t start the second that the DEA makes marijuana’s new status official. Researchers will have to wait for guidance from federal agencies like the FDA and the NIH, says R. Lorraine Collins, the director of the University at Buffalo’s Center for Cannabis and Cannabinoid Research. And given the limitations around Schedule III drugs, scientists still won’t be able to simply purchase the same cannabis that millions of Americans are consuming from their local dispensary. Schedule III won’t “magically alleviate the bureaucratic headaches” associated with researching cannabis, Grinspoon said. But “it’s going to be a lot easier to say, ‘Let’s give this person cannabis and see what happens to their blood pressure.’”

Milk is defined by its percentages: nonfat, 2 percent, whole. Now there is a different kind of milk percentage to keep in mind.
Last week, the FDA reported that 20 percent of the milk it had sampled from retailers across the country contained fragments of bird flu, raising concerns that the virus, which is spreading among animals, might be on its way to sickening humans too. The agency reassured the public that milk is still safe to drink because the pasteurization process inactivates the bird-flu virus. Still, the mere association with bird flu has left some people uneasy and led others to avoid milk altogether.

That is, if they weren’t already avoiding it. Milk can’t seem to catch a break: For more than 70 years, consumption of the white liquid has steadily declined. It is no longer a staple of balanced breakfasts and bedtime routines, and milk alternatives offer the same creaminess in a latte or an iced coffee as the original stuff does. Milk was once seen as so integral to health that Americans viewed it as “almost sacred,” but much of that mythos is gone, Melanie DuPuis, an environmental-studies professor at Pace University and the author of Nature’s Perfect Food, a history of milk, told me. In 2022, the previous time the Department of Agriculture measured average milk consumption, it had reached an all-time low of 15 gallons a person.

If concerns around bird flu persist, milk’s relevance may continue to slide. Even the slightest bit of consumer apprehension could cause already-struggling dairy farms to shut down. “An additional contributing factor really doesn’t bode well,” Leonard Polzin, a dairy expert at the University of Wisconsin at Madison’s Division of Extension, told me. For the rest of us, there is now yet another reason to avoid milk—and even less reason left to believe that milk is special.

The risks of bird flu in milk can be simplified to this: Thank God for pasteurization. Straight from the udder, in its raw form, milk is “a substance that’s very much open to contamination if not managed well,” DuPuis said.
Milk is like a petri dish of microorganisms, and before pasteurization became the norm, milk regularly caused deadly diseases such as tuberculosis, scarlet fever, and typhoid fever. The pasteurization process, which involves blasting milk with high temperatures and then rapidly cooling it, is “intended to kill just about anything a cow could have,” Meghan Schaeffer, an epidemiologist and a bird-flu expert who now works at the analytics firm SAS, told me. That includes the bird flu. Yesterday, the FDA reported new results from ongoing studies reaffirming that the bird-flu fragments it found in milk and other dairy products aren’t active, meaning they can’t spread disease. The agency confirmed this using a gold-standard test that involved injecting samples into chicken eggs to see if any active virus would grow. None was detected afterward. “That process really saves us,” Schaeffer said.

There is never a good time to drink unpasteurized milk, but now is an especially bad one. A number of states have legalized the sale of raw milk in recent years, part of a right-wing embrace of the beverage. Raw milk from sick cows contains bird-flu virus in high concentrations, and the FDA has warned against drinking it. There are no reports of people getting bird flu from drinking unpasteurized milk, but “it is possible” to become infected from it, Schaeffer said. Already, this has been shown in animals: This week, researchers reported that cats that drank raw milk from sick cows got bird flu and died within days.

But much about bird flu and milk is unknown, because the virus has never been found in cattle before now. That one in five milk samples tested by the FDA had remnants of bird flu doesn’t mean one in five cows tested positive; milk sold in stores is pooled from many different animals. Rather, it suggests that many cows beyond those currently accounted for may be infected. It may also mean that asymptomatic cows, which are not being tested, shed virus in their milk.
(Milk from symptomatic cows, which can be yellow and viscous, is routinely discarded.) Although it isn’t clear how the virus is circulating among cows, a leading explanation is that it’s transmitted via contact with surfaces that have touched raw milk, including milking equipment, vehicles, and other animals. Bird flu is widespread among poultry, but it isn’t clear how long it will keep circulating among cattle. The USDA is doing only limited testing of cows and has not shared all of its data publicly, making the full extent of the outbreak impossible to know.

Even if milk is still safe to drink, the thought of bird-flu fragments swimming around in it is unappetizing for a country that has already turned away from milk. Just how much milk Americans used to drink can be hard to grasp. Consumption peaked in 1945 at 45 gallons a person annually, enough to overfill a standard-size bathtub. Americans believed that “more milk makes us healthier” and drank accordingly, DuPuis said. Government marketing pushed milk as a necessary, perfect food that could solve virtually all nutrition problems, especially in children; milk-derived healthiness eventually became associated with strength, affluence, and patriotism.

Holes in the health narrative have since appeared: Consuming too much milk and other dairy products is now considered unhealthy because of the fat content. And long-standing myths about milk, such as that its calcium is required for strengthening bones and growing taller, have largely been debunked. Today, drinking milk can get you “milk-shamed” by people who think it’s disgusting. It’s particularly unpopular with younger people, who are grossed out by the milk served in schools. Where dairy once reigned supreme, milk alternatives made of oats, almonds, soy, peas, and countless other things have found a foothold. The FDA even lets plant-based milk call itself “milk,” as I wrote last year. Less demand for milk would have consequences.
“I suspect the dairy industry is on the edge of their seat,” DuPuis said. Outbreaks are expected to take a financial toll on farmers, who will not only sell less milk but also have to care for sick animals, and the costs may be passed on to consumers. In rural areas that once thrived on milk production, such as upstate New York, abandoned small farms are now overgrown with trees, DuPuis observed. “Are we going to end up with fewer farms and more trees because of this latest problem? I can imagine so,” she said.

The myth of milk has been eroded from many fronts: nutrition research, shifting societal norms, an abundance of new beverages. With bird flu, it has never seemed less like the magic health elixir it was once thought to be. But the turn against milk might have gone too far. Pasteurization was invented in the 19th century, yet it works to kill modern-day pathogens. Dairy has a great track record when it comes to safety, Polzin said. And it is still a decently healthy choice, with some significant advantages over plant-based alternatives, such as having more vitamins and minerals, less sugar, and more protein. Even during the bird-flu outbreak, milk may still have some magic to it.

It takes a certain amount of confidence to call your biotech company Grail. According to its website, the Menlo Park–based firm got its name because its “co-founders believed a simple blood test could be the ‘holy GRAIL’ of cancer detection.” Now the company claims that its “first-of-its-kind” screening tool, called Galleri, “redefines what’s possible.” At the cost of a needle stick and $949, the company can check your blood for more than 50 forms of cancer all at once. The Galleri test, as well as many others of its type that are in development, is meant to sniff out malignant DNA floating in a person’s veins, including bits of tumors that otherwise might not be identified until they’ve spread.
But the rapid introduction of this new technology, which is now available through major U.S. health systems, isn’t really guaranteed to help patients. Indeed, a contentious debate about its potential benefits has been playing out in the scientific literature for the past few years. Multi-cancer-screening tools—or “cancer-finding supertests,” as Galleri has been called—aren’t yet endorsed by the U.S. Preventive Services Task Force, or formally approved by the Food and Drug Administration. For the moment, health-care providers can offer Galleri only through a commonly used regulatory loophole that the government is desperately trying to close.

Being able to distribute the company’s “prescription-only, well-validated test” in advance of full FDA approval is a good thing, Kristen Davis, a Grail spokesperson, told me, because it gives patients “timely access to an important tool in the detection of unscreened cancers and allows for important real-world evidence collection.” That’s one way to look at it. Here’s another: The rush to get Galleri and related products into doctors’ offices skips right over the most important step in clinical development: proving that they really work.

“The status quo for cancer screening remains unacceptable,” Davis said. She’s right. Even traditional early-detection tests are controversial within the medical community. As a hospital pathologist who diagnoses cancer daily, I’ve seen firsthand how mammograms and Pap smears, among other traditional procedures, save some people’s lives—and also how they cause a lot of overtreatment. (They miss many lethal cancers, too.) Blood-based cancer screening, in particular, had an ignominious start. Most men middle-aged and older in the U.S. get PSA tests, which look for abnormal levels of a protein secreted from the prostate gland that may indicate malignancy.
But many of the tumors those tests identify are slow-growing, harmless ones; their discovery leads to an epidemic of unnecessary surgery and radiation—and a subsequent epidemic of incontinence and impotence. Recognizing this harm, the scientist who first identified PSA more than half a century ago expressed his regret in 2010, calling widespread screening “a profit-driven public health disaster.”

Modern blood-based cancer tests (or “liquid biopsies”), which look for a tumor’s genetic material, have been more promising. The first was approved by the FDA in 2016. It allows patients who already know that they have lung cancer to avoid an invasive tissue-collection process while still receiving the right, targeted therapy for their particular disease. Today, liquid biopsies exist for other kinds of cancer, too, and are used to tailor treatment for people who already know they’re sick.

Unleashing the same technique on the general population, in an effort to find hidden cancers in healthy-seeming people, is in principle a reasonable idea. But in 2020, when Grail started trying its technology on thousands of adults without cancer symptoms, the company found that a majority of positive signals—the signs of potential tumors that it identified—weren’t real. Dozens of healthy participants were flagged as possibly having cancer; most suffered through unnecessary laboratory and imaging follow-up. One unlucky subject described in the published study even had his testicle removed in the hunt for a malignancy that didn’t exist. Another blood-based supertest called CancerSeek—which forms the basis of a multi-cancer test now under commercial development—had shown the same problem when an early iteration of it was studied in some 10,000 women: Registered blood “abnormalities” led to confirmed cancer diagnoses less than half of the time. False positives with CancerSeek caused some patients to have operations on their ovaries, colon, or appendix.
No form of cancer screening will be perfect, and Davis pointed out that “when used as recommended, in addition to current single-cancer screenings, the Galleri test can help screen for some of the deadliest cancers that often come with no warning today.” For cancers of the pancreas, ovaries, esophagus, and liver, she suggested, any form of screening will be better than what we currently have: nothing. Grail researchers have also noted that its technology “compares favourably” to other, more familiar single-cancer tests in the sense that a smaller proportion of patients end up with spurious results. (One in 200 people will experience a false positive with Galleri, while the same is true for about one in 10 women who get a mammogram.)

But an imperfect screening tool is not always better than no screening tool at all. We already have reasonably accurate early-detection tests for pancreatic and ovarian cancer, for example, but experts recommend against their widespread use because—counterintuitively—screening healthy patients does little to extend their lives and comes with its own harms. And although it is true that Galleri’s false-positive rate is quite good in comparison to those of mammograms, PSA tests, and Pap smears, that’s only half the story.

A glitchy answer from a cancer supertest like Grail’s may well be worse than the equivalent mistake in, say, a breast exam. The latter would only lead to further hunting for a tumor in the breast—perhaps with an ultrasound or MRI. In contrast, the follow-up for a suspect finding from a screen for 50 different cancers could be body-wide, producing yet more ambiguous results—such as the discovery of kidney cysts or lung nodules—that generate their own tests and surgeries. When Galleri finds a potential tumor, it does provide doctors with some hints as to where that tumor might be located. In practice, though, doctors will likely err on the side of running lots of tests.
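The base-rate arithmetic behind this comparison is worth spelling out: a low false-positive rate alone does not guarantee that most positive results are real, because that also depends on how common cancer is among the people screened. Here is a minimal sketch in Python that uses the article’s 1-in-200 and 1-in-10 false-positive figures but assumes an illustrative prevalence and sensitivity (hypothetical round numbers; the real values vary by cancer and population):

```python
# Illustrative base-rate arithmetic for the screening comparison above.
# Only the 1-in-200 and 1-in-10 false-positive rates come from the article;
# the prevalence and sensitivity here are hypothetical round numbers.

def screening_outcomes(population, prevalence, sensitivity, false_positive_rate):
    """Return (true_positives, false_positives) for one round of screening."""
    sick = population * prevalence
    healthy = population - sick
    true_positives = sick * sensitivity              # cancers the test catches
    false_positives = healthy * false_positive_rate  # healthy people flagged anyway
    return true_positives, false_positives

population = 100_000
prevalence = 0.01    # assume 1 percent of those screened have a detectable cancer
sensitivity = 0.5    # assume the test catches half of those cancers

for name, fp_rate in [("1-in-200 false-positive rate", 1 / 200),
                      ("1-in-10 false-positive rate", 1 / 10)]:
    tp, fp = screening_outcomes(population, prevalence, sensitivity, fp_rate)
    ppv = tp / (tp + fp)  # chance that a given positive result is a real cancer
    print(f"{name}: {tp:.0f} real cancers flagged, {fp:.0f} false alarms; "
          f"a positive result is real {ppv:.0%} of the time")
```

Under these assumed numbers, the 1-in-200 rate means roughly half of positive results reflect a real cancer, while the 1-in-10 rate drops that to about one in twenty, which is why a seemingly small difference in error rate matters so much at population scale.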
Positive signals are often followed by a PET-CT scan, for example, which costs about $2,500 and exposes people to 62 times the radiation of a mammogram. In Grail’s own research, participants who received a false-positive result were generally subjected to multiple additional lab and imaging tests—sometimes as many as 16 laboratory studies and 10 clinic visits.

More thorough and extensive testing takes longer, too. An errant mammogram might be resolved fairly quickly, with conclusive follow-up testing done a few weeks later. The equivalent delay after an abnormal Pap smear is less than two months, generally speaking. In the aftermath of multi-cancer blood-test screenings, though, worried patients may have to bide their time for almost half a year before a doctor reassures them that they do not, in fact, have cancer. Subjects in Grail’s study who received a false-positive result spent an average of 162 days in suspense before being cleared.

When I asked Grail about potential harms of the test, including this delay, the spokesperson told me that Galleri offers diagnostic guidance for doctors and patients who test positive through “a suite of services, including direct support from our medical science liaisons.” Grail has also presented data suggesting that the distress of patients who receive false positives tends to go away over time. Some people, however, may never feel completely at ease knowing that cancer-related genetic code is circulating in their veins. The medical system is very good at puncturing patients’ confidence in their own health.

Some anxiety may be worth experiencing for the opportunity to catch an actual cancer before it turns fatal. But that exchange would only work if curable cancers could be consistently picked up in our blood. Galleri is much better at detecting advanced malignancies—which shed more genetic material, and many of which are incurable—than small ones that are worth finding sooner.
Galleri is billed as an early-detection test, but just one out of five cancers it finds is identified at Stage 1, which is the earliest stage. At this point, the same is true for other blood-based screening strategies as well.

The only way to know for sure whether cancer-finding supertests truly save lives is to evaluate them in a large randomized, controlled trial. The U.K.’s National Health Service has enrolled 140,000 participants in such a study of Galleri; the main results, on whether the test can find cancers before they spread, are expected in a year or two. Then researchers will keep track of whether participants have their lives extended in the years that follow. In the meantime, U.S. efforts are running far behind. The National Cancer Institute is planning for a 24,000-person pilot study of multi-cancer screening, but any bigger and more useful randomized trial won’t begin for a long time.

The fact that all of this research is ongoing hasn’t stopped Grail from offering its wares to the public. The company recently sponsored a PGA Champions Tour event in California, where players and fans were offered cancer-screening blood tests on the golf course at a $100 discount; more than 100,000 Galleri tests have been performed in the U.S. since they first became commercially available. Meanwhile, hundreds of advocacy groups are lobbying the government to pay for multi-cancer-screening tests through Medicare. By one estimate, widespread adoption could cost Americans more than $100 billion annually—dwarfing the $7.8 billion spent on mammograms as of 2010, or the $6.6 billion spent on Pap smears. It’s hard to miss the scientific challenge that still remains.
In what might be a bit of corporate retconning, when Barron’s spoke with one of Grail’s co-founders about the story behind the company’s name in 2021, he wasn’t quoted saying that the company thought its blood test could be the holy grail of cancer screening. Rather, he said the name was chosen “out of humility,” because “the Holy Grail was never found.” That humility isn’t in the pitch to patients, though. Most people who use the product today will have no idea that they are generating “real-world evidence” for a technology that may yet be found unable to extend their lives. They’ll assume that if cancer-finding supertests are available in clinics, then we must already know that they’re worth using. We don’t.

The ongoing outbreak of H5N1 avian flu virus looks a lot like a public-health problem that the United States should be well prepared for. Although this version of flu is relatively new to the world, scientists have been tracking H5N1 for almost 30 years. Researchers know the basics of how flu spreads and who tends to be most at risk. They have experience with other flus that have jumped into us from animals. The U.S. also has antivirals and vaccines that should have at least some efficacy against this pathogen. And scientists have had the advantage of watching this particular variant of the virus spread and evolve in an assortment of animals—including, most recently, dairy cattle in the United States—without it transmitting in earnest among us. “It’s almost like having the opportunity to catch COVID-19 in the fall of 2019,” Nahid Bhadelia, the founding director of the Boston University Center on Emerging Infectious Diseases, told me.

Yet the U.S. is struggling to mount an appropriate response. Because of the coronavirus pandemic, the nation’s alertness to infectious disease remains high.
But both federal action and public attention are focusing on the wrong aspects of avian flu and other pressing infectious dangers, including outbreaks of measles within U.S. borders and epidemics of mosquito-borne pathogens abroad. To be fair, the United States (much like the rest of the world) was not terribly good at gauging such threats before COVID, but now “we have had our reactions thrown completely out of whack,” Bill Hanage, an infectious-disease epidemiologist and a co-director of the Center for Communicable Disease Dynamics at Harvard’s School of Public Health, told me. Despite all that COVID put us through—perhaps because of it—our infectious-disease barometer is broken.

H5N1 is undoubtedly concerning: No version of this virus has ever before spread this rampantly across this many mammal species, or so thoroughly infiltrated American livestock, Jeanne Marrazzo, the director of the National Institute of Allergy and Infectious Diseases, told me. But she and other experts maintain that the likelihood of H5N1 becoming our next pandemic remains quite low. No evidence currently suggests that the virus can spread efficiently between people, and it would still likely have to accumulate several more mutations to do so.

That’s been a difficult message for the public to internalize—especially with the continued detection of fragments of viral genetic material in milk. Every expert I asked maintained that pasteurized dairy products—which undergo a heat-treatment process designed to destroy a wide range of pathogens—are very unlikely to pose an imminent infectious threat. Yet the fear that dairy could sicken the nation simply won’t die. “When I see people talking about milk, milk, milk, I think maybe we’ve lost the plot a little bit,” Anne Sosin, a public-health researcher at Dartmouth, told me. Experts are far more worried about still-unanswered questions: “How did it get into the milk?” Marrazzo said.
“What does that say about the environment supporting that?” During this outbreak, experts have called for better testing and surveillance—first of avian and mammalian wildlife, now of livestock. But federal agencies have been slow to respond. Testing of dairy cows was voluntary until last week. Now groups of lactating dairy cows must be screened for the virus before they move across state lines, but by testing just 30 animals, often out of hundreds. Michael Osterholm, the director of the Center for Infectious Disease Research and Policy at the University of Minnesota, told me he would also like to see more testing of other livestock, especially pigs, which have previously served as mixing vessels for flu viruses that eventually jumped into humans. More sampling would give researchers a stronger sense of where the virus has been and how it’s spreading within and between species. And it could help reveal the genomic changes that the virus may be accumulating. The U.S. Department of Agriculture and other federal agencies could also stand to shift from “almost this paternalistic view of, ‘We’ll tell you if you need to know,’” Osterholm said, to greater data transparency. (The USDA did not respond to a request for comment.) Testing and other protections for people who work with cows have been lacking, too. Many farm workers in the U.S. are mobile, uninsured, and undocumented; some of their employers may also fear the practical and financial repercussions of testing workers. All of that means a virus could sicken farm workers without being detected—which is likely already the case—then spread to their networks. Regardless of whether this virus sparks a full-blown pandemic, “we are completely ignoring the public-health threat that is happening right now,” Jennifer Nuzzo, the director of the Pandemic Center at the Brown University School of Public Health, told me. 
The fumbles of COVID’s early days should have taught the government how valuable proactive testing, reporting, and data sharing are. What’s more, the pandemic could have taught us to prioritize high-risk groups, Sosin told me. Instead, the United States is repeating its mistakes. In response to a request for comment, a CDC spokesperson pointed me to the agency’s published guidance on how farmworkers can shield themselves with masks and other personal protective equipment, and argued that the small number of people with relevant exposures who are displaying symptoms has been adequately monitored or tested. Other experts worry that the federal government hasn’t focused enough on what the U.S. will do if H5N1 does begin to rapidly spread among people. The country’s experience with major flu outbreaks is an advantage, especially over newer threats such as COVID, Luciana Borio, a former acting chief scientist at the FDA and former member of the National Security Council, told me. But she worries that leaders are using that notion “to comfort ourselves in a way that I find to be very delusional.” The national stockpile, for instance, includes only a limited supply of vaccines developed against H5 flu viruses. And they will probably require a two-dose regimen, and may not provide as much protection as some people hope, Borio said. Experience alone cannot solve those challenges. Nor do the nation’s leaders appear to be adequately preparing for the wave of skepticism that any new shots might meet. (The Department of Health and Human Services did not respond to a request for comment.) In other ways, experts told me, the U.S. may have overlearned certain COVID lessons. Several researchers imagine that wastewater could again be a useful tool to track viral spread. But, Sosin pointed out, that sort of tracking won’t work as well for a virus that may currently be concentrated in rural areas, where private septic systems are common. 
Flu viruses, unlike SARS-CoV-2, also tend to be more severe for young children than adults. Should H5N1 start spreading in earnest among humans, closing schools “is probably one of the single most effective interventions that you could do,” Bill Hanage said. Yet many politicians and members of the public are now dead set on never barring kids from classrooms to control an outbreak again. These misalignments aren’t limited to H5N1. In recent years, as measles and polio vaccination rates have fallen among children, cases—even outbreaks—of the two dangerous illnesses have been reappearing in the United States. The measles numbers are now concerning and persistent enough that Nahid Bhadelia worries that the U.S. could lose its elimination status for the disease within the next couple of years, undoing decades of progress. And yet public concern is low, Helen Chu, an immunologist and respiratory-virus expert at the University of Washington, told me. Perhaps even less thought is going toward threats abroad—among them, the continued surge of dengue in South America and a rash of cholera outbreaks in Africa and southern Asia. “We’re taking our eye off the ball,” Anthony Fauci, NIAID’s former director, told me. That lack of interest feels especially disconcerting to public-health experts as public fears ignite over H5N1. “We don’t put nearly enough emphasis on what is it that really kills us and hurts us,” Osterholm told me. If anything, our experience with COVID may have taught people to further fixate on novelty. Even then, concern over newer threats, such as mpox, quickly ebbs if outbreaks become primarily restricted to other nations. Many people brush off measles outbreaks as a problem for the unvaccinated, or dismiss spikes in mpox as an issue mainly for men who have sex with men, Ajay Sethi, an infectious-disease epidemiologist at the University of Wisconsin at Madison, told me. And they shrug off just about any epidemic that happens abroad. 
The intensity of living through the early years of COVID split Americans into two camps: one overly sensitized to infectious threats, and the other overly, perhaps even willfully, numbed. Many people fear that H5N1 will be “the next big one,” while others tend to roll their eyes, Hanage told me. Either way, public trust in health authorities has degraded. Now, “no matter what happens, you could be accused of not sounding the alarm, or saying, ‘Oh my God, here we go again,’” Jeanne Marrazzo told me. As long as infectious threats to humanity are growing, however, recalibrating our sense of infectious danger is imperative to keeping those perils in check. If a broken barometer fails to detect a storm and no one prepares for the impact, the damage might be greater, but the storm itself will still resolve as it otherwise would. But if the systems that warn us about infectious threats are on the fritz, our neglect may cause the problem to grow.

Reading, while not technically medicine, is a fundamentally wholesome activity. It can prevent cognitive decline, improve sleep, and lower blood pressure. In one study, book readers outlived their nonreading peers by nearly two years. People have intuitively understood reading’s benefits for thousands of years: The earliest known library, in ancient Egypt, bore an inscription that read The house of healing for the soul. But the ancients read differently than we do today. Until approximately the tenth century, when the practice of silent reading expanded thanks to the invention of punctuation, reading was synonymous with reading aloud. Silent reading was terribly strange, and, frankly, missed the point of sharing words to entertain, educate, and bond. Even in the 20th century, before radio and TV and smartphones and streaming entered American living rooms, couples once approached the evening hours by reading aloud to each other.
But what those earlier readers didn’t yet know was that all of that verbal reading offered additional benefits: It can boost the reader’s mood and ability to recall. It can lower parents’ stress and increase their warmth and sensitivity toward their children. To reap the full benefits of reading, we should be doing it out loud, all the time, with everyone we know. Reading aloud is a distinctive cognitive process, more complex than simply reading silently, speaking, or listening. Noah Forrin, who researched memory and reading at the University of Waterloo, in Canada, told me that it involves several operations—motor control, hearing, and self-reference (the fact that you said it)—all of which activate the hippocampus, a brain region associated with episodic memory. The hippocampus is more active during reading aloud than during silent reading, which might help explain why the former is such an effective memory tool. In a small 2012 study, students who studied a word list remembered 90 percent of the words they’d read aloud immediately afterward, compared with 71 percent of those they’d read silently. (One week later, participants remembered 59 percent of the spoken words and 48 percent of the words read silently.) So although you might enjoy an audiobook narrated by Meryl Streep, you would remember it better if you read parts of it out loud—especially if you did so in small chunks, just a short passage at a time, Forrin said. The same goes for a few lines of a presentation that you really want to nail. Those memory benefits hold true whether or not anyone is around to hear your performance. Verbal reading without an audience is, in fact, surprisingly common. While studying how modern British people read aloud, Sam Duncan, a professor of adult literacies at University College London, found that they read aloud—and alone—for a variety of reasons. One woman recited Welsh poetry to remember her mother, with whom she spoke Welsh as a girl.
One young man read the Quran out loud before work to better understand its meaning. Repeating words aloud isn’t just key to memorization, Duncan told me—it can be key to identity formation too. [From the August 1904 issue: On reading aloud] Plenty of solitary vocal reading no doubt consists of deciphering recipes and proofreading work emails, but if you want to reap the full perks, the best selections are poetry and literature. These genres provide access to facets of human experience that can be otherwise unreachable, which helps us process our own emotions and memories, says Philip Davis, an emeritus professor of literature and psychology at the University of Liverpool. Poetry, for example, can induce peak emotional responses, a strong reaction that might include goose bumps or chills. It can help you locate an emotion within yourself, which is important to health as a form of emotional processing. Poetry also contains complex, unexpected elements, like when Shakespeare uses god as a verb in Coriolanus: “This last old man … godded me.” In an fMRI study that Davis co-authored in 2015, such literary surprise was shown to be stimulating to the brain. Davis told me that literature, with its “mixture of memory and imagination,” can cause us to recall our most complex experiences and derive meaning from them. A poem or story read aloud is particularly enthralling, he said, because it becomes a live presence in the room, with a more direct and penetrative quality, akin to live music. Davis likens the role of literature and live reading to a spark or renewal, “a bringing of things back to life.” Discussing the literature that you read aloud can be particularly valuable. Davis told me doing so helps penetrate rigid thinking and can dislodge dysfunctional thought patterns.
A qualitative study he co-authored in 2017 found that, for those who have chronic pain and the depression that tends to come with it, such discussion expands emotional vocabulary—a key tenet of psychological well-being—perhaps even more so than cognitive behavioral therapy. (The allure of an audience has one notable exception: If you’re anxious, reading aloud can actually reduce memory and comprehension. To understand this effect, one need only harken back to fifth grade when it was your turn to read a paragraph on Mesopotamia in class.) [Read: How to keep your book club from becoming a wine club] The health benefits of reading aloud are so profound that some doctors in England now refer their chronic-pain patients to read-aloud groups. Helen Cook, a 45-year-old former teacher in England, joined one of these groups in 2013. Cook had a pelvic tumor that had sent anguish ricocheting through her hip and back for a decade, and medication never seemed to help. Before she joined the reading group, Cook had trouble sleeping, lost her job, and “had completely lost myself,” she told me. Then, she and nine other adults began working their way through some 300 pages of Hard Times, by Charles Dickens. Cook told me she recognized her experience in the characters’ travails, and within months, she “rediscovered a love for life,” even returning to college for a master’s degree in literature. She’s not the only one who found relief: In Davis’s 2017 study, everyone who read aloud in a group felt emotionally better and reported less pain for two days afterward. Hearing words read aloud to you also has unique advantages, especially for kids. Storytelling has been shown to increase hospitalized children’s levels of oxytocin while decreasing cortisol and pain. Julie Hunter, who for more than 20 years has taught preschool kids (including my daughter), told me that interactive reading increases young children’s comprehension, builds trust, and enhances social-emotional skills.
A recent study by researchers at the Brookings Institution found that children smiled and laughed more when being read to by a parent than when listening to an automatically narrated book alone. [Read: An ode to being read to] Anecdotal evidence suggests that adults, too, can benefit from such listening. For 25 years, Hedrick and Susan Smith, ages 90 and 84, respectively, have read more than 170 books aloud. They started by reading in the car, to pass the time, but it was so much fun that they started reading every night before they turned out the light, Hedrick told me. Together, they tried to comprehend One Hundred Years of Solitude, narrated Angela’s Ashes in four different Irish accents, and deduced clues in John le Carré thrillers. They felt more connected, and went to sleep in brighter moods. If they liked the book, they couldn’t wait for the other to read the next chapter aloud—even, and perhaps especially, when the sound of the other’s voice sent them off to sleep.

A few months ago, my doctor uttered a phrase I’d long dreaded: Your blood sugar is too high. With my family history of diabetes, and occasional powerful cravings for chocolate, I knew this was coming and what it would mean: To satisfy my sweet fix, I’d have to turn to sugar substitutes. Ughhhh. Dupes such as aspartame, stevia, and sucralose (the main ingredient in Splenda) are sweet and have few or zero calories, so they typically don’t spike your blood sugar like the real thing. But while there are now more sugar alternatives than ever, many people find that they taste terrible. The aspartame in Diet Coke leaves the taste of pennies in my mouth.
And in large amounts, substitutes are bad for you: Last year, the World Health Organization warned that artificial sweeteners could raise the risk of certain diseases, singling out aspartame as “possibly carcinogenic.” But last week, I sipped a can of Arnold Palmer with a brand-new sweetener that promised to be unlike all the rest. The drink’s strong lemon flavor was mellowed by a light, unremarkable sweetness that came from brazzein, a sugar substitute green-lighted by the FDA last month. Oobli, a California-based company that sells the lemonade-iced tea and manufactures brazzein (which occurs naturally in West Africa’s oubli fruit), has billed it as a “revolution in sweetness.” Yet like everything that came before it, brazzein is far from perfect: To help mask its off taste, the can had some real sugar in it too. For now, Eric Walters, a sweetener expert at Rosalind Franklin University, told me, brazzein is just “an alternative” to the many options that already exist. None has come even close to the real deal. The ideal sugar alternative is more than just sweet. It must also be safe, taste good, and replicate the distinct way sugar’s sweetness develops on the tongue. In addition to aspartame and other synthetic sugar alternatives that have existed for more than a century, the past two decades have brought “natural” ones that are plant-derived: sweeteners made with stevia or monk fruit, which the FDA first approved in 2008 and 2010, respectively, can now be readily found in beverages such as Truly hard seltzer and Fairlife protein shakes. Stevia and monk fruit have been used “for hundreds of years by the people who live in the regions where they grow, so I don’t have huge worries about their safety,” Walters told me. All of these sweeteners work in basically the same way. Chemically, molecules other than just sugar can bind to the tongue’s sweet receptors, signaling to the brain that something sweet has landed. 
But the brain can tell when that something is not sugar. So far, no sweetener has accomplished that trick; off flavors that sometimes linger always give away the ruse. The problem is that sugar alternatives are like celebrity impersonators: aesthetically similar, reasonably satisfying, but consistently disappointing. Take stevia and monk fruit: By weight, they’re intensely sweet relative to table sugar—monk fruit by a factor of up to 250 and stevia by a factor of up to 400. Because only a tiny amount is needed to impart a sweet taste, those sweeteners must be bulked up with another substance so they more closely resemble sugar granules. Manufacturers used to add carbohydrates such as corn starch—which are eventually broken down into sugars—but they now use erythritol, a calorie-free sugar alcohol, which “doesn’t count as sugar at all,” Walters said. The end products look and feel similar to sugar, but not without downsides. Erythritol has been linked to an increased risk of heart attack and stroke. And stevia and monk-fruit sweeteners come with an aftertaste that has been described as “bitter,” “unpleasant,” and “disastrous.” When Walters first helped produce stevia 35 years ago, “the taste quality was so awful that we thought no one would buy it,” he said. “But we underestimated how much people would put up with it because it was ‘natural.’” Brazzein is yet another natural option. Unlike other sugar substitutes, brazzein is a protein, but it is still intensely sweet and low in calories. It is so sweet—about 1,000 times sweeter than sugar—that some gorillas in the wild have learned not to waste their time eating it. That protein has become a health buzzword certainly won’t hurt Oobli’s sales, but its products won’t bolster any biceps: Its teas contain very little—about 1 percent—because brazzein’s sweetness is so potent. Last month, Oobli received a “no questions asked” letter from the FDA, which means that the agency isn’t concerned about the product’s safety.
Oobli’s iced teas and chocolates are the first brazzein-sweetened products to be sold in the U.S., although the sweet protein was identified three decades ago. Thaumatin, another member of the sweet-protein family, has been in use since the 1970s, though mostly as a flavor enhancer. One reason it took brazzein so long to be marketable is that it occurs at such low levels in the oubli fruit that mass-producing it was inefficient. Instead of harvesting brazzein from fruit, Oobli grows the protein in yeast cells, which is more scalable and affordable, Jason Ryder, Oobli’s co-founder and chief technology officer, told me. One distinction between brazzein and other sweeteners is its chemical size. Compared with sugar, stevia, and monk fruit, brazzein molecules are relatively large because they are proteins, which means they aren’t metabolized in the same way, Ryder said. The effects of existing sweeteners on the body are still being investigated; although they are generally thought to not hike blood sugar or insulin, recent research suggests that they may in fact do so. That may never be a concern with brazzein, Grant DuBois, a sweetener expert and the chief science officer at Almendra, a stevia manufacturer, told me. The most compelling upside of brazzein may be that it tastes pretty good. My palate, which is extra sensitive to artificial sweeteners, wasn’t offended by the taste. Would drink again, I thought. But the glaring caveat with Oobli’s teas is that they do contain some actual sugar—just less than you’d expect from a regular drink. The sugar helps mitigate a feature of brazzein’s sweetness, Ryder said. One of the enduring problems with brazzein and many other popular sugar alternatives is that their sweetness takes more time than usual to develop, then lingers longer than expected.
Indeed, although I liked the Arnold Palmer as it went down, I felt a peculiar sensation afterward: a trace of sweetness at the back of my throat that intensified, and felt oddly cool, as I exhaled. It was not unpleasant, but it was also reminiscent of having accidentally swallowed minty gum. If Diet Coke were made with brazzein instead of aspartame, Walters explained, you’d taste caffeine’s bitterness and the tartness of phosphoric acid before any sweetness, and when all of those flavors dissipated, the sweetness would hang around. “It’s just not what you want your beverage to be,” he said. Balancing brazzein with a touch of sugar achieves the goal of reducing sugar intake. But most of the time, people who seek out products sweetened with sugar alternatives want “zero sugar,” DuBois said, “so that’s not really a great solution to the problem.” The perfect sweetener would wholly replace all of the sugar in a food, but brazzein probably won’t get there unless the peculiarities of its sweetness can be fully addressed. “If I knew how, I could probably make millions of dollars,” Walters said. The future of sugar substitutes may soon offer improvements rather than alternatives. Last year, DuBois and his colleagues at Almendra published a peer-reviewed paper describing a method to speed up slow-moving sweetness by adding a pinch of mineral salts to sweeteners, which helps them quickly travel through the thick mucus of the tongue, resulting in a vastly improved experience of sweetness. “It works with stevia, but also aspartame, sucralose, monk fruit—it works very well with everything we’ve tried,” DuBois said, noting that it would probably also work with brazzein. With the right technology, sweeteners, he said, can become “remarkably sugarlike.” Yet searching for the perfect sugar alternative is a fool’s errand. No matter how good they get, a single substance is unlikely to satisfy all tastes and expectations about health.
As my colleague Amanda Mull wrote when aspartame was deemed carcinogenic over the summer, there’s always something. Much is left to be learned about the health effects of natural sweeteners, which may not be as natural as they seem; some stevia products, for example, are chemically modified to taste better, Walters told me. More than anything, sweeteners exist so that people can indulge in sweet treats without needing to worry about the consequences. They can address most of sugar’s problems—but they can’t do everything. “If you pick one sweetener and put it in everything, and drink and eat it all day long, that’s probably not a good thing for you,” Walters said. A sugar-free, flawlessly sweet chocolate may someday come to exist, but I’ll probably never be able to gorge on it without dreading my next blood test.

In the fall of 2021, Gabriel Arias felt like his body was “rotting from the inside.” He was diagnosed with acute myeloid leukemia, a form of blood cancer so aggressive that doctors had him hospitalized the day of his biopsy. In cases like his, the ideal treatment is a transplant. Arias’s cancer-prone blood cells needed to be destroyed and replaced with healthy ones taken from the bone marrow or blood of a donor who matched him biologically. Fortunately, doctors found him a match in the volunteer-donor registries—a man in Poland. Unfortunately, Arias’s single match in the entire world was no longer available to donate. In the past, the road to transplant might have ended here, but a medical advance had dramatically expanded the pool of donors for patients such as Arias. With the right drug, Arias could now get a transplant from his brother, a partial match, or, as he ultimately chose, he could join a clinical trial in which his donor would be a stranger who shared just eight of 10 markers used in bone-marrow transplants.
Under this looser standard, Arias’s registry matches multiplied from one to more than 200. “It really is a game changer,” says Steve Devine, the chief medical officer of NMDP, a nonprofit that runs a donor registry. Today, agonizing searches for a matched donor are largely a thing of the past. The drug powering this breakthrough is actually very old. Cyclophosphamide was first developed in the 1950s for chemotherapy. Fifty years later, researchers at Johns Hopkins began studying whether it could be repurposed to prevent a common and sometimes deadly complication of bone-marrow transplants called graft-versus-host disease, where the donor’s white blood cells—which form the recipient’s new immune system—attack the rest of the body as foreign. The bigger the mismatch between donor and recipient, the more likely this was to happen. Cyclophosphamide worked stunningly well against graft-versus-host disease: The drug cut rates of acute and severe complications by upwards of 80 percent. Cyclophosphamide has now enabled more patients than ever to get bone-marrow transplants—more than 7,000 last year, according to NMDP. (Bone-marrow transplant is still used as an umbrella term, though many of these procedures now use cells collected from the blood rather than bone marrow, which can be done without surgery. Both versions are also known, more accurately, as hematopoietic or blood stem-cell transplants.) The field has essentially surmounted the problem of matching donors, a major barrier to transplants, Ephraim Fuchs, an oncologist at Johns Hopkins University, told me. Fuchs couldn’t remember the last time a patient failed to get a blood stem-cell transplant because they couldn’t find a donor. It wasn’t obvious that cyclophosphamide would work so well. “I’m just going to come clean,” Devine told me.
“Back in 2003 and 2005, I thought it was crazy.” Derived from a relative of mustard gas, the drug is known to be highly toxic to a variety of blood cells; in fact, doctors had long used it to kill the diseased bone marrow in patients before transplant. Why would you want to give such a drug after transplant, when the new donor cells are still precious and few? It defied a certain logic. But as far back as the 1960s, researchers also noticed that high doses of post-transplant cyclophosphamide could prevent graft-versus-host disease in mice, even if they did not know why. Over the next few decades, scientists working away in labs learned that cyclophosphamide isn’t quite carpet-bombing the blood. It actually spares the stem cells most important to successful transplant. (Blood stem cells differentiate into all the types of red and white blood cells that a patient will need.) Why cyclophosphamide works so well against graft-versus-host disease is still unclear, but the drug also seems to selectively kill white blood cells active in the disease while sparing those that quell the immune system. By the late ’90s, doctors saw a clear need to expand the search for donors. Bone-marrow transplants are most successful when donor and recipient share the same markers, known as HLA, which are protein tags our cells use to distinguish self from nonself. We inherit HLA markers from our parents, so siblings have about a one-in-four chance of being perfectly matched. As families got smaller in the 20th century, though, the likelihood of a sibling match fell. Donor registries such as NMDP were created to fill the gap, however imperfectly. Doctors soon began coalescing around the idea of using family members who were only haploidentical, or half matched, meaning they shared at least five out of 10 HLA markers. Every child is a half match to their parents, and every parent to their child; siblings also have a 50 percent chance of being half matches. 
But when doctors first tried these transplants, the “outcomes were horrible,” Leo Luznik, an oncologist at Johns Hopkins, told me. Patients had frighteningly high rates of graft-versus-host disease, and more than half died within three years. Based on the lab findings, Luznik, Fuchs, and other colleagues at Johns Hopkins wondered if post-transplant cyclophosphamide could help. The pharmaceutical companies that made it were uninterested in funding any research, Luznik said, because “it was an old, very cheap drug.” With government grants, however, the team was able to prove that cyclophosphamide got the rate of graft-versus-host disease as low as in matched sibling transplants. By the late 2000s, transplants with half-matched family members were becoming routine. Still, not every patient will have a sibling or parent or child who can donate. Doctors began wondering if cyclophosphamide could work for unrelated donors too. If only eight of the 10 markers have to be matched, then almost everyone would find a donor, even multiple donors. This was especially important for patients of mixed or non-European ancestry, who have a harder time finding unrelated donors, because people of those backgrounds make up a smaller proportion of registry donors and because they can carry a more diverse set of HLA markers. Two-thirds of white people can find a fully matched registry donor, but that number drops to 23 percent for Black Americans and 41 percent for Asians or Pacific Islanders. Amelia Johnson, who is half Indian and half Black, was one of the first children to get a transplant from a mismatched unrelated donor in a clinical trial in 2022. Her mom, Salome Sookdieopersad, remembers being told, “You guys need to start recruiting bone-marrow donors to help increase your chances.” When that still didn’t turn up an ideal match, Sookdieopersad prepared to donate to her daughter as a half match. But then Amelia was offered a spot in the clinical trial, and they decided to take it.
Transplants with mismatched unrelated donors had already been tried in adults—that was Arias’s trial—and they offered other potential benefits. A younger donor, for example, has younger cells, which fare noticeably better than older ones. Amelia did end up with a bout of graft-versus-host disease; cyclophosphamide lowers the risk but not to zero. Still, the transplant was necessary to save her life, and her mom pointed out that some risk was unavoidable, no matter the type of donor: A friend of Amelia’s got graft-versus-host even with a perfectly matched one. Doctors were able to treat Amelia’s complications, and she returned to school last August. The pediatric trial she was part of is ongoing. In adults, where more data are available, doctors are already moving ahead with mismatched, unrelated donors. Between this and half-matched family members, patients who once might have had zero donors are now finding themselves with multiple possibilities. Doctors can be choosier too: They can select the youngest donor, for example, or match on characteristics such as blood type. The larger pool of donors also prevents situations like Arias’s, in which a single matched donor who signed up years ago is no longer available, which happens with some regularity. Cyclophosphamide is now routinely used in matched transplants too, because it lowers the risk of graft-versus-host disease even further. Arias’s mismatched unrelated donor in the trial was an anonymous 22-year-old man who lives somewhere in the United States. When Arias and I spoke last month, it had been almost exactly two years since his transplant. He’s cancer free. He and his wife just welcomed a baby girl. None of this would have likely been possible without the transplant, without the donor, without a 70-year-old drug that had been smartly repurposed. 
After a decade working as an obstetrician-gynecologist, Marci Bowers thought she understood menopause. Whenever she saw a patient in her 40s or 50s, she knew to ask about things such as hot flashes, vaginal dryness, mood swings, and memory problems. And no matter what a patient’s concern was, Bowers almost always ended up prescribing the same thing. “Our answer was always estrogen,” she told me. Then, in the mid-2000s, Bowers took over a gender-affirmation surgical practice in Colorado. In her new role, she began consultations by asking each patient what they wanted from their body—a question she’d never been trained to ask menopausal women. Over time, she grew comfortable bringing up tricky topics such as pleasure, desire, and sexuality, and prescribing testosterone as well as estrogen. That’s when she realized: Women in menopause were getting short shrift. Menopause is a body-wide hormonal transition that affects virtually every organ, from skin to bones to brain. The same can be said of gender transition, which, like menopause, is often referred to by doctors and transgender patients as “a second puberty”: a roller coaster of physical and emotional changes, incited by a dramatic shift in hormones. But medicine has only recently begun connecting the dots. In the past few years, some doctors who typically treat transgender patients—urologists, gender-affirmation surgeons, sexual-medicine specialists—have begun moving into menopause care and bringing with them a new set of tools. “In many ways, trans care is light-years ahead of women’s care,” Kelly Casperson, a urologist and certified menopause provider in Washington State, told me. Providers who do both are well versed in the effects of hormones, attuned to concerns about sexual function, and empathetic toward people who have had their symptoms dismissed by providers. 
If the goal of menopause care isn’t just to help women survive but also to allow them to live their fullest life, providers would do well to borrow some insights from a field that has been doing just that for decades. [From the October 2019 issue: The secret power of menopause] American women’s relationship with estrogen has been a rocky one. In the 1960s, books such as Feminine Forever, written by the gynecologist Robert A. Wilson, framed estrogen as a magical substance that could make women once again attractive and sexually available, rendering the menopausal “much more pleasant to live with.” (The New York Times later reported that Wilson was paid by the manufacturer of Premarin, the most popular estrogen treatment at the time.) Later, the pitch switched to lifelong health. By 1992, Premarin was the most prescribed drug in the United States. By the end of the decade, 15 million women were on estrogen therapy, with or without progesterone, to treat their menopause symptoms. Then, in 2002, a large clinical trial concluded that oral estrogen plus progesterone treatment was linked to an increased risk of stroke, heart disease, and breast cancer. The study was an imperfect measure of safety—it focused on older women rather than on the newly menopausal, and it tested only one type of estrogen—but oral-estrogen prescriptions still plummeted, from nearly a quarter of women over 40 to roughly 5 percent. Despite this blow to the hormone’s reputation, evidence has continued to pile up confirming that oral estrogen can help prevent bone loss and treat hot flashes and night sweats, though it can increase the risk of strokes for women over 60. Topical estrogen helps address genital symptoms, including vaginal dryness, irritation, and thinning of the tissues, as well as urinary issues such as chronic UTIs and incontinence. 
But estrogen alone can’t address every menopause symptom, in part because estrogen is not the only hormone that’s in short supply during menopause; testosterone is too. Although high-quality research on the role of testosterone in women over age 65 is scarce, researchers know that in premenopausal women, it plays a role in bone density, heart health, metabolism, cognition, and the function of the ovaries and bladder. A 2022 review concluded, “Testosterone is a vital hormone in women in maintaining sexual health and function” after menopause. Yet for decades, standard menopause care mostly ignored testosterone. Part of the reason is regulatory: Although estrogen has enjoyed FDA approval for menopausal symptoms since 1941, the agency has never green-lit a testosterone treatment for women, largely because of scant research. That means doctors have to be familiar enough with the hormone to prescribe it off-label. And unlike estrogen, testosterone is a Schedule III controlled substance, which means more red tape. Some of Casperson’s female patients have had their testosterone prescription withheld by pharmacists; one was asked if she was undergoing gender transition. [Helen Lewis: Capitalism has plans for menopause] The other hurdle is cultural. These days, providers such as Casperson, as well as menopause-trained gynecologists, might prescribe testosterone to menopausal women experiencing difficulty with libido, arousal, and orgasm. Many women see improvements in these areas after a few months. But first, they have to get used to the idea of taking a hormone they’ve been told all their lives is for men, at just the time when their femininity can feel most tenuous (see: Feminine Forever). Here, too, experience in trans care can help: Casperson has talked many transmasculine patients through similar hesitations about using genital estrogen cream to balance out the side effects of their high testosterone doses. 
Taking estrogen, she tells those patients, “doesn’t mean you’re not who you want to be,” just as taking testosterone wouldn’t change a menopause patient’s gender identity. Many trans-health providers have also honed their skills in speaking frankly about sexuality. That’s especially true for those who do surgeries that will affect a patient’s future sex life, Blair Peters, a plastic surgeon at Oregon Health & Science University who performs phalloplasties and vaginoplasties, told me. Experts I spoke with, including urologists and gynecologists with training in sexual health, said that gynecologists can often fall short in this regard. Despite treating vaginas for a living, they can often be uncomfortable bringing up sexual concerns with patients or inexperienced at treating issues beyond vaginal dryness. They can also assume, inaccurately, that concerns about vaginal discomfort always center on penetrative sex with a male partner, Tania Glyde, an LGBTQ+ therapist in London and the founder of the website Queer Menopause, told me. A 2022 survey of OB-GYN residency programs found that less than a third had a dedicated menopause curriculum. Bowers, who is herself transgender, told me she got comfortable talking about sexuality in a clinical setting only after moving into trans care. If she were to return to gynecology today, she said, she would add some frank questions to her conversations with midlife patients who share that they’re having sexual issues: “Tell me about your sexuality. Tell me, are you happy with that? How long does it take you to orgasm? Do you masturbate? What do you use?” Menopause care has already benefited from decades of effort by queer people, who have pushed doctors to pay more attention to a diversity of experiences. 
Research dating as far back as the 2000s that included lesbians going through menopause helped show that common menopause stereotypes, such as anxiety over remaining attractive to men and disconnect between members of a couple, were far from universal. Trans people, too, have benefited from advances in menopause care. Because both gender transition and menopause involve a sharp drop in estrogen, many transmasculine men who take testosterone also lose their period and experience a similar (though more extreme) version of the genital dryness and irritation that menopausal women report. That means they can benefit from treatments developed for menopausal women, as Tate Smith, a 25-year-old trans activist in the U.K., realized when he experienced genital pain and spotting after starting testosterone at 20. After he found relief with topical estrogen cream, he made an Instagram post coining the term trans male menopause to make sure more trans men were aware of the connection. [Read: What menopause does to women’s brains] The more menopause and gender care are considered together in medical settings, the better the outcomes will be for everyone involved. Yet menopause studies rarely consider trans men and nonbinary people, along with younger women and girls who experience menopause due to cancer treatment, surgery, or health conditions that affect ovarian function. Although these patient populations represent a small proportion of the patients going through menopause, their experiences can help researchers understand the effects of low estrogen across a range of bodies. Siloing off menopause from other relevant fields of medicine means menopausal women and trans people alike can miss out on knowledge and treatments that already exist. Unlike gender transition, menopause is generally not chosen. But it too can be an opportunity for a person to make choices about what they want out of their changing body. Not all women in menopause are worried about their libido or interested in taking testosterone. 
Like trans patients, they deserve providers who listen to what they care about and then offer them a full range of options, not just a limited selection based on outdated notions of what menopause is supposed to be. 

In December 1921, Leonard Thompson was admitted to Toronto General Hospital so weak and emaciated that his father had to carry him inside. Thompson was barely a teenager, weighing all of 65 pounds, dying of diabetes. With so little to lose, he was an ideal candidate to be patient No. 1 for a trial of the pancreatic extract that would come to be called insulin. The insulin did what today we know it can. “The boy became brighter, more active, looked better and said he felt stronger,” the team of Toronto researchers and physicians reported in March 1922 in The Canadian Medical Association Journal. The article documented their use of insulin on six more patients; it had seemingly reversed the disease in every case. As John Williams, a diabetes specialist in Rochester, New York, wrote of the first patient on whom he tried insulin later that year, “The restoration of this patient to his present state of health is an achievement difficult to record in temperate language. Certainly few recoveries from impending death more dramatic than this have ever been witnessed by a physician.” Of all the wonder drugs in the history of medicine, insulin may be the closest parallel, in both function and purpose, to this century’s miracle of a metabolic drug: the GLP-1 agonist. Sold under now-familiar brand names including Ozempic, Wegovy, and Mounjaro, these new medications for diabetes and obesity have been hailed as a generational breakthrough that may one day stand with insulin therapy among “the greatest advances in the annals of chronic disease,” as The New Yorker put it in December. 
But if that analogy is apt—and the correspondences are many—then a more complicated legacy for GLP-1 drugs could be in the works. Insulin, for its part, may have changed the world of medicine, but it also brought along a raft of profound, unintended consequences. By 1950, the new therapy had tripled the number of years that patients at a major diabetes center could expect to live after diagnosis. It also kept those patients alive long enough for them to experience a wave of long-term complications. Leonard Thompson would die at 27 of pneumonia. Other young men and women who shared his illness also died far too young, their veins and arteries ravaged by the disease, and perhaps—there was no way to tell—by the insulin therapy and associated dietary protocols that had kept them alive in the first place. In the decades that followed, diabetes, once a rare disorder, would become so common that entire drug-store aisles are now dedicated to its treatment-related paraphernalia. Roughly one in 10 Americans is afflicted. And despite a remarkable, ever-expanding armamentarium of drug therapies and medical devices, the disease—whether in its type 1 or type 2 form—is still considered chronic and progressive. Patients live far longer than ever before, yet their condition is still anticipated to get worse with time, requiring ever more aggressive therapies to keep its harms in check. One in every seven health dollars is now spent on diabetes treatment, amounting to $800 million every day. The advent of insulin therapy also changed—I would even say distorted—the related medical science. In my latest book, Rethinking Diabetes, I document how clinical investigators in the 1920s abruptly shifted their focus from trying to understand the relationship between diet and disease to that between drug and disease. Physicians who had been treating diabetes with either fat-rich diets absent carbohydrates (which had been the accepted standard of care in both the U.S. 
and Europe) or very low-calorie “starvation” diets came to rely on insulin instead. Physicians would still insist that diet is the cornerstone of therapy, but only as an adjunct to the insulin therapy and in the expectation that any dietary advice they gave to patients would be ignored. With the sudden rise of GLP-1 drugs in this decade, I worry that a similar set of transformations could occur. Dietary therapy for obesity and diabetes may be sidelined in favor of powerful pharmaceuticals—with little understanding of how the new drugs work and what they really tell us about the mechanisms of disease. And all of that may continue despite the fact that the long-term risks of taking the drugs remain uncertain. “The ebullience surrounding GLP-1 agonists is tinged with uncertainty and even some foreboding,” Science reported in December, in its article declaring these obesity treatments the journal’s Breakthrough of the Year. “Like virtually all drugs, these blockbusters come with side effects and unknowns.” Yet given the GLP-1 agonists’ astounding popularity, such cautionary notes tend to sound like lip service. After all, the FDA has deemed these drugs safe for use, and doctors have been prescribing products in this class to diabetes patients for 20 years with little evidence of long-term harm. Yet the GLP-1 agonists’ side effects have been studied carefully only out to seven years of use, and that was in a group of patients on exenatide—an early, far less potent product in this class. The study offered no follow-up on the many participants in that trial who had discontinued use. Other long-term studies have followed patients on the drugs for at least as many years, but they’ve sought (and failed to find) only very specific harms, such as pancreatic cancer and breast cancer. In the meantime, a 2023 survey found that more than two-thirds of patients prescribed the newer GLP-1 agonists for weight loss had stopped using them within a year. Why did they quit? 
What happened to them when they did? The stories of Leonard Thompson and the many diabetes patients on insulin therapy who came after may be taken as a warning. The GLP-1 drugs have many traits in common with insulin. Both treatments became very popular very quickly. Within years of its discovery, insulin was being prescribed for essentially every diabetic patient whose physician could obtain the drug. Both insulin and GLP-1 agonists were originally developed as injectable treatments to control blood sugar. Both affect appetite and satiety, and both can have remarkable effects on body weight and composition. The GLP-1s, like insulin, treat only the symptoms of the disorders for which they are prescribed. Hence, the benefits of GLP-1s, like those of insulin, are sustained only with continued use. The two treatments are also similar in that they work, directly or indirectly, by manipulating an unimaginably complex physiological system. When present in their natural state—as insulin secreted from the pancreas, or GLP-1 secreted from the gut (and perhaps the brain)—they’re both involved in the regulation of fuel metabolism and storage, what is technically known as fuel partitioning. This system tells our bodies what to do with the macronutrients (protein, fat, and carbohydrates) in the foods we eat. Chris Feudtner, a pediatrician, medical historian, and medical ethicist at the University of Pennsylvania, has described this hormonal regulation of fuel partitioning as that of a “Council of Food Utilization.” Organs communicate with one another “via the language of hormones,” he wrote in Bittersweet, his history of the early years of insulin therapy and the transformation of type 1 diabetes from an acute to a chronic disease. “The rest of the body’s tissues listen to this ongoing discussion and react to the overall pattern of hormonal messages. 
The food is then used—for burning, growing, converting, storing, or retrieving.” Perturb that harmonious discourse, and the whole physiological ensemble of the human body reverberates with corrections and counter-corrections. This is why the long-term consequences of using these drugs can be so difficult to fathom. Insulin therapy, for instance, did not just lower patients’ blood sugar; it restored their weight and then made them fatter still (even as it inhibited the voracious hunger that was a symptom of uncontrolled diabetes). Insulin therapy may also be responsible, at least in part, for diabetic complications—atherosclerosis and high blood pressure, for instance. That possibility has been acknowledged in textbooks and journal articles but never settled as a scientific matter. With the discovery of insulin and its remarkable efficacy for treating type 1 diabetes, diabetologists came to embrace a therapeutic philosophy that is still ascendant today: Treat the immediate symptoms of the disease with drug therapy and assume that whatever the future complications, they can be treated by other drug or surgical therapies. Patients with diabetes who develop atherosclerosis may extend their lives with stents; those with hypertension may go on blood-pressure-lowering medications. A similar pattern could emerge for people taking GLP-1s. (We see it already in the prospect of drug therapies for GLP-1-related muscle loss.) But the many clinical trials of the new obesity treatments do not and cannot look at what might happen over a decade or more of steady use, or what might happen if the injections must be discontinued after that long. We take for granted that if serious problems do emerge, far down that distant road, or if the drugs have to be discontinued because of side effects, newer treatments will be available to solve the problems or take over the job of weight maintenance. 
In the meantime, young patients who stick with treatment can expect to be on their GLP-1s for half a century. What might happen during those decades—and what might happen if and when they have to discontinue use—is currently unknowable, although, at the risk of sounding ominous, we will find out. Pregnancy is another scenario that should generate serious questions. A recently published study found no elevated risk of birth defects among women taking GLP-1 agonists for diabetes right before or during early pregnancy, as compared with those taking insulin, but birth defects are just one obvious and easily observable effect of a drug taken during pregnancy. Children of a mother with diabetes or obesity tend to be born larger and have a higher risk of developing obesity or diabetes themselves later in life. The use of GLP-1 agonists during pregnancy may reduce—or exacerbate—that risk. Should the drugs be discontinued before or during pregnancy, any sudden weight gain (or regain) by the mother could similarly affect the health of her child. The consequences cannot be foreseen and might not manifest themselves until these children reach their adult years. The rise of GLP-1 drugs may also distort our understanding of obesity itself, in much the way that insulin therapy distorted the thinking in diabetes research. With insulin’s discovery, physicians assumed that all diabetes was an insulin-deficiency disorder, even though this is true today for only 5 to 10 percent of diabetic patients, those with type 1. It took until the 1960s for specialists to accept that type 2 diabetes was a very different disorder—a physiological resistance to insulin, inducing the pancreas to respond by secreting too much of the hormone rather than not enough. 
And although the prognosis today for a newly diagnosed patient with type 2 diabetes is better than ever, physicians have yet to establish whether the progression and long-term complications of the disease are truly inevitable, or whether they might be, in fact, a consequence of the insulin and other drug therapies that are used to control blood sugar, and perhaps even of the diets that patients are encouraged to eat to accommodate these drug therapies. Already, assumptions are being made about the mechanisms of GLP-1 agonists without the rigorous testing necessary to assess their validity. They’re broadly understood to work by inhibiting hunger and slowing the passage of food from the stomach—effects that sound benign, as if the drugs were little more than pharmacological versions of a fiber-rich diet. But changes to a patient’s appetite and rate of gastric emptying only happen to be easy to observe and study; they do not necessarily reflect the drugs’ most important or direct actions in the body. When I spoke with Chris Feudtner about these issues, we returned repeatedly to the concept that Donald Rumsfeld captured so well with his framing of situational uncertainty: the known unknowns and the unknown unknowns. “This isn’t a you-take-it-once-and-then-you’re-done drug,” Feudtner said. “This is a new lifestyle, a new maintenance. We have to look down the road a bit with our patients to help them think through some of the future consequences.” Patients, understandably, may have little time for a lecture on all that we don’t know about these drugs. Obesity itself comes with so many burdens—health-related, psychological, and social—that deciding, after a lifetime of struggle, to take these drugs in spite of potential harms can always seem a reasonable choice. History tells us, though, that physicians and their patients should be wary as they try to balance known benefits against a future, however distant, of unknown risk. 