Carlos Jared discovered the first known venomous frog by accident. And it took him a long time to connect his pain with tree frogs that head-butted his hand.
Jared, now at the Butantan Institute in São Paulo, got his first hint of true venom when collecting yellow-skinned frogs (Corythomantis greeningi) among cacti and scrubby trees in Brazil’s dry Caatinga region. For hours after grabbing the frogs, intense pain radiated up his arm for no obvious reason. He knew frogs have no fangs to deliver toxin. Many frog species can poison an animal that touches them, but that makes them merely poisonous. Truly venomous animals actively deliver their toxins.
Jared realized head-butting delivers venom only when he saw the frogs’ upper lips under a microscope. Bone spikes erupted near venom glands that looked “giant,” he says. As a frog’s lips curl back, glands dribble toxins onto spikes sticking out from the skull and the frog pokes them against foes.
Gram for gram, the frog venom is almost twice as dangerous to mammals as typical venom of the feared Bothrops pit vipers, Jared, Edmund Brodie Jr. of Utah State University in Logan and their colleagues report online August 6 in Current Biology. The researchers also report a second spiky-skulled venomous frog, Aparasphenodon brunoi, which is a forest species not very closely related to yellow-skinned frogs. It head-butts toxins 25 times as powerful as typical pit viper venom, a phenomenon luckily not discovered by handling.
Accidents are how most venomous animals first come to scientific notice, Brodie says. Early in his career, he discovered details of fire salamander venom by tickling a new specimen with a piece of grass. He was showing students how toxins ooze from its skin and “it sprayed me right in the eye,” he says. “I was immediately blinded.”
“I ran to the sink and ran water in my eye for about 20 minutes,” he says. “The toxin isn’t water soluble, so it didn’t help much. It was extraordinarily painful,” he notes in mild tones. Also, “the first time you observe something like that, you’re not sure it’s temporary blindness.” It was.
Venomous amphibians may be more common than people expect, Brodie says. Now that the researchers know about bone points for venom delivery, they want to investigate some salamanders with ribs that punch through the skin. And at least three more frogs grow suspicious spines around their heads. “It’s not Kermit anymore,” he says.
Editor’s Note: This story was updated on August 13, 2015, to clarify the habitat differences between the two venomous frogs.
The James Webb Space Telescope has spotted the earliest known galaxy to abruptly stop forming stars.
The galaxy, called GS-9209, quenched its star formation more than 12.5 billion years ago, researchers report January 26 at arXiv.org. That’s only a little more than a billion years after the Big Bang. Its existence reveals new details about how galaxies live and die across cosmic time.
“It’s a remarkable discovery,” says astronomer Mauro Giavalisco of the University of Massachusetts Amherst, who was not involved in the new study. “We really want to know when the conditions are ripe to make quenching a widespread phenomenon in the universe.” This study shows that at least some galaxies quenched when the universe was young. GS-9209 was first noticed in the early 2000s. In the last few years, observations with ground-based telescopes identified it as a possible quenched galaxy, based on the wavelengths of light it emits. But Earth’s atmosphere absorbs the infrared wavelengths that could confirm the galaxy’s distance and that its star-forming days were behind it, so it was impossible to know for sure.
So astrophysicist Adam Carnall and colleagues turned to the James Webb Space Telescope, or JWST. The observatory is very sensitive to infrared light, and it’s above the blockade of Earth’s atmosphere (SN: 1/24/22). “This is why JWST exists,” says Carnall, of the University of Edinburgh. JWST also has much greater sensitivity than earlier telescopes, letting it see fainter, more distant galaxies. While the largest telescopes on the ground could maybe see GS-9209 in detail after a month of observing, “JWST can pick this stuff up in a few hours.”
Using JWST observations, Carnall and colleagues found that GS-9209 formed most of its stars during a 200-million-year period, starting about 600 million years after the Big Bang. In that cosmically brief moment, it built about 40 billion solar masses’ worth of stars, about the same as the Milky Way has.
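For a rough sense of scale, here is a back-of-envelope calculation of the average star formation rate those two figures imply. Both numbers come from the study as described above; the result is only an average over the burst, sketched in Python.

```python
# Back-of-envelope check: the average star formation rate implied by
# building ~40 billion solar masses of stars in a ~200-million-year burst.
# Both figures are quoted in the article; the result is a rough average.
stellar_mass_built = 40e9      # solar masses (from the article)
burst_duration_yr = 200e6      # years (from the article)

avg_sfr = stellar_mass_built / burst_duration_yr
print(f"Average star formation rate: ~{avg_sfr:.0f} solar masses per year")
# ~200 solar masses per year, far above the Milky Way's current rate of
# roughly one to a few solar masses per year.
```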
That quick construction suggests that GS-9209 formed from a massive cloud of gas and dust collapsing and igniting stars all at once, Carnall says. “It’s pretty clear that the vast majority of the stars that are currently there formed in this big burst.”
Astronomers used to think this mode of galaxy formation, called monolithic collapse, was the way that most galaxies formed. But the idea has fallen out of favor, replaced by the notion that large galaxies form from the slow merging of many smaller ones (SN: 5/17/21).
“Now it looks like, at least for this object, monolithic collapse is what happened,” Carnall says. “This is probably the clearest proof yet that that kind of galaxy evolution happens.” As to what caused the galaxy’s star-forming frenzy to suddenly stop, the culprit appears to be an actively feeding black hole. The JWST observations detected extra emission of infrared light associated with a rapidly swirling mass of energized hydrogen, which is a sign of an accreting black hole. The black hole appears to be up to a billion times the mass of the sun.
To reach that mass in less than a billion years after the birth of the universe, the black hole must have been feeding even faster earlier on in its life, Carnall says (SN: 3/16/18). As it gorged, it would have collected a glowing disk of white-hot gas and dust around it.
“If you have all that radiation spewing out of the black hole, any gas that’s nearby is going to be heated up to an incredible extent, which stops it from falling into stars,” Carnall says.
More observations with future telescopes, like the planned Extremely Large Telescope in Chile, could help figure out more details about how the galaxy was snuffed out.
MIT chemist Admir Masic really hoped his experiment wouldn’t explode.
Masic and his colleagues were trying to re-create an ancient Roman technique for making concrete, a mix of cement, gravel, sand and water. The researchers suspected that the key was a process called “hot mixing,” in which dry granules of calcium oxide, also called quicklime, are mixed with volcanic ash to make the cement. Then water is added.
Hot mixing, they thought, would ultimately produce a cement that wasn’t completely smooth and mixed, but instead contained small calcium-rich rocks. Those little rocks, ubiquitous in the walls of the Romans’ concrete buildings, might be the key to why those structures have withstood the ravages of time. That’s not how modern cement is made. The reaction of quicklime with water is highly exothermic, meaning that it can produce a lot of heat — and possibly an explosion.
“Everyone would say, ‘You are crazy,’” Masic says.
But no big bang happened. Instead, the reaction produced only heat, a damp sigh of water vapor — and a Roman-like cement mixture bearing small white calcium-rich rocks.
Researchers have been trying for decades to re-create the Roman recipe for concrete longevity — but with little success. The idea that hot mixing was the key was an educated guess.
Masic and colleagues had pored over texts by Roman architect Vitruvius and historian Pliny, which offered some clues as to how to proceed. The texts cited, for example, strict specifications for the raw materials, such as requiring that the limestone used to make the quicklime be very pure, and noted that mixing quicklime with hot ash and then adding water could produce a lot of heat.
The rocks were not mentioned, but the team had a feeling they were important. “In every sample we have seen of ancient Roman concrete, you can find these white inclusions,” Masic says, referring to the bits of rock embedded in the walls. For many years, he says, the origin of those inclusions was unclear — researchers suspected incomplete mixing of the cement, perhaps. But these are the highly organized Romans we’re talking about. How likely is it that “every operator [was] not mixing properly and every single [building] has a flaw?”
What if, the team suggested, these inclusions in the cement were actually a feature, not a bug? The researchers’ chemical analyses of such rocks embedded in the walls at the archaeological site of Privernum in Italy indicated that the inclusions were very calcium-rich.
That suggested the tantalizing possibility that these rocks might be helping the buildings heal themselves from cracks due to weathering or even an earthquake. A ready supply of calcium was already on hand: It would dissolve, seep into the cracks and re-crystallize. Voila! Scar healed.
But could the team observe this in action? Step one was to re-create the rocks via hot mixing and hope nothing exploded. Step two: Test the Roman-inspired cement. The team created concrete with and without the hot mixing process and tested them side by side. Each block of concrete was broken in half, the pieces placed a small distance apart. Then water was trickled through the crack to see how long it took before the seepage stopped.
“The results were stunning,” Masic says. The blocks incorporating hot mixed cement healed within two to three weeks. The concrete produced without hot mixed cement never healed at all, the team reports January 6 in Science Advances.
Cracking the recipe could be a boon to the planet. The Pantheon and its soaring, detailed concrete dome have stood nearly 2,000 years, for instance, while modern concrete structures have a lifespan of perhaps 150 years, and that’s a best case scenario (SN: 2/10/12). And the Romans didn’t have steel reinforcement bars shoring up their structures.
More frequent replacement of concrete structures means more greenhouse gas emissions. Concrete manufacturing is a huge source of atmospheric carbon dioxide, so longer-lasting versions could reduce that carbon footprint. “We make 4 gigatons per year of this material,” Masic says. That manufacture produces as much as 1 metric ton of CO2 per metric ton of concrete produced, currently amounting to about 8 percent of annual global CO2 emissions.
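As a rough sanity check on those figures, the short Python sketch below multiplies the quoted production and emission numbers and compares the result with an assumed round value for total annual global CO2 emissions; the 50-gigaton figure is an illustrative assumption chosen to be consistent with the article's 8 percent, not a number from the article.

```python
# Back-of-envelope check of the concrete CO2 figures quoted above.
# Production and emission factor come from the article; the global total
# is an assumed ballpark used only for illustration.

concrete_per_year_gt = 4.0    # gigatons of concrete produced annually (article)
co2_per_ton_concrete = 1.0    # up to ~1 t CO2 per t of concrete (article, upper bound)
global_co2_gt = 50.0          # assumed round figure for annual global CO2 emissions

concrete_co2_gt = concrete_per_year_gt * co2_per_ton_concrete
share = concrete_co2_gt / global_co2_gt

print(f"Concrete-related CO2: ~{concrete_co2_gt:.0f} Gt per year")
print(f"Share of global emissions: ~{share:.0%}")  # ~8 percent, matching the article
```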
Still, Masic says, the concrete industry is resistant to change. For one thing, there are concerns about introducing new chemistry into a tried-and-true mixture with well-known mechanical properties. But “the key bottleneck in the industry is the cost,” he says. Concrete is cheap, and companies don’t want to price themselves out of competition.
The researchers hope that reintroducing this technique that has stood the test of time, and that could involve little added cost to manufacture, could answer both these concerns. In fact, they’re banking on it: Masic and several of his colleagues have created a startup they call DMAT that is currently seeking seed money to begin to commercially produce the Roman-inspired hot-mixed concrete. “It’s very appealing simply because it’s a thousands-of-years-old material.”
In Appalachia’s coal country, researchers envision turning toxic waste into treasure. The pollution left behind by abandoned mines is an untapped source of rare earth elements.
Rare earths are a valuable set of 17 elements needed to make everything from smartphones and electric vehicles to fluorescent bulbs and lasers. With global demand skyrocketing and China having a near-monopoly on rare earth production — the United States has only one active mine — there’s a lot of interest in finding alternative sources, such as ramping up recycling. Pulling rare earths from coal waste offers a two-for-one deal: By retrieving the metals, you also help clean up the pollution.
Long after a coal mine closes, it can leave a dirty legacy. When some of the rock left over from mining is exposed to air and water, sulfuric acid forms and pulls heavy metals from the rock. This acidic soup can pollute waterways and harm wildlife.
Recovering rare earths from what’s called acid mine drainage won’t single-handedly satisfy rising demand for the metals, acknowledges Paul Ziemkiewicz, director of the West Virginia Water Research Institute in Morgantown. But he points to several benefits.
Unlike ore dug from typical rare earth mines, the drainage is rich with the most-needed rare earth elements. Plus, extraction from acid mine drainage doesn’t generate the radioactive waste that’s typically a by-product of rare earth mines, which often contain uranium and thorium alongside the rare earths. And from a practical standpoint, existing facilities to treat acid mine drainage could be used to collect the rare earths for processing. “Theoretically, you could start producing tomorrow,” Ziemkiewicz says.
From a few hundred sites already treating acid mine drainage, nearly 600 metric tons of rare earth elements and cobalt — another in-demand metal — could be produced annually, Ziemkiewicz and colleagues estimate.
Currently, a pilot project in West Virginia is taking material recovered from an acid mine drainage treatment site and extracting and concentrating the rare earths.
If such a scheme proves feasible, Ziemkiewicz envisions a future in which cleanup sites send their rare earth hauls to a central facility to be processed, and the elements separated. Economic analyses suggest this wouldn’t be a get-rich scheme. But, he says, it could be enough to cover the costs of treating the acid mine drainage.
Penicillin, effective against many bacterial infections, is often a first-line antibiotic. Yet it is also one of the most common causes of drug allergies. Around 10 percent of people say they’ve had an allergic reaction to penicillin, according to the U.S. Centers for Disease Control and Prevention.
Now researchers have found a genetic link to the hypersensitivity, which, while rarely fatal, can cause hives, wheezing, arrhythmias and more.
People who report penicillin allergies can have a genetic variation in an immune system gene that helps the body distinguish between our own cells and harmful bacteria and viruses. That hot spot is on the major histocompatibility complex gene HLA-B, said Kristi Krebs, a pharmacogenomics researcher for the Estonian Genome Center at the University of Tartu. She presented the finding October 26 at the American Society of Human Genetics 2020 virtual meeting. The research was also published online October 1 in the American Journal of Human Genetics.
Several recent studies have connected distinct differences in HLA genes to bad reactions to specific drugs. For example, studies have linked an HLA-B variant to adverse reactions to an HIV/AIDS medication called abacavir, and they’ve linked a different HLA-B variant to allergic reactions to the gout medicine allopurinol. “So it’s understandable that this group of HLA variants can predispose us to higher risk of allergic drug reactions,” says Bernardo Sousa-Pinto, a researcher in drug allergies and evidence synthesis at the University of Porto in Portugal, who was not involved in the study.
For the penicillin study, the team hunted through more than 600,000 electronic health records that included genetic information for people who self-reported penicillin allergies. The researchers used several genetic search tools, which comb through DNA in search of genetic variations that may be linked to a health problem. Their search turned up a specific spot on chromosome 6, a variant called HLA-B*55:01.
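At its core, such a search repeats a simple case/control association test at each variant. The minimal sketch below shows the idea; the counts are invented for illustration and are not the study's data, and a real genome-wide analysis adds many further steps such as quality control, covariates and multiple-testing correction.

```python
# Minimal sketch of the kind of case/control association test that
# genome-wide search tools run at each variant. The counts below are
# hypothetical, chosen only to illustrate the calculation.
from scipy.stats import fisher_exact

# 2x2 table: rows = carries HLA-B*55:01 (yes/no),
#            columns = reports a penicillin allergy (yes/no)
table = [[180, 1_020],    # carriers: allergic vs. not (made-up counts)
         [820, 7_980]]    # non-carriers: allergic vs. not (made-up counts)

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2e}")
# A genome-wide scan repeats a test like this at millions of variants and
# applies a stringent significance threshold to correct for multiple testing.
```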
The group then checked its results against 1.12 million people of European ancestry in the research database of the genetic-testing company 23andMe and found the same link. A check of smaller databases including people with East Asian, Middle Eastern and African ancestries found no similar connection, although those sample sizes were too small to be sure, Krebs said.
It’s too soon to tell if additional studies will “lead to better understanding of penicillin allergy and also better prediction,” she said.
Penicillin allergies often begin in childhood, but can wane over time, making the drugs safer to use some years later, Sousa-Pinto says. In this study, self-reported allergies were not confirmed with a test, so there’s a chance that some participants were misclassified. This is very common, Sousa-Pinto says. “It would be interesting to replicate this study in … participants with confirmed penicillin allergy.”
The distinction matters, because about 90 percent of patients who claim to be allergic to penicillin can actually safely take the drug (SN: 12/11/16). Yet, Sousa-Pinto says, those people may be given a more-expensive antibiotic that may not work as well. Less-effective antibiotics can make patients more prone to infections with bacteria that are resistant to the drugs. “This … is something that has a real impact on health care and on health services,” he says.
The fate of a potential new Alzheimer’s drug is still uncertain. Evidence that the drug works isn’t convincing enough for it to be approved, outside experts told the U.S. Food and Drug Administration during a Nov. 6 virtual meeting that at times became contentious.
The scientists and clinicians were convened at the request of the FDA to review the evidence for aducanumab, a drug that targets a protein called amyloid-beta that accumulates in the brains of people with Alzheimer’s. The drug is designed to stick to A-beta and stop it from forming larger, more dangerous clumps. That could slow the disease’s progression but not stop or reverse it.
When asked whether a key clinical study provided strong evidence that the drug effectively treated Alzheimer’s, eight of 11 experts voted no. One expert voted yes, and two were uncertain.
The FDA is not bound to follow the recommendations of the advisory committee, though it has historically done so. If ultimately approved, the drug would be a milestone, says neurologist and neuroscientist Arjun Masurkar of New York University Langone’s Alzheimer’s Disease Research Center. Aducanumab “would be the first therapy that actually targets the underlying disease itself and slows progression.”
Developed by the pharmaceutical company Biogen, which is based in Cambridge, Mass., the drug is controversial. That’s because two large clinical trials of aducanumab have yielded different outcomes, one positive and one negative (SN: 12/5/19). The trials were also paused at one point, based on analyses that suggested the drug didn’t work.
Those unusual circumstances created gaps in the evidence, leaving big questions in some scientists’ minds about whether the drug is effective. Aducanumab’s ability to treat Alzheimer’s “cannot be proven by clinical trials with divergent outcomes,” researchers wrote in a perspective article published November 1 in Alzheimer’s & Dementia. The drug should be tested again with a different clinical trial, those researchers say.
But other groups, including the Alzheimer’s Association, are rooting for the drug. In a letter sent to the FDA on October 23, the nonprofit health organization urged aducanumab’s approval, along with longer-term studies of the drug.
“While the trial data has led to some uncertainty among the scientific community, this must be weighed against the certainty of what this disease will do to millions of Americans absent a treatment,” Joanne Pike, chief strategy officer of the Alzheimer’s Association, wrote in the letter. She noted that by 2050, more than 13 million Americans 65 and older may have Alzheimer’s. More than 5 million Americans currently have the disease.
Even with an eventual approval, questions would remain for patients and their caregivers, says Zaldy Tan, a geriatric memory specialist at Cedars-Sinai Medical Center in Los Angeles. “Cost and logistics are going to be complex issues to tackle,” he says. One estimate puts aducanumab’s price tag at $40,000 annually, for instance, and treatment would involve injections, requiring regular visits to a health care facility.
Bacteria go to extremes to handle hard times: They hunker down, building a fortress-like shell around their DNA and turning off all signs of life. And yet, when times improve, these dormant spores can rise from the seeming dead.
But “you gotta be careful when you decide to come back to life,” says Peter Setlow, a biochemist at UConn Health in Farmington. “Because if you get it wrong, you die.” How is a spore to tell?
For spores of the bacterium Bacillus subtilis, the solution is simple: They count.
These “living rocks” sense it’s time to revive, or germinate, by essentially counting how often they encounter nutrients, researchers report in a new study in the Oct. 7 Science. “They appear to have literally no measurable biological activity,” says Gürol Süel, a microbiologist at the University of California, San Diego. But Süel and his colleagues knew that spores’ cores contain positively charged potassium atoms, and because these atoms can move around without the cell using energy, the team suspected that potassium could be involved in shocking the cells awake.
So the team exposed B. subtilis spores to nutrients and used colorful dyes to track the movement of potassium out of the core. With each exposure, more potassium left the core, shifting its electrical charge to be more negative. Once the spores’ cores were negatively charged enough, germination was triggered, like a champagne bottle finally popping its cork. The number of exposures it took to trigger germination varied by spore, just like some corks require more or less twisting to pop. Spores whose potassium movement was hamstrung showed limited change in electric charge and were less likely to “pop” back to life no matter how many nutrients they were exposed to, the team’s experiments showed.
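The counting behavior can be pictured with a toy simulation. The sketch below uses made-up numbers, not measurements from the study: it treats each nutrient exposure as a small potassium efflux that nudges the core's charge toward a germination threshold, with the size of each step varying from spore to spore.

```python
# Toy model of the counting behavior described above: each nutrient
# exposure lets some potassium leave the spore's core, making its charge
# more negative, and germination triggers once a threshold is crossed.
# All numbers are illustrative, not measurements from the study.
import random

def exposures_until_germination(threshold_mv=-30.0, seed=None):
    rng = random.Random(seed)
    charge_mv = 0.0           # relative core charge, arbitrary starting point
    exposures = 0
    while charge_mv > threshold_mv:
        exposures += 1
        charge_mv -= rng.uniform(2.0, 8.0)  # potassium efflux per exposure varies
    return exposures

# Different "spores" need different numbers of exposures, like corks that
# take more or less twisting to pop.
print([exposures_until_germination(seed=s) for s in range(5)])
```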
Changes in the electrical charge of a cell are important across the tree of life, from determining when brain cells zip off messages to each other, to the snapping of a Venus flytrap (SN: 10/14/20). Finding that spores also use electrical charges to set their wake-up calls excites Süel. “You want to find principles in biology,” he says, “processes that cross systems, that cross fields and boundaries.”
Spores are not only interesting for their unique and extreme biology, but also for practical applications. Some “can cause some rather nasty things” from food poisoning to anthrax, says Setlow, who was not involved in the study. Since spores are resistant to most antibiotics, understanding germination could lead to a way to bring them back to life in order to kill them for good.
Still, there are many unanswered questions about the “black box” of how spores start germination, like whether it’s possible for the spores to “reset” their potassium count. “We really are in the beginnings of trying to fill in that black box,” says Kaito Kikuchi, a biologist now at Reveal Biosciences in San Diego who conducted the work while at University of California, San Diego. But discovering how spores manage to track their environment while more dead than alive is an exciting start.
Giving revamped silkworm silk a metallic bath may make the strands both strong and stiff, scientists report October 6 in Matter. Some strands were up to 70 percent stronger than silk spun by spiders, the team found.
The work is the latest in a decades-long quest to create fibers as strong, lightweight and biodegradable as spider silk. If scientists could mass-produce such material, the potential uses range from the biomedical to the athletic. Sutures, artificial ligaments and tendons — even sporting equipment could get an arachnid enhancement. “If you’ve got a climbing rope that weighs half of what it normally does and still has the same mechanical properties, then obviously you’re going to be a happy climber,” says Randy Lewis, a silk scientist at Utah State University in Logan who was not involved with the study.
Scrounging up enough silky material to make these super strong products has been a big hurdle. Silk from silkworms is simple to harvest, but not all that strong. And spider silk, the gold-standard for handspun strength and toughness, is not exactly easy to collect. “Unlike silkworms, spiders cannot be farmed due to their territorial and aggressive nature,” write study coauthor Zhi Lin, a structural biologist at Tianjin University in China, and colleagues.
Scientists around the world have tried to spin sturdy strands in the lab using silkworm cocoons as a starting point. The first step is to strip off the silk’s gummy outer coating. Scientists can do this by boiling the fibers in a chemical bath, but that can be like taking a hatchet to silk proteins. If the proteins get too damaged, it’s hard for scientists to respin them into high-quality strands, says Chris Holland, a materials scientist at the University of Sheffield in England who was not involved in the study.
Lin’s team tried gentler approaches, one of which used lower temperatures and a papaya enzyme to help dissolve the silk’s coating. That mild-mannered method seemed to work. “They don’t have little itty-bitty pieces of silk protein,” Lewis says. “That’s huge because the bigger the proteins that remain, the stronger the fibers are going to be.”

After some processing steps, the researchers forced the resulting silk sludge through a tiny tube, like squeezing out toothpaste. Then, they bathed the extruded silk in a solution containing zinc and iron ions, eventually stretching the strands like taffy to make long, skinny fibers. The metal dip could be why some of the strands were so strong — Lin’s team detected zinc ions in the finished fibers. But Holland and Lewis aren’t so sure.
The team’s real innovation may be that “they’ve managed to unspin silk in a less damaging way,” Holland says. Lewis agrees. “In my mind,” he says, “that’s a major step forward.”
Humankind is seeing Neptune’s rings in a whole new light thanks to the James Webb Space Telescope.
In an infrared image released September 21, Neptune and its gossamer diadems of dust take on an ethereal glow against the inky backdrop of space. The stunning portrait is a huge improvement over the rings’ previous close-up, which was taken more than 30 years ago.
Unlike the dazzling belts encircling Saturn, Neptune’s rings appear dark and faint in visible light, making them difficult to see from Earth. The last time anyone saw Neptune’s rings was in 1989, when NASA’s Voyager 2 spacecraft, after tearing past the planet, snapped a couple of grainy photos from roughly 1 million kilometers away (SN: 8/7/17). In those photos, taken in visible light, the rings appear as thin, concentric arcs.
As Voyager 2 continued on into interplanetary space, Neptune’s rings once again went into hiding — until July. That’s when the James Webb Space Telescope, or JWST, turned its sharp, infrared gaze toward the planet from roughly 4.4 billion kilometers away (SN: 7/11/22). Neptune itself appears mostly dark in the new image. That’s because methane gas in the planet’s atmosphere absorbs much of its infrared light. A few bright patches mark where high-altitude methane ice clouds reflect sunlight.
And then there are the ever-elusive rings. “The rings have lots of ice and dust in them, which are extremely reflective in infrared light,” says Stefanie Milam, a planetary scientist at NASA’s Goddard Space Flight Center in Greenbelt, Md., and one of JWST’s project scientists. The enormity of the telescope’s mirror also makes its images extra sharp. “JWST was designed to look at the first stars and galaxies across the universe, so we can really see fine details that we haven’t been able to see before,” Milam says.
Upcoming JWST observations will look at Neptune with other scientific instruments. That should provide new intel on the rings’ composition and dynamics, as well as on how Neptune’s clouds and storms evolve, Milam says. “There’s more to come.”
As people around the world marveled in July at the most detailed pictures of the cosmos snapped by the James Webb Space Telescope, biologists got their first glimpses of a different set of images — ones that could help revolutionize life sciences research.
The images are the predicted 3-D shapes of more than 200 million proteins, rendered by an artificial intelligence system called AlphaFold. “You can think of it as covering the entire protein universe,” said Demis Hassabis at a July 26 news briefing. Hassabis is cofounder and CEO of DeepMind, the London-based company that created the system. Combining several deep-learning techniques, the computer program is trained to predict protein shapes by recognizing patterns in structures that have already been solved through decades of experimental work using electron microscopes and other methods. The AI’s first splash came in 2021, with predictions for 350,000 protein structures — including almost all known human proteins. DeepMind partnered with the European Bioinformatics Institute of the European Molecular Biology Laboratory to make the structures available in a public database.
July’s massive new release expanded the library to “almost every organism on the planet that has had its genome sequenced,” Hassabis said. “You can look up a 3-D structure of a protein almost as easily as doing a keyword Google search.”
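In practice, that lookup amounts to downloading a coordinate file from the public database. The sketch below fetches one predicted structure by its UniProt accession; the URL pattern and model version reflect the database's published download scheme at one point in time and may change, so treat them as assumptions rather than a stable interface.

```python
# Minimal sketch of looking up a predicted structure in the public
# AlphaFold database. The URL pattern and file version are assumptions
# based on the database's download scheme and may change over time.
import urllib.request

uniprot_id = "P69905"  # human hemoglobin subunit alpha, used here as an example
url = f"https://alphafold.ebi.ac.uk/files/AF-{uniprot_id}-F1-model_v4.pdb"

with urllib.request.urlopen(url) as response:
    pdb_text = response.read().decode("utf-8")

# Count residues in the predicted model by scanning alpha-carbon records.
n_residues = sum(1 for line in pdb_text.splitlines()
                 if line.startswith("ATOM") and line[12:16].strip() == "CA")
print(f"{uniprot_id}: {n_residues} residues in the predicted model")
```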
These are predictions, not actual structures. Yet researchers have used some of the 2021 predictions to develop potential new malaria vaccines, improve understanding of Parkinson’s disease, work out how to protect honeybee health, gain insight into human evolution and more. DeepMind has also focused AlphaFold on neglected tropical diseases, including Chagas disease and leishmaniasis, which can be debilitating or lethal if left untreated. The release of the vast dataset was greeted with excitement by many scientists. But others worry that researchers will take the predicted structures as the true shapes of proteins. There are still things AlphaFold can’t do — and wasn’t designed to do — that need to be tackled before the protein cosmos completely comes into focus.
Having the new catalog open to everyone is “a huge benefit,” says Julie Forman-Kay, a protein biophysicist at the Hospital for Sick Children and the University of Toronto. In many cases, AlphaFold and RoseTTAFold, another AI researchers are excited about, predict shapes that match up well with protein profiles from experiments. But, she cautions, “it’s not that way across the board.”
Predictions are more accurate for some proteins than for others. Erroneous predictions could leave some scientists thinking they understand how a protein works when really, they don’t. Painstaking experiments remain crucial to understanding how proteins fold, Forman-Kay says. “There’s this sense now that people don’t have to do experimental structure determination, which is not true.”

Plodding progress

Proteins start out as long chains of amino acids and fold into a host of curlicues and other 3-D shapes. Some resemble the tight corkscrew ringlets of a 1980s perm or the pleats of an accordion. Others could be mistaken for a child’s spiraling scribbles.
A protein’s architecture is more than just aesthetics; it can determine how that protein functions. For instance, proteins called enzymes need a pocket where they can capture small molecules and carry out chemical reactions. And proteins that work in a protein complex, two or more proteins interacting like parts of a machine, need the right shapes to snap into formation with their partners.
Knowing the folds, coils and loops of a protein’s shape may help scientists decipher how, for example, a mutation alters that shape to cause disease. That knowledge could also help researchers make better vaccines and drugs.
For years, scientists have bombarded protein crystals with X-rays, flash frozen cells and examined them under high-powered electron microscopes, and used other methods to discover the secrets of protein shapes. Such experimental methods take “a lot of personnel time, a lot of effort and a lot of money. So it’s been slow,” says Tamir Gonen, a membrane biophysicist and Howard Hughes Medical Institute investigator at the David Geffen School of Medicine at UCLA.

Such meticulous and expensive experimental work has uncovered the 3-D structures of more than 194,000 proteins, their data files stored in the Protein Data Bank, supported by a consortium of research organizations. But the accelerating pace at which geneticists are deciphering the DNA instructions for making proteins has far outstripped structural biologists’ ability to keep up, says systems biologist Nazim Bouatta of Harvard Medical School. “The question for structural biologists was, how do we close the gap?” he says.
For many researchers, the dream has been to have computer programs that could examine the DNA of a gene and predict how the protein it encodes would fold into a 3-D shape.
Here comes AlphaFold

Over many decades, scientists made progress toward that AI goal. But “until two years ago, we were really a long way from anything like a good solution,” says John Moult, a computational biologist at the University of Maryland’s Rockville campus.
Moult is one of the organizers of a competition: the Critical Assessment of protein Structure Prediction, or CASP. Organizers give competitors a set of proteins for their algorithms to fold and compare the machines’ predictions against experimentally determined structures. Most AIs failed to get close to the actual shapes of the proteins. Then in 2020, AlphaFold showed up in a big way, predicting the structures of 90 percent of test proteins with high accuracy, including two-thirds with accuracy rivaling experimental methods.
Deciphering the structure of single proteins had been the core of the CASP competition since its inception in 1994. With AlphaFold’s performance, “suddenly, that was essentially done,” Moult says.
Since AlphaFold’s 2021 release, more than half a million scientists have accessed its database, Hassabis said in the news briefing. Some researchers, for example, have used AlphaFold’s predictions to help them get closer to completing a massive biological puzzle: the nuclear pore complex. Nuclear pores are key portals that allow molecules in and out of cell nuclei. Without the pores, cells wouldn’t work properly. Each pore is huge, relatively speaking, composed of about 1,000 pieces of 30 or so different proteins. Researchers had previously managed to place about 30 percent of the pieces in the puzzle. That puzzle is now almost 60 percent complete, after combining AlphaFold predictions with experimental techniques to understand how the pieces fit together, researchers reported in the June 10 Science.
Now that AlphaFold has pretty much solved how to fold single proteins, this year CASP organizers are asking teams to work on the next challenges: Predict the structures of RNA molecules and model how proteins interact with each other and with other molecules.
For those sorts of tasks, Moult says, deep-learning AI methods “look promising but have not yet delivered the goods.”
Where AI falls short

Being able to model protein interactions would be a big advantage because most proteins don’t operate in isolation. They work with other proteins or other molecules in cells. But AlphaFold’s accuracy at predicting how the shapes of two proteins might change when the proteins interact is “nowhere near” that of its spot-on projections for a slew of single proteins, says Forman-Kay, the University of Toronto protein biophysicist. That’s something AlphaFold’s creators acknowledge too.
The AI trained to fold proteins by examining the contours of known structures. And many fewer multiprotein complexes than single proteins have been solved experimentally. Forman-Kay studies proteins that refuse to be confined to any particular shape. These intrinsically disordered proteins are typically as floppy as wet noodles (SN: 2/9/13, p. 26). Some will fold into defined forms when they interact with other proteins or molecules. And they can fold into new shapes when paired with different proteins or molecules to do various jobs.
AlphaFold’s predicted shapes reach a high confidence level for about 60 percent of wiggly proteins that Forman-Kay and colleagues examined, the team reported in a preliminary study posted in February at bioRxiv.org. Often the program depicts the shapeshifters as long corkscrews called alpha helices.
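AlphaFold reports that confidence as a per-residue score called pLDDT, which in the database's PDB files is stored in the B-factor column; that storage convention is assumed in the sketch below. Low scores often flag the kinds of flexible, disordered regions discussed here.

```python
# Small sketch of extracting AlphaFold's per-residue confidence scores
# (pLDDT) from a database PDB file, assuming the common convention that
# the score is stored in the B-factor column.
def plddt_per_residue(pdb_text):
    scores = {}
    for line in pdb_text.splitlines():
        if line.startswith("ATOM") and line[12:16].strip() == "CA":
            res_id = int(line[22:26])             # residue sequence number
            scores[res_id] = float(line[60:66])   # B-factor column holds pLDDT
    return scores

# Example, reusing pdb_text fetched in the earlier snippet:
# scores = plddt_per_residue(pdb_text)
# low_confidence = [r for r, s in scores.items() if s < 50]
# print(f"{len(low_confidence)} residues with pLDDT < 50 (possible disorder)")
```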
Forman-Kay’s group compared AlphaFold’s predictions for three disordered proteins with experimental data. The structure that the AI assigned to a protein called alpha-synuclein resembles the shape that the protein takes when it interacts with lipids, the team found. But that’s not the way the protein looks all the time.
For another protein, called eukaryotic translation initiation factor 4E-binding protein 2, AlphaFold predicted a mishmash of the protein’s two shapes when working with two different partners. That Frankenstein structure, which doesn’t exist in actual organisms, could mislead researchers about how the protein works, Forman-Kay and colleagues say.

AlphaFold may also be a little too rigid in its predictions. A static “structure doesn’t tell you everything about how a protein works,” says Jane Dyson, a structural biologist at the Scripps Research Institute in La Jolla, Calif. Even single proteins with generally well-defined structures aren’t frozen in space. Enzymes, for example, undergo small shape changes when shepherding chemical reactions.
If you ask AlphaFold to predict the structure of an enzyme, it will show a fixed image that may closely resemble what scientists have determined by X-ray crystallography, Dyson says. “But [it will] not show you any of the subtleties that are changing as the different partners” interact with the enzyme.
“The dynamics are what Mr. AlphaFold can’t give you,” Dyson says.
A revolution in the making

The computer renderings do give biologists a head start on solving problems such as how a drug might interact with a protein. But scientists should remember one thing: “These are models,” not experimentally deciphered structures, says Gonen, at UCLA.
He uses AlphaFold’s protein predictions to help make sense of experimental data, but he worries that researchers will accept the AI’s predictions as gospel. If that happens, “the risk is that it will become harder and harder and harder to justify why you need to solve an experimental structure.” That could lead to reduced funding, talent and other resources for the types of experiments needed to check the computer’s work and forge new ground, he says.

Harvard Medical School’s Bouatta is more optimistic. He thinks that researchers probably don’t need to invest experimental resources in the types of proteins that AlphaFold does a good job of predicting, which should help structural biologists triage where to put their time and money.
“There are proteins for which AlphaFold is still struggling,” Bouatta agrees. Researchers should spend their capital there, he says. “Maybe if we generate more [experimental] data for those challenging proteins, we could use them for retraining another AI system” that could make even better predictions.
He and colleagues have already reverse engineered AlphaFold to make a version called OpenFold that researchers can train to solve other problems, such as those gnarly but important protein complexes.
Massive amounts of DNA generated by the Human Genome Project have made a wide range of biological discoveries possible and opened up new fields of research (SN: 2/12/22, p. 22). Having structural information on 200 million proteins could be similarly revolutionary, Bouatta says.
In the future, thanks to AlphaFold and its AI kin, he says, “we don’t even know what sorts of questions we might be asking.”