


MIT. Massachusetts Institute of Technology. In the News ...


----------------------------------------------------------------------------------------------------------------

Thu, 30 Nov 2023 00:00:00 -0500


With a quantum “squeeze,” clocks could keep even more precise time, MIT researchers propose
Posted on Thursday November 30, 2023


Category : Light

Author : Jennifer Chu | MIT News

More stable clocks could measure quantum phenomena, including the presence of dark matter.



The practice of keeping time hinges on stable oscillations. In a grandfather clock, the length of a second is marked by a single swing of the pendulum. In a digital watch, the vibrations of a quartz crystal mark much smaller fractions of time. And in atomic clocks, the world’s state-of-the-art timekeepers, the oscillations of a laser beam stimulate atoms to vibrate 9.2 billion times per second. These smallest, most stable divisions of time set the timing for today’s satellite communications, GPS, and financial markets.

A clock’s stability depends on the noise in its environment. A slight wind can throw a pendulum’s swing out of sync. And heat can disrupt the oscillations of atoms in an atomic clock. Eliminating such environmental effects can improve a clock’s precision. But only by so much.

A new MIT study finds that even if all noise from the outside world is eliminated, the stability of clocks, laser beams, and other oscillators would still be vulnerable to quantum mechanical effects. The precision of oscillators would ultimately be limited by quantum noise.

But in theory, there’s a way to push past this quantum limit. In their study, the researchers also show that by manipulating, or “squeezing,” the states that contribute to quantum noise, the stability of an oscillator could be improved, even past its quantum limit.

“What we’ve shown is, there’s actually a limit to how stable oscillators like lasers and clocks can be, that’s set not just by their environment, but by the fact that quantum mechanics forces them to shake around a little bit,” says Vivishek Sudhir, assistant professor of mechanical engineering at MIT. “Then, we’ve shown that there are ways you can even get around this quantum mechanical shaking. But you have to be more clever than just isolating the thing from its environment. You have to play with the quantum states themselves.”

The team is working on an experimental test of their theory. If they can demonstrate that they can manipulate the quantum states in an oscillating system, the researchers envision that clocks, lasers, and other oscillators could be tuned to super-quantum precision. These systems could then be used to track infinitesimally small differences in time, such as the fluctuations of a single qubit in a quantum computer or the presence of a dark matter particle flitting between detectors.

“We plan to demonstrate several instances of lasers with quantum-enhanced timekeeping ability over the next several years,” says Hudson Loughlin, a graduate student in MIT’s Department of Physics. “We hope that our recent theoretical developments and upcoming experiments will advance our fundamental ability to keep time accurately, and enable new revolutionary technologies.”

Loughlin and Sudhir detail their work in an open-access paper published in the journal Nature Communications.

Laser precision

In studying the stability of oscillators, the researchers looked first to the laser — an optical oscillator that produces a wave-like beam of highly synchronized photons. The invention of the laser is largely credited to physicists Arthur Schawlow and Charles Townes, who coined the name from its descriptive acronym: light amplification by stimulated emission of radiation.

A laser’s design centers on a “lasing medium” — a collection of atoms, usually embedded in glass or crystals. In the earliest lasers, a flash tube surrounding the lasing medium would stimulate electrons in the atoms to jump up in energy. When the electrons relax back to lower energy, they give off some radiation in the form of a photon. Two mirrors, on either end of the lasing medium, reflect the emitted photon back into the atoms to stimulate more electrons, and produce more photons. One mirror, together with the lasing medium, acts as an “amplifier” to boost the production of photons, while the second mirror is partially transmissive and acts as a “coupler” to extract some photons out as a concentrated beam of laser light.

Soon after the laser’s invention, Schawlow and Townes put forth the hypothesis that a laser’s stability should be limited by quantum noise. Others have since tested this hypothesis by modeling the microscopic features of a laser. Through very specific calculations, they showed that, indeed, imperceptible quantum interactions among the laser’s photons and atoms could limit the stability of their oscillations.

“But this work had to do with extremely detailed, delicate calculations, such that the limit was understood, but only for a specific kind of laser,” Sudhir notes. “We wanted to enormously simplify this, to understand lasers and a wide range of oscillators.”

Putting the “squeeze” on

Rather than focus on a laser’s physical intricacies, the team looked to simplify the problem.

“When an electrical engineer thinks of making an oscillator, they take an amplifier, and they feed the output of the amplifier into its own input,” Sudhir explains. “It’s like a snake eating its own tail. It’s an extremely liberating way of thinking. You don’t need to know the nitty gritty of a laser. Instead, you have an abstract picture, not just of a laser, but of all oscillators.”

In their study, the team drew up a simplified representation of a laser-like oscillator. Their model consists of an amplifier (such as a laser’s atoms), a delay line (for instance, the time it takes light to travel between a laser’s mirrors), and a coupler (such as a partially reflective mirror).

The team then wrote down the equations of physics that describe the system’s behavior, and carried out calculations to see where in the system quantum noise would arise.

“By abstracting this problem to a simple oscillator, we can pinpoint where quantum fluctuations come into the system, and they come in in two places: the amplifier and the coupler that allows us to get a signal out of the oscillator,” Loughlin says. “If we know those two things, we know what the quantum limit on that oscillator’s stability is.”

Sudhir says scientists can use the equations they lay out in their study to calculate the quantum limit in their own oscillators.

What’s more, the team showed that this quantum limit might be overcome, if quantum noise in one of the two sources could be “squeezed.” Quantum squeezing is the idea of minimizing quantum fluctuations in one aspect of a system at the expense of proportionally increasing fluctuations in another aspect. The effect is similar to squeezing air from one part of a balloon into another.

In the case of a laser, the team found that if quantum fluctuations in the coupler were squeezed, it could improve the precision, or the timing of oscillations, in the outgoing laser beam, even as noise in the laser’s power would increase as a result.
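
To make that trade-off concrete, here is a minimal numerical sketch (illustrative only, and not the authors’ model) of an ideal squeezed state in natural units: squeezing one quadrature’s variance by a factor of e^(-2r) inflates the other’s by e^(+2r), while the uncertainty product stays pinned at the quantum limit.

```python
import numpy as np

# Illustrative sketch only (not the paper's model): quadrature variances of an
# ideal squeezed vacuum state, in units where each unsqueezed quadrature has
# variance 1/2. The squeezing parameter r trades noise between quadratures.
for r in [0.0, 0.5, 1.0]:
    var_timing = 0.5 * np.exp(-2 * r)   # squeezed quadrature: timing noise down
    var_power = 0.5 * np.exp(+2 * r)    # anti-squeezed quadrature: power noise up
    product = np.sqrt(var_timing * var_power)   # stays at the quantum limit, 1/2
    print(f"r={r:.1f}  timing var={var_timing:.3f}  "
          f"power var={var_power:.3f}  product={product:.2f}")
```

In the scenario described above, the squeezed quadrature plays the role of the beam’s timing and the anti-squeezed quadrature its power.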

“When you find some quantum mechanical limit, there’s always some question of how malleable is that limit?” Sudhir says. “Is it really a hard stop, or is there still some juice you can extract by manipulating some quantum mechanics? In this case, we find that there is, which is a result that is applicable to a huge class of oscillators.”

This research is supported, in part, by the National Science Foundation.



----------------------------------------------------------------------------------------------------------------

Thu, 30 Nov 2023 00:00:00 -0500


Q&A: Phillip Sharp and Amy Brand on the future of open-access publishing
Posted on Thursday November 30, 2023


Category : Open access

Author : Peter Dizikes | MIT News

An MIT-based white paper identifies leading questions in the quest to make open-access publications sustainable.



Providing open access to scholarly publications is a long-running issue with new developments on the horizon. Last year, the U.S. federal government’s Office of Science and Technology Policy mandated that starting in 2026 publishers must provide open access to publications stemming from federal funding. That provides more impetus for the open-access movement in academia.

Meanwhile, other trends are changing academic publishing, including consolidation of journal titles and provision of access by having authors (and their home institutions) pay for publication costs. With these developments unfolding, a group of MIT scholars is releasing a new white paper about academic open-access publishing. The paper gathers information, identifies outstanding questions, and calls for further research and data to inform policy on the subject.

The group was chaired by Institute Professor Emeritus Phillip A. Sharp, of the Department of Biology and Koch Institute for Integrative Cancer Research, who co-authored the report along with William B. Bonvillian, senior director of special projects at MIT Open Learning; Robert Desimone, director of the McGovern Institute for Brain Research; Barbara Imperiali, the Class of 1922 Professor of Biology; David R. Karger, professor of electrical engineering; Clapperton Chakanetsa Mavhunga, professor of science, technology, and society; Amy Brand, director and publisher of the MIT Press; Nick Lindsay, director for journals and open access at MIT Press; and Michael Stebbins of Science Advisors, LLC.

MIT News spoke with Sharp and Brand about the state of open-access publishing.

Q: What are the key benefits of open access, as you see it?

Amy Brand: As an academic publisher running the MIT Press, we have embraced open access in both books and journals for a long time because it is our mission to support our authors and get their research out into the world. Whether it’s completely removing paywalls and barriers, or keeping prices low, we do whatever we can to disseminate the content that we publish. Even before we were talking about federal policies, this was a priority at the MIT Press.

Phillip Sharp: As a scientist, I’m interested in having my research make the largest impact it can, to help solve some of the challenges of society. And open access, making research available to people around the world, is an important aspect of that. But the quality of research is dependent upon peer review. So, I think open access policies need to be considered and promoted in the context of a very valuable and vigorous peer-review publication process.

Q: What are the key elements of this report?

Brand: The first part of the report is a history of open access, and the second part is a list of questions driving toward evidence-based policy. On the one hand, there are questions such as: How does policy impact the day-to-day work of researchers and their students? What are the impacts on the lab? Other questions have to do with the impacts on the publishing industry. One reason I was invested in doing this is concerns about the impact on nonprofit publishers, on university presses, on scientific societies that publish. Some of the questions we raise have to do with understanding the impact on smaller, nonprofit publishers and ultimately knowing how to protect their viability.

Sharp: The current policies for open access being required by OSTP’s Nelson Memo dramatically change who is paying for publication, where the resources come from for publication. It puts a lot of emphasis on the research institute or other sources to cover that. And that raises another issue in open access: Will this limit publications from researchers at institutes that cannot afford the charge? The scientific community is very international, and the impact of science in many countries is incredibly important. So dealing with the [impact of] open access is something that needs to be developed with evidence and policy.

The report notes that if open access were covered by an institution for all publications at $3,000 per article, MIT’s total cost would be $25 million per year, which works out to more than 8,000 articles annually. That’s going to be a challenge. And if it’s a challenge for MIT, it’s going to be an enormous challenge in a number of other places.

Q: What are some additional points about open access that we should keep in mind?

Brand: The Nelson Memo also provides that self-archiving is one of the ways to comply with the policy — which means authors can take an earlier version of an article and put it in an institutional repository. Here at MIT we have the DSpace repository that contains many of the papers that faculty publish. The economics of that are very different, and it’s also a little unclear how that’s going to play out. We recently saw one scientific society decide to implement a charge around that, something the community has never seen before.

But as we essentially have a system that already creates incentives for publishers to increase these article processing charges, the publication charges, there are a lot of questions about how publishers who do high-quality peer review will be sustained, and where that money is going to come from.

Sharp: When you come to the data side of the issue, it’s complicated because of the value of the data itself. It’s important that data is collected and has metadata about the research process that’s been made available to others. It’s also time to talk about this in the academic community.

Q: The report makes clear that there are multiple trends here: consolidation in for-profit publishing, growth of open-access publications, fiscal pressure on university libraries, and now the federal mandate. Complicated as the present may be, it does seem that MIT wants to look ahead on this issue.

Brand: I do think in the publishing community, and certainly in the university press community, we’ve been way out in front on this for a while, and with some of the business models we helped implement and test and create, we’re finding other publishers are following suit and they are interested. But right now, with the new federal policy, most publishers have no choice but to begin asking: What does sustainable high-quality publishing mean if, as a publisher, I have to distribute all or some of this content in open digital form?

Sharp: The purpose of this report is to stimulate that conversation: more numbers, every bit of evidence. Communities have been responsible for the quality of science in different disciplines, and sharing the responsibility of peer review is something that motivates a lot of engagement. Sustaining that is important for the discipline. Without that sustainability, there will be slower progress in science, in my opinion.



----------------------------------------------------------------------------------------------------------------

Wed, 29 Nov 2023 11:00:00 -0500


Elly Nedivi receives 2023 Krieg Cortical Kudos Discoverer Award
Posted on Wednesday November 29, 2023


Category : School of Science

Author : David Orenstein | The Picower Institute for Learning and Memory

The neuroscientist is recognized for her ongoing work to understand molecular and cellular mechanisms that enable the brain to adapt to experience.



The Cajal Club has named Elly Nedivi, William R. and Linda R. Young Professor of Neuroscience in The Picower Institute for Learning and Memory, the 2023 recipient of the Krieg Cortical Kudos Discoverer Award.

The club’s award, first bestowed in 1987, honors outstanding established investigators studying the cerebral cortex, the brain’s outer layers where circuits of neurons enable functions ranging from sensory processing to cognition. These circuits can constantly remodel their connections to adapt the brain to experience, a phenomenon called plasticity that underlies learning and memory.

With a focus on the visual cortex, Nedivi’s lab investigates the molecular and cellular mechanisms that enable plasticity in the developing and adult brain, including identification of the genes whose expression is involved, characterization of the cellular functions of the proteins those genes encode, and studies of synaptic and neuronal remodeling as it happens in live, behaving animals. To enable those observations, Nedivi and longtime collaborator Peter So, professor of mechanical engineering, have developed advanced microscopy systems that can image multiple components of neural connections in the cortex of live rodents.

In a message to Nedivi notifying her of the honor, Cajal Club president Leah Krubitzer, professor of psychology at the University of California at Davis, said: “This award recognizes your outstanding and continuous contributions to our understanding of fundamental aspects of cortical connectivity in the mammalian brain, and the cellular and molecular mechanisms underlying adult visual experience plasticity. Your work examining both the effects of visual experience manipulations and the functions of activity-induced candidate plasticity genes, by using advanced state-of-the-art in vivo multiphoton imaging technologies and sophisticated molecular genetic manipulations to expose fundamental mechanisms of brain plasticity, has made you a leader in the field, and an exceptional Krieg Cortical Discoverer award winner.”

Nedivi said she was honored to receive the award. The club conferred it Nov. 12 at its annual social during the Society for Neuroscience Annual Meeting in Washington.

“I am honored to be recognized with this award and to be following in the footsteps of many previous recipients whose work I admire and respect,” says Nedivi, a faculty member of MIT’s departments of Biology and of Brain and Cognitive Sciences.

Previous honorees with Picower Institute ties include Newton Professor of Neuroscience Mriganka Sur and Picower Institute Scientific Advisory Board member Carla Shatz, a professor at Stanford University. Nedivi’s former trainee Jerry Chen, now an associate professor at Boston University, and Sur’s former trainee Anna Majewska, now a professor at the University of Rochester, have each won Krieg Cortical Explorer awards, which are given to researchers at an earlier career stage.



----------------------------------------------------------------------------------------------------------------

Tue, 28 Nov 2023 11:00:00 -0500


A new way to see the activity inside a living cell
Posted on Tuesday November 28, 2023


Category : Research

Author : Anne Trafton | MIT News

Using fluorescent labels that switch on and off, MIT engineers can study how molecules in a cell interact to control the cell’s behavior.



Living cells are bombarded with many kinds of incoming molecular signals that influence their behavior. Being able to measure those signals and how cells respond to them through downstream molecular signaling networks could help scientists learn much more about how cells work, including what happens as they age or become diseased.

Right now, this kind of comprehensive study is not possible because current techniques for imaging cells are limited to just a handful of different molecule types within a cell at one time. However, MIT researchers have developed an alternative method that allows them to observe up to seven different molecules at a time, and potentially even more than that.

“There are many examples in biology where an event triggers a long downstream cascade of events, which then causes a specific cellular function,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology. “How does that occur? It’s arguably one of the fundamental problems of biology, and so we wondered, could you simply watch it happen?”

The new approach makes use of green or red fluorescent molecules that flicker on and off at different rates. By imaging a cell over several seconds, minutes, or hours, and then extracting each of the fluorescent signals using a computational algorithm, the amount of each target protein can be tracked as it changes over time.

Boyden, who is also a professor of biological engineering and of brain and cognitive sciences at MIT, a Howard Hughes Medical Institute investigator, and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research, as well as the co-director of the K. Lisa Yang Center for Bionics, is the senior author of the study, which appears today in Cell. MIT postdoc Yong Qian is the lead author of the paper.

Fluorescent signals

Labeling molecules inside cells with fluorescent proteins has allowed researchers to learn a great deal about the functions of many cellular molecules. This type of study is often done with green fluorescent protein (GFP), which was first deployed for imaging in the 1990s. Since then, several fluorescent proteins that glow in other colors have been developed for experimental use.

However, a typical light microscope can only distinguish two or three of these colors, allowing researchers only a tiny glimpse of the overall activity that is happening inside a cell. If they could track a greater number of labeled molecules, researchers could measure a brain cell’s response to different neurotransmitters during learning, for example, or investigate the signals that prompt a cancer cell to metastasize.

“Ideally, you would be able to watch the signals in a cell as they fluctuate in real time, and then you could understand how they relate to each other. That would tell you how the cell computes,” Boyden says. “The problem is that you can’t watch very many things at the same time.”

In 2020, Boyden’s lab developed a way to simultaneously image up to five different molecules within a cell, by targeting glowing reporters to distinct locations inside the cell. This approach, known as “spatial multiplexing,” allows researchers to distinguish signals for different molecules even though they may all be fluorescing the same color.

In the new study, the researchers took a different approach: Instead of distinguishing signals based on their physical location, they created fluorescent signals that vary over time. The technique relies on “switchable fluorophores” — fluorescent proteins that turn on and off at a specific rate. For this study, Boyden and his group members identified four green switchable fluorophores, and then engineered two more, all of which turn on and off at different rates. They also identified two red fluorescent proteins that switch at different rates, and engineered one additional red fluorophore.

Each of these switchable fluorophores can be used to label a different type of molecule within a living cell, such as an enzyme, a signaling protein, or part of the cell cytoskeleton. After imaging the cell for several minutes, hours, or even days, the researchers use a computational algorithm to pick out the specific signal from each fluorophore, analogous to how the human ear can pick out different frequencies of sound.

“In a symphony orchestra, you have high-pitched instruments, like the flute, and low-pitched instruments, like a tuba. And in the middle are instruments like the trumpet. They all have different sounds, and our ear sorts them out,” Boyden says.

The mathematical technique that the researchers used to analyze the fluorophore signals is known as linear unmixing. This method can extract different fluorophore signals, much as a Fourier transform, the mathematical counterpart of what the ear does naturally, extracts different pitches from a piece of music.
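
For intuition, linear unmixing can be posed as a least-squares problem: if each fluorophore’s on/off temporal signature is known, a measured intensity trace is modeled as a weighted sum of those signatures, and solving for the weights recovers each label’s contribution. The following is a hypothetical sketch; the switching rates, signatures, and noise level are invented for illustration and are not taken from the study.

```python
import numpy as np

# Hypothetical sketch of linear unmixing, not the study's actual pipeline.
# Three "switchable" fluorophores, each blinking at its own (assumed) rate.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)                  # imaging time points (s)
rates = [0.5, 1.3, 2.9]                      # assumed switching rates (Hz)
# Temporal signature of each fluorophore: an on/off wave at its own frequency.
basis = np.stack([(np.sin(2 * np.pi * f * t) > 0).astype(float) for f in rates])

true_amounts = np.array([2.0, 0.7, 1.5])     # per-label abundances (a.u.)
trace = true_amounts @ basis + 0.05 * rng.standard_normal(t.size)  # noisy pixel

# Least-squares unmixing: recover how much each label contributes to the trace.
est, *_ = np.linalg.lstsq(basis.T, trace, rcond=None)
print("true:", true_amounts, " estimated:", np.round(est, 2))
```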

Once this analysis is complete, the researchers can see when and where each of the fluorescently labeled molecules were found in the cell during the entire imaging period. The imaging itself can be done with a simple light microscope, with no specialized equipment required.

Biological phenomena

In this study, the researchers demonstrated their approach by labeling six different molecules involved in the cell division cycle in mammalian cells. This allowed them to identify patterns in how the levels of enzymes called cyclin-dependent kinases change as a cell progresses through the cell cycle.

The researchers also showed that they could label other types of kinases, which are involved in nearly every aspect of cell signaling, as well as cell structures and organelles such as the cytoskeleton and mitochondria. In addition to their experiments using mammalian cells grown in a lab dish, the researchers showed that this technique could work in the brains of zebrafish larvae.

This method could be useful for observing how cells respond to any kind of input, such as nutrients, immune system factors, hormones, or neurotransmitters, according to the researchers. It could also be used to study how cells respond to changes in gene expression or genetic mutations. All of these factors play important roles in biological phenomena such as growth, aging, cancer, neurodegeneration, and memory formation.

“You could consider all of these phenomena to represent a general class of biological problem, where some short-term event — like eating a nutrient, learning something, or getting an infection — generates a long-term change,” Boyden says.

In addition to pursuing those types of studies, Boyden’s lab is also working on expanding the repertoire of switchable fluorophores so that they can study even more signals within a cell. They also hope to adapt the system so that it could be used in mouse models.

The research was funded by an Alana Fellowship, K. Lisa Yang, John Doerr, Jed McCaleb, James Fickel, Ashar Aziz, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT, the Howard Hughes Medical Institute, and the National Institutes of Health.



----------------------------------------------------------------------------------------------------------------

Tue, 28 Nov 2023 11:00:00 -0500


A new way to deliver drugs more efficiently
Posted on Tuesday November 28, 2023


Category : School of Engineering

Author : Department of Chemical Engineering

Core-shell structures made of hydrogel could enable more efficient uptake in the body.



Many of the most promising new pharmaceuticals coming along in the drug development pathway are hydrophobic by nature — that is, they repel water, and are thus hard to dissolve in order to make them available to the body. But now, researchers at MIT have found a more efficient way of processing and delivering these drugs that could make them far more effective.

The new method, which involves initially processing the drugs in a liquid solution rather than in solid form, is reported in a paper in the Dec. 15 print issue of the journal Advanced Healthcare Materials, written by MIT graduate student Lucas Attia, recent graduate Liang-Hsun Chen PhD ’22, and professor of chemical engineering Patrick Doyle.

Currently, much drug processing is done through a long series of sequential steps, Doyle explains. “We think we can streamline the process, but also get better products, by combining these steps and leveraging our understanding of soft matter and self-assembly processes,” he says.

Attia adds that “a lot of small-molecule active ingredients are hydrophobic, so they don’t like being in water and they have very poor dissolution in water, which leads to their poor bioavailability.” Giving such drugs orally, which patients prefer over injections, presents real challenges in getting the material into the patient’s bloodstream. Up to 90 percent of the candidate drug molecules being developed by pharmaceutical companies actually are hydrophobic, he says, “so this is relevant to a large class of potential drug molecules.”

Another advantage of the new process, he says, is that it should make it easier to combine multiple different drugs in a single pill. “For different types of diseases where you’re taking multiple drugs at the same time, this kind of product can be very important in improving patient compliance,” he adds — only having to take one pill instead of a handful makes it much more likely that patients will keep up with their medications. “That’s actually a big issue with these chronic illnesses where patients are on very challenging pill regimes, so combination products have been shown to help a lot.”

One key to the new process is the use of a hydrogel — a sort of sponge-like gel material that can retain water and hold molecules in place. Present processes for making hydrophobic materials more bioavailable involve mechanically grinding the crystals down to smaller sizes, which makes them dissolve more readily, but this grinding adds time and expense to manufacturing, provides little control over the size distribution of the particles, and can damage some of the more delicate drug molecules.
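
The dissolution advantage of small crystals follows from surface area: for a given mass of drug, specific surface area scales inversely with particle radius, and classical dissolution models predict a dissolution rate roughly proportional to that area. A back-of-the-envelope sketch (the density and particle sizes below are assumed for illustration, not taken from the paper):

```python
# Back-of-the-envelope sketch (not from the paper): why smaller drug crystals
# dissolve faster. For spheres of radius r, surface area per unit mass is
# 3 / (rho * r), and dissolution rate is roughly proportional to surface area.
rho = 1300.0  # assumed crystal density in kg/m^3 (illustrative value)

def specific_surface_area(r):
    """Surface area per unit mass for monodisperse spheres of radius r (m)."""
    return 3.0 / (rho * r)  # m^2 per kg

reference = specific_surface_area(10_000e-9)       # 10-micrometer particles
for r_nm in [10_000, 1_000, 100]:                  # milled down to nanocrystal
    rel = specific_surface_area(r_nm * 1e-9) / reference
    print(f"radius {r_nm:>6} nm -> roughly {rel:>4.0f}x the dissolution rate")
```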

Instead, the new process involves dissolving the drug in a carrier solution, then generating tiny nanodroplets of this carrier dispersed throughout a polymer solution — a material called a nanoemulsion. Then, this nanoemulsion is squeezed through a syringe and gelled into a hydrogel. The hydrogel holds the droplets in place as the carrier evaporates, leaving behind drug nanocrystals. This approach allows precise control over the final crystal size. The hydrogel, by keeping the droplets in place as they dry, prevents them from simply merging together to form lumpy agglomerations of different sizes. Without the hydrogel the droplets would merge randomly, and “you’d get a mess,” Doyle says. Instead, the new process leaves a batch of perfectly uniform nanoparticles. “That’s a very unique, novel way that our group has invented, to do this sort of crystallization and maintain the nano size,” he says.

The new process yields a two-part package: a core, which contains the active molecules, surrounded by a shell, also made of hydrogel, which can control the timing between ingestion of the pill and the release of its contents into the body.

“We showed that we can get very precise control over the drug release, both in terms of delay and rate,” says Doyle, who is the Robert T. Haslam Professor of Chemical Engineering and Singapore Research Professor. For example, if a drug is targeting disease in the lower intestine or colon, “we can control how long until the drug release starts, and then we also get very fast release once it begins.” Drugs formulated the conventional way with mechanical nanomilling, he says, “would have a slow drug release.”

This process, Attia says, “is the first approach that can form core-shell composite particles and structure drugs in distinct polymeric layers in a single processing step.”

The next steps in developing the process will be to test the system on a wide variety of drug molecules, beyond the two representative examples that were tested so far, Doyle says. Although they have reason to believe the process is generalizable, he says, “the proof is in the pudding — having the data in hand.”

The dripping process they use, he says, “can be scalable, but there’s a lot of details to be worked out.” But because all of the materials they are working with have been chosen as ones that are already recognized as safe for medical use, the approval process should be straightforward, he says. “It could be implemented in a few years. … We’re not worrying about all those typical safety hurdles that I think other novel formulations have to go through, which can be very expensive.”

The work received support from the U.S. Department of Energy.



----------------------------------------------------------------------------------------------------------------

Tue, 28 Nov 2023 11:00:00 -0500


Boosting rocket reliability at the material level
Posted on Tuesday November 28, 2023


Category : School of Engineering

Author : Eric Brown | MIT Industrial Liaison Program

Zack Cordero’s research focuses on extending the lifespan of reusable rockets, while simultaneously reducing the risk of catastrophic failure.



The success of the SpaceX Falcon 9 reusable launch vehicle has been one of the most remarkable technological achievements of the last decade. Powered by SpaceX’s Merlin engine, the Falcon 9 booster can be reused over 10 times, with minimal maintenance between flights.

Now there is a new generation of reusable rocket engines and vehicles that promise much larger payloads and greater reuse. Unlike Falcon 9, the 390-foot-tall SpaceX Starship, powered by its new Raptor engines, can land both the booster and the second stage for reuse, thereby further reducing launch costs. Blue Origin has its own next-generation BE-4 engine that will power its 320-foot New Glenn launch vehicle.

“The new class of reusable launch vehicles is likely to transform the space industry by lowering launch costs and improving space accessibility,” says Zack Cordero, the Esther and Harold E. Edgerton Career Development Assistant Professor of Aeronautics and Astronautics at MIT. “This will enable applications such as mega constellations for space-based internet and space-based sensing for things like persistent, real-time CO2 emissions monitoring.”

Yet, launch failures such as the April 2023 explosion of SpaceX’s Starship prototype suggest that the new designs still have significant reliability issues. In the Starship explosion, about six of the 33 Raptor engines on the boost stage appear to have malfunctioned. In June, Blue Origin’s BE-4 engine exploded during acceptance testing, suggesting the engine suffers from similar challenges.

“People assume that Starship is going to succeed, but that isn’t necessarily true,” says Cordero. “There is a real, underappreciated risk that these new heavy lift launch vehicles will continue to fail unless there are fundamental advances in materials technology.”

The Cordero Lab, based in the MIT Aerospace Materials and Structures Laboratory, has accepted this challenge with a variety of projects that aim to solve the reliability problem at the materials level. Working with partners including NASA, which plans to use Starship for its crewed Artemis missions to the moon, Cordero is leveraging expertise in additive manufacturing (AM), processing science, materials engineering, and structural design. The goal is to reduce the maintenance costs and extend the lifespan for reusable rockets while decreasing the chance of catastrophic failure.

Reusable rocket research is just one of several Cordero Lab projects to address emerging aerospace applications. Cordero is also developing technologies for in-space manufacturing of larger space structures such as solar cells, solar sails, and reflectors, enabled by the greater payloads of heavy-lift reusable rockets. Cordero’s novel manufacturing technique uses plastic deformation to fold metallic feedstock into net-shaped reticulated structures. These structures can then precisely contour a reflector surface using embedded electrostatic actuators.

Bigger reusable rockets = bigger reliability challenges

Unlike traditional, expendable rockets, reusable launch vehicles must integrate components and design elements that allow the vehicles to automatically maneuver for a soft landing. They also require greater thermal protection to withstand extreme aerothermal heating during reentry.

“Propulsion devices need to be designed differently for reusable rockets,” says Cordero. “With reusable liquid propellant rocket engines, you must ensure safe operation over multiple flight cycles and ease off on performance to reduce stress.”

Larger, more powerful reusable rockets make these design additions even more challenging. “SpaceX’s Raptor and Blue Origin’s BE-4 engines operate on different power cycles compared to the Merlin engine,” says Cordero. “The new staged combustion power cycles are more amenable to reusability because they lower turbine inlet temperatures to extend the life of turbine hardware. Yet, the new power cycles pose a greater risk of catastrophic failure. Oxygen compatibility and metal fires represent critical challenges.”

Cordero is attempting to strengthen the components that limit the life of a reusable rocket engine, starting with the turbopump that pressurizes the liquid propellant. Other vulnerable components include the thrust chamber in which propellants are burned to create a hot gas, as well as the nozzle through which the gas is exhausted.

Extended wear on turbopumps, chambers, and nozzles does not always end in a catastrophic explosion. Yet it adds to the maintenance and repair costs that are factored into overall launch costs.

“There is a wide spectrum of failure behaviors,” says Cordero. “Thrust chambers can start to crack but continue to function. Yet, turbopumps can have more serious issues. There could be a blisk [a type of rotor disc] failure or in the case of oxygen-rich turbopumps, a rub between the rotor and casing. The new engines are also vulnerable to particle impact ignition in which FOD [foreign objects and debris] are accelerated into a surface, igniting the hardware. In a turbopump, these ignition modes can lead to a metal fire and a catastrophic, single-point failure mode that results in the vehicle exploding.”

The growing role of AM in reusable rockets

Additive manufacturing is now widely used in the space industry, including printing parts for launch vehicles with laser power bed fusion printers. “Space is probably the heaviest user of metal AM and is basically dictating technological developments,” says Cordero.

AM is frequently used to print metal propulsion devices such as the small pumps used in gas generator engines. However, it is only selectively used in larger boost stage engines and their turbopumps.

“There is a debate over whether metal 3D printing of large structures is economical,” says Cordero. Yet, improved quality control and qualification protocols have enabled greater use in large, mission-critical flight devices. The next step is developing novel materials that improve reliability.

“We are developing material advances that should enable greater use of AM for larger turbopumps,” says Cordero. “Our technology enables novel designs with improved thermal efficiency or resilience against high temperatures or rapid thermal transients.”

One critical challenge for full-flow staged combustion (Raptor) and oxygen-rich stage combustion (BE-4) engines is the problem of oxidizer compatibility. “In the turbine and downstream hardware, you often see high-temperature, high-pressure oxygen gas, which can drive metal fires and rapid energetic failure modes,” says Cordero.

One solution is to design a pump with larger clearances in the rotating hardware. Yet because this approach degrades performance, Cordero has chosen another path: using metal AM to create more intrinsically oxygen-compatible materials. “Building oxygen-rich turbopumps with metal AM makes it easier to integrate exotic materials that are more compatible with high-pressure, high-temperature oxygen environments,” says Cordero.

Cordero Lab is pursuing this approach with two projects. The first is developing oxygen-compatible ceramic coatings that protect against particle impact ignition. The second is creating ignition-resistant AM materials that can be printed into complex net shapes to avoid friction ignition.

Toughening up coatings with metallic ductile phases

In the coating project, stationary and rotating components in oxygen-rich turbopumps are coated with an inner ceramic coating that prevents heat transfer to the substrate and protects the metal from high pressure oxygen. “The advantage of coatings is that you can apply them to almost any kind of hardware whether printed, cast, or forged,” says Cordero.

The material improves on current ceramic coatings used in conventional gas turbine designs. “Conventional aero coatings tend to delaminate and break apart under the rapid thermal transients that are typical in rockets,” says Cordero. “In an aero engine, the engine starts up in over a minute, then idles a few minutes before taking off. By contrast, a rocket engine goes to full throttle in a split second. The rapid change from very low to very high temperatures generates incredible stresses that cause conventional coatings to pop off.”

To solve this problem, Cordero is using toughened ceramic coatings with embedded metallic ductile phases that suppress delamination via crack bridging. “If cracks develop in the ceramic coating, they are bridged and held in place by metallic inclusions that help it to withstand the thermal transients,” says Cordero.

The Cordero Lab has successfully tested the coatings with typical thermal transients seen in rockets. “Now we are exploring how to apply them to real-world flight hardware and optimize their composition and design for higher turbine inlet temperatures,” says Cordero.

The researchers are collaborating with NASA to investigate the particle impact ignition resistance of the coatings using different thicknesses, particle sizes, and operating conditions. “Our research into fundamental principles of ductile phase toughened environmental barrier coatings should allow us to develop new coatings with chemistries and properties specifically tuned to different applications,” says Cordero. One potential application is to “cover acreage aero-surfaces on hypersonic vehicles.”

Printing friction-resistant alloys

Cordero Lab’s research into ignition-resistant alloys is a collaboration with Aerospace Corp., a nonprofit federally funded R&D center. The lab is investigating the mechanisms that drive frictional ignition, another ignition mode that can lead to metal fires.

Frictional ignition, which “is like striking a match when the match is traveling at 300 meters per second,” is often caused by a rub between the rotor and casing, says Cordero. To reduce the risk, Cordero is designing new printable superalloy materials that incorporate oxide nanoparticles for dispersion-strengthening. Dubbed TGT100, the material “can be printed into complex net shapes and offers best-in-class frictional ignition resistance.”

The burn-resistant material will first be used to print casing and stationary hardware. Cordero has launched a startup called Top Grain Technologies that will commercialize the material, as well as the ceramic coatings.

Cordero has recently begun to investigate how turbopumps could be redesigned using his new materials to achieve extremely long lifespans. “Our goal is to build a turbopump that can endure hundreds of hot cycles before replacing or repairing components,” says Cordero.

Solving the reliability issues of reusable rockets will require expertise in cross-disciplinary subjects that are not typically paired. Toward this end, Cordero recently worked with the MIT Department of Aeronautics and Astronautics and the Industrial Liaison Program to launch a new one-week crash course in AM for aerospace engineers.

Cordero has also organized a yearly workshop with collaborators from Aerospace Corp. and Lehigh University that explores materials challenges in reusable rocket engines. “We are bringing together experts from academia, industry, and government to discuss the key technical challenges,” says Cordero.

Beyond education, more collaboration is needed between academics and companies like SpaceX and Blue Origin, says Cordero. “The academics have more time to explore these more fundamental challenges,” he says. “The vision is to bring reliability and reusability of reusable rocket engines up to the standards of aero engines, which would transform the industry.”



----------------------------------------------------------------------------------------------------------------

Mon, 27 Nov 2023 13:45:00 -0500


Team engineers nanoparticles using ion irradiation to advance clean energy and fuel conversion
Posted on Monday November 27, 2023


Category : Research

Author : Elizabeth Thomson | Materials Research Laboratory

The work demonstrates control over key properties leading to better performance.



MIT researchers and colleagues have demonstrated a way to precisely control the size, composition, and other properties of nanoparticles key to the reactions involved in a variety of clean energy and environmental technologies. They did so by leveraging ion irradiation, a technique in which beams of charged particles bombard a material.

They went on to show that nanoparticles created this way have superior performance over their conventionally made counterparts.

“The materials we have worked on could advance several technologies, from fuel cells to generate CO2-free electricity to the production of clean hydrogen feedstocks for the chemical industry [through electrolysis cells],” says Bilge Yildiz, leader of the work and a professor in MIT’s departments of Nuclear Science and Engineering and Materials Science and Engineering.

Critical catalyst

Fuel and electrolysis cells both involve electrochemical reactions through three principal parts: two electrodes (a cathode and anode) separated by an electrolyte. The difference between the two cells is that the reactions involved run in reverse.

The electrodes are coated with catalysts, or materials that make the reactions involved go faster. But a critical catalyst made of metal-oxide materials has been limited by challenges including low durability. “The metal catalyst particles coarsen at high temperatures, and you lose surface area and activity as a result,” says Yildiz, who is also affiliated with the Materials Research Laboratory and is an author of an open-access paper on the work published in the journal Energy & Environmental Science.

Enter metal exsolution, which involves precipitating metal nanoparticles out of a host oxide onto the surface of the electrode. The particles embed themselves into the electrode, “and that anchoring makes them more stable,” says Yildiz. As a result, exsolution has “led to remarkable progress in clean energy conversion and energy-efficient computing devices,” the researchers write in their paper.

However, controlling the precise properties of the resulting nanoparticles has been difficult. “We know that exsolution can give us stable and active nanoparticles, but the challenging part is really to control it. The novelty of this work is that we’ve found a tool — ion irradiation — that can give us that control,” says Jiayue Wang PhD ’22, first author of the paper. Wang, who conducted the work while earning his PhD in the MIT Department of Nuclear Science and Engineering, is now a postdoc at Stanford University.

Sossina Haile ’86, PhD ’92, the Walter P. Murphy Professor of Materials Science and Engineering at Northwestern University, who was not involved in the current work, says:

“Metallic nanoparticles serve as catalysts in a whole host of reactions, including the important reaction of splitting water to generate hydrogen for energy storage. In this work, Yildiz and colleagues have created an ingenious method for controlling the way that nanoparticles form.”

Haile continues, “The community has shown that exsolution results in structurally stable nanoparticles, but the process is not easy to control, so one doesn’t necessarily get the optimal number and size of particles. Using ion irradiation, this group was able to precisely control the features of the nanoparticles, resulting in excellent catalytic activity for water splitting.”

What they did

The researchers found that aiming a beam of ions at the electrode while simultaneously exsolving metal nanoparticles onto the electrode’s surface allowed them to control several properties of the resulting nanoparticles.

“Through ion-matter interactions, we have successfully engineered the size, composition, density, and location of the exsolved nanoparticles,” the team writes in Energy & Environmental Science.

For example, they could make the particles much smaller — down to 2 billionths of a meter in diameter — than those made using conventional thermal exsolution methods alone. Further, they were able to change the composition of the nanoparticles by irradiating with specific elements. They demonstrated this with a beam of nickel ions that implanted nickel into the exsolved metal nanoparticle. As a result, they demonstrated a direct and convenient way to engineer the composition of exsolved nanoparticles.

“We want to have multi-element nanoparticles, or alloys, because they usually have higher catalytic activity,” Yildiz says. “With our approach, the exsolution target does not have to be dependent on the substrate oxide itself.” Irradiation opens the door to many more compositions. “We can pretty much choose any oxide and any ion that we can irradiate with and exsolve that,” says Yildiz.

The team also found that ion irradiation forms defects in the electrode itself. And these defects provide additional nucleation sites, or places for the exsolved nanoparticles to grow from, increasing the density of the resulting nanoparticles.

Irradiation could also allow extreme spatial control over the nanoparticles. “Because you can focus the ion beam, you can imagine that you could ‘write’ with it to form specific nanostructures,” says Wang. “We did a preliminary demonstration [of that], but we believe it has potential to realize well-controlled micro- and nano-structures.”

The team also showed that the nanoparticles they created with ion irradiation had superior catalytic activity over those created by conventional thermal exsolution alone.

Additional MIT authors of the paper are Kevin B. Woller, a principal research scientist at the Plasma Science and Fusion Center (PSFC), home to the equipment used for ion irradiation; Abinash Kumar PhD ’22, who received his PhD from the Department of Materials Science and Engineering (DMSE) and is now at Oak Ridge National Laboratory; and James M. LeBeau, an associate professor in DMSE. Other authors are Zhan Zhang and Hua Zhou of Argonne National Laboratory, and Iradwikanari Waluyo and Adrian Hunt of Brookhaven National Laboratory.

This work was funded by the OxEon Corp. and MIT’s PSFC. The research also used resources supported by the U.S. Department of Energy Office of Science, MIT’s Materials Research Laboratory, and MIT.nano. The work was performed, in part, at Harvard University through a network funded by the National Science Foundation.



----------------------------------------------------------------------------------------------------------------

Mon, 27 Nov 2023 00:00:00 -0500


New method uses crowdsourced feedback to help train robots
Posted on Monday November 27, 2023


Category : Research

Author : Adam Zewe | MIT News

Human Guided Exploration (HuGE) enables AI agents to learn quickly with some help from humans, even if the humans make mistakes.



To teach an AI agent a new task, like how to open a kitchen cabinet, researchers often use reinforcement learning — a trial-and-error process where the agent is rewarded for taking actions that get it closer to the goal.

In many instances, a human expert must carefully design a reward function, which is an incentive mechanism that gives the agent motivation to explore. The human expert must iteratively update that reward function as the agent explores and tries different actions. This can be time-consuming, inefficient, and difficult to scale up, especially when the task is complex and involves many steps.

Researchers from MIT, Harvard University, and the University of Washington have developed a new reinforcement learning approach that doesn’t rely on an expertly designed reward function. Instead, it leverages crowdsourced feedback, gathered from many nonexpert users, to guide the agent as it learns to reach its goal.

While some other methods also attempt to utilize nonexpert feedback, this new approach enables the AI agent to learn more quickly, despite the fact that data crowdsourced from users are often full of errors. These noisy data might cause other methods to fail.

In addition, this new approach allows feedback to be gathered asynchronously, so nonexpert users around the world can contribute to teaching the agent.

“One of the most time-consuming and challenging parts in designing a robotic agent today is engineering the reward function. Today reward functions are designed by expert researchers — a paradigm that is not scalable if we want to teach our robots many different tasks. Our work proposes a way to scale robot learning by crowdsourcing the design of reward function and by making it possible for nonexperts to provide useful feedback,” says Pulkit Agrawal, an assistant professor in the MIT Department of Electrical Engineering and Computer Science (EECS) who leads the Improbable AI Lab in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

In the future, this method could help a robot learn to perform specific tasks in a user’s home quickly, without the owner needing to show the robot physical examples of each task. The robot could explore on its own, with crowdsourced nonexpert feedback guiding its exploration.

“In our method, the reward function guides the agent to what it should explore, instead of telling it exactly what it should do to complete the task. So, even if the human supervision is somewhat inaccurate and noisy, the agent is still able to explore, which helps it learn much better,” explains lead author Marcel Torne ’23, a research assistant in the Improbable AI Lab.

Torne is joined on the paper by his MIT advisor, Agrawal; senior author Abhishek Gupta, assistant professor at the University of Washington; as well as others at the University of Washington and MIT. The research will be presented at the Conference on Neural Information Processing Systems next month.

Noisy feedback

One way to gather user feedback for reinforcement learning is to show a user two photos of states achieved by the agent, and then ask that user which state is closer to a goal. For instance, perhaps a robot’s goal is to open a kitchen cabinet. One image might show that the robot opened the cabinet, while the second might show that it opened the microwave. A user would pick the photo of the “better” state.

Some previous approaches try to use this crowdsourced, binary feedback to optimize a reward function that the agent would use to learn the task. However, because nonexperts are likely to make mistakes, the reward function can become very noisy, so the agent might get stuck and never reach its goal.

“Basically, the agent would take the reward function too seriously. It would try to match the reward function perfectly. So, instead of directly optimizing over the reward function, we just use it to tell the robot which areas it should be exploring,” Torne says.

He and his collaborators decoupled the process into two separate parts, each directed by its own algorithm. They call their new reinforcement learning method HuGE (Human Guided Exploration).

On one side, a goal selector algorithm is continuously updated with crowdsourced human feedback. The feedback is not used as a reward function, but rather to guide the agent’s exploration. In a sense, the nonexpert users drop breadcrumbs that incrementally lead the agent toward its goal.

On the other side, the agent explores on its own, in a self-supervised manner guided by the goal selector. It collects images or videos of actions that it tries, which are then sent to humans and used to update the goal selector.

This narrows down the area for the agent to explore, leading it to more promising areas that are closer to its goal. But if there is no feedback, or if feedback takes a while to arrive, the agent will keep learning on its own, albeit in a slower manner. This enables feedback to be gathered infrequently and asynchronously.
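
A toy sketch of this two-loop structure appears below (hypothetical and heavily simplified; it is not the authors’ code, and the environment, feedback rate, and labeler error rate are all invented for illustration):

```python
import numpy as np

# Toy sketch of HuGE's two decoupled loops (hypothetical, not the authors' code):
# a 1-D world where the agent starts at 0 and the unknown goal sits at 50.
rng = np.random.default_rng(1)
GOAL, STEPS = 50, 3000
scores = {0: 0.0}            # goal selector: a score for every visited state
pos, visited = 0, {0}

def noisy_comparison(a, b, error_rate=0.2):
    """Simulated crowdsourced feedback: which state looks closer to the goal?
    Wrong with probability error_rate, like a nonexpert labeler."""
    better = a if abs(GOAL - a) < abs(GOAL - b) else b
    if rng.random() < error_rate:            # the labeler makes a mistake
        return a if better == b else b
    return better

for t in range(STEPS):
    # Loop 1: self-supervised exploration, biased toward the best-scored state.
    target = max(scores, key=scores.get)
    step = np.sign(target - pos) if pos != target else rng.choice([-1, 1])
    pos = int(pos + step + rng.integers(-1, 2))      # noisy move
    visited.add(pos)
    scores.setdefault(pos, 0.0)

    # Loop 2: occasional, asynchronous feedback nudges the goal selector.
    # The comparisons only guide exploration; they are never used as a reward.
    if t % 25 == 0 and len(visited) > 1:
        a, b = rng.choice(sorted(visited), size=2, replace=False)
        winner = noisy_comparison(int(a), int(b))
        scores[winner] = scores.get(winner, 0.0) + 1.0

    if pos == GOAL:
        print(f"reached the goal at step {t}")
        break
else:
    print(f"ended at position {pos} after {STEPS} steps")
```

Even when a fifth of the simulated comparisons are answered wrongly, the score table tends to drift toward the goal, which is the qualitative behavior the method relies on.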

“The exploration loop can keep going autonomously, because it is just going to explore and learn new things. And then when you get some better signal, it is going to explore in more concrete ways. You can just keep them turning at their own pace,” adds Torne.

And because the feedback is just gently guiding the agent’s behavior, it will eventually learn to complete the task even if users provide incorrect answers.

Faster learning

The researchers tested this method on a number of simulated and real-world tasks. In simulation, they used HuGE to effectively learn tasks with long sequences of actions, such as stacking blocks in a particular order or navigating a large maze.

In real-world tests, they utilized HuGE to train robotic arms to draw the letter “U” and pick and place objects. For these tests, they crowdsourced data from 109 nonexpert users in 13 different countries spanning three continents.

In real-world and simulated experiments, HuGE helped agents learn to achieve the goal faster than other methods.

The researchers also found that data crowdsourced from nonexperts yielded better performance than synthetic data, which were produced and labeled by the researchers. For nonexpert users, labeling 30 images or videos took less than two minutes.

“This makes it very promising in terms of being able to scale up this method,” Torne adds.

In a related paper, which the researchers presented at the recent Conference on Robot Learning, they enhanced HuGE so an AI agent can learn to perform the task, and then autonomously reset the environment to continue learning. For instance, if the agent learns to open a cabinet, the method also guides the agent to close the cabinet.

“Now we can have it learn completely autonomously without needing human resets,” he says.

The researchers also emphasize that, in this and other learning approaches, it is critical to ensure that AI agents are aligned with human values.

In the future, they want to continue refining HuGE so the agent can learn from other forms of communication, such as natural language and physical interactions with the robot. They are also interested in applying this method to teach multiple agents at once.

This research is funded, in part, by the MIT-IBM Watson AI Lab.



----------------------------------------------------------------------------------------------------------------

Thu, 23 Nov 2023 14:00:00 -0500


Search algorithm reveals nearly 200 new kinds of CRISPR systems
Posted on Thursday November 23, 2023


Category : Research

Author : Allessandra DiCorato | Broad Institute

By analyzing bacterial data, researchers have discovered thousands of rare new CRISPR systems that have a range of functions and could enable gene editing, diagnostics, and more.


Read more about this article :

Microbial sequence databases contain a wealth of information about enzymes and other molecules that could be adapted for biotechnology. But these databases have grown so large in recent years that they’ve become difficult to search efficiently for enzymes of interest.

Now, scientists at the McGovern Institute for Brain Research at MIT, the Broad Institute of MIT and Harvard, and the National Center for Biotechnology Information (NCBI) at the National Institutes of Health have developed a new search algorithm that has identified 188 kinds of new rare CRISPR systems in bacterial genomes, encompassing thousands of individual systems. The work appears today in Science.

The algorithm, which comes from the lab of pioneering CRISPR researcher Professor Feng Zhang, uses big-data clustering approaches to rapidly search massive amounts of genomic data. The team used their algorithm, called Fast Locality-Sensitive Hashing-based clustering (FLSHclust), to mine three major public databases that contain data from a wide range of unusual bacteria, including ones found in coal mines, breweries, Antarctic lakes, and dog saliva. The scientists found a surprising number and diversity of CRISPR systems, including ones that could make edits to DNA in human cells, others that can target RNA, and many with a variety of other functions.

The new systems could potentially be harnessed to edit mammalian cells with fewer off-target effects than current Cas9 systems. They could also one day be used as diagnostics or serve as molecular records of activity inside cells.

The researchers say their search highlights an unprecedented level of diversity and flexibility of CRISPR and that there are likely many more rare systems yet to be discovered as databases continue to grow.

“Biodiversity is such a treasure trove, and as we continue to sequence more genomes and metagenomic samples, there is a growing need for better tools, like FLSHclust, to search that sequence space to find the molecular gems,” says Zhang, a co-senior author on the study and the James and Patricia Poitras Professor of Neuroscience at MIT with joint appointments in the departments of Brain and Cognitive Sciences and Biological Engineering. Zhang is also an investigator at the McGovern Institute for Brain Research at MIT, a core institute member at the Broad, and an investigator at the Howard Hughes Medical Institute. Eugene Koonin, a distinguished investigator at the NCBI, is co-senior author on the study as well.

Searching for CRISPR

CRISPR, which stands for clustered regularly interspaced short palindromic repeats, is a bacterial defense system that has been engineered into many tools for genome editing and diagnostics.

To mine databases of protein and nucleic acid sequences for novel CRISPR systems, the researchers developed an algorithm based on an approach borrowed from the big data community. This technique, called locality-sensitive hashing, clusters together objects that are similar but not exactly identical. Using this approach allowed the team to probe billions of protein and DNA sequences — from the NCBI, its Whole Genome Shotgun database, and the Joint Genome Institute — in weeks, whereas previous methods that look for identical objects would have taken months. They designed their algorithm to look for genes associated with CRISPR.
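FLSHclust itself is not reproduced here, but the underlying trick can be shown with a toy MinHash scheme over protein k-mers: similar (not necessarily identical) sequences tend to collide in at least one hash bucket, so candidate clusters emerge without an all-versus-all comparison:

```python
import hashlib

def kmers(seq, k=5):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def signature(seq, num_hashes=64, k=5):
    # MinHash: for each salted hash function, keep the smallest k-mer hash.
    return tuple(min(int(hashlib.sha1(str(h).encode() + km.encode())
                         .hexdigest(), 16) for km in kmers(seq, k))
                 for h in range(num_hashes))

def candidate_clusters(sequences, band=4):
    # Sequences sharing any band of their signature land in one bucket.
    buckets = {}
    for name, seq in sequences.items():
        sig = signature(seq)
        for i in range(0, len(sig), band):
            buckets.setdefault((i, sig[i:i + band]), set()).add(name)
    return [g for g in buckets.values() if len(g) > 1]

# Toy protein fragments (invented): two near-duplicates and one unrelated.
print(candidate_clusters({
    "casA": "MSKLEKFTNCYSLSKTLRFKAIPVGKTQENIDNKRLLVEDEKRAEDYKG",
    "casB": "MSKLEKFTNCYSLSKTLRFKAIPVGKTQENIDNKRLLVEDEKRAEDYKA",
    "xyz1": "MTTTNWKELNLQNVSLAGGGQIRDQPEYQQLVDLMGRQPN",
}))
```

A production system layers far more on top, but this kind of near-duplicate hashing is what turns a months-long all-pairs search into one that finishes in weeks.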

“This new algorithm allows us to parse through data in a time frame that’s short enough that we can actually recover results and make biological hypotheses,” says Soumya Kannan PhD ’23, who is a co-first author on the study. Kannan was a graduate student in Zhang’s lab when the study began and is currently a postdoc and Junior Fellow at Harvard University. Han Altae-Tran PhD ’23, a graduate student in Zhang’s lab during the study and currently a postdoc at the University of Washington, was the study’s other co-first author.

“This is a testament to what you can do when you improve on the methods for exploration and use as much data as possible,” says Altae-Tran. “It’s really exciting to be able to improve the scale at which we search.”

New systems

In their analysis, Altae-Tran, Kannan, and their colleagues noticed that the thousands of CRISPR systems they found fell into a few existing and many new categories. They studied several of the new systems in greater detail in the lab.

They found several new variants of known Type I CRISPR systems, which use a guide RNA that is 32 base pairs long rather than the 20-nucleotide guide of Cas9. Because of their longer guide RNAs, these Type I systems could potentially be used to develop more precise gene-editing technology that is less prone to off-target editing. Zhang’s team showed that two of these systems could make short edits in the DNA of human cells. And because these Type I systems are similar in size to CRISPR-Cas9, they could likely be delivered to cells in animals or humans using the same gene-delivery technologies being used today for CRISPR.

One of the Type I systems also showed “collateral activity” — broad degradation of nucleic acids after the CRISPR protein binds its target. Scientists have used similar systems to make infectious disease diagnostics such as SHERLOCK, a tool capable of rapidly sensing a single molecule of DNA or RNA. Zhang’s team thinks the new systems could be adapted for diagnostic technologies as well.

The researchers also uncovered new mechanisms of action for some Type IV CRISPR systems, and a Type VII system that precisely targets RNA, which could potentially be used in RNA editing. Other systems could potentially be used as recording tools — a molecular document of when a gene was expressed — or as sensors of specific activity in a living cell.

Mining data

The scientists say their algorithm could aid in the search for other biochemical systems. “This search algorithm could be used by anyone who wants to work with these large databases for studying how proteins evolve or discovering new genes,” Altae-Tran says.

The researchers add that their findings illustrate not only how diverse CRISPR systems are, but also that most are rare and only found in unusual bacteria. “Some of these microbial systems were exclusively found in water from coal mines,” Kannan says. “If someone hadn’t been interested in that, we may never have seen those systems. Broadening our sampling diversity is really important to continue expanding the diversity of what we can discover.”

This work was supported by the Howard Hughes Medical Institute; the K. Lisa Yang and Hock E. Tan Molecular Therapeutics Center at MIT; Broad Institute Programmable Therapeutics Gift Donors; The Pershing Square Foundation, William Ackman and Neri Oxman; James and Patricia Poitras; BT Charitable Foundation; Asness Family Foundation; Kenneth C. Griffin; the Phillips family; David Cheng; and Robert Metcalfe.



----------------------------------------------------------------------------------------------------------------

Tue, 21 Nov 2023 00:00:00 -0500


Merging science and systems thinking to make materials more sustainable
Posted on Tuesday November 21, 2023


Category : Faculty

Author : Zach Winn | MIT News

Passionate about materials science “from the atom to the system,” Elsa Olivetti brings a holistic approach to sustainability to her teaching, research, and coalition-building.


Read more about this article :

For Professor Elsa Olivetti, tackling a problem as large and complex as climate change requires not only lab research but also understanding the systems of production that power the global economy.

Her career path reflects a quest to investigate materials at scales ranging from the microscopic to the mass-manufactured.

“I’ve always known what questions I wanted to ask, and then set out to build the tools to help me ask those questions,” says Olivetti, the Jerry McAfee Professor in Engineering.

Olivetti, who earned tenure in 2022 and was recently appointed associate dean of engineering, has sought to equip students with similar skills, whether in the classroom, in her lab group, or through the interdisciplinary programs she leads at MIT. Those efforts have earned her accolades including the Bose Award for Excellence in Teaching, a MacVicar Faculty Fellowship in 2021, and the McDonald Award for Excellence in Mentoring and Advising in 2023.

“I think to make real progress in sustainability, materials scientists need to think in interdisciplinary, systems-level ways, but at a deep technical level,” Olivetti says. “Supporting my students so that’s something that a lot more people can do is very rewarding for me.”

Her mission to make materials more sustainable also makes Olivetti grateful she’s at MIT, which has a long tradition of both interdisciplinary collaboration and technical know-how.

“MIT’s core competencies are well-positioned for bold achievements in climate and sustainability — the deep expertise on the economics side, the frontier knowledge in science, the computational creativity,” Olivetti says. “It’s a really exciting time and place where the key ingredients for progress are simmering in transformative ways.”

Answering the call

The moment that set Olivetti on her life’s journey began when she was 8, with a knock at her door. Her parents were in the other room, so Olivetti opened the door and met an organizer for Greenpeace, a nonprofit that works to raise awareness of environmental issues.

“I had a chat with that guy and got hooked on environmental concerns,” Olivetti says. “I still remember that conversation.”

The interaction changed the way Olivetti thought about her place in the world, and her new perspective manifested itself in some unique ways. Her elementary school science fair projects became elaborate pursuits of environmental solutions involving burying various items in the backyard to test for biodegradability. There was also an awkward attempt at natural pesticide development, which led to a worm hatching in her bedroom.

As an undergraduate at the University of Virginia, Olivetti gravitated toward classes in environmentalism and materials science.

“There was a link between materials science and a broader, systems way of framing design for environment, and that just clicked for me in terms of the way I wanted to think about environmental problems — from the atom to the system,” Olivetti recalls.

That interest led Olivetti to MIT for a PhD in 2001, where she studied the feasibility of new materials for lithium-ion batteries.

“I really wanted to be thinking of things at a systems level, but I wanted to ground that in lab-based research,” Olivetti says. “I wanted an experiential experience in grad school, and that’s why I chose MIT’s program.”

Whether it was her undergraduate studies, her PhD, or her ensuing postdoc work at MIT, Olivetti sought to learn new skills to continue bridging the gap between materials science and environmental systems thinking.

“I think of it as, ‘Here’s how I can build up the ways I ask questions,’” Olivetti explains. “How do we design these materials while thinking about their implications as early as possible?”

Since joining MIT’s faculty in 2014, Olivetti has developed computational models to measure the cost and environmental impact of new materials, explored ways to adopt more sustainable and circular supply chains, and evaluated potential materials limitations as lithium-ion battery production is scaled. That work helps companies increase their use of greener, recyclable materials and more sustainably dispose of waste.

Olivetti believes the wide scope of her research gives the students in her lab a more holistic understanding of the life cycle of materials.

“When the group started, each student was working on a different aspect of the problem — like on the natural language processing pipeline, or on recycling technology assessment, or beneficial use of waste — and now each student can link each of those pieces in their research,” Olivetti explains.

Beyond her research, Olivetti also co-directs the MIT Climate and Sustainability Consortium (MCSC), which has established a set of eight areas of sustainability that it organizes coalitions around. Each coalition involves technical leaders at companies and researchers at MIT who work together to accelerate the impact of MIT’s research by helping companies adopt innovative and more sustainable technologies.

“Climate change mitigation and resilience is such a complex problem, and at MIT we have practice in working together across disciplines on many challenges,” Olivetti says. “It’s been exciting to lean on that culture and unlock ways to move forward more effectively.”

Bridging divides

Today, Olivetti tries to maximize the impact of her and her students’ research in materials industrial ecology by maintaining close ties to applications. In her research, this means working directly with aluminum companies to design alloys that could incorporate more scrap material or with nongovernmental organizations to incorporate agricultural residues in building products. In the classroom, that means bringing in people from companies to explain how they think about concepts like heat exchange or fluid flow in their products.

“I enjoy trying to ground what students are learning in the classroom with what’s happening in the world,” Olivetti explains.

Exposing students to industry is also a great way to help them think about their own careers. In her research lab, she’s started using the last 30 minutes of meetings to host talks from people working in national labs, startups, and larger companies to show students what they can do after their PhDs. The talks are similar to the Industry Seminar series Olivetti started that pairs undergraduate students with people working in areas like 3D printing, environmental consulting, and manufacturing.

“It’s about helping students learn what they’re excited about,” Olivetti says.

Whether in the classroom, lab, or at events held by organizations like MCSC, Olivetti believes collaboration is humanity’s most potent tool to combat climate change.

“I just really enjoy building links between people,” Olivetti says. “Learning about people and meeting them where they are is a way that one can create effective links. It’s about creating the right playgrounds for people to think and learn.”



----------------------------------------------------------------------------------------------------------------

Mon, 20 Nov 2023 09:00:00 -0500


Synthetic imagery sets new bar in AI training efficiency
Posted on Monday November 20, 2023


Category : Research

Author : Rachel Gordon | MIT CSAIL

MIT CSAIL researchers innovate with synthetic imagery to train AI, paving the way for more efficient and bias-reduced machine learning.


Read more about this article :

Data is the new soil, and in this fertile new ground, MIT researchers are planting more than just pixels. By using synthetic images to train machine learning models, a team of scientists recently surpassed results obtained from traditional “real-image” training methods. 

At the core of the approach is a system called StableRep, which doesn't just use any synthetic images; it generates them through ultra-popular text-to-image models like Stable Diffusion. It’s like creating worlds with words. 

So what’s in StableRep's secret sauce? A strategy called “multi-positive contrastive learning.”

“We're teaching the model to learn more about high-level concepts through context and variance, not just feeding it data,” says Lijie Fan, an MIT PhD student in electrical engineering, an affiliate of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and lead researcher on the work. “When multiple images, all generated from the same text, are treated as depictions of the same underlying thing, the model dives deeper into the concepts behind the images, say the object, not just their pixels.”

This approach considers multiple images spawned from identical text prompts as positive pairs, providing additional information during training, not just adding more diversity but specifying to the vision system which images are alike and which are different. Remarkably, StableRep outshone the prowess of top-tier models trained on real images, such as SimCLR and CLIP, in extensive datasets.
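In code, a multi-positive contrastive objective of this general shape can be sketched as follows (an illustration consistent with the description above, not the exact StableRep loss; dimensions and the temperature are placeholders):

```python
import torch
import torch.nn.functional as F

def multi_positive_contrastive_loss(embeddings, prompt_ids, temperature=0.1):
    # Images generated from the same prompt are each other's positives.
    # Assumes at least two images per prompt in the batch.
    z = F.normalize(embeddings, dim=1)            # (N, D) unit embeddings
    logits = z @ z.t() / temperature              # pairwise similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool)
    logits = logits.masked_fill(eye, float("-inf"))  # ignore self-pairs
    # Soft target: uniform over all other images from the same prompt.
    targets = (prompt_ids.unsqueeze(0) == prompt_ids.unsqueeze(1)) & ~eye
    targets = targets.float()
    targets = targets / targets.sum(dim=1, keepdim=True)
    return -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

# Example: a batch of 6 images, 3 each from 2 text prompts.
feats = torch.randn(6, 128)
prompts = torch.tensor([0, 0, 0, 1, 1, 1])
print(multi_positive_contrastive_loss(feats, prompts))
```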

“While StableRep helps mitigate the challenges of data acquisition in machine learning, it also ushers in a stride towards a new era of AI training techniques. The capacity to produce high-caliber, diverse synthetic images on command could help curtail cumbersome expenses and resources,” says Fan. 

The process of data collection has never been straightforward. Back in the 1990s, researchers had to manually capture photographs to assemble datasets for objects and faces. The 2000s saw individuals scouring the internet for data. However, this raw, uncurated data often contained discrepancies when compared to real-world scenarios and reflected societal biases, presenting a distorted view of reality. The task of cleansing datasets through human intervention is not only expensive, but also exceedingly challenging. Imagine, though, if this arduous data collection could be distilled down to something as simple as issuing a command in natural language. 

A pivotal aspect of StableRep’s triumph is the adjustment of the “guidance scale” in the generative model, which ensures a delicate balance between the synthetic images’ diversity and fidelity. When finely tuned, synthetic images used in training these self-supervised models were found to be as effective, if not more so, than real images.
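In the open-source Hugging Face diffusers library, one widely used implementation of Stable Diffusion, this knob is exposed as guidance_scale; the checkpoint and value below are illustrative, not the paper's settings:

```python
import torch
from diffusers import StableDiffusionPipeline

# Generate several candidate training images from one prompt. Requires a
# GPU and downloaded weights; the checkpoint name is an example only.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Lower guidance_scale -> more diverse images; higher -> closer fidelity
# to the prompt. StableRep's point is that this balance must be tuned.
images = pipe(
    "a tabby cat sitting on a windowsill",
    num_images_per_prompt=4,
    guidance_scale=2.0,  # illustrative value
).images
```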

Taking it a step forward, language supervision was added to the mix, creating an enhanced variant: StableRep+. When trained with 20 million synthetic images, StableRep+ not only achieved superior accuracy but also displayed remarkable efficiency compared to CLIP models trained with a staggering 50 million real images.

Yet, the path ahead isn't without its potholes. The researchers candidly address several limitations, including the current slow pace of image generation, semantic mismatches between text prompts and the resultant images, potential amplification of biases, and complexities in image attribution, all of which are imperative to address for future advancements. Another issue is that StableRep requires first training the generative model on large-scale real data. The team acknowledges that starting with real data remains a necessity; however, when you have a good generative model, you can repurpose it for new tasks, like training recognition models and visual representations. 

While StableRep offers a good solution by diminishing the dependency on vast real-image collections, it brings to the fore concerns regarding hidden biases within the uncurated data used for these text-to-image models. The choice of text prompts, integral to the image synthesis process, is not entirely free from bias, “indicating the essential role of meticulous text selection or possible human curation,” says Fan. 

“Using the latest text-to-image models, we've gained unprecedented control over image generation, allowing for a diverse range of visuals from a single text input. This surpasses real-world image collection in efficiency and versatility. It proves especially useful in specialized tasks, like balancing image variety in long-tail recognition, presenting a practical supplement to using real images for training,” says Fan. “Our work signifies a step forward in visual learning, towards the goal of offering cost-effective training alternatives while highlighting the need for ongoing improvements in data quality and synthesis.”

“One dream of generative model learning has long been to be able to generate data useful for discriminative model training,” says Google DeepMind researcher and University of Toronto professor of computer science David Fleet, who was not involved in the paper. “While we have seen some signs of life, the dream has been elusive, especially on large-scale complex domains like high-resolution images. This paper provides compelling evidence, for the first time to my knowledge, that the dream is becoming a reality. They show that contrastive learning from massive amounts of synthetic image data can produce representations that outperform those learned from real data at scale, with the potential to improve myriad downstream vision tasks.”

Fan is joined by Yonglong Tian PhD ’22 as lead authors of the paper, as well as MIT associate professor of electrical engineering and computer science and CSAIL principal investigator Phillip Isola; Google researcher and OpenAI technical staff member Huiwen Chang; and Google staff research scientist Dilip Krishnan. The team will present StableRep at the 2023 Conference on Neural Information Processing Systems (NeurIPS) in New Orleans.



----------------------------------------------------------------------------------------------------------------

Mon, 20 Nov 2023 00:00:00 -0500


How do reasonable people disagree?
Posted on Monday November 20, 2023


Category : Research

Author : Peter Dizikes | MIT News

A study by philosopher Kevin Dorst explains how political differences can result from a process of “rational polarization.”


Read more about this article :

U.S. politics is heavily polarized. This is often regarded as a product of irrationality: People can be tribal, are influenced by their peers, and often get information from very different, sometimes inaccurate sources.

Tribalism and misinformation are real enough. But what if people are often acting rationally as well, even in the process of arriving at very different views? What if they are not being misled or too emotional, but are thinking logically?

“There can be quite reasonable ways people can be predictably polarized,” says MIT philosopher Kevin Dorst, author of a new paper on the subject, based partly on his own empirical research.

This may especially be the case when people deal with a lot of ambiguity when weighing political and civic issues. Those ambiguities generate political asymmetry. People consider evidence in predictably different ways, leading them to different conclusions. That doesn’t mean they are not thinking logically, though.

“What’s going on is that people are selectively scrutinizing information,” Dorst says. “That’s effectively why they move in opposite directions, because they scrutinize and selectively look for flaws in different places, and so they get overall different takes.”

The concept of rational polarization may help us develop a more coherent account of how views differ, by helping us avoid thinking that we alone are rational — or, conversely, that we have done no real thinking while arriving at our own opinions. Thus it can add nuance to our assessments of others.

The paper, “Rational Polarization,” appears in The Philosophical Review. Dorst, the sole author, is an assistant professor in MIT’s Department of Linguistics and Philosophy.

Looking for flaws

To Dorst, rational polarization stands as a useful alternative to other models about belief formation. In particular, rational polarization in his view improves upon one type of model of “Bayesian” thinking, in which people keep using new information to hone their views.

In Bayesian terms, because people use new information to update their views, they will rationally either change their ideas or not, as the evidence warrants. But in reality, Dorst asserts, things are not so simple. Often when we assess new evidence, there is ambiguity present — and Dorst contends that it is rational to be unsure about that ambiguity. But this can generate polarization because people’s prior assumptions do influence the places where they find ambiguity.

Suppose a group of people have been given two studies about the death penalty: One study finds the death penalty has no deterrent effect on people’s behavior, and the other study finds it does. Even reading the same evidence, people in the group will likely wind up with different interpretations of it.

“Those who really believe in the deterrent effect will look closely at the study suggesting there is no deterrent effect, be skeptical about it, poke holes in the argument, and claim to recognize flaws in its reasoning,” Dorst says. “Conversely, for the people who disbelieve the deterrent effect, it’s the exact opposite. They find flaws in the study suggesting there is a deterrent effect.”

Even these seemingly selective readings can be rational, Dorst says: “It makes sense to scrutinize surprising information more than unsurprising information.” Therefore, he adds, “You can see that people who have this tendency to selectively scrutinize [can] drift apart even when they are presented with the same evidence that’s mixed in the same way.”
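A toy simulation makes the drift concrete (illustrative assumptions, not Dorst's formal model): two readers with opposite priors see the same balanced evidence, but each scrutinizes only the studies that surprise it, and scrutiny sometimes uncovers a genuine flaw:

```python
import random

def read_evidence(prior, studies, flaw_rate=0.3, shift=0.05, rng=random):
    belief = prior  # credence that the deterrent effect is real
    for supports in studies:
        surprising = supports != (belief >= 0.5)
        if surprising and rng.random() < flaw_rate:
            continue  # scrutiny found a flaw; rationally discount the study
        belief = min(max(belief + (shift if supports else -shift), 0.0), 1.0)
    return belief

random.seed(0)
mixed = [True, False] * 25  # perfectly balanced body of evidence
print(read_evidence(0.6, mixed))  # believer tends to drift higher
print(read_evidence(0.4, mixed))  # skeptic tends to drift lower
```

Both readers follow the same sensible rule, scrutinize surprising claims harder, yet they end up further apart than they began.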

By the letter

To help show that this habit exists, Dorst also ran an online experiment about ambiguity, with 250 participants on the Prolific online survey platform. The aim was to see how much people’s views might become polarized in the presence of ambiguous information.

The participants were given an incomplete string of letters, as one might find in a crossword puzzle or on “Wheel of Fortune.” Some letter strings were parts of real words, and some were not. Depending on what kinds of additional information participants were given, the ambiguous, unsolvable strings of letters had a sharply polarizing effect on how people reacted to the additional information they received.

This process at work in the experiment, Dorst says, is similar to what happens when people receive uncertain information, in the news or in studies, about political matters.

“When you find a flaw, it gives you clear evidence that undermines the study,” Dorst says. Otherwise, people often tend to be uncertain about the material they see. “When you don’t find a flaw, it [can] give you ambiguous evidence and you don’t know what to make of it. As a result, that can lead to predictable polarization.”

The larger point, Dorst believes, is that we can arrive at a more nuanced and consistent picture of how political differences exist when people process similar information.

“There’s a perception that in politics, rational brains shut off and people think with their guts,” Dorst says. “If you take that seriously, you should say, ‘I form my beliefs on politics in the same ways.’”

Unless, that is, you believe you alone are rational, and everyone else is not — though Dorst finds this to be an untenable view of the world.

“Part of what I’m trying to do is give an account that’s not subject to that sort of instability,” Dorst says. “You don’t necessarily have to point the finger at others. It’s a much more interesting process if you think there’s something [rational] there as well.”



----------------------------------------------------------------------------------------------------------------

Fri, 17 Nov 2023 11:00:00 -0500


Ingestible electronic device detects breathing depression in patients
Posted on Friday November 17, 2023


Category : Research

Author : Anne Trafton | MIT News

The new sensor measures heart and breathing rate from patients with sleep apnea and could also be used to monitor people at risk of opioid overdose.


Read more about this article :

Diagnosing sleep disorders such as sleep apnea usually requires a patient to spend the night in a sleep lab, hooked up to a variety of sensors and monitors. Researchers from MIT, Celero Systems, and West Virginia University hope to make that process less intrusive, using an ingestible capsule they developed that can monitor vital signs from within the patient’s GI tract.

The capsule, which is about the size of a multivitamin, uses an accelerometer to measure the patient’s breathing rate and heart rate. In addition to diagnosing sleep apnea, the device could also be useful for detecting opioid overdoses in people at high risk, the researchers say.

“It’s an exciting intervention to help people be diagnosed and then receive the appropriate treatment if they suffer from obstructive sleep apnea,” says Giovanni Traverso, an associate professor of mechanical engineering at MIT and a gastroenterologist at Brigham and Women’s Hospital. “The device also has the potential for early detection of changes in respiratory status, whether it’s a result of opiates or other conditions that could be monitored, like asthma or chronic obstructive pulmonary disease (COPD).”

In a study of 10 human volunteers, the researchers showed that the capsule can be used to monitor vital signs and to detect sleep apnea episodes, which occur when the patient repeatedly stops and starts breathing during sleep. The patients did not show any adverse effects from the capsule, which passed harmlessly through the digestive tract.

Traverso is one of the senior authors of the study, along with Robert Langer, an MIT Institute Professor and member of MIT’s Koch Institute for Integrative Cancer Research; Victor Finomore, director of the Human Performance and Applied Neuroscience Research Center at the West Virginia University School of Medicine; and Ali Rezai, director of the Rockefeller Neuroscience Institute at the West Virginia University School of Medicine. The paper appears today in the journal Device.

Vital sign measurements

Over the past decade, Traverso and Langer have developed a range of ingestible sensors that could be used to monitor vital signs and diagnose disorders of the GI tract, such as gastrointestinal slowdown and inflammatory bowel diseases.

This new study focused on measuring vital signs, using a capsule developed by Celero Systems that includes an accelerometer that detects slight movements generated by the beating of the heart and the expansion of the lungs. The capsule also contains two small batteries and a wireless antenna that transmits data to an external device such as a laptop.

In tests in an animal model, the researchers found that this capsule could accurately measure breathing rate and heart rate. In one experiment, they showed that the sensor could detect the depression of breathing rate that resulted from a large dose of fentanyl, an opioid drug.
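Celero's onboard processing is not described at this level of detail, but a standard way to recover both rates from a raw accelerometer trace is band-pass filtering followed by peak counting; the sampling rate and frequency bands below are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 100.0  # assumed accelerometer sampling rate, Hz

def bandpass(x, lo, hi, fs=FS, order=3):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def rates_per_minute(accel, fs=FS):
    # Respiration dominates roughly 0.1-0.5 Hz; cardiac motion 0.8-3 Hz.
    resp = bandpass(accel, 0.1, 0.5)
    card = bandpass(accel, 0.8, 3.0)
    minutes = len(accel) / fs / 60.0
    breaths, _ = find_peaks(resp, distance=fs / 0.5)
    beats, _ = find_peaks(card, distance=fs / 3.0)
    return len(breaths) / minutes, len(beats) / minutes

# Synthetic 60 s trace: 0.25 Hz breathing + 1.2 Hz heartbeat + noise.
t = np.arange(0, 60, 1 / FS)
sig = (np.sin(2 * np.pi * 0.25 * t) + 0.2 * np.sin(2 * np.pi * 1.2 * t)
       + 0.05 * np.random.randn(t.size))
print(rates_per_minute(sig))  # roughly (15, 72)
```

In a pipeline like this, an opioid-induced respiratory depression would show up as a falling breaths-per-minute count.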

Building on those results, the researchers decided to further test the capsule in a clinical trial at the West Virginia University Rockefeller Neuroscience Institute. Ten patients who enrolled in the study were monitored using the ingestible capsule, and these patients were also connected to the sensors typically used to monitor sleep, so the researchers could compare measurements from both types of sensors.

The researchers found that their ingestible sensor was able to accurately measure both breathing rate and heart rate, and it also detected a sleep apnea episode that one of the patients experienced.

“What we were able to show is that using the capsule, we could capture data that matched what the traditional transdermal sensors would capture,” Traverso says. “We also observed that the capsule could detect apnea, and that was confirmed with standard monitoring systems that are available in the sleep lab.”

In this study, the researchers monitored signals emitted by the capsule while it was in the stomach, but in a previous study, they showed that vital signs can also be measured from other parts of the GI tract.

“The stomach generally offers some of the best signals, mainly because it’s close to the heart and the lungs, but we know that we can also sense them elsewhere,” Traverso says.

None of the patients reported any discomfort or harm from the capsule. Radiographic imaging performed 14 days after the capsules were ingested revealed that all of them had passed through the patients’ bodies. The research team’s previous work has shown that objects of similar size usually move through the digestive tract in a little more than a day.

Close monitoring

The researchers envision that this kind of sensor could be used to diagnose sleep apnea in a less intrusive way than the skin-based sensors that are now used. It could also be used to monitor patients when they begin treatment for apnea, to make sure that the treatments are effective.

Celero Systems, a company founded by Traverso, Langer, Jeremy Ruskin, a professor of medicine at Harvard Medical School, and Benjamin Pless, the company’s CEO, is now working on sensors that could be used to detect sleep apnea or opioid overdose.

“We know that people who have had an overdose are at higher risk of recurrence, so those individuals could be monitored more closely so that in the event of another overdose, someone could help them,” Traverso says.

In future work, the researchers hope to incorporate an overdose reversal agent such as nalmefene into the device, so that drug release would be triggered when the person’s breathing rate slowed or stopped. They are also working on strategies to lengthen the amount of time that the capsules could remain in the stomach.

The research was funded by the Karl van Tassel Career Professorship, MIT’s Department of Mechanical Engineering, and Celero Systems.

Authors of the paper also include Pless, James Mahoney, Justin Kupec, Robert Stansbury, Daniel Bacher, Shannon Schuetz, and Alison Hayward.



----------------------------------------------------------------------------------------------------------------

Thu, 16 Nov 2023 16:30:00 -0500


Rewarding excellence in open data
Posted on Thursday November 16, 2023


Category : Special events and guest speakers

Author : Brigham Fay | MIT Libraries

MIT researchers who share their data recognized at second annual awards celebration.


Read more about this article :

The second annual MIT Prize for Open Data, which included a $2,500 cash prize, was recently awarded to 10 individual and group research projects. Presented jointly by the School of Science and the MIT Libraries, the prize highlights the value of open data — research data that is openly accessible and reusable — at the Institute. The prize winners and 12 honorable mention recipients were honored at the Open Data @ MIT event held Oct. 24 at Hayden Library. 

Conceived by Chris Bourg, director of MIT Libraries, and Rebecca Saxe, associate dean of the School of Science and the John W. Jarve (1978) Professor of Brain and Cognitive Sciences, the prize program was launched in 2022. It recognizes MIT-affiliated researchers who use or share open data, create infrastructure for open data sharing, or theorize about open data. Nominations were solicited from across the Institute, with a focus on trainees: undergraduate and graduate students, postdocs, and research staff. 

“The prize is explicitly aimed at early-career researchers,” says Bourg. “Supporting and encouraging the next generation of researchers will help ensure that the future of scholarship is characterized by a norm of open sharing.”

The 2023 awards were presented at a celebratory event held during International Open Access Week. Winners gave five-minute presentations on their projects and the role that open data plays in their research. The program also included remarks from Bourg and Anne White, School of Engineering Distinguished Professor of Engineering, vice provost, and associate vice president for research administration. White reflected on the ways in which MIT has demonstrated its values with the open sharing of research and scholarship and acknowledged the efforts of the honorees and advocates gathered at the event: “Thank you for the active role you’re all playing in building a culture of openness in research,” she said. “It benefits us all.” 

Winners were chosen from more than 80 nominees, representing all five MIT schools, the MIT Schwarzman College of Computing, and several research centers across the Institute. A committee composed of faculty, staff, and graduate students made the selections:

  • Hammaad Adam, graduate student in the Institute for Data, Systems, and Society, accepted on behalf of the team behind Organ Retrieval and Collection of Health Information for Donation (ORCHID), the first ever multi-center dataset dedicated to the organ procurement process. ORCHID provides the first opportunity to quantitatively analyze organ procurement organization decisions and identify operational inefficiencies.
  • Adam Atanas, postdoc in the Department of Brain and Cognitive Sciences (BCS), and Jungsoo Kim, graduate student in BCS, created WormWideWeb.org. The site, allowing researchers to easily browse and download C. elegans whole-brain datasets, will be useful to C. elegans neuroscientists and theoretical/computational neuroscientists.
     
  • Paul Berube, research scientist in the Department of Civil and Environmental Engineering, and Steven Biller, assistant professor of biological sciences at Wellesley College, won for “Unlocking Marine Microbiomes with Open Data.” Open data of genomes and metagenomes for marine ecosystems, with a focus on cyanobacteria, leverage the power of contemporaneous data from GEOTRACES and other long-standing ocean time-series programs to provide underlying information to answer questions about marine ecosystem function.
     
  • Jack Cavanagh, Sarah Kopper, and Diana Horvath of the Abdul Latif Jameel Poverty Action Lab (J-PAL) were recognized for J-PAL’s Data Publication Infrastructure, which includes a trusted repository of open-access datasets, a dedicated team of data curators, and coding tools and training materials to help other teams publish data in an efficient and ethical manner.
     
  • Jerome Patrick Cruz, graduate student in the Department of Political Science, won for OpenAudit, leveraging advances in natural language processing and machine learning to make data in public audit reports more usable for academics and policy researchers, as well as governance practitioners, watchdogs, and reformers. This work was done in collaboration with colleagues at Ateneo de Manila University in the Philippines.
     
  • Undergraduate student Daniel Kurlander created a tool for planetary scientists to rapidly access and filter images of the comet 67P/Churyumov-Gerasimenko. The web-based tool enables searches by location and other properties, does not require a time-intensive download of a massive dataset, allows analysis of the data independent of the speed of one’s computer, and does not require installation of a complex set of programs.
     
  • Halie Olson, postdoc in BCS, was recognized for sharing data from a functional magnetic resonance imaging (fMRI) study on language processing. The study used video clips from “Sesame Street” in which researchers manipulated the comprehensibility of the speech stream, allowing them to isolate a “language response” in the brain.
  • Thomas González Roberts, graduate student in the Department of Aeronautics and Astronautics, won for the International Telecommunication Union Compliance Assessment Monitor. This tool combats the heritage of secrecy in outer space operations by creating human- and machine-readable datasets that succinctly describe the international agreements that govern satellite operations.
     
  • Melissa Kline Struhl, research scientist in BCS, was recognized for Children Helping Science, a free, open-source platform for remote studies with babies and children that makes it possible for researchers at more than 100 institutions to conduct reproducible studies.
     
  • JS Tan, graduate student in the Department of Urban Studies and Planning, developed the Collective Action in Tech Archive in collaboration with Nataliya Nedzhvetskaya of the University of California at Berkeley. It is an open database of all publicly recorded collective actions taken by workers in the global tech industry. 

A complete list of winning projects and honorable mentions, including links to the research data, is available on the MIT Libraries website.



----------------------------------------------------------------------------------------------------------------

Thu, 16 Nov 2023 14:00:00 -0500


How cell identity is preserved when cells divide
Posted on Thursday November 16, 2023


Category : Research

Author : Anne Trafton | MIT News

MIT study suggests 3D folding of the genome is key to cells’ ability to store and pass on “memories” of which genes they should express.


Read more about this article :

Every cell in the human body contains the same genetic instructions, encoded in its DNA. However, out of about 30,000 genes, each cell expresses only those genes that it needs to become a nerve cell, immune cell, or any of the other hundreds of cell types in the body.

Each cell’s fate is largely determined by chemical modifications to the proteins that decorate its DNA; these modifications in turn control which genes get turned on or off. When cells copy their DNA to divide, however, they lose half of these modifications, leaving the question: How do cells maintain the memory of what kind of cell they are supposed to be?

A new MIT study proposes a theoretical model that helps explain how these memories are passed from generation to generation when cells divide. The research team suggests that within each cell’s nucleus, the 3D folding of its genome determines which parts of the genome will be marked by these chemical modifications. After a cell copies its DNA, the marks are partially lost, but the 3D folding allows the cell to easily restore the chemical marks needed to maintain its identity. And each time a cell divides, the chemical marks allow the cell to restore the 3D folding of its genome. This way, by juggling the memory between 3D folding and the marks, the memory can be preserved over hundreds of cell divisions.

“A key aspect of how cell types differ is that different genes are turned on or off. It's very difficult to transform one cell type to another because these states are very committed,” says Jeremy Owen PhD ’22, the lead author of the study. “What we have done in this work is develop a simple model that highlights qualitative features of the chemical systems inside cells and how they need to work in order to make memories of gene expression stable.”

Leonid Mirny, a professor in MIT’s Institute for Medical Engineering and Science and the Department of Physics, is the senior author of the paper, which appears today in Science. Dino Osmanović, a former postdoctoral fellow at MIT’s Center for the Physics of Living Systems, is also an author of the study.

Maintaining memory

Within the cell nucleus, DNA is wrapped around proteins called histones, forming a densely packed structure known as chromatin. Histones can display a variety of modifications that help control which genes are expressed in a given cell. These modifications generate “epigenetic memory,” which helps a cell to maintain its cell type. However, how this memory is passed on to daughter cells is somewhat of a mystery.

Previous work by Mirny’s lab has shown that the 3D structure of chromosomes is, to a great extent, determined by these epigenetic modifications, or marks. In particular, they found that certain chromatin regions, with marks telling cells not to read a particular segment of DNA, attract each other and form dense clumps called heterochromatin, which are difficult for the cell to access.

In their new study, Mirny and his colleagues wanted to answer the question of how those epigenetic marks are maintained from generation to generation. They developed a computational model of a polymer with a few marked regions, and saw that these marked regions collapse into each other, forming a dense clump. Then they studied how these marks are lost and gained.

When a cell copies its DNA to divide it between two daughter cells, each copy gets about half of the epigenetic marks. The cell then needs to restore the lost marks before the DNA is passed to the daughter cells, and the way chromosomes were folded serves as a blueprint for where these remaining marks should go.

These modifications are added by specialized enzymes known as “reader-writer” enzymes. Each of these enzymes is specific for a certain mark, and once they “read” existing marks, they “write” additional marks at nearby locations. If the chromatin is already folded into a 3D shape, marks will accumulate in regions that already had modifications inherited from the parent cell.

“There are several lines of evidence that suggest that the spreading can happen in 3D, meaning if there are two parts that are near each other in space, even if they're not adjacent along the DNA, then spreading can happen from one to another,” Owen says. “That is how the 3D structure can influence the spreading of these marks.”

This process is analogous to the spread of infectious disease, as the more contacts that a chromatin region has with other regions, the more likely it is to be modified, just as an individual is more likely to become infected as their number of contacts increases. In this analogy, dense regions of marked chromatin are like cities where people have many social interactions, while the rest of the genome is comparable to sparsely populated rural areas.

“That essentially means that the marks will be spreading in the dense region and will be very sparse anywhere outside it,” Mirny says.
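A toy version of these reader-writer dynamics (illustrative parameters, not the paper's polymer model) shows how a densely contacting domain can hold its marks across divisions:

```python
import random

n = 100
dense = range(20, 40)  # a folded domain whose regions all touch in 3D
contacts = {i: ([j for j in dense if j != i] if i in dense
                else [max(i - 1, 0), min(i + 1, n - 1)])
            for i in range(n)}

def divide(marks):
    # DNA replication: each daughter keeps roughly half the marks.
    return [m and random.random() < 0.5 for m in marks]

def spread(marks, writes=200):
    marks = list(marks)
    for _ in range(writes):
        i = random.randrange(n)
        if marks[i]:                                  # an enzyme 'reads'...
            marks[random.choice(contacts[i])] = True  # ...and 'writes' nearby
    return marks

marks = [i in dense for i in range(n)]
for generation in range(50):
    marks = spread(divide(marks))
print(sum(marks[i] for i in dense), "of", len(dense), "domain marks retained")
```

Because writes land where contacts are plentiful, the marked “city” keeps reinfecting itself while the sparse “countryside” stays unmarked, echoing the epidemic analogy above.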

The new model also suggests possible parallels between epigenetic memories stored in a folded polymer and memories stored in a neural network, he adds. Folding of marked regions can be thought of as analogous to the strong connections formed between neurons that fire together in a neural network.

“Broadly this suggests that akin to the way neural networks are able to do very complex information processing, the epigenetic memory mechanism we described may be able to process information, not only store it,” he says.

“One beautiful aspect of the work is how it offers and explores connections with ideas from the seemingly very distant corners of science, including spreading of infections (to describe formation of new chemical marks in the 3D vicinity of the existing one), associative memory in model neural networks, and protein folding,” says Alexander Grosberg, a professor of physics at New York University, who was not involved in the research.

Epigenetic erosion

While this model appeared to offer a good explanation for how epigenetic memory can be maintained, the researchers found that eventually, reader-writer enzyme activity would lead to the entire genome being covered in epigenetic modifications. When they altered the model to make the enzyme weaker, it didn’t cover enough of the genome and memories were lost in a few cell generations.

To get the model to more accurately account for the preservation of epigenetic marks, the researchers added another element: limiting the amount of reader-writer enzyme available. They found that if the amount of enzyme was kept between 0.1 and 1 percent of the number of histones (a percentage based on estimates of the actual abundance of these enzymes), their model cells could accurately maintain their epigenetic memory for up to hundreds of generations, depending on the complexity of the epigenetic pattern.

It is already known that cells begin to lose their epigenetic memory as they age, and the researchers now plan to study whether the process they described in this paper might play a role in epigenetic erosion and loss of cell identity. They also plan to model a disease called progeria, in which cells have a genetic mutation that leads to loss of heterochromatin. People with this disease experience accelerated aging.

“The mechanistic link between these mutations and the epigenetic changes that eventually happen is not well understood,” Owen says. “It would be great to use a model like ours where there are dynamic marks, together with polymer dynamics, to try and explain that.”

The researchers also hope to work with collaborators to experimentally test some of the predictions of their model, which could be done, for example, by altering the level of reader-writer enzymes in living cells and measuring the effect on epigenetic memory.

The research was funded by the National Human Genome Research Institute, the National Institute of General Medical Sciences, and the National Science Foundation.



----------------------------------------------------------------------------------------------------------------

Thu, 16 Nov 2023 11:30:00 -0500


A new ultrasound patch can measure how full your bladder is
Posted on Thursday November 16, 2023


Category : Research

Author : Anne Trafton | MIT News

The wearable device, designed to monitor bladder and kidney health, could be adapted for earlier diagnosis of cancers deep within the body.


Read more about this article :

MIT researchers have designed a wearable ultrasound monitor, in the form of a patch, that can image organs within the body without the need for an ultrasound operator or application of gel.

In a new study, the researchers showed that their patch can accurately image the bladder and determine how full it is. This could help patients with bladder or kidney disorders more easily track whether these organs are functioning properly, the researchers say.

This approach could also be adapted to monitor other organs within the body by changing the location of the ultrasound array and tuning the frequency of the signal. Such devices could potentially enable earlier detection of cancers that form deep within the body, such as ovarian cancer.

“This technology is versatile and can be used not only on the bladder but any deep tissue of the body. It’s a novel platform that can do identification and characterization of many of the diseases that we carry in our body,” says Canan Dagdeviren, an associate professor in MIT’s Media Lab and the senior author of the study.

Lin Zhang, an MIT research scientist; Colin Marcus, an MIT graduate student in electrical engineering and computer science; and Dabin Lin, a professor at Xi’an Technological University, are the lead authors of a paper describing the work, which appears today in Nature Electronics.

Wearable monitoring

Dagdeviren’s lab, which specializes in designing flexible, wearable electronic devices, recently developed an ultrasound monitor that can be incorporated into a bra and used to screen for breast cancer. In the new study, the team used a similar approach to develop a wearable patch that can adhere to the skin and take ultrasound images of organs located within the body.

For their first demonstration, the researchers decided to focus on the bladder, partly inspired by Dagdeviren’s younger brother, who was diagnosed with kidney cancer a few years ago. After having one of his kidneys surgically removed, he had difficulty fully emptying his bladder. Dagdeviren wondered if an ultrasound monitor that reveals how full the bladder is might help patients similar to her brother, or people with other types of bladder or kidney problems.

“Millions of people are suffering from bladder dysfunction and related diseases, and not surprisingly, bladder volume monitoring is an effective way to assess your kidney health and wellness,” she says.

Currently, the only way to measure bladder volume is using a traditional, bulky ultrasound probe, which requires going to a medical facility. Dagdeviren and her colleagues wanted to develop a wearable alternative that patients could use at home.

To achieve that, they created a flexible patch made of silicone rubber, embedded with five ultrasound arrays made from a new piezoelectric material that the researchers developed for this device. The arrays are positioned in the shape of a cross, allowing the patch to image the entire bladder, which is about 12 by 8 centimeters when full.

The polymer that makes up the patch is naturally sticky and adheres gently to the skin, making it easy to attach and detach. Once the patch is placed on the skin, underwear or leggings can help hold it in place.

Bladder volume

In a study performed with collaborators at the Center for Ultrasound Research and Translation and Department of Radiology at Massachusetts General Hospital, the researchers showed that the new patch could capture images comparable to those taken with a traditional ultrasound probe, and these images could be used to track changes in bladder volume.
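The paper's own volume-estimation pipeline is not spelled out here, but a common clinical convention estimates bladder volume from three orthogonal ultrasound measurements with an ellipsoid approximation:

```python
import math

def bladder_volume_ml(width_cm, depth_cm, height_cm):
    # Ellipsoid approximation with correction factor pi/6 (about 0.52); a
    # standard clinical convention, not necessarily the study's method.
    return (math.pi / 6) * width_cm * depth_cm * height_cm

# A full bladder of roughly 12 x 8 x 8 cm comes out near 400 mL.
print(round(bladder_volume_ml(12, 8, 8)))  # ~402
```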

For the study, the researchers recruited 20 patients with a range of body mass indexes. Subjects were first imaged with a full bladder, then with a partially empty bladder, and then with a completely empty bladder. The images obtained from the new patch were similar in quality to those taken with traditional ultrasound, and the ultrasound arrays worked on all subjects regardless of their body mass index.

Unlike a regular ultrasound probe, the patch requires no ultrasound gel and no applied pressure, because its field of view is large enough to encompass the entire bladder.

To see the images, the researchers connected their ultrasound arrays to the same kind of ultrasound machine used in medical imaging centers. However, the MIT team is now working on a portable device, about the size of a smartphone, that could be used to view the images.

“In this work, we have further developed a path toward clinical translation of conformable ultrasonic biosensors that yield valuable information about vital physiologic parameters. Our group hopes to build on this and develop a suite of devices that will ultimately bridge the information gap between clinicians and patients,” says Anthony E. Samir, director of the MGH Center for Ultrasound Research and Translation and Associate Chair of Imaging Sciences at MGH Radiology, who is also an author of the study.

The MIT team also hopes to develop ultrasound devices that could be used to image other organs within the body, such as the pancreas, liver, or ovaries. Based on the location and depth of each organ, the researchers need to alter the frequency of the ultrasound signal, which requires designing new piezoelectric materials. For some of these organs, located deep within the body, the device may work better as an implant rather than a wearable patch.

“For whatever organ that we need to visualize, we go back to the first step, select the right materials, come up with the right device design and then fabricate everything accordingly,” before testing the device and performing clinical trials, Dagdeviren says.

“This work could develop into a central area of focus in ultrasound research, motivate a new approach to future medical device designs, and lay the groundwork for many more fruitful collaborations between materials scientists, electrical engineers, and biomedical researchers,” says Anantha Chandrakasan, dean of MIT’s School of Engineering, the Vannevar Bush Professor of Electrical Engineering and Computer Science, and an author of the paper.

The research was funded by a National Science Foundation CAREER award, a 3M Non-Tenured Faculty Award, the Sagol Weizmann-MIT Bridge Program, Texas Instruments Inc., the MIT Media Lab Consortium, a National Science Foundation Graduate Research Fellowship, and an ARRS Scholar Award.



----------------------------------------------------------------------------------------------------------------

Thu, 16 Nov 2023 00:00:00 -0500


Technique enables AI on edge devices to keep learning over time
Posted on Thursday November 16, 2023


Category : Research

Author : Adam Zewe | MIT News

With the PockEngine training method, machine-learning models can efficiently and continuously learn from user data on edge devices like smartphones.


Read more about this article :

Personalized deep-learning models can enable artificial intelligence chatbots that adapt to understand a user’s accent or smart keyboards that continuously update to better predict the next word based on someone’s typing history. This customization requires constant fine-tuning of a machine-learning model with new data.

Because smartphones and other edge devices lack the memory and computational power necessary for this fine-tuning process, user data are typically uploaded to cloud servers where the model is updated. But data transmission uses a great deal of energy, and sending sensitive user data to a cloud server poses a security risk.  

Researchers from MIT, the MIT-IBM Watson AI Lab, and elsewhere developed a technique that enables deep-learning models to efficiently adapt to new sensor data directly on an edge device.

Their on-device training method, called PockEngine, determines which parts of a huge machine-learning model need to be updated to improve accuracy, and only stores and computes with those specific pieces. It performs the bulk of these computations while the model is being prepared, before runtime, which minimizes computational overhead and boosts the speed of the fine-tuning process.    

When compared to other methods, PockEngine significantly sped up on-device training, performing up to 15 times faster on some hardware platforms. Moreover, PockEngine didn’t cause models to have any dip in accuracy. The researchers also found that their fine-tuning method enabled a popular AI chatbot to answer complex questions more accurately.

“On-device fine-tuning can enable better privacy, lower costs, customization ability, and also lifelong learning, but it is not easy. Everything has to happen with a limited number of resources. We want to be able to run not only inference but also training on an edge device. With PockEngine, now we can,” says Song Han, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the MIT-IBM Watson AI Lab, a distinguished scientist at NVIDIA, and senior author of an open-access paper describing PockEngine.

Han is joined on the paper by lead author Ligeng Zhu, an EECS graduate student, as well as others at MIT, the MIT-IBM Watson AI Lab, and the University of California San Diego. The paper was recently presented at the IEEE/ACM International Symposium on Microarchitecture.

Layer by layer

Deep-learning models are based on neural networks, which comprise many interconnected layers of nodes, or “neurons,” that process data to make a prediction. When the model is run, a process called inference, a data input (such as an image) is passed from layer to layer until the prediction (perhaps the image label) is output at the end. During inference, each layer no longer needs to be stored after it processes the input.

But during training and fine-tuning, the model undergoes a process known as backpropagation. In backpropagation, the output is compared to the correct answer, and then the model is run in reverse. Each layer is updated as the model’s output gets closer to the correct answer.

Because each layer may need to be updated, the entire model and intermediate results must be stored, making fine-tuning more memory-demanding than inference.

However, not all layers in the neural network are important for improving accuracy. And even for layers that are important, the entire layer may not need to be updated. Those layers, and pieces of layers, don’t need to be stored. Furthermore, one may not need to go all the way back to the first layer to improve accuracy — the process could be stopped somewhere in the middle.

PockEngine takes advantage of these factors to speed up the fine-tuning process and cut down on the amount of computation and memory required.

The system first fine-tunes each layer, one at a time, on a certain task and measures the accuracy improvement after each individual layer. In this way, PockEngine identifies the contribution of each layer, as well as trade-offs between accuracy and fine-tuning cost, and automatically determines the percentage of each layer that needs to be fine-tuned.

“This method matches the accuracy very well compared to full backpropagation on different tasks and different neural networks,” Han adds.
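
To make the profiling step concrete, here is a minimal PyTorch sketch that fine-tunes one layer at a time on synthetic data and records each layer’s accuracy contribution. The toy model, data, and training settings are invented for illustration and are not PockEngine’s actual code, which also weighs each gain against the layer’s memory and compute cost.

    import copy
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    x = torch.randn(256, 32)          # stand-in for new user data
    y = torch.randint(0, 4, (256,))   # stand-in labels

    base_model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                               nn.Linear(64, 64), nn.ReLU(),
                               nn.Linear(64, 4))

    def accuracy(model):
        with torch.no_grad():
            return (model(x).argmax(dim=1) == y).float().mean().item()

    def gain_from_tuning(layer_idx, steps=50):
        model = copy.deepcopy(base_model)
        for i, module in enumerate(model):
            for p in module.parameters():
                # Update only the layer being profiled; all others stay
                # frozen, so they need no gradients or optimizer state.
                p.requires_grad = (i == layer_idx)
        opt = torch.optim.SGD([p for p in model.parameters()
                               if p.requires_grad], lr=0.1)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(steps):
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        return accuracy(model) - accuracy(base_model)

    # Indices 0, 2, and 4 are the Linear layers in the Sequential model.
    gains = {i: gain_from_tuning(i) for i in (0, 2, 4)}
    print(gains)  # larger gain -> stronger candidate for fine-tuning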

A pared-down model

Conventionally, the backpropagation graph is generated during runtime, which involves a great deal of computation. Instead, PockEngine does this during compile time, while the model is being prepared for deployment.

PockEngine deletes bits of code to remove unnecessary layers or pieces of layers, creating a pared-down graph of the model to be used during runtime. It then performs other optimizations on this graph to further improve efficiency.

Since all this only needs to be done once, it saves on computational overhead for runtime.

“It is like before setting out on a hiking trip. At home, you would do careful planning — which trails are you going to go on, which trails are you going to ignore. So then at execution time, when you are actually hiking, you already have a very careful plan to follow,” Han explains.
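
In code, that one-time planning step might look like the sketch below, where the split between frozen and trainable layers is assumed to come from the profiling pass described earlier; everything here is illustrative rather than PockEngine’s actual implementation.

    import torch
    import torch.nn as nn

    # A minimal sketch, assuming the frozen/trainable split was already
    # chosen; the model, data, and trainable-layer choice are hypothetical.
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4))
    for i, module in enumerate(model):
        for p in module.parameters():
            p.requires_grad = (i == 2)    # only the final layer will train

    # Done once, "at compile time": frozen parameters get no optimizer
    # state and no gradient buffers at runtime.
    trainable = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.SGD(trainable, lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    def on_device_update(x, y):
        # The recurring runtime step: backward() touches only the
        # trainable parameters, keeping memory and compute small.
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

    on_device_update(torch.randn(16, 32), torch.randint(0, 4, (16,)))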

When they applied PockEngine to deep-learning models on different edge devices, including Apple M1 Chips and the digital signal processors common in many smartphones and Raspberry Pi computers, it performed on-device training up to 15 times faster, without any drop in accuracy. PockEngine also significantly slashed the amount of memory required for fine-tuning.

The team also applied the technique to the large language model Llama-V2. With large language models, the fine-tuning process involves providing many examples, and it’s crucial for the model to learn how to interact with users, Han says. The process is also important for models tasked with solving complex problems or reasoning about solutions.

For instance, Llama-V2 models that were fine-tuned using PockEngine answered the question “What was Michael Jackson’s last album?” correctly, while models that weren’t fine-tuned failed. PockEngine cut the time it took for each iteration of the fine-tuning process from about seven seconds to less than one second on an NVIDIA Jetson Orin, an edge GPU platform.

In the future, the researchers want to use PockEngine to fine-tune even larger models designed to process text and images together.

“This work addresses growing efficiency challenges posed by the adoption of large AI models such as LLMs across diverse applications in many different industries. It not only holds promise for edge applications that incorporate larger models, but also for lowering the cost of maintaining and updating large AI models in the cloud,” says Ehry MacRostie, a senior manager in Amazon’s Artificial General Intelligence division who was not involved in this study but works with MIT on related AI research through the MIT-Amazon Science Hub.

This work was supported, in part, by the MIT-IBM Watson AI Lab, the MIT AI Hardware Program, the MIT-Amazon Science Hub, the National Science Foundation (NSF), and the Qualcomm Innovation Fellowship.



----------------------------------------------------------------------------------------------------------------

Wed, 15 Nov 2023 11:00:00 -0500


This 3D printer can watch itself fabricate objects
Posted on Wednesday November 15, 2023


Category : Research

Author : Adam Zewe | MIT News

Computer vision enables contact-free 3D printing, letting engineers print with high-performance materials they couldn’t use before.


Read more about this article :

With 3D inkjet printing systems, engineers can fabricate hybrid structures that have soft and rigid components, like robotic grippers that are strong enough to grasp heavy objects but soft enough to interact safely with humans.

These multimaterial 3D printing systems utilize thousands of nozzles to deposit tiny droplets of resin, which are smoothed with a scraper or roller and cured with UV light. But the smoothing process could squish or smear resins that cure slowly, limiting the types of materials that can be used. 

Researchers from MIT, the MIT spinout Inkbit, and ETH Zurich have developed a new 3D inkjet printing system that works with a much wider range of materials. Their printer utilizes computer vision to automatically scan the 3D printing surface and adjust the amount of resin each nozzle deposits in real time to ensure no areas have too much or too little material.

Since it does not require mechanical parts to smooth the resin, this contactless system works with materials that cure more slowly than the acrylates which are traditionally used in 3D printing. Some slower-curing material chemistries can offer improved performance over acrylates, such as greater elasticity, durability, or longevity.

In addition, the automatic system makes adjustments without stopping or slowing the printing process, making this production-grade printer about 660 times faster than a comparable 3D inkjet printing system.

The researchers used this printer to create complex, robotic devices that combine soft and rigid materials. For example, they made a completely 3D-printed robotic gripper shaped like a human hand and controlled by a set of reinforced, yet flexible, tendons.

“Our key insight here was to develop a machine-vision system and completely active feedback loop. This is almost like endowing a printer with a set of eyes and a brain, where the eyes observe what is being printed, and then the brain of the machine directs it as to what should be printed next,” says co-corresponding author Wojciech Matusik, a professor of electrical engineering and computer science at MIT who leads the Computational Design and Fabrication Group within the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

He is joined on the paper by lead author Thomas Buchner, a doctoral student at ETH Zurich, co-corresponding author Robert Katzschmann PhD ’18, assistant professor of robotics who leads the Soft Robotics Laboratory at ETH Zurich; as well as others at ETH Zurich and Inkbit. The research appears today in Nature.

Contact free

This paper builds off a low-cost, multimaterial 3D printer known as MultiFab that the researchers introduced in 2015. By utilizing thousands of nozzles to deposit tiny droplets of resin that are UV-cured, MultiFab enabled high-resolution 3D printing with up to 10 materials at once.

With this new project, the researchers sought a contactless process that would expand the range of materials they could use to fabricate more complex devices.

They developed a technique, known as vision-controlled jetting, which utilizes four high-frame-rate cameras and two lasers that rapidly and continuously scan the print surface. The cameras capture images as thousands of nozzles deposit tiny droplets of resin.

The computer vision system converts the image into a high-resolution depth map, a computation that takes less than a second to perform. It compares the depth map to the CAD (computer-aided design) model of the part being fabricated, and adjusts the amount of resin being deposited to keep the object on target with the final structure.

The automated system can make adjustments to any individual nozzle. Since the printer has 16,000 nozzles, the system can control fine details of the device being fabricated.
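
A minimal sketch of that feedback loop, assuming a simple proportional correction, with NumPy arrays standing in for the scanned depth map and the CAD target (the gain, array size, and units are invented, not Inkbit’s controller):

    import numpy as np

    GAIN = 0.8  # proportional correction factor (assumed)

    def correct_layer(target_heights, measured_heights, nominal_drop):
        """Per-nozzle droplet volumes for the next pass: deposit more
        where the scan shows the part is low, less where it is high."""
        error = target_heights - measured_heights    # + means under-filled
        volumes = nominal_drop + GAIN * error
        # Nozzles can only jet between zero and some maximum volume.
        return np.clip(volumes, 0.0, 2.0 * nominal_drop)

    # Example: a 4x4 patch of the nozzle array, heights in micrometers.
    target = np.full((4, 4), 30.0)                    # from the CAD model
    measured = target + np.random.normal(0.0, 1.5, size=(4, 4))  # from scan
    next_drops = correct_layer(target, measured, nominal_drop=10.0)

In the real system this kind of correction runs continuously, per nozzle, without pausing or slowing the print.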

“Geometrically, it can print almost anything you want made of multiple materials. There are almost no limitations in terms of what you can send to the printer, and what you get is truly functional and long-lasting,” says Katzschmann.

The level of control afforded by the system enables it to print very precisely with wax, which is used as a support material to create cavities or intricate networks of channels inside an object. The wax is printed below the structure as the device is fabricated. After it is complete, the object is heated so the wax melts and drains out, leaving open channels throughout the object.

Because it can automatically and rapidly adjust the amount of material being deposited by each of the nozzles in real time, the system doesn’t need to drag a mechanical part across the print surface to keep it level. This enables the printer to use materials that cure more gradually, and would be smeared by a scraper.

Superior materials

The researchers used the system to print with thiol-based materials, which are slower-curing than the traditional acrylic materials used in 3D printing. However, thiol-based materials are more elastic and don’t break as easily as acrylates. They also tend to be more stable over a wider range of temperatures and don’t degrade as quickly when exposed to sunlight.

“These are very important properties when you want to fabricate robots or systems that need to interact with a real-world environment,” says Katzschmann.

The researchers used thiol-based materials and wax to fabricate several complex devices that would otherwise be nearly impossible to make with existing 3D printing systems. For one, they produced a functional, tendon-driven robotic hand that has 19 independently actuatable tendons, soft fingers with sensor pads, and rigid, load-bearing bones.

“We also produced a six-legged walking robot that can sense objects and grasp them, which was possible due to the system’s ability to create airtight interfaces of soft and rigid materials, as well as complex channels inside the structure,” says Buchner.

The team also showcased the technology through a heart-like pump with integrated ventricles and artificial heart valves, as well as metamaterials that can be programmed to have non-linear material properties.

“This is just the start. There is an amazing number of new types of materials you can add to this technology. This allows us to bring in whole new material families that couldn’t be used in 3D printing before,” Matusik says.

The researchers are now looking at using the system to print with hydrogels, which are used in tissue-engineering applications, as well as silicone materials, epoxies, and special types of durable polymers.

They also want to explore new application areas, such as printing customizable medical devices, semiconductor polishing pads, and even more complex robots.

This research was funded, in part, by Credit Suisse, the Swiss National Science Foundation, the U.S. Defense Advanced Research Projects Agency, and the U.S. National Science Foundation.



----------------------------------------------------------------------------------------------------------------

Wed, 15 Nov 2023 11:00:00 -0500


New laser setup probes metamaterial structures with ultrafast pulses
Posted on Wednesday November 15, 2023


Category : Materials science and engineering

Author : Jennifer Chu | MIT News

The LIRAS technique could speed up the development of acoustic lenses, impact-resistant films, and other futuristic materials.


Read more about this article :

Metamaterials are products of engineering wizardry. They are made from everyday polymers, ceramics, and metals. And when constructed precisely at the microscale, in intricate architectures, these ordinary materials can take on extraordinary properties.

With the help of computer simulations, engineers can play with any combination of microstructures to see how certain materials can transform, for instance, into sound-focusing acoustic lenses or lightweight, bulletproof films.

But simulations can only take a design so far. To know for sure whether a metamaterial will stand up to expectations, physically testing it is a must. But there’s been no reliable way to push and pull on metamaterials at the microscale, and to know how they will respond, without contacting and physically damaging the structures in the process.

Now, a new laser-based technique offers a safe and fast solution that could speed up the discovery of promising metamaterials for real-world applications.

The technique, developed by MIT engineers, probes metamaterials with a system of two lasers — one to quickly zap a structure and the other to measure the ways in which it vibrates in response, much like striking a bell with a mallet and recording its reverb. In contrast to a mallet, the lasers make no physical contact. Yet they can produce vibrations throughout a metamaterial’s tiny beams and struts, as if the structure were being physically struck, stretched, or sheared.

The engineers can then use the resulting vibrations to calculate various dynamic properties of the material, such as how it would respond to impacts and how it would absorb or scatter sound. With an ultrafast laser pulse, they can excite and measure hundreds of miniature structures within minutes. The new technique offers a safe, reliable, and high-throughput way to dynamically characterize microscale metamaterials, for the first time.

“We need to find quicker ways of testing, optimizing, and tweaking these materials,” says Carlos Portela, the Brit and Alex d’Arbeloff Career Development Professor in Mechanical Engineering at MIT. “With this approach, we can accelerate the discovery of optimal materials, depending on the properties you want.”

Portela and his colleagues detail their new system, which they’ve named LIRAS (for laser-induced resonant acoustic spectroscopy), in a paper appearing today in Nature. His MIT co-authors include first author Yun Kai, Somayajulu Dhulipala, Rachel Sun, Jet Lem, and Thomas Pezeril, along with Washington DeLima at the U.S. Department of Energy’s Kansas City National Security Campus.

[Animation: a rectangular lattice tower bends from side to side; the flexed regions are highlighted in green and purple.]

A slow tip

The metamaterials that Portela works with are made from common polymers that he 3D-prints into tiny, scaffold-like towers made from microscopic struts and beams. Each tower is patterned by repeating and layering a single geometric unit, such as an eight-pointed configuration of connecting beams. When stacked end to end, the tower arrangement can give the whole polymer properties that it would not otherwise have.

But engineers are severely limited in their options for physically testing and validating these metamaterial properties. Nanoindentation is the typical way in which such microstructures are probed, though in a very deliberate and controlled fashion. The method employs a micrometer-scale tip to slowly push down on a structure while measuring the tiny displacement and forces on the structure as it’s compressed.

“But this technique can only go so fast, while also damaging the structure,” Portela notes. “We wanted to find a way to measure how these structures would behave dynamically, for instance in the initial response to a strong impact, but in a way that would not destroy them.”

A (meta)material world

The team turned to laser ultrasonics — a nondestructive method that uses a short laser pulse tuned to ultrasound frequencies, to excite very thin materials such as gold films without physically touching them. The ultrasound waves created by the laser excitation are within a range that can cause a thin film to vibrate at a frequency that scientists can then use to determine the film’s exact thickness down to nanometer precision. The technique can also be used to determine whether a thin film holds any defects.

Portela and his colleagues realized that ultrasonic lasers might also safely induce their 3D metamaterial towers to vibrate; the height of the towers — ranging from 50 to 200 micrometers tall, or up to roughly twice the diameter of a human hair — is on a similar microscopic scale to thin films.

To test this idea, Yun Kai, who joined Portela’s group with expertise in laser optics, built a tabletop setup comprising two ultrasonic lasers — a “pulse” laser to excite metamaterial samples and a “probe” laser to measure the resulting vibrations.

On a single chip no bigger than a fingernail, the team then printed hundreds of microscopic towers, each with a specific height and architecture. They placed this miniature city of metamaterials in the two-laser setup, then excited the towers with repeated ultrashort pulses. The second laser measured the vibrations from each individual tower. The team then gathered the data, and looked for patterns in the vibrations.

“We excite all these structures with a laser, which is like hitting them with a hammer. And then we capture all the wiggles from hundreds of towers, and they all wobble in slightly different ways,” Portela says. “Then we can analyze these wiggles and extract the dynamic properties of each structure, such as their stiffness in response to impact, and how fast ultrasound travels through them.”

The team used the same technique to scan towers for defects. They printed several defect-free towers and then printed the same architectures, but with varying degrees of defects, such as missing struts and beams, each smaller than the size of a red blood cell.

“Since each tower has a vibrational signature, we saw that the more defects we put into that same structure, the more this signature shifted,” Portela explains. “You could imagine scanning an assembly line of structures. If you detect one with a slightly different signature, you know it’s not perfect.”
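
As a rough illustration of the signal processing behind both steps, the sketch below Fourier-transforms a simulated ringdown to find a tower’s resonant peak, then flags a shift away from a defect-free reference; every number here (sample rate, resonance, decay time, tolerance) is invented.

    import numpy as np

    fs = 500e6                         # sample rate, 500 MHz (assumed)
    t = np.arange(0, 20e-6, 1 / fs)    # 20 microseconds of ringdown
    f0 = 12e6                          # tower resonance, 12 MHz (assumed)

    # Decaying oscillation standing in for the probe-laser trace.
    trace = np.exp(-t / 5e-6) * np.sin(2 * np.pi * f0 * t)
    trace += 0.05 * np.random.randn(t.size)          # measurement noise

    spectrum = np.abs(np.fft.rfft(trace * np.hanning(t.size)))
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    resonance = freqs[np.argmax(spectrum)]           # dominant peak

    reference = 12e6                   # signature of a defect-free tower
    if abs(resonance - reference) / reference > 0.01:  # 1% tolerance
        print("vibrational signature shifted -- possible defect")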

He says scientists can easily recreate the laser setup in their own labs. Then, Portela predicts the discovery of practical, real-world metamaterials will take off. For his part, Portela is keen to fabricate and test metamaterials that focus ultrasound waves, for instance to boost the sensitivity of ultrasound probes. He’s also exploring impact-resistant metamaterials, for instance to line the inside of bike helmets.

“We know how important it is to make materials to mitigate shock and impacts,” Kai offers. “Now with our study, for the first time we can characterize the dynamic behavior of metamaterials and explore them to the extreme.”

This research was conducted, in part, using facilities at MIT.nano, and supported, in part, by the Department of Energy’s Kansas City National Security Campus, the National Science Foundation, and DEVCOM ARL Army Research Office through the MIT Institute of Soldier Nanotechnologies.



----------------------------------------------------------------------------------------------------------------

Wed, 15 Nov 2023 00:00:00 -0500


Microbes could help reduce the need for chemical fertilizers
Posted on Wednesday November 15, 2023


Category : Research

Author : Anne Trafton | MIT News

New coating protects nitrogen-fixing bacteria from heat and humidity, which could allow them to be deployed for large-scale agricultural use.


Read more about this article :

Production of chemical fertilizers accounts for about 1.5 percent of the world’s greenhouse gas emissions. MIT chemists hope to help reduce that carbon footprint by replacing some chemical fertilizer with a more sustainable source — bacteria.

Bacteria that can convert nitrogen gas to ammonia could not only provide nutrients that plants need, but also help regenerate soil and protect plants from pests. However, these bacteria are sensitive to heat and humidity, so it’s difficult to scale up their manufacture and ship them to farms.

To overcome that obstacle, MIT chemical engineers have devised a metal-organic coating that protects bacterial cells from damage without impeding their growth or function. In a new study, they found that these coated bacteria improved the germination rate of a variety of seeds, including vegetables such as corn and bok choy.

This coating could make it much easier for farmers to deploy microbes as fertilizers, says Ariel Furst, the Paul M. Cook Career Development Assistant Professor of Chemical Engineering at MIT and the senior author of the study.

“We can protect them from the drying process, which would allow us to distribute them much more easily and with less cost because they’re a dried powder instead of in liquid,” she says. “They can also withstand heat up to 132 degrees Fahrenheit, which means that you wouldn’t have to use cold storage for these microbes.”

Benjamin Burke ’23 and postdoc Gang Fan are the lead authors of the open-access paper, which appears in the Journal of the American Chemical Society Au. MIT undergraduate Pris Wasuwanich and Evan Moore ’23 are also authors of the study.

Protecting microbes

Chemical fertilizers are manufactured using an energy-intensive process known as Haber-Bosch, which uses extremely high pressures to combine nitrogen from the air with hydrogen to make ammonia.

In addition to the significant carbon footprint of this process, another drawback to chemical fertilizers is that long-term use eventually depletes the nutrients in the soil. To help restore soil, some farmers have turned to “regenerative agriculture,” which uses a variety of strategies, including crop rotation and composting, to keep soil healthy. Nitrogen-fixing bacteria, which convert nitrogen gas to ammonia, can aid in this approach.

Some farmers have already begun deploying these “microbial fertilizers,” growing them in large onsite fermenters before applying them to the soil. However, this is cost-prohibitive for many farmers.

Shipping these bacteria to rural areas is not currently a viable option, because they are susceptible to heat damage. The microbes are also too delicate to survive the freeze-drying process that would make them easier to transport.

To protect the microbes from both heat and freeze-drying, Furst decided to apply a coating called a metal-phenol network (MPN), which she has previously developed to encapsulate microbes for other uses, such as protecting therapeutic bacteria delivered to the digestive tract.

The coatings contain two components — a metal and an organic compound called a polyphenol — that can self-assemble into a protective shell. The metals used for the coatings, including iron, manganese, aluminum, and zinc, are considered safe as food additives. Polyphenols, which are often found in plants, include molecules such as tannins and other antioxidants. The FDA classifies many of these polyphenols as GRAS (generally regarded as safe).

“We are using these natural food-grade compounds that are known to have benefits on their own, and then they form these little suits of armor that protect the microbes,” Furst says.

For this study, the researchers created 12 different MPNs and used them to encapsulate Pseudomonas chlororaphis, a nitrogen-fixing bacterium that also protects plants against harmful fungi and other pests. They found that all of the coatings protected the bacteria from temperatures up to 50 degrees Celsius (122 degrees Fahrenheit), and also from relative humidity up to 48 percent. The coatings also kept the microbes alive during the freeze-drying process.

A boost for seeds

Using microbes coated with the most effective MPN — a combination of manganese and a polyphenol called epigallocatechin gallate (EGCG) — the researchers tested their ability to help seeds germinate in a lab dish. They heated the coated microbes to 50 C before placing them in the dish, and compared them to fresh uncoated microbes and freeze-dried uncoated microbes.

The researchers found that the coated microbes improved the seeds’ germination rate by 150 percent, compared to seeds treated with fresh, uncoated microbes. This result was consistent across several different types of seeds, including dill, corn, radishes, and bok choy.

Furst has started a company called Seia Bio to commercialize the coated bacteria for large-scale use in regenerative agriculture. She hopes that the low cost of the manufacturing process will help make microbial fertilizers accessible to small-scale farmers who don’t have the fermenters needed to grow such microbes.

“When we think about developing technology, we need to intentionally design it to be inexpensive and accessible, and that’s what this technology is. It would help democratize regenerative agriculture,” she says.

The research was funded by the Army Research Office, a National Institutes of Health New Innovator Award, a National Institute for Environmental Health Sciences Core Center Grant, the CIFAR Azrieli Global Scholars Program, the Abdul Latif Jameel Water and Food Systems Lab at MIT, the MIT Climate and Sustainability Consortium, and the MIT Deshpande Center.



----------------------------------------------------------------------------------------------------------------

Tue, 14 Nov 2023 15:10:00 -0500


MIT physicists turn pencil lead into “gold”
Posted on Tuesday November 14, 2023


Category : Research

Author : Elizabeth A. Thomson | Materials Research Laboratory

Thin flakes of graphite can be tuned to exhibit three important properties.


Read more about this article :

MIT physicists have metaphorically turned graphite, or pencil lead, into gold by isolating five ultrathin flakes stacked in a specific order. The resulting material can then be tuned to exhibit three important properties never before seen in natural graphite.

“It is kind of like one-stop shopping,” says Long Ju, an assistant professor in the Department of Physics and leader of the work, which is reported in the Oct. 5 issue of Nature Nanotechnology. “Nature has plenty of surprises. In this case, we never realized that all of these interesting things are embedded in graphite.”

Further, he says, “It is very rare to find materials that can host this many properties.”

Graphite is composed of graphene, which is a single layer of carbon atoms arranged in hexagons resembling a honeycomb structure. Graphene, in turn, has been the focus of intense research since it was first isolated about 20 years ago. More recently, about five years ago, researchers including a team at MIT discovered that stacking individual sheets of graphene, and twisting them at a slight angle to each other, can impart new properties to the material, from superconductivity to magnetism. The field of “twistronics” was born.

In the current work, “we discovered interesting properties with no twisting at all,” says Ju, who is also affiliated with the Materials Research Laboratory.

He and colleagues discovered that five layers of graphene arranged in a certain order allow the electrons moving around inside the material to talk with each other. That phenomenon, known as electron correlation, “is the magic that makes all of these new properties possible,” Ju says.

Bulk graphite — and even single sheets of graphene — are good electrical conductors, but that’s it. The material Ju and colleagues isolated, which they call pentalayer rhombohedral stacked graphene, becomes much more than the sum of its parts.

Novel microscope

Key to isolating the material was a novel microscope Ju built at MIT in 2021 that can quickly and relatively inexpensively determine a variety of important characteristics of a material at the nanoscale. Pentalayer rhombohedral stacked graphene is only a few billionths of a meter thick.

Scientists including Ju were looking for multilayer graphene that was stacked in a very precise order, known as rhombohedral stacking. Says Ju, “there are more than 10 possible stacking orders when you go to five layers. Rhombohedral is just one of them.” The microscope Ju built, known as Scattering-type Scanning Nearfield Optical Microscopy, or s-SNOM, allowed the scientists to identify and isolate only the pentalayers in the rhombohedral stacking order they were interested in.

Three in one

From there, the team attached electrodes to a tiny sandwich composed of boron nitride “bread” that protects the delicate “meat” of pentalayer rhombohedral stacked graphene. The electrodes allowed them to tune the system with different voltages, or amounts of electricity. The result: They discovered the emergence of three different phenomena depending on the number of electrons flooding the system.

“We found that the material could be insulating, magnetic, or topological,” Ju says. The latter is somewhat related to both conductors and insulators. Essentially, Ju explains, a topological material allows the unimpeded movement of electrons around the edges of a material, but not through the middle. The electrons are traveling in one direction along a “highway” at the edge of the material separated by a median that makes up the center of the material. So the edge of a topological material is a perfect conductor, while the center is an insulator.

“Our work establishes rhombohedral stacked multilayer graphene as a highly tunable platform to study these new possibilities of strongly correlated and topological physics,” Ju and his coauthors conclude in Nature Nanotechnology.

In addition to Ju, authors of the paper are Tonghang Han and Zhengguang Lu. Han is a graduate student in the Department of Physics; Lu is a postdoc in the Materials Research Laboratory. The two are co-first authors of the paper.

Other authors are Giovanni Scuri, Jiho Sung, Jue Wang and Hongkun Park of Harvard University; Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan; and Tianyi Han of the MIT Department of Physics.

This work was supported by a Sloan Fellowship; the U.S. National Science Foundation; the U.S. Office of the Under Secretary of Defense for Research and Engineering; the Japan Society for the Promotion of Science KAKENHI;  the World Premier International Research Initiative of Japan; and the U.S. Air Force Office of Scientific Research.



----------------------------------------------------------------------------------------------------------------

Thu, 09 Nov 2023 00:00:00 -0500


MIT engineers are on a failure-finding mission
Posted on Thursday November 09, 2023


Category : Research

Author : Jennifer Chu | MIT News

The team’s new algorithm finds failures and fixes in all sorts of autonomous systems, from drone teams to power grids.


Read more about this article :

From vehicle collision avoidance to airline scheduling systems to power supply grids, many of the services we rely on are managed by computers. As these autonomous systems grow in complexity and ubiquity, so too could the ways in which they fail.

Now, MIT engineers have developed an approach that can be paired with any autonomous system, to quickly identify a range of potential failures in that system before it is deployed in the real world. What’s more, the approach can find fixes to the failures, and suggest repairs to avoid system breakdowns.

The team has shown that the approach can root out failures in a variety of simulated autonomous systems, including a small and large power grid network, an aircraft collision avoidance system, a team of rescue drones, and a robotic manipulator. In each of the systems, the new approach, in the form of an automated sampling algorithm, quickly identifies a range of likely failures as well as repairs to avoid those failures.

The new algorithm takes a different tack from other automated searches, which are designed to spot the most severe failures in a system. These approaches, the team says, could miss subtler though significant vulnerabilities that the new algorithm can catch.

“In reality, there’s a whole range of messiness that could happen for these more complex systems,” says Charles Dawson, a graduate student in MIT’s Department of Aeronautics and Astronautics. “We want to be able to trust these systems to drive us around, or fly an aircraft, or manage a power grid. It’s really important to know their limits and in what cases they’re likely to fail.”

Dawson and Chuchu Fan, assistant professor of aeronautics and astronautics at MIT, are presenting their work this week at the Conference on Robotic Learning.

Sensitivity over adversaries

In 2021, a major system meltdown in Texas got Fan and Dawson thinking. In February of that year, winter storms rolled through the state, bringing unexpectedly frigid temperatures that set off failures across the power grid. The crisis left more than 4.5 million homes and businesses without power for multiple days. The system-wide breakdown made for the worst energy crisis in Texas’ history.

“That was a pretty major failure that made me wonder whether we could have predicted it beforehand,” Dawson says. “Could we use our knowledge of the physics of the electricity grid to understand where its weak points could be, and then target upgrades and software fixes to strengthen those vulnerabilities before something catastrophic happened?”

Dawson and Fan’s work focuses on robotic systems and finding ways to make them more resilient in their environment. Prompted in part by the Texas power crisis, they set out to expand their scope, to spot and fix failures in other more complex, large-scale autonomous systems. To do so, they realized they would have to shift the conventional approach to finding failures.

Designers often test the safety of autonomous systems by identifying their most likely, most severe failures. They start with a computer simulation of the system that represents its underlying physics and all the variables that might affect the system’s behavior. They then run the simulation with a type of algorithm that carries out “adversarial optimization” — an approach that automatically optimizes for the worst-case scenario by making small changes to the system, over and over, until it can narrow in on those changes that are associated with the most severe failures.

“By condensing all these changes into the most severe or likely failure, you lose a lot of complexity of behaviors that you could see,” Dawson notes. “Instead, we wanted to prioritize identifying a diversity of failures.”

To do so, the team took a more “sensitive” approach. They developed an algorithm that automatically generates random changes within a system and assesses the sensitivity, or potential failure of the system, in response to those changes. The more sensitive a system is to a certain change, the more likely that change is associated with a possible failure.

The approach enables the team to root out a wider range of possible failures. The method also allows researchers to identify fixes by backtracking through the chain of changes that led to a particular failure.
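
In spirit, the sampling loop resembles the sketch below, in which a toy simulator stands in for the system model; the functions and parameters are hypothetical, not the team’s published algorithm.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(params):
        """Toy stand-in for a system simulation: returns a safety
        margin, with negative values meaning the system failed."""
        return 1.0 - np.abs(params).sum()

    def sample_failures(nominal, n_samples=10_000, scale=0.4):
        failures = []
        for _ in range(n_samples):
            delta = rng.normal(0.0, scale, size=nominal.shape)  # random change
            margin = simulate(nominal + delta)
            if margin < 0:
                # Keep the perturbation itself: backtracking through it
                # suggests which change to undo, i.e., a candidate repair.
                failures.append((delta, margin))
        return failures

    found = sample_failures(np.zeros(3))
    # Diverse failures, not just the single worst case that an
    # adversarial optimizer would converge to.
    print(f"{len(found)} failure-inducing perturbations found")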

“We recognize there’s really a duality to the problem,” Fan says. “There are two sides to the coin. If you can predict a failure, you should be able to predict what to do to avoid that failure. Our method is now closing that loop.”

Hidden failures

The team tested the new approach on a variety of simulated autonomous systems, including a small and large power grid. In those cases, the researchers paired their algorithm with a simulation of generalized, regional-scale electricity networks. They showed that, while conventional approaches zeroed in on a single power line as the most vulnerable to fail, the team’s algorithm found that, if combined with a failure of a second line, a complete blackout could occur.  

“Our method can discover hidden correlations in the system,” Dawson says. “Because we’re doing a better job of exploring the space of failures, we can find all sorts of failures, which sometimes includes even more severe failures than existing methods can find.”

The researchers showed similarly diverse results in other autonomous systems, including a simulation of aircraft collision avoidance and the coordination of rescue drones. To see whether their failure predictions in simulation would bear out in reality, they also demonstrated the approach on a robotic manipulator — a robotic arm designed to push and pick up objects.

The team first ran their algorithm on a simulation of a robot that was directed to push a bottle out of the way without knocking it over. When they ran the same scenario in the lab with the actual robot, they found that it failed in the way the algorithm predicted — for instance, knocking the bottle over or not quite reaching it. When they applied the algorithm’s suggested fix, the robot successfully pushed the bottle away.

“This shows that, in reality, this system fails when we predict it will, and succeeds when we expect it to,” Dawson says.

In principle, the team’s approach could find and fix failures in any autonomous system as long as it comes with an accurate simulation of its behavior. Dawson envisions one day that the approach could be made into an app that designers and engineers can download and apply to tune and tighten their own systems before testing in the real world.

“As we increase the amount that we rely on these automated decision-making systems, I think the flavor of failures is going to shift,” Dawson says. “Rather than mechanical failures within a system, we’re going to see more failures driven by the interaction of automated decision-making and the physical world. We’re trying to account for that shift by identifying different types of failures, and addressing them now.”

This research is supported, in part, by NASA, the National Science Foundation, and the U.S. Air Force Office of Scientific Research.



----------------------------------------------------------------------------------------------------------------

Wed, 08 Nov 2023 11:00:00 -0500


Physicists trap electrons in a 3D crystal for the first time
Posted on Wednesday November 08, 2023


Category : Electronics

Author : Jennifer Chu | MIT News

The results open the door to exploring superconductivity and other exotic electronic states in three-dimensional materials.


Read more about this article :

Electrons move through a conducting material like commuters at the height of Manhattan rush hour. The charged particles may jostle and bump against each other, but for the most part they’re unconcerned with other electrons as they hurtle forward, each with their own energy.

But when a material’s electrons are trapped together, they can settle into the exact same energy state and start to behave as one. This collective, zombie-like state is what’s known in physics as an electronic “flat band,” and scientists predict that when electrons are in this state they can start to feel the quantum effects of other electrons and act in coordinated, quantum ways. Then, exotic behavior such as superconductivity and unique forms of magnetism may emerge.

Now, physicists at MIT have successfully trapped electrons in a pure crystal. It is the first time that scientists have achieved an electronic flat band in a three-dimensional material. With some chemical manipulation, the researchers also showed they could transform the crystal into a superconductor — a material that conducts electricity with zero resistance.

The electrons’ trapped state is possible thanks to the crystal’s atomic geometry. The crystal, which the physicists synthesized, has an arrangement of atoms that resembles the woven patterns in “kagome,” the Japanese art of basket-weaving. In this specific geometry, the researchers found that rather than jumping between atoms, electrons were “caged,” and settled into the same band of energy.

[Animation: a spinning 3D crystal structure resembling a star made of latticed cubes and pyramids.]

The researchers say that this flat-band state can be realized with virtually any combination of atoms — as long as they are arranged in this kagome-inspired 3D geometry. The results, appearing today in Nature, provide a new way for scientists to explore rare electronic states in three-dimensional materials. These materials might someday be optimized to enable ultraefficient power lines, supercomputing quantum bits, and faster, smarter electronic devices.

“Now that we know we can make a flat band from this geometry, we have a big motivation to study other structures that might have other new physics that could be a platform for new technologies,” says study author Joseph Checkelsky, associate professor of physics.

Checkelsky’s MIT co-authors include graduate students Joshua Wakefield, Mingu Kang, and Paul Neves, and postdoc Dongjin Oh, who are co-lead authors; graduate students Tej Lamichhane and Alan Chen; postdocs Shiang Fang and Frank Zhao; undergraduate Ryan Tigue; associate professor of nuclear science and engineering Mingda Li; and associate professor of physics Riccardo Comin, who collaborated with Checkelsky to direct the study; along with collaborators at multiple other laboratories and institutions.

Setting a 3D trap

In recent years, physicists have successfully trapped electrons and confirmed their electronic flat-band state in two-dimensional materials. But scientists have found that electrons that are trapped in two dimensions can easily escape out the third, making flat-band states difficult to maintain in 2D.

In their new study, Checkelsky, Comin, and their colleagues looked to realize flat bands in 3D materials, such that electrons would be trapped in all three dimensions and any exotic electronic states could be more stably maintained. They had an idea that kagome patterns might play a role.

In previous work, the team observed trapped electrons in a two-dimensional lattice of atoms that resembled some kagome designs. When the atoms were arranged in a pattern of interconnected, corner-sharing triangles, electrons were confined within the hexagonal space between triangles, rather than hopping across the lattice. But, like others, the researchers found that the electrons could escape up and out of the lattice, through the third dimension.

The team wondered: Could a 3D configuration of similar lattices work to box in the electrons? They looked for an answer in databases of material structures and came across a certain geometric configuration of atoms, classified generally as a pyrochlore — a type of mineral with a highly symmetric atomic geometry. The pyrochlore’s 3D structure of atoms formed a repeating pattern of cubes, the face of each cube resembling a kagome-like lattice. They found that, in theory, this geometry could effectively trap electrons within each cube.

Rocky landings

To test this hypothesis, the researchers synthesized a pyrochlore crystal in the lab.

“It’s not dissimilar to how nature makes crystals,” Checkelsky explains. “We put certain elements together — in this case, calcium and nickel — melt them at very high temperatures, cool them down, and the atoms on their own will arrange into this crystalline, kagome-like configuration.”

They then looked to measure the energy of individual electrons in the crystal, to see if they indeed fell into the same flat band of energy. To do so, researchers typically carry out photoemission experiments, in which they shine a single photon of light onto a sample, which in turn kicks out a single electron. A detector can then precisely measure the energy of that individual electron.

Scientists have used photoemission to confirm flat-band states in various 2D materials. Because of their physically flat, two-dimensional nature, these materials are relatively straightforward to measure using standard laser light. But for 3D materials, the task is more challenging.

“For this experiment, you typically require a very flat surface,” Comin explains. “But if you look at the surface of these 3D materials, they are like the Rocky Mountains, with a very corrugated landscape. Experiments on these materials are very challenging, and that is part of the reason no one has demonstrated that they host trapped electrons.”

The team cleared this hurdle with angle-resolved photoemission spectroscopy (ARPES), which uses an ultrafocused beam of light able to target specific locations across an uneven 3D surface and measure the individual electron energies at those locations.

“It’s like landing a helicopter on very small pads, all across this rocky landscape,” Comin says.

With ARPES, the team measured the energies of thousands of electrons across a synthesized crystal sample in about half an hour. They found that, overwhelmingly, the electrons in the crystal exhibited the exact same energy, confirming the 3D material’s flat-band state.

To see whether they could manipulate the coordinated electrons into some exotic electronic state, the researchers synthesized the same crystal geometry, this time with atoms of rhodium and ruthenium instead of nickel. On paper, the researchers calculated that this chemical swap should shift the electrons’ flat band to zero energy — a state that automatically leads to superconductivity.

And indeed, they found that when they synthesized a new crystal, with a slightly different combination of elements, in the same kagome-like 3D geometry, the crystal’s electrons exhibited a flat band, this time at superconducting states.

“This presents a new paradigm to think about how to find new and interesting quantum materials,” Comin says. “We showed that, with this special ingredient of this atomic arrangement that can trap electrons, we always find these flat bands. It’s not just a lucky strike. From this point on, the challenge is to optimize to achieve the promise of flat-band materials, potentially to sustain superconductivity at higher temperatures.”



----------------------------------------------------------------------------------------------------------------

Tue, 07 Nov 2023 09:50:00 -0500


Anesthesia technology precisely controls unconsciousness in animal tests
Posted on Tuesday November 07, 2023


Category : Research

Author : David Orenstein | The Picower Institute for Learning and Memory

An advanced closed-loop anesthesia delivery system that monitors brain state to tailor propofol dose and achieve exactly the desired level of unconsciousness could reduce post-op side effects.


Read more about this article :

If anesthesiologists had a rigorous means to manage dosing, they could deliver less medicine, maintaining exactly the right depth of unconsciousness while reducing postoperative cognitive side effects in vulnerable groups like the elderly. But with myriad responsibilities for keeping anesthetized patients alive and stable as well as maintaining their profoundly unconscious state, anesthesiologists don’t have the time without the technology.

To solve the problem, researchers at The Picower Institute for Learning and Memory at MIT and Massachusetts General Hospital (MGH) have invented a closed-loop system based on brain state monitoring that accurately controls unconsciousness by automating doses of the anesthetic drug propofol every 20 seconds.

The scientists detail the new system and its performance in animal testing in a new open-access paper in the journal PNAS Nexus.

“One of the ways to improve anesthesia care is to give just the right amount of drug that’s needed,” says corresponding author Emery N. Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience at MIT and an anesthesiologist at MGH. “This opens up the opportunity to do that in a really controlled way.”

In the operating room, Brown monitors the brain state of his patients using electroencephalograms (EEGs). He frequently adjusts dosing based on that feedback, which can cut the amount of drug he uses by as much as half compared with simply picking a constant infusion rate and sticking with it. Nevertheless, the practice of maintaining dose, rather than consciousness level, is common because most anesthesiologists are not trained to track brain states and often don’t take time in the operating room to precisely manage dosing.

The new system is not the first closed-loop anesthesia delivery (CLAD) system, Brown says, but it advances the young field in critical ways. Some prior systems merely automate a single, stable infusion rate based on general patient characteristics like height, weight, and age but gather no feedback about the actual effect on unconsciousness, says Brown, who is also a member of the Institute for Medical Engineering and Science at MIT and the Warren Zapol Professor at Harvard Medical School. Others use a proprietary control system that maintains “black box” markers of unconsciousness that vary within a wide range.

The new CLAD system, developed by Brown and his team at the MIT and MGH Brain Arousal State Control Innovation Center (BASCIC), enables very precise management of unconsciousness by making a customized estimate of how doses will affect the subject and by measuring unconsciousness based on brain state. The system uses those measures as feedback to constantly adjust the drug dose.

In the paper, the team demonstrates that the system enabled more than 18 hours of fine-grained consciousness control over the course of nine anesthesia sessions with two animal subjects. Brown Lab research affiliate Sourish Chakravarty and Jacob Donoghue, a former graduate student from the lab of co-senior author and Picower Professor Earl K. Miller, are the paper's co-lead authors.

Though there is more work to do, the authors write, “We are highly optimistic that the CLAD framework we have established … can be successfully extended to humans.”

How it works

A foundation of the team's CLAD technology is that it employs a physiologically principled readout of unconsciousness from the brain (in the operating room, anesthesiologists typically rely on indirect markers such as heart rate, blood pressure, and immobility). The researchers established their brain-based marker by measuring changes in neural spiking activity amid unconsciousness in the animals and the larger-scale rhythms that spiking produces, called local field potentials (LFPs). By closely associating LFP power with spiking-based measures of unconsciousness in the animal subjects, they were able to determine that the total power of LFPs between 20 and 30 Hz is a reliable marker.
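
That marker boils down to a band-power computation, sketched here with NumPy on a toy LFP trace; the sample rate, windowing, and signal are assumptions rather than the authors’ exact pipeline.

    import numpy as np

    def band_power(lfp, fs, lo=20.0, hi=30.0):
        """Total spectral power of an LFP segment in the 20-30 Hz band."""
        spectrum = np.abs(np.fft.rfft(lfp * np.hanning(lfp.size))) ** 2
        freqs = np.fft.rfftfreq(lfp.size, 1.0 / fs)
        return spectrum[(freqs >= lo) & (freqs <= hi)].sum()

    fs = 1000.0                                # 1 kHz sampling (assumed)
    t = np.arange(0.0, 2.0, 1.0 / fs)
    lfp = np.sin(2 * np.pi * 25.0 * t) + 0.3 * np.random.randn(t.size)
    marker = band_power(lfp, fs)               # the unconsciousness readout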

The researchers also built into the system a physiologically principled model of the pharmacokinetics (PK) and pharmacodynamics (PD) of propofol, which determines how much drug is needed to alter consciousness and how fast a given dose will have that effect. In the study they show that by coupling the model with the unconsciousness marker they could quickly tune the model for each subject.

“With a few basic recordings of the LFPs as drug is administered you can quickly learn how the subject is responding to the drug,” Brown says.

To manage propofol dosing, every 20 seconds a “linear quadratic integral” controller determines the difference between the measured 20-30 Hz LFP power and the desired brain state (set by the anesthesiologist) and uses the PK/PD model to adjust the infusion of medicine to close the gap.
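
Putting the pieces together, the loop can be caricatured as below, with a one-compartment toy drug model and a simple proportional-integral rule standing in for the paper’s PK/PD model and linear quadratic integral controller; every gain and rate constant is assumed.

    import numpy as np

    dt = 20.0               # control interval, seconds
    k_elim = 0.01           # drug elimination rate (assumed)
    k_effect = 0.8          # marker units per unit concentration (assumed)
    kp, ki = 0.01, 0.0005   # controller gains (assumed)

    target = 1.0            # desired 20-30 Hz LFP power, set by clinician
    conc, infusion, integral = 0.0, 0.0, 0.0

    for step in range(240):                       # 80 minutes of control
        conc += dt * (infusion - k_elim * conc)   # toy pharmacokinetics
        marker = k_effect * conc + np.random.normal(0.0, 0.02)  # noisy readout
        error = target - marker
        integral += error * dt
        infusion = max(0.0, kp * error + ki * integral)  # no negative dosing

The real controller also relies on the customized, per-subject PK/PD estimate described above, rather than fixed gains.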

Initially the team ran computer simulations of how their CLAD system would work under realistic parameters, but then they performed nine 125-minute-long experiments with two animal subjects. They manually put the animals under and then let the CLAD system take over after about 30 minutes. In each case the CLAD had to bring the animals to a precise state of unconsciousness for 45 minutes, change to a different level for another 40 minutes, and then bring them back to the original level for 40 more minutes. In every session the system kept the marker very close to the goal levels throughout the duration of the testing. 

In other words, rather than a system that automatically maintains the drug dose, the new system automatically maintains the desired level of unconsciousness by updating that dose every 20 seconds.

“The common practice of using constant infusion rates can lead to overdosing,” the researchers wrote. “This observation is particularly relevant for elderly patients who at standard propofol infusion rates readily drift into burst suppression, a profound level of unconsciousness associated with post-operative cognitive disorders.”

Still to do

In the study the team acknowledges that they have more work to do to advance the technology for human use.

One needed step is basing the system on EEGs, which can be measured via the scalp. Along with that the team will need to determine a marker of unconsciousness based on EEG measurements of human brain rhythms, rather than animal LFPs. Finally, the team wants to extend the system’s capabilities so that it not only maintains unconsciousness, but also helps induce it and helps bring patients back to wakefulness.

In addition to Brown, Chakravarty, Donoghue, and Miller, the paper’s other authors are Ayan Waite, Meredith Mahnke, Indie Garwood, and Sebastian Gallo.

Funding for the study came from National Institutes of Health awards, the JPB Foundation, and the Picower Institute for Learning and Memory. Support for BASCIC comes from George J. Elbaum ’59, SM ’63, PhD ’67; Mimi Jensen; Diane B. Greene SM ’78; Mendel Rosenblum; Bill Swanson; and Cheryl Swanson.



----------------------------------------------------------------------------------------------------------------

Mon, 06 Nov 2023 13:00:00 -0500


Using AI to optimize for rapid neural imaging
Posted on Monday November 06, 2023


Category : Research

Author : Rachel Gordon | MIT CSAIL

MIT CSAIL researchers combine AI and electron microscopy to expedite detailed brain network mapping, aiming to enhance connectomics research and clinical pathology.


Read more about this article :

Connectomics, the ambitious field of study that seeks to map the intricate network of animal brains, is undergoing a growth spurt. Within the span of a decade, it has journeyed from its nascent stages to a discipline that is poised to (hopefully) unlock the enigmas of cognition and the physical underpinnings of neuropathologies such as Alzheimer’s disease.

At its forefront is the use of powerful electron microscopes, which researchers from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Samuel and Lichtman Labs of Harvard University have endowed with the analytical prowess of machine learning. Unlike traditional electron microscopy, the integrated AI serves as a “brain” that learns a specimen while acquiring the images, intelligently focusing on the relevant pixels at nanoscale resolution much as animals inspect their worlds.

The system, dubbed “SmartEM,” assists connectomics researchers in quickly examining and reconstructing the brain’s complex network of synapses and neurons with nanometer precision. Unlike traditional electron microscopy, its integrated AI opens new doors to understanding the brain's intricate architecture.

The integration of hardware and software in the process is crucial. The team embedded a GPU into the support computer connected to their microscope. This enabled running machine-learning models on the images, helping the microscope beam be directed to areas deemed interesting by the AI. “This lets the microscope dwell longer in areas that are harder to understand until it captures what it needs,” says MIT professor and CSAIL principal investigator Nir Shavit. “This step helps in mirroring human eye control, enabling rapid understanding of the images.” 
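
A minimal sketch of that rescanning loop, under stated assumptions: the microscope takes a fast pass, a learned model flags ambiguous pixels, and only those pixels get a slower, high-dwell pass. Here `segmenter` and `rescan` are stand-ins for the team's model and microscope interface, not real APIs.

    import numpy as np

    def adaptive_scan(fast_image, segmenter, rescan, uncertainty_thresh=0.25):
        """fast_image: 2D array from a quick, low-dwell pass.
        segmenter(img) -> per-pixel membrane probability in [0, 1] (stand-in).
        rescan(mask) -> full-size array re-imaged at high dwell (stand-in)."""
        prob = segmenter(fast_image)
        # Probabilities near 0.5 are ambiguous and worth more beam time.
        uncertainty = 1.0 - 2.0 * np.abs(prob - 0.5)
        mask = uncertainty > uncertainty_thresh
        slow_image = rescan(mask)                # dwell longer only here
        refined = np.where(mask, slow_image, fast_image)
        return refined, mask.mean()              # image + fraction re-imaged

    # Toy demo: a fake segmenter and a fake rescan on random data.
    rng = np.random.default_rng(0)
    img = rng.random((8, 8))
    refined, frac = adaptive_scan(img, lambda x: x, lambda m: img * 0.5)
    print(frac)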

“When we look at a human face, our eyes swiftly navigate to the focal points that deliver vital cues for effective communication and comprehension,” says the lead architect of SmartEM, Yaron Meirovitch, a visiting scientist at MIT CSAIL who is also a former postdoc and current research associate neuroscientist at Harvard. “When we immerse ourselves in a book, we don't scan all of the empty space; rather, we direct our gaze towards the words and characters with ambiguity relative to our sentence expectations. This phenomenon within the human visual system has paved the way for the birth of the novel microscope concept.” 

Reconstructing a human brain segment of about 100,000 neurons with a conventional microscope would necessitate a decade of continuous imaging and a prohibitive budget. With SmartEM, by investing in four of these innovative microscopes at less than $1 million each, the task could be completed in a mere three months.

Nobel Prizes and little worms  

Over a century ago, Spanish neuroscientist Santiago Ramón y Cajal was heralded as being the first to characterize the structure of the nervous system. Employing the rudimentary light microscopes of his time, he embarked on leading explorations into neuroscience, laying the foundational understanding of neurons and sketching the initial outlines of this expansive and uncharted realm — a feat that earned him a Nobel Prize. He noted, on the topics of inspiration and discovery, that “As long as our brain is a mystery, the universe, the reflection of the structure of the brain will also be a mystery.”

Progressing from these early stages, the field has advanced dramatically, evidenced by efforts in the 1980s to map the relatively simple connectome of C. elegans, a small worm, through today’s endeavors probing the more intricate brains of organisms like zebrafish and mice. This evolution reflects not only enormous strides, but also escalating complexities and demands: mapping the mouse brain alone means managing a staggering thousand petabytes of data, a task that vastly eclipses the storage capabilities of any university, the team says.

Testing the waters

For their own work, Meirovitch and others from the research team studied 30-nanometer thick slices of octopus tissue that were mounted on tapes, put on wafers, and finally inserted into the electron microscopes. Each section of an octopus brain, comprising billions of pixels, was imaged, letting the scientists reconstruct the slices into a three-dimensional cube at nanometer resolution. This provided an ultra-detailed view of synapses. The chief aim? To colorize these images, identify each neuron, and understand their interrelationships, thereby creating a detailed map or “connectome” of the brain's circuitry.
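
As a schematic of that reconstruction step, the sketch below stacks aligned serial sections into a 3D array; `align_to` is a trivial placeholder for a real slice-registration algorithm, and the array sizes are illustrative.

    import numpy as np

    def align_to(reference, moving):
        # Identity placeholder for a real slice-registration step.
        return moving

    def assemble_volume(slices):
        """Stack aligned 2D EM sections (here, 30 nm apart) into a 3D volume."""
        aligned = [slices[0]]
        for s in slices[1:]:
            aligned.append(align_to(aligned[-1], s))  # register to previous slice
        return np.stack(aligned, axis=0)  # shape: (n_slices, height, width)

    volume = assemble_volume([np.zeros((4, 4)) for _ in range(3)])
    print(volume.shape)  # (3, 4, 4)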

“SmartEM will cut the imaging time of such projects from two weeks to 1.5 days,” says Meirovitch. “Neuroscience labs that currently can't engage with expensive and lengthy EM imaging will be able to do it now.” The method should also allow synapse-level circuit analysis in samples from patients with psychiatric and neurologic disorders.

Down the line, the team envisions a future where connectomics is both affordable and accessible. They hope that with tools like SmartEM, a wider spectrum of research institutions could contribute to neuroscience without relying on large partnerships, and that the method will soon be a standard pipeline in cases where biopsies from living patients are available. Additionally, they’re eager to apply the tech to understand pathologies, extending utility beyond just connectomics. “We are now endeavoring to introduce this to hospitals for large biopsies, utilizing electron microscopes, aiming to make pathology studies more efficient,” says Shavit. 

Two other authors on the paper have MIT CSAIL ties: lead author Lu Mi MCS ’19, PhD ’22, who is now a postdoc at the Allen Institute for Brain Science, and Shashata Sawmya, an MIT graduate student in the lab. The other lead authors are Core Francisco Park and Pavel Potocek, while Harvard professors Jeff Lichtman and Aravi Samuel are additional senior authors. Their research was supported by the NIH BRAIN Initiative and was presented at the 2023 International Conference on Machine Learning (ICML) Workshop on Computational Biology. The work was done in collaboration with scientists from Thermo Fisher Scientific.



----------------------------------------------------------------------------------------------------------------

Fri, 03 Nov 2023 14:40:00 -0400


Reflecting on a decade of SuperUROP at MIT
Posted on Friday November 03, 2023


Category : Classes and programs

Author : Jane Halpern | Department of Electrical Engineering and Computer Science

Ten years after the founding of the undergraduate research program, its alumni reflect on the unexpected gifts of their experiences.


Read more about this article :

The Advanced Undergraduate Research Opportunities Program, or SuperUROP, is celebrating a significant milestone: 10 years of setting careers in motion.  

Originally mapped out by Dean Anantha Chandrakasan (then the head of the Department of Electrical Engineering and Computer Science), SuperUROP is designed to act as a launching pad for careers in research and industry, giving juniors and seniors an authentic — and authentically challenging — research experience. Students begin their year-long effort by identifying a project and building a relationship with a faculty member or senior research scientist, before spending many hours per week engaged in closely focused research on a specific question; writing a high-quality research paper and bringing it through the review process; and finally, presenting their findings at a scientific poster conference attended by key stakeholders, including faculty, peers, and generous supporters of the program.

Unlike most homework or exams, which usually have a highly structured result, SuperUROP research is frequently very open-ended, morphing into graduate theses, startup plans, or industry positions as students continue their work well past the semester’s close.

“Research, especially as an undergraduate, is always very challenging,” says Chelsea Finn '14, an alumna of SuperUROP who is now an assistant professor at Stanford University working on robotic interaction. “Doing research as an undergraduate student is the best way to get a flavor of the ambiguity, challenge, and thrill that comes from trying to solve problems that no one has solved before. SuperUROP is super useful for figuring out if a career in research is a good fit.”

Students come to SuperUROP to get ahead not only on research skills, but on the entrepreneurial skills they’ll need for careers in startups and industry. A SuperUROP scholar in 2015-16, Eric Dahlseng '17 went on to co-found Empo Health, a medical device company. “At its core, I think the SuperUROP program teaches undergraduates how to create things that don’t exist (whether that be processes, ideas, technologies, etc.) and share those creations with the world effectively,” says Dahlseng, whose company has introduced a device used to remotely monitor patients at risk of dangerous diabetic complications. “This is an important set of skills for research and academia, but also an immensely important set of skills for entrepreneurship.”

Dahlseng also found that SuperUROP stretched his communication abilities — “I took the communication portion of my SuperUROP very seriously,” recalls the entrepreneur, who received the Ilona Karmel Writing and Humanistic Studies Prize for Engineering Writing for the paper portion of his project. “As I advance in my career, and especially as Empo Health grows, the importance of good scientific communication is only expanding. I find my role focusing more and more on the communication pieces as I work on growing the team and establishing strong collaboration amongst everyone, sharing our learnings with key stakeholders, and highlighting what we’re creating for end customers and users.”

Luis Voloch '13, SM '15 can also testify to the power of SuperUROP to transform strong students into strong scientific communicators. When Voloch was enrolled in SuperUROP, in 2012-13, he investigated how sources of information, including viruses, can be concealed or revealed in computer networks. He is now the co-founder of Immunai, an AI-driven cancer immunotherapy biotech company based in New York City that employs over 140 people and develops technologies at the intersection of AI, genomics, big data, and immunology. In addition to his career at Immunai, Voloch lectures within the Stanford Graduate School of Business on management and entrepreneurship topics in data science and AI-heavy companies. In both roles, the communication skills he acquired during his SuperUROP experience help him connect with students. “In my SuperUROP, I started to learn how to do better scientific communications, which I built up further during my graduate research work and beyond. Communicating clearly is a core professional and research skill, and I’m thankful we got started with it that early.”

As careers change and grow, those core skills can flex to meet new challenges. Jennifer Madiedo '19, MNG '20, a senior software engineer in Industry Solutions Engineering at Microsoft, credits her SuperUROP experience with developing her skills in scientific communication and storytelling. “How do you introduce your work to someone who may understand the overarching concepts of your field but not all the details? How do you figure out what background work is relevant and pull it into a cohesive backstory? How do you explain your methodology without losing your reader in too many details? It's all about the communication; learning to communicate deeply technical ideas in a way peers can understand was a whole new challenge I hadn't really encountered before at MIT.” 

Madiedo started her career in a half-engineering, half-research natural language processing team at Microsoft, and now works directly with customers and their engineering teams to solve multifaceted problems. “I'm completely out of a lab setting now, but the skills I learned in undergraduate research truly form the bedrock of how I communicate with my teammates and peers.”

Again and again, the alumni of SuperUROP stress that communication — often regarded as a “soft” skill — was one of the most important abilities to be tested and developed by the program. “Communication is important in many areas, but is truly an essential part of science,” says Chelsea Finn, who balances her research and teaching responsibilities at Stanford with a role on the Google Brain team. “The ultimate outcome of science is knowledge, and that knowledge is not very useful if it is not communicated to others!” Finn credits much of her passion for science communication to the “infectious passion” of her SuperUROP advisor, the late Seth Teller: “Seth instilled in me the importance of conveying enthusiasm for things that I am excited about, especially when talking to students and mentees.”

With 10 years of enthusiastic alumni now engaged in groundbreaking work across many fields, that legacy of enthusiasm continues to pull new scientists into the lab, and new students into a productive year of critical thinking, communicating, and creating through SuperUROP.



----------------------------------------------------------------------------------------------------------------

Thu, 02 Nov 2023 16:25:00 -0400


Using language to give robots a better grasp of an open-ended world
Posted on Thursday November 02, 2023


Category : Research

Author : Alex Shipps | MIT CSAIL

By blending 2D images with foundation models to build 3D feature fields, a new MIT method helps robots understand and manipulate nearby objects with open-ended language prompts.


Read more about this article :

Imagine you’re visiting a friend abroad, and you look inside their fridge to see what would make for a great breakfast. Many of the items initially appear foreign to you, with each one encased in unfamiliar packaging and containers. Despite these visual distinctions, you begin to understand what each one is used for and pick them up as needed.

Inspired by humans' ability to handle unfamiliar objects, a group from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) designed Feature Fields for Robotic Manipulation (F3RM), a system that blends 2D images with foundation model features into 3D scenes to help robots identify and grasp nearby items. F3RM can interpret open-ended language prompts from humans, making the method helpful in real-world environments that contain thousands of objects, like warehouses and households.

F3RM offers robots the ability to interpret open-ended text prompts using natural language, helping the machines manipulate objects. As a result, the machines can understand less-specific requests from humans and still complete the desired task. For example, if a user asks the robot to “pick up a tall mug,” the robot can locate and grab the item that best fits that description.

“Making robots that can actually generalize in the real world is incredibly hard,” says Ge Yang, postdoc at the National Science Foundation AI Institute for Artificial Intelligence and Fundamental Interactions and MIT CSAIL. “We really want to figure out how to do that, so with this project, we try to push for an aggressive level of generalization, from just three or four objects to anything we find in MIT’s Stata Center. We wanted to learn how to make robots as flexible as ourselves, since we can grasp and place objects even though we've never seen them before.”

Learning “what’s where by looking”

The method could assist robots with picking items in large fulfillment centers with inevitable clutter and unpredictability. In these warehouses, robots are often given a description of the inventory that they're required to identify. The robots must match the text provided to an object, regardless of variations in packaging, so that customers’ orders are shipped correctly.

For example, the fulfillment centers of major online retailers can contain millions of items, many of which a robot will have never encountered before. To operate at such a scale, robots need to understand the geometry and semantics of different items, with some being in tight spaces. With F3RM’s advanced spatial and semantic perception abilities, a robot could become more effective at locating an object, placing it in a bin, and then sending it along for packaging. Ultimately, this would help factory workers ship customers’ orders more efficiently.

“One thing that often surprises people with F3RM is that the same system also works on a room and building scale, and can be used to build simulation environments for robot learning and large maps,” says Yang. “But before we scale up this work further, we want to first make this system work really fast. This way, we can use this type of representation for more dynamic robotic control tasks, hopefully in real-time, so that robots that handle more dynamic tasks can use it for perception.”

The MIT team notes that F3RM’s ability to understand different scenes could make it useful in urban and household environments. For example, the approach could help personalized robots identify and pick up specific items. The system aids robots in grasping their surroundings — both physically and perceptively.

“Visual perception was defined by David Marr as the problem of knowing ‘what is where by looking,’” says senior author Phillip Isola, MIT associate professor of electrical engineering and computer science and CSAIL principal investigator. “Recent foundation models have gotten really good at knowing what they are looking at; they can recognize thousands of object categories and provide detailed text descriptions of images. At the same time, radiance fields have gotten really good at representing where stuff is in a scene. The combination of these two approaches can create a representation of what is where in 3D, and what our work shows is that this combination is especially useful for robotic tasks, which require manipulating objects in 3D.”

Creating a “digital twin”

F3RM begins to understand its surroundings by taking pictures with a camera mounted on a selfie stick. The camera snaps 50 images at different poses, enabling the system to build a neural radiance field (NeRF), a deep learning method that uses 2D images to construct a 3D scene. This collage of RGB photos creates a “digital twin” of the robot’s surroundings in the form of a 360-degree representation of what’s nearby.

In addition to a highly detailed neural radiance field, F3RM also builds a feature field to augment geometry with semantic information. The system uses CLIP, a vision foundation model trained on hundreds of millions of images to efficiently learn visual concepts. By reconstructing the 2D CLIP features for the images taken by the selfie stick, F3RM effectively lifts the 2D features into a 3D representation.
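
A heavily simplified sketch of that lifting step: instead of F3RM's NeRF-based feature distillation, it splats per-image CLIP feature maps into a coarse voxel grid using known camera projections and averages them, then scores voxels against a text query. The helpers `clip_image_features`, `clip_text_feature`, and `project` are hypothetical stand-ins for a CLIP backbone and camera geometry.

    import numpy as np

    def build_feature_field(images, poses, grid=(32, 32, 32), dim=512):
        field = np.zeros(grid + (dim,))
        counts = np.zeros(grid)
        for img, pose in zip(images, poses):
            feats = clip_image_features(img)  # (H, W, dim), hypothetical helper
            # project() yields (voxel index, pixel index) correspondences.
            for (i, j, k), (u, v) in project(pose, grid, feats.shape[:2]):
                field[i, j, k] += feats[u, v]
                counts[i, j, k] += 1
        return field / np.maximum(counts[..., None], 1)  # mean feature per voxel

    def text_similarity(field, text):
        t = clip_text_feature(text)  # (dim,), unit-norm, hypothetical helper
        norms = np.maximum(np.linalg.norm(field, axis=-1, keepdims=True), 1e-8)
        return (field / norms) @ t   # cosine similarity, one score per voxel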

Keeping things open-ended

After receiving a few demonstrations, the robot applies what it knows about geometry and semantics to grasp objects it has never encountered before. Once a user submits a text query, the robot searches through the space of possible grasps to identify those most likely to succeed in picking up the object requested by the user. Each potential option is scored based on its relevance to the prompt, similarity to the demonstrations the robot has been trained on, and if it causes any collisions. The highest-scored grasp is then chosen and executed.
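
A minimal sketch of that ranking logic follows; the weights, feature dictionary, and collision penalty are assumptions for illustration, not F3RM's actual scoring function.

    import numpy as np

    def rank_grasps(candidates, text_embed, demo_embeds,
                    w_text=1.0, w_demo=1.0, collision_penalty=1e6):
        """Each candidate: {"feature": np.ndarray, "collides": bool}."""
        scores = []
        for g in candidates:
            f = g["feature"] / np.linalg.norm(g["feature"])
            relevance = float(f @ text_embed)                    # match the prompt
            similarity = max(float(f @ d) for d in demo_embeds)  # match a demo
            penalty = collision_penalty if g["collides"] else 0.0
            scores.append(w_text * relevance + w_demo * similarity - penalty)
        return candidates[int(np.argmax(scores))]  # top-scoring grasp is executed

    # Toy usage with 2D features (real features come from the 3D field).
    demos = [np.array([1.0, 0.0])]
    text = np.array([0.7, 0.7])
    cands = [{"feature": np.array([1.0, 0.2]), "collides": False},
             {"feature": np.array([0.1, 1.0]), "collides": True}]
    print(rank_grasps(cands, text, demos))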

To demonstrate the system’s ability to interpret open-ended requests from humans, the researchers prompted the robot to pick up Baymax, a character from Disney’s “Big Hero 6.” While F3RM had never been directly trained to pick up a toy of the cartoon superhero, the robot used its spatial awareness and vision-language features from the foundation models to decide which object to grasp and how to pick it up.

F3RM also enables users to specify which object they want the robot to handle at different levels of linguistic detail. For example, if there is a metal mug and a glass mug, the user can ask the robot for the “glass mug.” If the bot sees two glass mugs and one of them is filled with coffee and the other with juice, the user can ask for the “glass mug with coffee.” The foundation model features embedded within the feature field enable this level of open-ended understanding.

“If I showed a person how to pick up a mug by the lip, they could easily transfer that knowledge to pick up objects with similar geometries such as bowls, measuring beakers, or even rolls of tape. For robots, achieving this level of adaptability has been quite challenging,” says MIT PhD student, CSAIL affiliate, and co-lead author William Shen. “F3RM combines geometric understanding with semantics from foundation models trained on internet-scale data to enable this level of aggressive generalization from just a small number of demonstrations.”

Shen and Yang wrote the paper under the supervision of Isola, with MIT professor and CSAIL principal investigator Leslie Pack Kaelbling and undergraduate students Alan Yu and Jansen Wong as co-authors. The team was supported, in part, by Amazon.com Services, the National Science Foundation AI Institute for Artificial Intelligence and Fundamental Interactions, the Air Force Office of Scientific Research, the Office of Naval Research’s Multidisciplinary University Research Initiative, the Army Research Office, the MIT-IBM Watson AI Lab, and the MIT Quest for Intelligence. Their work will be presented at the 2023 Conference on Robot Learning.



----------------------------------------------------------------------------------------------------------------

Thu, 02 Nov 2023 15:50:00 -0400


2023-24 Takeda Fellows: Advancing research at the intersection of AI and health
Posted on Thursday November 02, 2023


Category : Awards, honors and fellowships

Author : School of Engineering

Thirteen new graduate student fellows will pursue exciting new paths of knowledge and discovery.


Read more about this article :

The School of Engineering has selected 13 new Takeda Fellows for the 2023-24 academic year. With support from Takeda, the graduate students will conduct pathbreaking research ranging from remote health monitoring for virtual clinical trials to ingestible devices for at-home, long-term diagnostics.

Now in its fourth year, the MIT-Takeda Program, a collaboration between MIT’s School of Engineering and Takeda, fuels the development and application of artificial intelligence capabilities to benefit human health and drug development. Part of the Abdul Latif Jameel Clinic for Machine Learning in Health, the program coalesces disparate disciplines, merges theory and practical implementation, combines algorithm and hardware innovations, and creates multidimensional collaborations between academia and industry.

The 2023-24 Takeda Fellows are:

Adam Gierlach

Adam Gierlach is a PhD candidate in the Department of Electrical Engineering and Computer Science. Gierlach’s work combines innovative biotechnology with machine learning to create ingestible devices for advanced diagnostics and delivery of therapeutics. In his previous work, Gierlach developed a non-invasive, ingestible device for long-term gastric recordings in free-moving patients. With the support of a Takeda Fellowship, he will build on this pathbreaking work by developing smart, energy-efficient, ingestible devices powered by application-specific integrated circuits for at-home, long-term diagnostics. These revolutionary devices — capable of identifying, characterizing, and even correcting gastrointestinal diseases — represent the leading edge of biotechnology. Gierlach’s innovative contributions will help to advance fundamental research on the enteric nervous system and help develop a better understanding of gut-brain axis dysfunctions in Parkinson’s disease, autism spectrum disorder, and other prevalent disorders and conditions.

Vivek Gopalakrishnan

Vivek Gopalakrishnan is a PhD candidate in the Harvard-MIT Program in Health Sciences and Technology. Gopalakrishnan’s goal is to develop biomedical machine-learning methods to improve the study and treatment of human disease. Specifically, he employs computational modeling to advance new approaches for minimally invasive, image-guided neurosurgery, offering a safe alternative to open brain and spinal procedures. With the support of a Takeda Fellowship, Gopalakrishnan will develop real-time computer vision algorithms that deliver high-quality, 3D intraoperative image guidance by extracting and fusing information from multimodal neuroimaging data. These algorithms could allow surgeons to reconstruct 3D neurovasculature from X-ray angiography, thereby enhancing the precision of device deployment and enabling more accurate localization of healthy versus pathologic anatomy.

Hao He

Hao He is a PhD candidate in the Department of Electrical Engineering and Computer Science. His research interests lie at the intersection of generative AI, machine learning, and their applications in medicine and human health, with a particular emphasis on passive, continuous, remote health monitoring to support virtual clinical trials and health-care management. More specifically, He aims to develop trustworthy AI models that promote equitable access and deliver fair performance independent of race, gender, and age. In his past work, He has developed monitoring systems applied in clinical studies of Parkinson’s disease, Alzheimer’s disease, and epilepsy. Supported by a Takeda Fellowship, He will develop a novel technology for the passive monitoring of sleep stages (using radio signaling) that seeks to address existing gaps in performance across different demographic groups. His project will tackle the problem of imbalance in available datasets and account for intrinsic differences across subpopulations, using generative AI and multi-modality/multi-domain learning, with the goal of learning robust features that are invariant to different subpopulations. He’s work holds great promise for delivering advanced, equitable health-care services to all people and could significantly impact health care and AI.

Chengyi Long

Chengyi Long is a PhD candidate in the Department of Civil and Environmental Engineering. Long’s interdisciplinary research integrates the methodology of physics, mathematics, and computer science to investigate questions in ecology. Specifically, Long is developing a series of potentially groundbreaking techniques to explain and predict the temporal dynamics of ecological systems, including human microbiota, which are essential subjects in health and medical research. His current work, supported by a Takeda Fellowship, is focused on developing a conceptual, mathematical, and practical framework to understand the interplay between external perturbations and internal community dynamics in microbial systems, which may serve as a key step toward finding bio solutions to health management. A broader perspective of his research is to develop AI-assisted platforms to anticipate the changing behavior of microbial systems, which may help to differentiate between healthy and unhealthy hosts and design probiotics for the prevention and mitigation of pathogen infections. By creating novel methods to address these issues, Long’s research has the potential to offer powerful contributions to medicine and global health.

Omar Mohd

Omar Mohd is a PhD candidate in the Department of Electrical Engineering and Computer Science. Mohd’s research is focused on developing new technologies for the spatial profiling of microRNAs, with potentially important applications in cancer research. Through innovative combinations of micro-technologies and AI-enabled image analysis to measure the spatial variations of microRNAs within tissue samples, Mohd hopes to gain new insights into drug resistance in cancer. This work, supported by a Takeda Fellowship, falls within the emerging field of spatial transcriptomics, which seeks to understand cancer and other diseases by examining the relative locations of cells and their contents within tissues. The ultimate goal of Mohd’s current project is to find multidimensional patterns in tissues that may have prognostic value for cancer patients. One valuable component of his work is an open-source AI program developed with collaborators at Beth Israel Deaconess Medical Center and Harvard Medical School to auto-detect cancer epithelial cells from other cell types in a tissue sample and to correlate their abundance with the spatial variations of microRNAs. Through his research, Mohd is making innovative contributions at the interface of microsystem technology, AI-based image analysis, and cancer treatment, which could significantly impact medicine and human health.

Sanghyun Park

Sanghyun Park is a PhD candidate in the Department of Mechanical Engineering. Park specializes in the integration of AI and biomedical engineering to address complex challenges in human health. Drawing on his expertise in polymer physics, drug delivery, and rheology, his research focuses on the pioneering field of in-situ forming implants (ISFIs) for drug delivery. Supported by a Takeda Fellowship, Park is currently developing an injectable formulation designed for long-term drug delivery. The primary goal of his research is to unravel the compaction mechanism of drug particles in ISFI formulations through comprehensive modeling and in-vitro characterization studies utilizing advanced AI tools. He aims to gain a thorough understanding of this unique compaction mechanism and apply it to drug microcrystals to achieve properties optimal for long-term drug delivery. Beyond these fundamental studies, Park's research also focuses on translating this knowledge into practical applications in a clinical setting through animal studies specifically aimed at extending drug release duration and improving mechanical properties. The innovative use of AI in developing advanced drug delivery systems, coupled with Park's valuable insights into the compaction mechanism, could contribute to improving long-term drug delivery. This work has the potential to pave the way for effective management of chronic diseases, benefiting patients, clinicians, and the pharmaceutical industry.

Huaiyao Peng

Huaiyao Peng is a PhD candidate in the Department of Biological Engineering. Peng’s research interests are focused on engineered tissue, microfabrication platforms, cancer metastasis, and the tumor microenvironment. Specifically, she is advancing novel AI techniques for the development of pre-cancer organoid models of high-grade serous ovarian cancer (HGSOC), an especially lethal and difficult-to-treat cancer, with the goal of gaining new insights into progression and effective treatments. Peng’s project, supported by a Takeda Fellowship, will be one of the first to use cells from serous tubal intraepithelial carcinoma lesions found in the fallopian tubes of many HGSOC patients. By examining the cellular and molecular changes that occur in response to treatment with small molecule inhibitors, she hopes to identify potential biomarkers and promising therapeutic targets for HGSOC, including personalized treatment options for HGSOC patients, ultimately improving their clinical outcomes. Peng’s work has the potential to bring about important advances in cancer treatment and spur innovative new applications of AI in health care. 

Priyanka Raghavan

Priyanka Raghavan is a PhD candidate in the Department of Chemical Engineering. Raghavan’s research interests lie at the frontier of predictive chemistry, integrating computational and experimental approaches to build powerful new predictive tools for societally important applications, including drug discovery. Specifically, Raghavan is developing novel models to predict small-molecule substrate reactivity and compatibility in regimes where little data is available (the most realistic regimes). A Takeda Fellowship will enable Raghavan to push the boundaries of her research, making innovative use of low-data and multi-task machine learning approaches, synthetic chemistry, and robotic laboratory automation, with the goal of creating an autonomous, closed-loop system for the discovery of high-yielding organic small molecules in the context of underexplored reactions. Raghavan’s work aims to identify new, versatile reactions to broaden a chemist’s synthetic toolbox with novel scaffolds and substrates that could form the basis of essential drugs. Her work has the potential for far-reaching impacts in early-stage, small-molecule discovery and could help make the lengthy drug-discovery process significantly faster and cheaper.

Zhiye Song

Zhiye “Zoey” Song is a PhD candidate in the Department of Electrical Engineering and Computer Science. Song’s research integrates cutting-edge approaches in machine learning (ML) and hardware optimization to create next-generation, wearable medical devices. Specifically, Song is developing novel approaches for the energy-efficient implementation of ML computation in low-power medical devices, including a wearable ultrasound “patch” that captures and processes images for real-time decision-making capabilities. Her recent work, conducted in collaboration with clinicians, has centered on bladder volume monitoring; other potential applications include blood pressure monitoring, muscle diagnosis, and neuromodulation. With the support of a Takeda Fellowship, Song will build on that promising work and pursue key improvements to existing wearable device technologies, including developing low-compute and low-memory ML algorithms and low-power chips to enable ML on smart wearable devices. The technologies emerging from Song’s research could offer exciting new capabilities in health care, enabling powerful and cost-effective point-of-care diagnostics and expanding individual access to autonomous and continuous medical monitoring.

Peiqi Wang

Peiqi Wang is a PhD candidate in the Department of Electrical Engineering and Computer Science. Wang’s research aims to develop machine learning methods for learning and interpretation from medical images and associated clinical data to support clinical decision-making. He is developing a multimodal representation learning approach that aligns knowledge captured in large amounts of medical image and text data to transfer this knowledge to new tasks and applications. Supported by a Takeda Fellowship, Wang will advance this promising line of work to build robust tools that interpret images, learn from sparse human feedback, and reason like doctors, with potentially major benefits to important stakeholders in health care.

Oscar Wu

Haoyang “Oscar” Wu is a PhD candidate in the Department of Chemical Engineering. Wu’s research integrates quantum chemistry and deep learning methods to accelerate the process of small-molecule screening in the development of new drugs. By identifying and automating reliable methods for finding transition state geometries and calculating barrier heights for new reactions, Wu’s work could make it possible to conduct the high-throughput ab initio calculations of reaction rates needed to screen the reactivity of large numbers of active pharmaceutical ingredients (APIs). A Takeda Fellowship will support his current project to: (1) develop open-source software for high-throughput quantum chemistry calculations, focusing on the reactivity of drug-like molecules, and (2) develop deep learning models that can quantitatively predict the oxidative stability of APIs. The tools and insights resulting from Wu’s research could help to transform and accelerate the drug-discovery process, offering significant benefits to the pharmaceutical and medical fields and to patients.

Soojung Yang

Soojung Yang is a PhD candidate in the Department of Materials Science and Engineering. Yang’s research applies cutting-edge methods in geometric deep learning and generative modeling, along with atomistic simulations, to better understand and model protein dynamics. Specifically, Yang is developing novel tools in generative AI to explore protein conformational landscapes that offer greater speed and detail than physics-based simulations at a substantially lower cost. With the support of a Takeda Fellowship, she will build upon her successful work on the reverse transformation of coarse-grained proteins to the all-atom resolution, aiming to build machine-learning models that bridge multiple size scales of protein conformation diversity (all-atom, residue-level, and domain-level). Yang’s research holds the potential to provide a powerful and widely applicable new tool for researchers who seek to understand the complex protein functions at work in human diseases and to design drugs to treat and cure those diseases.

Yuzhe Yang

Yuzhe Yang is a PhD candidate in the Department of Electrical Engineering and Computer Science. Yang’s research interests lie at the intersection of machine learning and health care. In his past and current work, Yang has developed and applied innovative machine-learning models that address key challenges in disease diagnosis and tracking. His many notable achievements include the creation of one of the first machine learning-based solutions using nocturnal breathing signals to detect Parkinson’s disease (PD), estimate disease severity, and track PD progression. With the support of a Takeda Fellowship, Yang will expand this promising work to develop an AI-based diagnosis model for Alzheimer’s disease (AD) using sleep-breathing data that is significantly more reliable, flexible, and economical than current diagnostic tools. This passive, in-home, contactless monitoring system — resembling a simple home Wi-Fi router — will also enable remote disease assessment and continuous progression tracking. Yang’s groundbreaking work has the potential to advance the diagnosis and treatment of prevalent diseases like PD and AD, and it offers exciting possibilities for addressing many health challenges with reliable, affordable machine-learning tools. 



----------------------------------------------------------------------------------------------------------------

Thu, 02 Nov 2023 00:00:00 -0400


In online news, do mouse clicks speak louder than words?
Posted on Thursday November 02, 2023


Category : Political science

Author : Peter Dizikes | MIT News

Partisan media might deepen political polarization, but we should measure people’s media habits more carefully before drawing conclusions, researchers say.


Read more about this article :

In a polarized country, how much does the media influence people’s political views? A new study co-authored by MIT scholars finds the answer depends on people’s media preferences — and, crucially, how these preferences are measured.

The researchers combined a large online survey experiment with web-tracking data that recorded all of the news sites participants visited in the month before the study. They found that the media preferences individuals reported in the survey generally mirrored their real-world news consumption, but important differences stood out.  

First, there was substantial variation in the actual news consumption habits of participants who reported identical media preferences, suggesting that survey-based measures may not fully capture the variance in individuals’ experiences. Additionally, people with divergent media preferences in the survey often visited similar online news outlets. These findings challenge common assumptions about the polarized nature of Americans’ media habits and raise questions about the use of survey data when studying the effects of political media.

“There’s good reason to think that the information people report in surveys may not be a perfect representation of their actual media habits,” says Chloe Wittenberg PhD ’23, a postdoc in the MIT Department of Political Science and co-author of a new paper detailing the results.

The open-access paper, “Media Measurement Matters: Estimating the Persuasive Effects of Partisan Media with Survey and Behavioral Data,” appears in the Journal of Politics. The authors are Wittenberg; Matthew A. Baum, a professor at the Harvard Kennedy School; Adam Berinsky, MIT’s Mitsui Professor of Political Science and director of the MIT Political Experiments Research Lab; Justin de Benedictis-Kessner, an assistant professor of public policy at the Harvard Kennedy School; and Teppei Yamamoto, a professor of political science and director of MIT’s Political Methodology Lab.

Stated and revealed preferences

The study was motivated by a split within some academic research. Some scholars believe existing polarization produces highly partisan media consumption; others think partisan media sources influence citizens to adopt more polarized views. But few have measured both self-selection of media and its persuasive effects at the same time — using both survey and behavioral data.

To conduct the experiment, the researchers contracted with the media analytics company comScore to recruit a diverse sample of American adults in 2018. ComScore then combined survey responses from over 3,300 of these participants with detailed information about their web-browsing history in the month prior to the study.

“In this study, we adopted a novel experimental design called the Preference-Incorporating Choice and Assignment design — or the PICA design — which we invented and derived a formal statistical framework for in an earlier work,” Yamamoto says. “The PICA design was a perfect fit for the study, given its objectives.”

In the first part of the experiment, participants were asked to report their media preferences, including the quantity and type of news they like to read. In the second part, participants were assigned to one of two groups. The first group could select which type of media — Fox News, MSNBC, or an entertainment option — they wanted to read, whereas the second group was required to view articles from one of these three sources. This approach enabled the researchers to assess both how individuals’ stated preferences in the survey compared to their online news consumption, and how persuasive partisan media can be to different sets of consumers.
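
Schematically, the two arms differ only in who picks the outlet. The snippet below is purely illustrative of that split, not the authors' implementation or their formal PICA estimators.

    import random

    OUTLETS = ["Fox News", "MSNBC", "Entertainment"]

    def expose(participant, free_choice_arm):
        if free_choice_arm:
            # Self-selection: the choice itself is data about preferences.
            return participant["stated_preference"]
        return random.choice(OUTLETS)  # forced assignment for causal comparison

    subject = {"stated_preference": "MSNBC"}
    print(expose(subject, free_choice_arm=random.random() < 0.5))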

Overall, the study revealed differences in the persuasiveness of partisan media across news audiences. When examining the volume of news that participants consumed, the authors found that people who generally visited fewer news sites, relative to entertainment sites, tended to be more readily persuaded by partisan media.

However, when they looked at the political slant of participants’ news consumption, the authors observed a small but striking deviation between their survey and behavioral measures of media preferences. At one end, the results based on survey data suggested that members of the public may be receptive to information from ideologically opposed sources. In contrast, the results based on web-browsing data showed that people with more extreme media diets are persuaded primarily by outlets with which they already agree.

“Together, these results suggest that inferences about media polarization may depend heavily on how individuals’ media preferences are measured,” the authors state in the paper.

“Our results affirm the value of harnessing real-world data to study political media,” adds de Benedictis-Kessner. “Precise measurement of people’s behavior in online news environments is difficult, but it is important to confront these measurement challenges due to the different conclusions that can arise about the dangers of political polarization.”

Extending the research

As the scholars acknowledge, there are necessarily some questions left open by their work. For one, the current study focused on providing media content related to education policy, including issues such as school choice and charter schools. While education is a prominent issue for many citizens, it is an area that tends not to display as much polarization as some other topics in American life. It is possible that studies involving other political issues might reveal different dynamics.

“An interesting extension for this work would be to look at different issue areas, some of which might be more polarized than education,” Wittenberg says.

She adds: “I hope the field can move toward testing a broader array of measures to see how they cohere, and I think there’s going to be a lot of interesting and actionable insights. Our goal is not to say, ‘Here is a perfect measure you should go out and use.’ It’s to nudge people to think about how they are measuring these preferences.”

Support for the research was provided by the National Science Foundation.



----------------------------------------------------------------------------------------------------------------

Thu, 02 Nov 2023 00:00:00 -0400


How “blue” and “green” appeared in a language that didn’t have words for them
Posted on Thursday November 02, 2023


Category : Research

Author : Anne Trafton | MIT News

People of a remote Amazonian society who learned Spanish as a second language began to interpret colors in a new way, an MIT study has found.


Read more about this article :

The human eye can perceive about 1 million colors, but languages have far fewer words to describe those colors. So-called basic color terms, single color words used frequently by speakers of a given language, are often employed to gauge how languages differ in their handling of color. Languages spoken in industrialized nations such as the United States, for example, tend to have about a dozen basic color terms, while languages spoken by more isolated populations often have fewer.

However, the way that a language divides up color space can be influenced by contact with other languages, according to a new study from MIT.

Among members of the Tsimane’ society, who live in a remote part of the Bolivian Amazon rainforest, the researchers found that those who had learned Spanish as a second language began to classify colors into more words, making color distinctions that are not commonly used by Tsimane’ who are monolingual.

In the most striking finding, Tsimane’ who were bilingual began using two different words to describe blue and green, which monolingual Tsimane’ speakers do not typically do. And, instead of borrowing Spanish words for blue and green, they repurposed words from their own language to describe those colors.

“Learning a second language enables you to understand these concepts that you didn’t have in your first language,” says Edward Gibson, an MIT professor of brain and cognitive sciences and the senior author of the study. “What’s also interesting is they used their own Tsimane’ terms to start dividing up the color space more like Spanish does.”

The researchers also found that the bilingual Tsimane’ became more precise in describing colors such as yellow and red, which monolingual speakers tend to use to encompass many shades beyond what a Spanish or English speaker would include.

“It’s a great example of one of the main benefits of learning a second language, which is that you open a different worldview and different concepts that then you can import to your native language,” says Saima Malik-Moraleda, a graduate student in the Speech and Hearing Bioscience and Technology Program at Harvard University and the lead author of the study.

Kyle Mahowald, an assistant professor of linguistics at the University of Texas at Austin, and Bevil Conway, a senior investigator at the National Eye Institute, are also authors of the paper, which appears this week in Psychological Science.

Dividing up the color space

In English and many other languages of industrialized nations, there are basic color words corresponding to black, white, red, orange, yellow, green, blue, purple, brown, pink, and gray. South American Spanish additionally divides the blue space into light blue (“celeste”) and dark blue (“azul”).

Members of Tsimane’ society consistently use only three color words, which correspond to black, white, and red. There are also a handful of words that encompass many shades of yellow or brown, as well as two words that are used interchangeably to mean either green or blue. However, these words are not used by everyone in the population.

Several years ago, Gibson and others reported that in a study of more than 100 languages, including Tsimane’, speakers tend to divide the “warm” part of the color spectrum into more color words than the “cooler” regions, which include blue and green. In the Tsimane’ language, two words, “shandyes” and “yushñus,” are used interchangeably for any hue that falls within blue or green.

As a follow-up to that study, Malik-Moraleda wanted to explore whether learning a second language would have any effect on how the Tsimane’ use color words. Today, many Tsimane’ learn Bolivian Spanish as a second language.

Working with monolingual and bilingual members of the Tsimane’, the researchers asked participants to perform two different tasks. Bilingual participants did the tasks twice, once in Tsimane’ and once in Spanish.

In the first task, the researchers showed the subjects 84 chips of different colors, one by one, and asked them what word they would use to describe the color. In the second task, the subjects were shown the entire set of chips and asked to group the chips by color word.
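
As a concrete illustration, the snippet below tabulates toy naming data from such a chip task (a hypothetical analysis, not the study's code), showing how a consistent word-to-chip mapping reveals the blue/green split reported below.

    from collections import Counter

    def summarize(naming):
        """naming: dict mapping chip id -> the color word the speaker gave."""
        terms = Counter(naming.values())
        return {"distinct_terms": len(terms), "counts": dict(terms)}

    # Toy data: chips 1-2 are blue, chip 3 is green.
    monolingual = {1: "shandyes", 2: "yushñus", 3: "shandyes"}  # terms interchangeable
    bilingual = {1: "yushñus", 2: "yushñus", 3: "shandyes"}     # blue/green split
    print(summarize(monolingual), summarize(bilingual))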

The researchers found that when performing this task in Spanish, the bilingual Tsimane’ classified colors into the traditional color words of the Spanish language. Additionally, the bilingual speakers were much more precise about naming colors when they performed the task in their native language.

“Remarkably, the bilinguals really divide up the space much more than the monolinguals, in spite of the fact that they’re still primarily Tsimane’ speakers,” Gibson says.

Strikingly, the bilingual Tsimane’ also began using separate words for blue and green, even though their native language does not distinguish those colors. Bilingual Tsimane’ speakers began to use “yushñus” exclusively to describe blue, and “shandyes” exclusively to describe green.

Borrowing concepts

The findings suggest that contact between languages can influence how people think about concepts such as color, the researchers say.

“It does seem like the concepts are being borrowed from Spanish,” Gibson says. “The bilingual speakers learn a different way to divide up the color space, which is pretty useful if you’re dealing with the industrialized world. It’s useful to be able to label colors that way, and somehow they import some of that into the Tsimane’ meaning space.”

While the researchers observed that the distinctions between blue and green appeared only in Tsimane’ who had learned Spanish, they say it’s possible that this usage could spread within the population so that monolingual Tsimane’ also start to use it. Another possibility, which they believe is more likely, is that more of the population will become bilingual, as they have more contact with the Spanish-speaking villages nearby.

“Over time, these populations tend to learn whatever the dominant outside language is because it’s valuable for getting jobs where you earn money,” Gibson says.

The researchers now hope to study whether other concepts, such as frames of reference for time, may spread from Spanish to Tsimane’ speakers who become bilingual. Malik-Moraleda also hopes to see if the color language findings from this study can be replicated in other remote populations, specifically the Gujjar, a nomadic community living in the Himalayan mountains of Kashmir.

The research was funded by a La Caixa Fellowship, the Dingwall Foundation, the Intramural Research Program of the National Eye Institute, and the National Science Foundation CompCog Program.


 

 
