The system could be used for battery-free underwater communication across kilometer-scale distances, to aid monitoring of climate and coastal change.
MIT researchers have demonstrated the first system for ultra-low-power underwater networking and communication, which can transmit signals across kilometer-scale distances.
This technique, which the researchers began developing several years ago, uses about one-millionth the power that existing underwater communication methods use. By expanding their battery-free system’s communication range, the researchers have made the technology more feasible for applications such as aquaculture, coastal hurricane prediction, and climate change modeling.
“What started as a very exciting intellectual idea a few years ago — underwater communication with a million times lower power — is now practical and realistic. There are still a few interesting technical challenges to address, but there is a clear path from where we are now to deployment,” says Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science and director of the Signal Kinetics group in the MIT Media Lab.
Underwater backscatter enables low-power communication: a device encodes data in sound waves that it reflects, or scatters, back toward a receiver. The researchers’ innovations enable these reflected signals to be directed more precisely back at their source.
Due to this “retrodirectivity,” less signal scatters in the wrong directions, allowing for more efficient and longer-range communication.
When tested in a river and an ocean, the retrodirective device exhibited a communication range that was more than 15 times farther than previous devices. However, the experiments were limited by the length of the docks available to the researchers.
To better understand the limits of underwater backscatter, the team also developed an analytical model to predict the technology’s maximum range. The model, which they validated using experimental data, showed that their retrodirective system could communicate across kilometer-scale distances.
The researchers shared these findings in two papers which will be presented at this year’s ACM SIGCOMM and MobiCom conferences. Adib, senior author on both papers, is joined on the SIGCOMM paper by co-lead authors Aline Eid, a former postdoc who is now an assistant professor at the University of Michigan, and Jack Rademacher, a research assistant; as well as research assistants Waleed Akbar and Purui Wang, and postdoc Ahmed Allam. The MobiCom paper is also written by co-lead authors Akbar and Allam.
Communicating with sound waves
Underwater backscatter communication devices utilize an array of nodes made from “piezoelectric” materials to receive and reflect sound waves. These materials produce an electric signal when mechanical force is applied to them.
When sound waves strike the nodes, they vibrate and convert the mechanical energy to an electric charge. The nodes use that charge to scatter some of the acoustic energy back to the source, transmitting data that a receiver decodes based on the sequence of reflections.
But because the backscattered signal travels in all directions, only a small fraction reaches the source, reducing the signal strength and limiting the communication range.
To overcome this challenge, the researchers leveraged a 70-year-old radio device called a Van Atta array, in which symmetric pairs of antennas are connected in such a way that the array reflects energy back in the direction it came from.
But connecting piezoelectric nodes to make a Van Atta array reduces their efficiency. The researchers avoided this problem by placing a transformer between pairs of connected nodes. The transformer, which transfers electric energy from one circuit to another, allows the nodes to reflect the maximum amount of energy back to the source.
“Both nodes are receiving and both nodes are reflecting, so it is a very interesting system. As you increase the number of elements in that system, you build an array that allows you to achieve much longer communication ranges,” Eid explains.
In addition, they used a technique called cross-polarity switching to encode binary data in the reflected signal. Each node has a positive and a negative terminal (like a car battery); when the positive terminals of two nodes are connected to each other, and likewise the negative terminals, the reflected signal is a “bit one.”
But if the researchers switch the polarity, and the negative and positive terminals are connected to each other instead, then the reflection is a “bit zero.”
“Just connecting the piezoelectric nodes together is not enough. By alternating the polarities between the two nodes, we are able to transmit data back to the remote receiver,” Rademacher explains.
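To make the encoding concrete, the snippet below is a minimal, purely illustrative simulation of cross-polarity switching, not the team’s code: swapping which terminals are connected inverts the polarity, and hence the phase, of the reflected carrier, and a receiver recovers each bit by correlating against the known carrier. The sampling rate, carrier frequency, and symbol duration are invented placeholders.

```python
import numpy as np

# Sketch (not the authors' implementation): cross-polarity switching
# modeled as a phase flip of the reflected acoustic carrier. A "bit one"
# reflects the carrier as-is; a "bit zero" reflects it with inverted
# polarity. The receiver correlates each symbol with the known carrier.

FS = 192_000          # receiver sampling rate, Hz (assumed)
F_CARRIER = 18_500    # acoustic carrier frequency, Hz (assumed)
SYMBOL_S = 0.01       # seconds per bit (assumed)

def backscatter(bits):
    """Simulate the reflected waveform for a bit sequence."""
    t = np.arange(int(FS * SYMBOL_S)) / FS
    carrier = np.sin(2 * np.pi * F_CARRIER * t)
    # Polarity +1 encodes 1; polarity -1 (terminals swapped) encodes 0.
    return np.concatenate([(1 if b else -1) * carrier for b in bits])

def decode(signal):
    """Correlate each symbol window with the carrier to recover bits."""
    n = int(FS * SYMBOL_S)
    t = np.arange(n) / FS
    carrier = np.sin(2 * np.pi * F_CARRIER * t)
    windows = signal.reshape(-1, n)
    return [int(w @ carrier > 0) for w in windows]

bits = [1, 0, 1, 1, 0]
noise = 0.5 * np.random.randn(len(bits) * int(FS * SYMBOL_S))
assert decode(backscatter(bits) + noise) == bits
```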
When building the Van Atta array, the researchers found that if the connected nodes were too close, they would block each other’s signals. They devised a new design with staggered nodes that enables signals to reach the array from any direction. With this scalable design, the more nodes an array has, the greater its communication range.
They tested the array in more than 1,500 experimental trials in the Charles River in Cambridge, Massachusetts, and in the Atlantic Ocean, off the coast of Falmouth, Massachusetts, in collaboration with the Woods Hole Oceanographic Institution. The device achieved communication ranges of 300 meters, more than 15 times longer than previously demonstrated.
However, they had to cut the experiments short because they ran out of space on the dock.
Modeling the maximum
That inspired the researchers to build an analytical model to determine the theoretical and practical communication limits of this new underwater backscatter technology.
Building off their group’s work on RFIDs, the team carefully crafted a model that captured the impact of system parameters, like the size of the piezoelectric nodes and the input power of the signal, on the underwater operation range of the device.
“It is not a traditional communication technology, so you need to understand how you can quantify the reflection. What are the roles of the different components in that process?” Akbar says.
For instance, the researchers needed to derive a function that captures the amount of signal reflected out of an underwater piezoelectric node with a specific size, which was among the biggest challenges of developing the model, he adds.
They used these insights to create a plug-and-play model into which a user can enter information such as input power and piezoelectric node dimensions, and receive as output the expected range of the system.
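The paper’s model is specific to piezoelectric backscatter, and its details are not reproduced here. As a rough sketch of the same plug-and-play idea, the toy calculator below applies the textbook two-way sonar equation (spherical spreading plus absorption loss) and reports the longest range at which a required signal-to-noise ratio is still met; every parameter value is an illustrative assumption, not one of the authors’ numbers.

```python
import numpy as np

# Hypothetical link-budget sketch in the spirit of a plug-and-play range
# model. It uses the generic sonar equation, NOT the authors' model of
# piezoelectric reflection; all values below are placeholders.

def transmission_loss_db(r_m, alpha_db_per_m=0.001):
    """One-way spreading + absorption loss at range r (meters)."""
    return 20 * np.log10(r_m) + alpha_db_per_m * r_m

def max_range_m(source_level_db, target_strength_db, noise_level_db,
                snr_required_db):
    """Largest range where the round-trip (backscatter) SNR is met."""
    r_grid = np.logspace(0, 5, 200_000)  # candidate ranges, 1 m to 100 km
    snr = (source_level_db + target_strength_db - noise_level_db
           - 2 * transmission_loss_db(r_grid))  # two-way path loss
    feasible = r_grid[snr >= snr_required_db]
    return float(feasible.max()) if feasible.size else 0.0

# Placeholder inputs: projector source level, reflection strength of the
# array, ambient noise in the band, and the receiver's required SNR.
print(max_range_m(source_level_db=170, target_strength_db=-20,
                  noise_level_db=60, snr_required_db=10))
```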
They evaluated the model on data from their experimental trials and found that it could accurately predict the range of retrodirected acoustic signals with an average error of less than one decibel.
Using this model, they showed that an underwater backscatter array can potentially achieve kilometer-long communication ranges.
“We are creating a new ocean technology and propelling it into the realm of the things we have been doing for 6G cellular networks. For us, it is very rewarding because we are starting to see this now very close to reality,” Adib says.
The researchers plan to continue studying underwater backscatter Van Atta arrays, perhaps using boats so they could evaluate longer communication ranges. Along the way, they intend to release tools and datasets so other researchers can build on their work. At the same time, they are beginning to move toward commercialization of this technology.
“Limited range has been an open problem in underwater backscatter networks, preventing them from being used in real-world applications. This paper takes a significant step forward in the future of underwater communication, by enabling them to operate on minimum energy while achieving long range,” says Omid Abari, assistant professor of computer science at the University of California at Los Angeles, who was not involved with this work. “The paper is the first to bring Van Atta Reflector array technique into underwater backscatter settings and demonstrate its benefits in improving the communication range by orders of magnitude. This can take battery-free underwater communication one step closer to reality, enabling applications such as underwater climate change monitoring and coastal monitoring.”
This research was funded, in part, by the Office of Naval Research, the Sloan Research Fellowship, the National Science Foundation, the MIT Media Lab, and the Doherty Chair in Ocean Utilization.
Stefan Helmreich’s new book examines the many facets of oceanic wave science and the propagation of wave theory into other areas of life.
Ocean waves are easy on the eyes, but hard on the brain. How do they form? How far do they travel? How do they break? Those magnificent waves you see crashing into the shore are complex.
“I’ve often asked this question,” the eminent wave scientist Walter Munk told MIT Professor Stefan Helmreich several years ago. “If we met somebody from another planet who had never seen waves, could [they] dream about what it’s like when a wave becomes unstable in shallow water? About what it would do? I don’t think so. It’s a complicated problem.”
In recent decades scientists have gotten to know waves better. In the 1960s, they confirmed that waves travel across the world; a storm in the Tasman Sea can create great surf in California. In the 1990s, scientists obtained eye-opening measurements of massive “rogue” waves. Meanwhile experts continue tailoring a standard model of waves, developed in the 1980s, to local conditions, as data and theory keep influencing each other.
“Waves are empirical and conceptual phenomena both,” writes Helmreich in his new work, “A Book of Waves,” published this month by Duke University Press. In it, Helmreich examines the development of wave science globally, the propagation of wave theory into other areas of life — such as the “waves” of the Covid-19 pandemic — and the way researchers develop both empirical knowledge and abstractions describing nature in systematic terms.
“Wave science is constantly going back and forth between registering data and interpreting that data,” says Helmreich, the Elting E. Morison Professor of Anthropology at MIT. “The aspiration of so much wave science has been to formalize and automate measurement so that everything becomes a matter of simple data registration. But you can never get away from the human interpretation of those results. Humans are the ones who care about what waves are doing.”
“You need the world”
Helmreich has long been interested in ocean science. His 2009 book “Alien Ocean” examined marine biologists and their study of microbes. In 2014, Helmreich presented material that wound up in “A Book of Waves” while delivering the Lewis Henry Morgan lectures at the University of Rochester, the nation’s oldest anthropology lecture series.
To research the book, Helmreich traveled far and wide, from the Netherlands to Australia, among other places, often embedding himself with researchers. That included a stint on board the FLIP ship, a unique, now-retired vessel operated by the Scripps Institution of Oceanography, which could turn itself from a long horizontal vessel into a kind of giant live-aboard vertical buoy, for conducting wave measurements. The FLIP ship is one of many distinctive wave science tools; as the book draws out, this has been a diverse and even quirky field, methodologically, with wave scientists approaching their subject from all angles.
“Ocean and water waves look very different in different national contexts,” Helmreich says. “In the Netherlands, interest in waves is very much bound up with hydrological engineers’ desires to keep the country dry. In the United States, ocean wave science was crucially formatted by World War II, and the Cold War, and military prerogatives.”
As it happens, the late Munk (1917-2019), whom The New York Times once called “The Einstein of waves,” developed some of the insights and techniques that helped to forecast wave heights for the Allied invasion of Normandy in World War II. In spinning out his thought experiment about aliens to Helmreich, Munk was making the case for empiricism in wave science.
“Mathematical formalisms and representations are vital to understanding what waves are doing, but they’re not enough,” Helmreich says. “You need the world.”
Disney makes waves
But as Helmreich also emphasizes in his work, wave science depends on a delicate interplay between theory, modeling, and inventive empirical research. What might the Disney film “Fantasia” have to do with wave science? Well, movies used to rely on optical film recordings to play their soundtracks; “Fantasia’s” film soundtrack also had schematic renderings of sound levels. British wave scientists realized they could adapt this technique of depicting sound patterns to represent sets of waves.
For that matter, by the 1960s, scientists also began categorizing waves into a wave spectrum, sorted by the frequency with which they arrived at the shore. That idea comes directly from the concept of spectra of light, radio, and sound waves. In this sense, existing scientific concepts have periodically been deployed by wave researchers to make sense of what they already can see.
“The book asks questions about the relationship between reality and its representations,” Helmreich says. “Waves are obviously empirical things in the world. But understanding how they work requires abstractions, whether you are a scientist at sea, a surfer, or an engineer trying to figure out what will happen at a coastline. And those representations are influenced by the tools scientists use, whether cameras, pressure sensors, sonar, film, buoys, or computer models. What scientists think waves are is imprinted by the media they use to study waves.”
As Helmreich notes, the interdisciplinary nature of wave science has evolved. Physics shaped wave science for much of the 20th century. More recently, as scientists recognize that waves transmit things like agricultural runoff and the aerosolized signatures of coastal communities’ car exhaust, biological and chemical oceanographers have entered the field. And climate scientists and engineers are increasingly concerned with rising sea levels and seemingly bigger waves.
“Ocean waves used to belong to the physicists,” Helmreich says. “Today a lot of it is about climate change and sea level rise.”
The shape of things to come
But even as other fields have fed into ocean wave science, so too has wave science influenced other disciplines. From medicine to social science, the concept of the wave has been applied to social phenomena to help organize our understanding of matters such as disease transmission and public health.
“People use the figure of the wave to think about the shape of things to come,” Helmreich says. “Certainly we saw that during the Covid pandemic, that the wave was considered to be both descriptive, of what was happening, and predictive, about what would happen next.”
Scholars have praised “A Book of Waves.” Hugh Raffles, a professor and chair of anthropology at The New School, has called it “a model of expansive transdisciplinary practice,” as well as “a constant surprise, a mind-opening recalibration of the ways we assemble nature, science, ethnography, and the arts.”
Helmreich hopes readers will consider how extensively social, political, and civic needs have influenced wave studies. Back during World War II, Walter Munk developed a concept called “significant wave height” to help evaluate the viability of landing craft off Normandy.
“There’s an interesting, very contingent history to the metric of significant wave height,” Helmreich says. “But one can open up the concept of significance to ask: Significant for whom, and for what? Significance, in its wider cultural meaning, is about human projects, whether to do with warfare, coastal protection, humanitarian rescue at sea, shipping, surfing, or recreation of other kinds. How waves become significant is an anthropological question.” “A Book of Waves” seeks to map the many different ways that waves have become significant to people.
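For readers curious about the metric itself: significant wave height is conventionally defined as the mean height of the highest one-third of the individual waves in a record. A minimal zero-crossing version, run on synthetic data purely for illustration, might look like this:

```python
import numpy as np

# Significant wave height: conventionally the mean of the highest third
# of individual wave heights in a record. This toy version delimits
# waves by successive zero up-crossings of the surface elevation; the
# synthetic sea state below is purely illustrative.

def significant_wave_height(eta):
    """H_1/3 from a time series of surface elevation eta (meters)."""
    up = np.where((eta[:-1] < 0) & (eta[1:] >= 0))[0]  # zero up-crossings
    heights = [eta[a:b].max() - eta[a:b].min() for a, b in zip(up, up[1:])]
    top_third = sorted(heights, reverse=True)[: max(1, len(heights) // 3)]
    return float(np.mean(top_third))

t = np.linspace(0, 600, 60_000)  # ten minutes of synthetic elevation
eta = np.sin(0.5 * t) + 0.4 * np.sin(0.9 * t + 1.0)
print(round(significant_wave_height(eta), 2))
```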
MIT.nano symposium highlights applications of ambient sensing.
“Sensing is all around you,” said MIT.nano Associate Director Brian W. Anthony at Ambient Sensing, a half-day symposium presented in May by the MIT.nano Immersion Lab. Featuring MIT faculty and researchers from multiple disciplines, the event highlighted sensing technologies deployed everywhere from beneath the Earth’s surface to high into the exosphere.
Brent Minchew, assistant professor in the Department of Earth, Atmospheric and Planetary Sciences (EAPS), kicked off the symposium with a presentation on using remote sensing to understand the flow, deformation, and fracture of glacier ice, and how that is contributing to sea level rise. “There’s this fantastic separation of scales,” said Minchew. “We’re taking observations collected from satellites that are flying 700 kilometers above the surface, and we’re using the data that’s collected there to infer what’s happening at the atomic scale within the ice, which is magnificent.”
Minchew’s group is working with other researchers at MIT to build a drone capable of flying for three to four months over the polar regions, filling critical gaps in earth observations. “It’s going to give us this radical improvement over current technology and our observational capacity.”
Also using satellites, EAPS postdoc Qindan Zhu combines machine learning with observational inputs from remote sensing to study ozone pollution over North American cities. Zhu explained that, based on a decade’s worth of data, controlling emissions of nitrogen oxides will be the most effective way to regulate ozone pollution in these urban areas. Both Zhu’s and Minchew’s presentations highlighted the important role ambient sensors play in learning more about Earth’s changing climate.
Transitioning from air to sea, Michael Benjamin, principal research scientist in the Department of Mechanical Engineering, spoke about his work on robotic marine vehicles to explore and monitor the ocean and coastal marine environments. “Robotic platforms as remote sensors have the ability to sense in places that are too dangerous, boring, or costly for crewed vessels,” explained Benjamin. At the MIT Marine Autonomy Lab, researchers are designing underwater and surface robots, autonomous sailing vessels, and an amphibious surf zone robot.
Sensing is a huge part of marine robotics, said Benjamin. “Without sensors, robots wouldn’t be able to know where they are, they couldn’t avoid hidden things, they couldn’t collect information.”
Fadel Adib, associate professor in the Program in Media Arts & Sciences and the Department of Electrical Engineering & Computer Science (EECS), is also working on sensing underwater. “Battery life of underwater sensors is extremely limited,” explained Adib. “It is very difficult to recharge the battery of an ocean sensor once it’s been deployed.”
His research group built an underwater sensor that reflects acoustic signals rather than needing to generate its own, requiring much less power. They also developed a battery-free, wireless underwater camera that can capture images continuously and over a long period of time. Adib spoke about potential applications for underwater ambient sensing — climate studies, discovery of new ocean species, monitoring aquaculture farms to support food security, and even beyond the ocean, in outer space. “As you can imagine, it’s even more difficult to replace a sensor’s battery once you’ve shipped it on a space mission,” he said.
Originally working in the underwater sensing world, James Kinsey, CEO of Humatics, is applying his knowledge of ocean sensors to two different markets: public transit and automotive manufacturing. “All of that sensor data in the ocean — the value is when you can geolocate it,” explained Kinsey. “The more precisely and accurately you know that, you can begin to paint that 3D space.” Kinsey spoke about automating vehicle assembly lines with millimeter precision, allowing for the use of robotic arms. For subway trains, he highlighted the benefits of sensing systems to better know a train’s position, as well as to improve rider and worker safety by increasing situational awareness. “Precise positioning transforms the world,” he said.
At the intersection of electrical engineering, communications, and imaging, EECS Associate Professor Ruonan Han introduced his research on sensing through semiconductor chips that operate at terahertz frequencies. Using these terahertz chips, Han’s research group has demonstrated high-angular-resolution 3D imaging without mechanical scanning. They’re working on electronic nodes for gas sensing, precision timing, and miniaturizing tags and sensors.
In two Q&A panels led by Anthony, the presenters discussed how sensing technologies interface with the world, highlighting challenges in hardware design, manufacturing, packaging, reducing cost, and producing at scale. On the topic of data visualization, they agreed on a need for hardware and software technologies to interact with and assimilate data in faster, more immersive ways.
Ambient Sensing was broadcast live from the MIT.nano Immersion Lab. This unique research space, located on the third floor of MIT.nano, provides an environment to connect the physical to the digital — visualizing data, prototyping advanced tools for augmented and virtual reality (AR/VR), and developing new software and hardware concepts for immersive experiences.
To showcase current work being done in the Immersion Lab, retired MIT fencing coach Robert Hupp joined Anthony and research scientist Praneeth Namburi for a live demonstration of immersive athlete-training technology. Using wireless sensors on the fencing épée paired with OptiTrack motion-capture sensors along the room’s perimeter, a novice fencer wearing a motion-capture suit and an AR headset faced a virtual opponent while Namburi tracked the fencer’s stance on a computer. Hupp was able to show the fencer how to improve his movements with this real-time data.
“This event showcased the capabilities of the Immersion Lab, and the work being done on sensing — including sensors, data analytics, and data visualization — across MIT,” says Anthony. “Many of our speakers talked about collaboration and the importance of bringing multiple fields together to advance ambient sensing and data collection to solve societal challenges. I look forward to welcoming more academic and industry researchers into the Immersion Lab to support their work with our advanced hardware and software technologies.”
A new machine-learning model makes more accurate predictions about ocean currents, which could help with tracking plastic pollution and oil spills, and aid in search and rescue.
To study ocean currents, scientists release GPS-tagged buoys in the ocean and record their velocities to reconstruct the currents that transport them. These buoy data are also used to identify “divergences,” which are areas where water rises up from below the surface or sinks beneath it.
By accurately predicting currents and pinpointing divergences, scientists can more precisely forecast the weather, approximate how oil will spread after a spill, or measure energy transfer in the ocean. A new model that incorporates machine learning makes more accurate predictions than conventional models do, a new study reports.
A multidisciplinary research team including computer scientists at MIT and oceanographers has found that a standard statistical model typically used on buoy data can struggle to accurately reconstruct currents or identify divergences because it makes unrealistic assumptions about the behavior of water.
The researchers developed a new model that incorporates knowledge from fluid dynamics to better reflect the physics at work in ocean currents. They show that their method, which only requires a small amount of additional computational expense, is more accurate at predicting currents and identifying divergences than the traditional model.
This new model could help oceanographers make more accurate estimates from buoy data, which would enable them to more effectively monitor the transportation of biomass (such as Sargassum seaweed), carbon, plastics, oil, and nutrients in the ocean. This information is also important for understanding and tracking climate change.
“Our method captures the physical assumptions more appropriately and more accurately. In this case, we know a lot of the physics already. We are giving the model a little bit of that information so it can focus on learning the things that are important to us, like what are the currents away from the buoys, or what is this divergence and where is it happening?” says senior author Tamara Broderick, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS) and a member of the Laboratory for Information and Decision Systems and the Institute for Data, Systems, and Society.
Broderick’s co-authors include lead author Renato Berlinghieri, an electrical engineering and computer science graduate student; Brian L. Trippe, a postdoc at Columbia University; David R. Burt and Ryan Giordano, MIT postdocs; Kaushik Srinivasan, an assistant researcher in atmospheric and ocean sciences at the University of California at Los Angeles; Tamay Özgökmen, professor in the Department of Ocean Sciences at the University of Miami; and Junfei Xia, a graduate student at the University of Miami. The research will be presented at the International Conference on Machine Learning.
Diving into the data
Oceanographers use data on buoy velocity to predict ocean currents and identify “divergences” where water rises to the surface or sinks deeper.
To estimate currents and find divergences, oceanographers have used a machine-learning technique known as a Gaussian process, which can make predictions even when data are sparse. To work well in this case, the Gaussian process must make assumptions about the data to generate a prediction.
A standard way of applying a Gaussian process to ocean data assumes the latitude and longitude components of the current are unrelated. But this assumption isn’t physically accurate. For instance, this existing model implies that a current’s divergence and its vorticity (a whirling motion of fluid) operate on the same magnitude and length scales. Ocean scientists know this is not true, Broderick says. The previous model also assumes the frame of reference matters, which means fluid would behave differently in the latitude versus the longitude direction.
“We were thinking we could address these problems with a model that incorporates the physics,” she says.
They built a new model that uses what is known as a Helmholtz decomposition to accurately represent the principles of fluid dynamics. This method models an ocean current by breaking it down into a vorticity component (which captures the whirling motion) and a divergence component (which captures water rising or sinking).
In this way, they give the model some basic physics knowledge that it uses to make more accurate predictions.
This new model utilizes the same data as the old model. And while their method can be more computationally intensive, the researchers show that the additional cost is relatively small.
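As a concrete point of comparison, the sketch below implements the standard baseline the paper improves upon: an independent Gaussian process for each velocity component, fit to scattered synthetic “buoy” observations, with divergence and vorticity then estimated by finite differences. The Helmholtz version, which instead places the GP priors on a scalar potential and a stream function, is not reproduced here; the data, kernel, and hyperparameters are invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Sketch of the *standard* baseline the paper improves on: independent
# Gaussian processes for each velocity component, fit to sparse buoy
# observations. The Helmholtz model instead puts GP priors on a scalar
# potential and a stream function, giving divergence and vorticity
# their own scales; that version is not reproduced here.

rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(30, 2))               # 30 "buoy" positions
u_obs = -xy[:, 1] + 0.05 * rng.standard_normal(30)  # synthetic vortex flow
v_obs = xy[:, 0] + 0.05 * rng.standard_normal(30)

gp_u = GaussianProcessRegressor(RBF(0.5), alpha=1e-2).fit(xy, u_obs)
gp_v = GaussianProcessRegressor(RBF(0.5), alpha=1e-2).fit(xy, v_obs)

# Predict on a grid, then estimate the two fields the Helmholtz
# decomposition separates: divergence (du/dx + dv/dy) and
# vorticity (dv/dx - du/dy), via finite differences.
n, h = 40, 2 / 39
gx, gy = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
pts = np.column_stack([gx.ravel(), gy.ravel()])
u = gp_u.predict(pts).reshape(n, n)
v = gp_v.predict(pts).reshape(n, n)
du_dy, du_dx = np.gradient(u, h)  # axis 0 is y, axis 1 is x
dv_dy, dv_dx = np.gradient(v, h)
divergence = du_dx + dv_dy        # ~0 for a pure vortex
vorticity = dv_dx - du_dy         # ~2 for this rotation field
print(round(float(divergence.mean()), 2), round(float(vorticity.mean()), 2))
```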
Buoyant performance
They evaluated the new model using synthetic and real ocean buoy data. Because the synthetic data were fabricated by the researchers, they could compare the model’s predictions to ground-truth currents and divergences. But simulation involves assumptions that may not reflect real life, so the researchers also tested their model using data captured by real buoys released in the Gulf of Mexico.
In each case, their method demonstrated superior performance for both tasks, predicting currents and identifying divergences, when compared to the standard Gaussian process and another machine-learning approach that used a neural network. For example, in one simulation that included a vortex adjacent to an ocean current, the new method correctly predicted no divergence while the previous Gaussian process method and the neural network method both predicted a divergence with very high confidence.
The technique is also good at identifying vortices from a small set of buoys, Broderick adds.
Now that they have demonstrated the effectiveness of using a Helmholtz decomposition, the researchers want to incorporate a time element into their model, since currents can vary over time as well as space. In addition, they want to better capture how noise impacts the data, such as winds that sometimes affect buoy velocity. Separating that noise from the data could make their approach more accurate.
“Our hope is to take this noisily observed field of velocities from the buoys, and then say what is the actual divergence and actual vorticity, and predict away from those buoys, and we think that our new technique will be helpful for this,” she says.
“The authors cleverly integrate known behaviors from fluid dynamics to model ocean currents in a flexible model,” says Massimiliano Russo, an associate biostatistician at Brigham and Women’s Hospital and instructor at Harvard Medical School, who was not involved with this work. “The resulting approach retains the flexibility to model the nonlinearity in the currents but can also characterize phenomena such as vortices and connected currents that would only be noticed if the fluid dynamic structure is integrated into the model. This is an excellent example of where a flexible model can be substantially improved with a well thought and scientifically sound specification.”
This research is supported by the Office of Naval Research through a Multi University Research Initiative (MURI) program titled "Machine Learning for Submesoscale Characterization, Ocean Prediction, and Exploration (ML-SCOPE)." It is also supported in part by a National Science Foundation (NSF) CAREER Award and the Rosenstiel School of Marine, Atmospheric, and Earth Science at the University of Miami.
Project will develop new materials characterization tools and technologies to assign unique identifiers to individual pearls.
A new research collaboration with The Bahrain Institute for Pearls and Gemstones (DANAT) will seek to develop advanced characterization tools for the analysis of the properties of pearls and to explore technologies to assign unique identifiers to individual pearls.
The three-year project will be led by Admir Mašić, associate professor of civil and environmental engineering, in collaboration with Vladimir Bulović, the Fariborz Maseeh Chair in Emerging Technology and professor of electrical engineering and computer science.
“Pearls are extremely complex and fascinating hierarchically ordered biological materials that are formed by a wide range of different species,” says Mašić. “Working with DANAT provides us a unique opportunity to apply our lab’s multi-scale materials characterization tools to identify potentially species-specific pearl fingerprints, while simultaneously addressing scientific research questions regarding the underlying biomineralization processes that could inform advances in sustainable building materials.”
DANAT is a gemological laboratory specializing in the testing and study of natural pearls as a reflection of Bahrain’s pearling history and desire to protect and advance Bahrain’s pearling heritage. DANAT’s gemologists support clients and students through pearl, gemstone, and diamond identification services, as well as educational courses.
Like many other precious gemstones, pearls can be made by humans through scientific experimentation, says Noora Jamsheer, chief executive officer at DANAT. Over a century ago, cultured pearls entered markets as a competitive product to natural pearls, similar in appearance but different in value.
“Gemological labs have been innovating scientific testing methods to differentiate between natural pearls and all other pearls that exist because of direct or indirect human intervention. Today the world knows natural pearls and cultured pearls. However, there are also pearls that fall in between these two categories,” says Jamsheer. “DANAT has the responsibility, as the leading gemological laboratory for pearl testing, to take the initiative necessary to ensure that testing methods keep pace with advances in the science of pearl cultivation.”
Titled “Exploring the Nanoworld of Biogenic Gems,” the project will aim to improve the process of testing and identifying pearls by characterizing morphological, micro-structural, optical, and chemical features sufficient to distinguish a pearl’s area of origin, method of growth, or both. MIT.nano, MIT’s open-access center for nanoscience and nanoengineering, will be the organizational home for the project, where Mašić and his team will utilize the facility’s state-of-the-art characterization tools.
In addition to discovering new methodologies for establishing a pearl’s origin, the project aims to utilize machine learning to automate pearl classification. Furthermore, researchers will investigate techniques to create a unique identifier associated with an individual pearl.
The initial sponsored research project is expected to last three years, with potential for continued collaboration based on key findings or building upon the project’s success to open new avenues for research into the structure, properties, and growth of pearls.
A new method for removing the greenhouse gas from the ocean could be far more efficient than existing systems for removing it from the air.
As carbon dioxide continues to build up in the Earth’s atmosphere, research teams around the world have spent years seeking ways to remove the gas efficiently from the air. Meanwhile, the world’s number one “sink” for carbon dioxide from the atmosphere is the ocean, which soaks up some 30 to 40 percent of all of the gas produced by human activities.
Recently, the possibility of removing carbon dioxide directly from ocean water has emerged as another promising possibility for mitigating CO2 emissions, one that could potentially someday even lead to overall net negative emissions. But, like air capture systems, the idea has not yet led to any widespread use, though there are a few companies attempting to enter this area.
Now, a team of researchers at MIT says they may have found the key to a truly efficient and inexpensive removal mechanism. The findings were reported this week in the journal Energy & Environmental Science, in a paper by MIT professors T. Alan Hatton and Kripa Varanasi, postdoc Seoni Kim, and graduate students Michael Nitzsche, Simon Rufer, and Jack Lake.
The existing methods for removing carbon dioxide from seawater apply a voltage across a stack of membranes to acidify a feed stream by water splitting. This converts bicarbonates in the water to molecules of CO2, which can then be removed under vacuum. Hatton, who is the Ralph Landau Professor of Chemical Engineering, notes that the membranes are expensive, and chemicals are required to drive the overall electrode reactions at either end of the stack, adding further to the expense and complexity of the processes. “We wanted to avoid the need for introducing chemicals to the anode and cathode half cells and to avoid the use of membranes if at all possible,” he says.
The team came up with a reversible process consisting of membrane-free electrochemical cells. Reactive electrodes are used to release protons to the seawater fed to the cells, driving the release of the dissolved carbon dioxide from the water. The process is cyclic: It first acidifies the water to convert dissolved inorganic bicarbonates to molecular carbon dioxide, which is collected as a gas under vacuum. Then, the water is fed to a second set of cells with a reversed voltage, to recover the protons and turn the acidic water back to alkaline before releasing it back to the sea. Periodically, the roles of the two cells are reversed once one set of electrodes is depleted of protons (during acidification) and the other has been regenerated during alkalization.
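Abstractly, the cycle behaves like a simple two-cell state machine. The toy simulation below is only a schematic of the role-swapping described above, with invented capacities and batch sizes; it is not a model of the actual electrochemistry.

```python
# Toy state machine for the cyclic two-cell process described above:
# one cell acidifies incoming seawater (releasing protons, liberating
# CO2 under vacuum), the other re-alkalinizes it (recovering protons).
# When the acidifying electrodes are depleted, the cells swap roles.
# Capacities and per-batch amounts are invented placeholders.

CAPACITY = 100  # proton "budget" of an electrode set (arbitrary units)

def run_batches(n_batches, protons_per_batch=7):
    acidifier, alkalinizer = CAPACITY, 0  # proton stores of the two cells
    swaps = 0
    for _ in range(n_batches):
        if acidifier < protons_per_batch:  # acidifying cell depleted:
            acidifier, alkalinizer = alkalinizer, acidifier  # reverse roles
            swaps += 1
        acidifier -= protons_per_batch    # acid step releases protons, CO2 out
        alkalinizer += protons_per_batch  # reversed-voltage step recovers them
    return swaps

print(run_batches(50))  # how often the roles reversed over 50 batches
```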
This removal of carbon dioxide and reinjection of alkaline water could slowly start to reverse, at least locally, the acidification of the oceans that has been caused by carbon dioxide buildup, which in turn has threatened coral reefs and shellfish, says Varanasi, a professor of mechanical engineering. The reinjection of alkaline water could be done through dispersed outlets or far offshore to avoid a local spike of alkalinity that could disrupt ecosystems, they say.
“We’re not going to be able to treat the entire planet’s emissions,” Varanasi says. But the reinjection might be done in some cases in places such as fish farms, which tend to acidify the water, so this could be a way of helping to counter that effect.
Once the carbon dioxide is removed from the water, it still needs to be disposed of, as with other carbon removal processes. For example, it can be buried in deep geologic formations under the sea floor, or it can be chemically converted into a compound like ethanol, which can be used as a transportation fuel, or into other specialty chemicals. “You can certainly consider using the captured CO2 as a feedstock for chemicals or materials production, but you’re not going to be able to use all of it as a feedstock,” says Hatton. “You’ll run out of markets for all the products you produce, so no matter what, a significant amount of the captured CO2 will need to be buried underground.”
Initially at least, the idea would be to couple such systems with existing or planned infrastructure that already processes seawater, such as desalination plants. “This system is scalable so that we could integrate it potentially into existing processes that are already processing ocean water or in contact with ocean water,” Varanasi says. There, the carbon dioxide removal could be a simple add-on to existing processes, which already return vast amounts of water to the sea, and it would not require consumables like chemical additives or membranes.
“With desalination plants, you’re already pumping all the water, so why not co-locate there?” Varanasi says. “A bunch of capital costs associated with the way you move the water, and the permitting, all that could already be taken care of.”
The system could also be implemented by ships that would process water as they travel, in order to help mitigate the significant contribution of ship traffic to overall emissions. There are already international mandates to lower shipping’s emissions, and “this could help shipping companies offset some of their emissions, and turn ships into ocean scrubbers,” Varanasi says.
The system could also be implemented at locations such as offshore drilling platforms, or at aquaculture farms. Eventually, it could lead to a deployment of free-standing carbon removal plants distributed globally.
The process could be more efficient than air-capture systems, Hatton says, because the concentration of carbon dioxide in seawater is more than 100 times greater than it is in air. In direct air-capture systems it is first necessary to capture and concentrate the gas before recovering it. “The oceans are large carbon sinks, however, so the capture step has already kind of been done for you,” he says. “There’s no capture step, only release.” That means the volumes of material that need to be handled are much smaller, potentially simplifying the whole process and reducing the footprint requirements.
The research is continuing, with one goal being to find an alternative to the present step that requires a vacuum to remove the separated carbon dioxide from the water. Another need is to identify operating strategies to prevent precipitation of minerals that can foul the electrodes in the alkalinization cell, an inherent issue that reduces the overall efficiency in all reported approaches. Hatton notes that significant progress has been made on these issues, but that it is still too early to report on them. The team expects that the system could be ready for a practical demonstration project within about two years.
“The carbon dioxide problem is the defining problem of our life, of our existence,” Varanasi says. “So clearly, we need all the help we can get.”
Since 1968, the MIT-WHOI Joint Program has provided research and educational opportunities for PhD students seeking to explore the marine world.
A five-year doctoral degree program, the MIT-Woods Hole Oceanographic Institution (WHOI) Joint Program in Oceanography/Applied Ocean Science and Engineering combines the strengths of MIT and WHOI to create one of the largest oceanographic facilities in the world. Graduate study in oceanography encompasses virtually all the basic sciences as they apply to the marine environment: physics, chemistry, geochemistry, geology, geophysics, and biology.
“As a species and as a society we really want to understand the planet that we live on and our place in it,” says Professor Michael Follows, who serves as director of the MIT-WHOI Joint Program.
“The reason I joined the program was because we cannot afford to wait to be able to address the climate crisis,” explains graduate student Paris Smalls. “The freedom to be able to execute on and have your interests come to life has been incredibly rewarding.”
“If you have a research problem, you can think of the top five people in that particular niche of a topic and they’re either down the hallway or have some association with WHOI,” adds graduate student Samantha Clevenger. “It’s a really incredible place in terms of connections and just having access to really anything you need.”
Senior Sylas Horowitz tackles engineering projects with a focus on challenges related to clean energy, climate justice, and sustainable development.
MIT senior Sylas Horowitz kneeled at the edge of a marsh, tinkering with a blue-and-black robot about the size and shape of a shoe box and studded with lights and mini propellers.
The robot was a remotely operated vehicle (ROV) — an underwater drone slated to collect water samples from beneath a sheet of Arctic ice. But its pump wasn’t working, and its intake line was clogged with sand and seaweed.
“Of course, something must always go wrong,” Horowitz, a mechanical engineering major with minors in energy studies and environment and sustainability, later blogged about the Falmouth, Massachusetts, field test. By making some adjustments, Horowitz was able to get the drone functioning on site.
Through a 2020 collaboration between MIT’s Department of Mechanical Engineering and the Woods Hole Oceanographic Institution (WHOI), Horowitz had been assembling and retrofitting the high-performance ROV to measure the greenhouse gases emitted by thawing permafrost.
The Arctic’s permafrost holds an estimated 1,700 billion metric tons of methane and carbon dioxide — roughly 50 times the amount of carbon tied to fossil fuel emissions in 2019, according to climate research from NASA’s Jet Propulsion Laboratory. WHOI scientists wanted to understand the role the Arctic plays as a greenhouse gas source or sink.
Horowitz’s ROV would be deployed from a small boat in sub-freezing temperatures to measure carbon dioxide and methane in the water. Meanwhile, a flying drone would sample the air.
An MIT Student Sustainability Coalition leader and one of the first members of the MIT Environmental Solutions Initiative’s Rapid Response Group, Horowitz has focused on challenges related to clean energy, climate justice, and sustainable development.
In addition to the ROV, Horowitz has tackled engineering projects through D-Lab, where community partners from around the world work with MIT students on practical approaches to alleviating global poverty. Horowitz worked on fashioning waste bins out of heat-fused recycled plastic for underserved communities in Liberia. Their thesis project, also initiated through D-Lab, is designing and building user-friendly, space- and fuel-efficient firewood cook stoves to improve the lives of women in Santa Catarina Palopó in northern Guatemala.
Through the Tata-MIT GridEdge Solar Research program, they helped develop flexible, lightweight solar panels to mount on the roofs of street vendors’ e-rickshaws in Bihar, India.
The thread that runs through Horowitz’s projects is user-centered design that creates a more equitable society. “In the transition to sustainable energy, we want our technology to adapt to the society that we live in,” they say. “Something I’ve learned from the D-Lab projects and also from the ROV project is that when you’re an engineer, you need to understand the societal and political implications of your work, because all of that should get factored into the design.”
Horowitz describes their personal mission as creating systems and technology that “serve the well-being and longevity of communities and the ecosystems we exist within.
“I want to relate mechanical engineering to sustainability and environmental justice,” they say. “Engineers need to think about how technology fits into the greater societal context of people in the environment. We want our technology to adapt to the society we live in and for people to be able, based on their needs, to interface with the technology.”
Imagination and inspiration
In Dix Hills, New York, a Long Island suburb, Horowitz’s dad is in banking and their mom is a speech therapist. The family hiked together, but Horowitz doesn’t tie their love for the natural world to any one experience. “I like to play in the dirt,” they say. “I’ve always had a connection to nature. It was a kind of childlike wonder.”
Seeing footage of the massive 2010 oil spill in the Gulf of Mexico caused by an explosion on the Deepwater Horizon oil rig — which occurred when Horowitz was around 10 — was a jarring introduction to how human activity can impact the health of the planet.
Their first interest was art — painting and drawing portraits, album covers, and more recently, digital images such as a figure watering a houseplant at a window while lightning flashes outside; a neon pink jellyfish in a deep blue sea; and, for an MIT-wide Covid quarantine project, two figures watching the sun set over a Green Line subway platform.
Art dovetailed into a fascination with architecture, then shifted to engineering. In high school, Horowitz and a friend were co-captains of an all-girls robotics team. “It was just really wonderful, having this community and being able to build stuff,” they say. Horowitz and another friend on the team learned they were accepted to MIT on Pi Day 2018.
Art, architecture, engineering — “it’s all kind of the same,” Horowitz says. “I like the creative aspect of design, being able to create things out of imagination.”
Sustaining political awareness
At MIT, Horowitz connected with a like-minded community of makers. They also launched themself into taking action against environmental injustice.
In 2022, through the Student Sustainability Coalition (SSC), they encouraged MIT students to get involved in advocating for the Cambridge Green New Deal, legislation aimed at reducing emissions from new large commercial buildings such as those owned by MIT and creating a green jobs training program.
In February 2022, Horowitz took part in a sit-in in Building 3 as part of MIT Divest, a student-led initiative urging the MIT administration to divest its endowment of fossil fuel companies.
“I want to see MIT students more locally involved in politics around sustainability, not just the technology side,” Horowitz says. “I think there’s a lot of power from students coming together. They could be really influential.”
User-oriented design
The Arctic underwater ROV Horowitz worked on had to be waterproof and withstand water temperatures as low as 5 degrees Fahrenheit. It was tethered to a computer by a 150-meter-long cable that had to spool and unspool without tangling. The pump and tubing that collected water samples had to work without kinking.
“It was cool, throughout the project, to think, ‘OK, what kind of needs will these scientists have when they’re out in these really harsh conditions in the Arctic? How can I make a machine that will make their field work easier?’
“I really like being able to design things directly with the users, working within their design constraints,” they say.
Inevitably, snafus occurred, but in photos and videos taken the day of the Falmouth field tests, Horowitz is smiling. “Here’s a fun unexpected (or maybe quite expected) occurrence!” they reported later. “The plastic mount for the shaft collar [used in the motor’s power transmission] ripped itself apart!” Undaunted, Horowitz jury-rigged a replacement out of sheet metal.
Horowitz replaced broken wires in the winch-like device that spooled the cable. They added a filter at the intake to prevent sand and plants from clogging the pump.
With a few more tweaks, the ROV was ready to descend into frigid waters. Last summer, it was successfully deployed on a field run in the Canadian high Arctic. A few months later, Horowitz was slated to attend OCEANS 2022 Hampton Roads, their first professional conference, to present a poster on their contribution to the WHOI permafrost research.
Ultimately, Horowitz hopes to pursue a career in renewable energy, sustainable design, or sustainable agriculture, or perhaps graduate studies in data science or econometrics to quantify environmental justice issues such as the disproportionate exposure to pollution among certain populations and the effect of systemic changes designed to tackle these issues.
After completing their degree this month, Horowitz will spend six months with MIT International Science and Technology Initiatives (MISTI), which fosters partnerships with industry leaders and host organizations around the world.
Horowitz is thinking of working with a renewable energy company in Denmark, one of the countries they toured during a summer 2019 field trip led by the MIT Energy Initiative’s Director of Education Antje Danielson. They were particularly struck by Samsø, the world’s first carbon-neutral island, run entirely on renewable energy. “It inspired me to see what’s out there when I was a sophomore,” Horowitz says. They’re ready to see where inspiration takes them next.
This article appears in the Winter 2023 issue of Energy Futures, the magazine of the MIT Energy Initiative.
Prochlorococcus, the world’s most abundant photosynthetic organism, reveals a gene-transfer mechanism that may be key to its abundance and diversity.
From the tropics to the poles, from the sea surface to hundreds of feet below, the world’s oceans are teeming with one of the tiniest of organisms: a type of bacteria called Prochlorococcus, which despite their minute size are collectively responsible for a sizable portion of the oceans’ oxygen production. But the remarkable ability of these diminutive organisms to diversify and adapt to such profoundly different environments has remained something of a mystery.
Now, new research reveals that these tiny bacteria exchange genetic information with one another, even when widely separated, by a previously undocumented mechanism. This enables them to transmit whole blocks of genes, such as those conferring the ability to metabolize a particular kind of nutrient or to defend themselves from viruses, even in regions where their population in the water is relatively sparse.
The findings describe a new class of genetic agents involved in horizontal gene transfer, in which genetic information is passed directly between organisms — whether of the same or different species — through means other than lineal descent. The researchers have dubbed the agents that carry out this transfer “tycheposons,” which are sequences of DNA that can include several entire genes as well as surrounding sequences, and can spontaneously separate out from the surrounding DNA. Then, they can be transported to other organisms by one or another possible carrier system, including tiny bubbles known as vesicles that cells can produce from their own membranes.
The research, which included studying hundreds of Prochlorococcus genomes from different ecosystems around the world, as well as lab-grown samples of different variants, and even evolutionary processes carried out and observed in the lab, is reported today in the journal Cell, in a paper by former MIT postdocs Thomas Hackl and Raphaël Laurenceau, visiting postdoc Markus Ankenbrand, Institute Professor Sallie “Penny” Chisholm, and 16 others at MIT and other institutions.
Chisholm, who played a role in the discovery of these ubiquitous organisms in 1988, says of the new findings, “We’re very excited about it because it’s a new horizontal gene-transfer agent for bacteria, and it explains a lot of the patterns that we see in Prochlorococcus in the wild, the incredible diversity.” Now thought to be the world’s most abundant photosynthetic organism, the tiny variants of what are known as cyanobacteria are also the smallest of all photosynthesizers.
Hackl, who is now at the University of Groningen in the Netherlands, says the work began by studying the 623 reported genome sequences of different species of Prochlorococcus from different regions, trying to figure out how they were able to so readily lose or gain particular functions despite their apparent lack of any of the known systems that promote horizontal gene transfer, such as plasmids or viruses known as prophages.
What Hackl, Laurenceau, and Ankenbrand investigated were “islands” of genetic material that seemed to be hotspots of variability and often contained genes that were associated with known key survival processes such as the ability to assimilate essential, and often limiting, nutrients such as iron, or nitrogen, or phosphates. These islands contained genes that varied enormously between different species, but they always occurred in the same parts of the genome and sometimes were nearly identical even in widely different species — a strong indicator of horizontal transfer.
But the genomes showed none of the usual features associated with what are known as mobile genetic elements, so initially this remained a puzzle. It gradually became apparent that this system of gene transfer and diversification was different from any of the several other mechanisms that have been observed in other organisms, including in humans.
Hackl describes what they found as being something like a genetic LEGO set, with chunks of DNA bundled together in ways that could almost instantly confer the ability to adapt to a particular environment. For example, a species limited by the availability of particular nutrients could acquire genes necessary to enhance the uptake of that nutrient.
The microbes appear to use a variety of mechanisms to transport these tycheposons (a name derived from the name of the Greek goddess Tyche, daughter of Oceanus). One is the use of membrane vesicles, little bubbles pinched off from the surface of a bacterial cell and released with tycheposons inside them. Another is by “hijacking” virus or phage infections and allowing them to carry the tycheposons along with their own infectious particles, called capsids. These are efficient solutions, Hackl says, “because in the open ocean, these cells rarely have cell-to-cell contacts, so it’s difficult for them to exchange genetic information without a vehicle.”
And sure enough, when capsids or vesicles collected from the open ocean were studied, “they’re actually quite enriched” in these genetic elements, Hackl says. The packets of useful genetic coding are “actually swimming around in these extracellular particles and potentially being able to be taken up by other cells.”
Chisholm says that “in the world of genomics, there’s a lot of different types of these elements” — sequences of DNA that are capable of being transferred from one genome to another. However, “this is a new type,” she says. Hackl adds that “it’s a distinct family of mobile genetic elements. It has similarities to others, but no really tight connections to any of them.”
While this study was specific to Prochlorococcus, Hackl says the team believes the phenomenon may be more generalized. They have already found similar genetic elements in other, unrelated marine bacteria, but have not yet analyzed these samples in detail. “Analogous elements have been described in other bacteria, and we now think that they may function similarly,” he says.
“It’s kind of a plug-and-play mechanism, where you can have pieces that you can play around with and make all these different combinations,” he says. “And with the enormous population size of Prochlorococcus, it can play around a lot, and try a lot of different combinations.”
Nathan Ahlgren, an assistant professor of biology at Clark University who was not associated with this research, says “The discovery of tycheposons is important and exciting because it provides a new mechanistic understanding of how Prochlorococcus are able to swap in and out new genes, and thus ecologically important traits. Tycheposons provide a new mechanistic explanation for how it’s done.” He says “they took a creative way to fish out and characterize these new genetic elements ‘hiding’ in the genomes of Prochlorococcus.”
He adds that genomic islands, the portions of the genome where these tycheposons were found, “are found in many bacteria, not just marine bacteria, so future work on tycheposons has wider implications for our understanding of the evolution of bacterial genomes.”
The team included researchers at MIT’s Department of Civil and Environmental Engineering, the University of Wuerzburg in Germany, the University of Hawaii at Manoa, Ohio State University, Oxford Nanopore Technologies in California, Bigelow Laboratory for Ocean Sciences in Maine, and Wellesley College. The work was supported by the Simons Foundation, the Gordon and Betty Moore Foundation, the U.S. Department of Energy, and the U.S. National Science Foundation.
Author : Mary Beth Gallagher | Department of Mechanical Engineering
In class 2.702 (Systems Engineering and Naval Ship Design), naval officers and other graduate students get hands-on experience in project management skills that will be central to their future careers.
Read more about this article :
Since 1901, MIT has offered a graduate program unlike any other at the Institute. The Naval Construction and Engineering program in the Department of Mechanical Engineering educates active duty officers in the U.S. Navy, U.S. Coast Guard, and foreign navies. Every year, the U.S. Navy chooses 10 officers to enroll in the program, which is often referred to as Course 2N.
“This is a valuable relationship, both from MIT’s perspective and the Navy’s perspective. We have access to the greatest engineering school in the world with a level of expertise that just doesn’t exist anywhere else. It brings a technical rigor and competence to our naval officers that just can’t be matched anywhere else,” says Commander Douglas Jonart Eng ’14, SM ’14, PhD ’16, associate professor of the practice at MIT and Course 2N instructor.
In addition to earning a master’s degree in Naval Architecture and Marine Engineering or a Naval Engineer degree, students often earn an additional master’s degree in a relevant field such as mechanical engineering, civil engineering, or nuclear engineering.
The Course 2N curriculum is structured around a sequence of courses known as the “2.70X series.” In their first semester at MIT, students take 2.701 (Principles of Naval Architecture), which offers an introduction to topics such as ship geometry, hull structure, and ship resistance. The following semester, students enroll in 2.702 (Systems Engineering and Naval Ship Design).
“The main takeaway for 2.702 is to break away from the pure engineering aspect of shipbuilding and design, and start to focus on skills like team building, project management, cost estimates, and developing metrics,” says Captain Jeremy Leghorn Eng ’09, SM ’09, professor of the practice and one of the course’s instructors, alongside Jonart.
Jonart and Leghorn introduce students to topics ranging from systems engineering to understanding the needs of stakeholders and translating those needs into a design. Students are tasked with putting these principles into practice during a semester-long design project that culminates with an exercise in the MIT Towing Tank.
Putting vessel designs to the test
Students in 2.702 are divided into teams to work on a simplified design project for a surface vessel. With a budget of just $75, students buy materials to construct a remote-controlled vessel within an initial set of design parameters. The vessels must have the ability to complete a series of tasks in the MIT Towing Tank at the end of the semester.
“With the design project, we really want students to see how the decisions they make early in a project can affect them later, what the ramifications are, and how they have to recover — those are important lessons to learn when you manage big projects,” adds Jonart.
For many students, the project is the first time they will get to roll up their sleeves and build something at MIT.
“The class was great because it was one of the first times I got to do a lot of hands-on stuff and I learned a great deal from it. It takes students through the whole life cycle of a ship, which is the root of systems engineering,” says student Thinh Hoang, a lieutenant in the U.S. Navy.
The project also gives students room to exercise their creativity. This past semester, the vessels included a paddle-driven boat and a boat powered by a remote control system taken from a toy.
For the main event, the entire class gathers as each vessel is put to the test in the MIT Towing Tank, a 100-foot-long tank complete with a wave maker.
In addition to basic parameters for buoyancy, the hull, and propulsion, each vessel is challenged with completing tasks that represent the anti-submarine, anti-surface, and anti-land warfare capabilities that would be standard on any real naval ship. Vessels aim golf balls at a small circle on the tank floor or launch ping-pong balls into floating hoops, meant to represent surface and land targets.
On test day, the Towing Tank is buzzing with activity and excitement. Students cheer each other on, and there’s an air of friendly competition as each vessel completes the required tasks.
According to Jonart, something inevitably will not perform as expected on test day, forcing teams to scramble and problem-solve.
“It’s really nice to see the projects come together, put them in the Towing Tank, have something go terribly wrong, and see these engineers figure out how to fix it,” Jonart says.
Hoang and his team ran into one such problem when the shaft fell off their motor. They quickly swung into action and managed to duct tape a backup propulsion system to their boat.
“One of the big takeaways for me is that what you plan initially can fall apart really quickly. It showed us how important it is to be cognizant of decisions early on that can affect your project overall,” says Hoang.
This ability to anticipate problems, troubleshoot, and quickly come up with a solution is something naval officers are familiar with.
“One thing you’ll learn in the Navy is things never go as planned. You can prepare a plan, but when it doesn’t go the way that you want it to go, you have to be able to be flexible,” adds Lieutenant Asia Allison, a Course 2N student.
At the end of 2.702, students are able to apply the underlying concepts of systems engineering to make informed decisions in uncertain environments. The rest of the “2.70X” series covers naval ship design and naval ship conversion. The 2N curriculum culminates in 2.705 (Projects in New Concept Naval Ship Design), a year-long original design project.
Future naval leaders
After completing the Course 2N program, many graduates go on a “qualification tour” within the U.S. Navy, where they hone their skills and focus on a specific area of expertise.
Oftentimes, students will move on to program manager roles within the U.S. Navy. In particular, they frequently manage acquisition programs. According to Leghorn, today’s Course 2N students could someday be responsible for future aircraft carriers or submarine programs.
“It’s amazing to think that our students will be involved in the procurement of an aircraft carrier that could have a 50-year life cycle,” adds Leghorn.
Captain Leghorn credits the 2N program, and courses like 2.702, for equipping students with the confidence needed to succeed in naval acquisition programs.
“The program here at MIT gives students the confidence to communicate with engineers within the Navy chain of command. They have that technical competence that allows them to be very effective leaders within our Navy programs,” he says. “This MIT educational program is the building block for the rest of those Navy officers’ careers.”
Up to one-third of the carbon consumed by Prochlorococcus may come from sources other than photosynthesis.
Read more about this article :
One of the smallest and mightiest organisms on the planet is a plant-like bacterium known to marine biologists as Prochlorococcus. The green-tinted microbe measures less than a micron across, and its populations suffuse through the upper layers of the ocean, where a single teaspoon of seawater can hold millions of the tiny organisms.
Prochlorococcus grows through photosynthesis, using sunlight to convert the atmosphere’s carbon dioxide into organic carbon molecules. The microbe is responsible for 5 percent of the world’s photosynthesizing activity, and scientists have assumed that photosynthesis is the microbe’s go-to strategy for acquiring the carbon it needs to grow.
But a new MIT study published today in Nature Microbiology has found that Prochlorococcus relies on another carbon-feeding strategy more than previously thought.
Organisms that use a mix of strategies to acquire carbon are known as mixotrophs. Most marine plankton are mixotrophs. And while Prochlorococcus is known to occasionally dabble in mixotrophy, scientists have assumed the microbe primarily lives a phototrophic lifestyle.
The new MIT study shows that in fact, Prochlorococcus may be more of a mixotroph than it lets on. The microbe may get as much as one-third of its carbon through a second strategy: consuming the dissolved remains of other dead microbes.
The new estimate may have implications for climate models, as the microbe is a significant force in capturing and “fixing” carbon in the Earth’s atmosphere and ocean.
“If we wish to predict what will happen to carbon fixation in a different climate, or predict where Prochlorococcus will or will not live in the future, we probably won’t get it right if we’re missing a process that accounts for one-third of the population’s carbon supply,” says Mick Follows, a professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), and its Department of Civil and Environmental Engineering.
The study’s co-authors include first author and MIT postdoc Zhen Wu, along with collaborators from the University of Haifa, the Leibniz-Institute for Baltic Sea Research, the Leibniz-Institute of Freshwater Ecology and Inland Fisheries, and Potsdam University.
Persistent plankton
Since Prochlorococcus was first discovered in the Sargasso Sea in 1986, by MIT Institute Professor Sallie “Penny” Chisholm and others, the microbe has been observed throughout the world’s oceans, inhabiting the upper sunlit layers ranging from the surface down to about 160 meters. Within this range, light levels vary, and the microbe has evolved a number of ways to photosynthesize carbon in even low-lit regions.
The organism has also evolved ways to consume organic compounds including glucose and certain amino acids, which could help the microbe survive for limited periods of time in dark ocean regions. But surviving on organic compounds alone is a bit like only eating junk food, and there is evidence that Prochlorococcus will die after a week in regions where photosynthesis is not an option.
And yet, researchers including Daniel Sher of the University of Haifa, who is a co-author of the new study, have observed healthy populations of Prochlorococcus that persist deep in the sunlit zone, where the light intensity should be too low to maintain a population. This suggests that the microbes must be switching to a non-photosynthesizing, mixotrophic lifestyle in order to consume other organic sources of carbon.
“It seems that at least some Prochlorococcus are using existing organic carbon in a mixotrophic way,” Follows says. “That stimulated the question: How much?”
What light cannot explain
In their new paper, Follows, Wu, Sher, and their colleagues looked to quantify the amount of carbon that Prochlorococcus is consuming through processes other than photosynthesis.
The team looked first to measurements taken by Sher’s team, which previously took ocean samples at various depths in the Mediterranean Sea and measured the concentration of phytoplankton, including Prochlorococcus, along with the associated intensity of light and the concentration of nitrogen — an essential nutrient that is richly available in deeper layers of the ocean and that plankton can assimilate to make proteins.
Wu and Follows used this data, and similar information from the Pacific Ocean, along with previous work from Chisholm’s lab, which established the rate of photosynthesis that Prochlorococcus could carry out in a given intensity of light.
“We converted that light intensity profile into a potential growth rate — how fast the population of Prochlorococcus could grow if it was acquiring all its carbon by photosynthesis, and light is the limiting factor,” Follows explains.
The team then compared this calculated rate to growth rates that were previously observed in the Pacific Ocean by several other research teams.
“This data showed that, below a certain depth, there’s a lot of growth happening that photosynthesis simply cannot explain,” Follows says. “Some other process must be at work to make up the difference in carbon supply.”
The researchers inferred that, in deeper, darker regions of the ocean, Prochlorococcus populations are able to survive and thrive by resorting to mixotrophy, including consuming organic carbon from detritus. Specifically, the microbe may be carrying out osmotrophy — a process by which an organism passively absorbs organic carbon molecules via osmosis.
Judging by how fast the microbe is estimated to be growing below the sunlit zone, the team calculates that Prochlorococcus obtains up to one-third of its carbon diet through mixotrophic strategies.
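To make the arithmetic behind this kind of estimate concrete, here is a minimal sketch of the gap calculation — comparing light-limited potential growth against observed growth at depth. All numbers are made up for illustration; this is not the study’s actual model or data.

```python
import numpy as np

# Hypothetical observed specific growth rates at increasing depth (per day)
observed_growth = np.array([0.45, 0.40, 0.35, 0.30])

# Hypothetical potential growth rates if photosynthesis alone supplied
# carbon, computed from measured light intensity at the same depths
photo_growth = np.array([0.45, 0.35, 0.25, 0.20])

# Any excess must be supplied by other carbon sources (mixotrophy)
excess = np.clip(observed_growth - photo_growth, 0.0, None)
mixotrophic_fraction = excess / observed_growth

for depth_index, frac in enumerate(mixotrophic_fraction):
    print(f"depth level {depth_index}: {frac:.0%} of growth "
          "unexplained by photosynthesis")
```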
“It’s kind of like going from a specialist to a generalist lifestyle,” Follows says. “If I only eat pizza, then if I’m 20 miles from a pizza place, I’m in trouble, whereas if I eat burgers as well, I could go to the nearby McDonald’s. People had thought of Prochlorococcus as a specialist, where they do this one thing (photosynthesis) really well. But it turns out they may have more of a generalist lifestyle than we previously thought.”
Chisholm, who has both literally and figuratively written the book on Prochlorococcus, says the group’s findings “expand the range of conditions under which their populations can not only survive, but also thrive. This study changes the way we think about the role of Prochlorococcus in the microbial food web.”
This research was supported, in part, by the Israel Science Foundation, the U.S. National Science Foundation, and the Simons Foundation.
The device could help scientists explore unknown regions of the ocean, track pollution, or monitor the effects of climate change.
Read more about this article :
Scientists estimate that more than 95 percent of Earth’s oceans have never been observed, which means we have seen less of our planet’s ocean than we have the far side of the moon or the surface of Mars.
The high cost of powering an underwater camera for a long time, by tethering it to a research vessel or sending a ship to recharge its batteries, is a steep challenge preventing widespread undersea exploration.
MIT researchers have taken a major step to overcome this problem by developing a battery-free, wireless underwater camera that is about 100,000 times more energy-efficient than other undersea cameras. The device takes color photos, even in dark underwater environments, and transmits image data wirelessly through the water.
The autonomous camera is powered by sound. It converts mechanical energy from sound waves traveling through water into electrical energy that powers its imaging and communications equipment. After capturing and encoding image data, the camera also uses sound waves to transmit data to a receiver that reconstructs the image.
Because it doesn’t need a power source, the camera could run for weeks on end before retrieval, enabling scientists to search remote parts of the ocean for new species. It could also be used to capture images of ocean pollution or monitor the health and growth of fish raised in aquaculture farms.
“One of the most exciting applications of this camera for me personally is in the context of climate monitoring. We are building climate models, but we are missing data from over 95 percent of the ocean. This technology could help us build more accurate climate models and better understand how climate change impacts the underwater world,” says Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science and director of the Signal Kinetics group in the MIT Media Lab, and senior author of a new paper on the system.
Joining Adib on the paper are co-lead authors and Signal Kinetics group research assistants Sayed Saad Afzal, Waleed Akbar, and Osvy Rodriguez, as well as research scientist Unsoo Ha, and former group researchers Mario Doumet and Reza Ghaffarivardavagh. The paper is published today in Nature Communications.
Going battery-free
To build a camera that could operate autonomously for long periods, the researchers needed a device that could harvest energy underwater on its own while consuming very little power.
The camera acquires energy using transducers made from piezoelectric materials that are placed around its exterior. Piezoelectric materials produce an electric signal when a mechanical force is applied to them. When a sound wave traveling through the water hits the transducers, they vibrate and convert that mechanical energy into electrical energy.
Those sound waves could come from any source, like a passing ship or marine life. The camera stores harvested energy until it has built up enough to power the electronics that take photos and communicate data.
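The harvest-store-burst cycle described here can be sketched in a few lines. The numbers below are hypothetical placeholders, not the camera’s real power budget.

```python
# Hypothetical figures for illustration only
harvest_rate_uW = 2.0      # average power trickling in from the transducers
burst_cost_uJ = 50_000.0   # energy needed for one photo-and-transmit burst
stored_uJ = 0.0
elapsed_s = 0

# Accumulate harvested energy until one imaging burst is affordable
while stored_uJ < burst_cost_uJ:
    stored_uJ += harvest_rate_uW  # 1 uW sustained for 1 s deposits 1 uJ
    elapsed_s += 1

print(f"One capture possible every ~{elapsed_s / 3600:.1f} hours")
```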
To keep power consumption as low as possible, the researchers used off-the-shelf, ultra-low-power imaging sensors. But these sensors only capture grayscale images. And since most underwater environments lack a light source, they needed to develop a low-power flash, too.
“We were trying to minimize the hardware as much as possible, and that creates new constraints on how to build the system, send information, and perform image reconstruction. It took a fair amount of creativity to figure out how to do this,” Adib says.
They solved both problems simultaneously using red, green, and blue LEDs. When the camera captures an image, it shines a red LED and then uses image sensors to take the photo. It repeats the same process with green and blue LEDs.
Even though the image looks black and white, the red, green, and blue colored light is reflected in the white part of each photo, Akbar explains. When the image data are combined in post-processing, the color image can be reconstructed.
“When we were kids in art class, we were taught that we could make all colors using three basic colors. The same rules follow for color images we see on our computers. We just need red, green, and blue — these three channels — to construct color images,” he says.
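In post-processing, the reconstruction amounts to stacking the three single-LED grayscale exposures into the channels of one RGB image. A minimal sketch, with hypothetical frames:

```python
import numpy as np

def reconstruct_color(red_frame, green_frame, blue_frame):
    """Stack three grayscale exposures, each lit by one LED color,
    into a single RGB image. Frames are 2-D arrays of equal shape."""
    return np.stack([red_frame, green_frame, blue_frame], axis=-1)

# Hypothetical 4x4 grayscale captures (0-255) under each LED
rng = np.random.default_rng(0)
r, g, b = (rng.integers(0, 256, (4, 4), dtype=np.uint8) for _ in range(3))

color_image = reconstruct_color(r, g, b)
print(color_image.shape)  # (4, 4, 3)
```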
Sending data with sound
Once image data are captured, they are encoded as bits (1s and 0s) and sent to a receiver one bit at a time using a process called underwater backscatter. The receiver transmits sound waves through the water to the camera, which acts as a mirror to reflect those waves. The camera either reflects a wave back to the receiver or changes its mirror to an absorber so that it does not reflect back.
A hydrophone next to the transmitter senses if a signal is reflected back from the camera. If it receives a signal, that is a bit-1, and if there is no signal, that is a bit-0. The system uses this binary information to reconstruct and post-process the image.
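Decoding on the receiver side therefore reduces to thresholding the energy detected in each symbol slot. A minimal sketch, with hypothetical values:

```python
import numpy as np

def decode_backscatter(slot_energies, threshold):
    """Map each symbol slot's received energy to a bit: a reflection
    (energy above threshold) decodes as 1, no reflection as 0."""
    return [1 if e > threshold else 0 for e in slot_energies]

# Hypothetical per-slot energies measured by the hydrophone
slots = np.array([0.9, 0.1, 0.8, 0.85, 0.05, 0.12, 0.95, 0.07])
bits = decode_backscatter(slots, threshold=0.5)
print(bits)  # [1, 0, 1, 1, 0, 0, 1, 0]
```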
“This whole process, since it just requires a single switch to convert the device from a nonreflective state to a reflective state, consumes five orders of magnitude less power than typical underwater communications systems,” Afzal says.
The researchers tested the camera in several underwater environments. In one, they captured color images of plastic bottles floating in a New Hampshire pond. They were also able to take such high-quality photos of an African starfish that tiny tubercles along its arms were clearly visible. The device was also effective at repeatedly imaging the underwater plant Aponogeton ulvaceus in a dark environment over the course of a week to monitor its growth.
Now that they have demonstrated a working prototype, the researchers plan to enhance the device so it is practical for deployment in real-world settings. They want to increase the camera’s memory so it could capture photos in real time, stream images, or even shoot underwater video.
They also want to extend the camera’s range. They successfully transmitted data 40 meters from the receiver, but pushing that range wider would enable the camera to be used in more underwater settings.
“This will open up great opportunities for research both in low-power IoT devices as well as underwater monitoring and research,” says Haitham Al-Hassanieh, an assistant professor of electrical and computer engineering at the University of Illinois Urbana-Champaign, who was not involved with this research.
This research is supported, in part, by the Office of Naval Research, the Sloan Research Fellowship, the National Science Foundation, the MIT Media Lab, and the Doherty Chair in Ocean Utilization.
A new field study reveals a previously unobserved fluid dynamic process that is key to assessing impact of deep-sea mining operations.
Read more about this article :
What will be the impact to the ocean if humans are to mine the deep sea? It’s a question that’s gaining urgency as interest in marine minerals has grown.
The ocean’s deep-sea bed is scattered with ancient, potato-sized rocks called “polymetallic nodules” that contain nickel and cobalt — minerals that are in high demand for the manufacturing of batteries, such as for powering electric vehicles and storing renewable energy, and in response to factors such as increasing urbanization. The deep ocean contains vast quantities of mineral-laden nodules, but the impact of mining the ocean floor is both unknown and highly contested.
Now MIT ocean scientists have shed some light on the topic, with a new study on the cloud of sediment that a collector vehicle would stir up as it picks up nodules from the seafloor.
The study, appearing today in Science Advances, reports the results of a 2021 research cruise to a region of the Pacific Ocean known as the Clarion Clipperton Zone (CCZ), where polymetallic nodules abound. There, researchers equipped a pre-prototype collector vehicle with instruments to monitor sediment plume disturbances as the vehicle maneuvered across the seafloor, 4,500 meters below the ocean’s surface. Through a sequence of carefully conceived maneuvers, the MIT scientists used the vehicle to monitor its own sediment cloud and measure its properties.
Their measurements showed that the vehicle created a dense plume of sediment in its wake, which spread under its own weight, in a phenomenon known in fluid dynamics as a “turbidity current.” As it gradually dispersed, the plume remained relatively low, staying within 2 meters of the seafloor, as opposed to immediately lofting higher into the water column as had been postulated.
“It’s quite a different picture of what these plumes look like, compared to some of the conjecture,” says study co-author Thomas Peacock, professor of mechanical engineering at MIT. “Modeling efforts of deep-sea mining plumes will have to account for these processes that we identified, in order to assess their extent.”
The study’s co-authors include lead author Carlos Muñoz-Royo, Raphael Ouillon, and Souha El Mousadik of MIT; and Matthew Alford of the Scripps Institution of Oceanography.
Deep-sea maneuvers
To collect polymetallic nodules, some mining companies are proposing to deploy tractor-sized vehicles to the bottom of the ocean. The vehicles would vacuum up the nodules along with some sediment along their path. The nodules and sediment would then be separated inside of the vehicle, with the nodules sent up through a riser pipe to a surface vessel, while most of the sediment would be discharged immediately behind the vehicle.
Peacock and his group have previously studied the dynamics of the sediment plume that associated surface operation vessels may pump back into the ocean. In their current study, they focused on the opposite end of the operation, to measure the sediment cloud created by the collectors themselves.
In April 2021, the team joined an expedition led by Global Sea Mineral Resources NV (GSR), a Belgian marine engineering contractor that is exploring the CCZ for ways to extract metal-rich nodules. A European-based science team, Mining Impacts 2, also conducted separate studies in parallel. The cruise was the first in over 40 years to test a “pre-prototype” collector vehicle in the CCZ. The machine, called Patania II, stands about 3 meters high, spans 4 meters wide, and is about one-third the size of what a commercial-scale vehicle is expected to be.
While the contractor tested the vehicle’s nodule-collecting performance, the MIT scientists monitored the sediment cloud created in the vehicle’s wake. They did so using two maneuvers that the vehicle was programmed to take: a “selfie,” and a “drive-by.”
Both maneuvers began in the same way, with the vehicle setting out in a straight line, all its suction systems turned on. The researchers let the vehicle drive along for 100 meters, collecting any nodules in its path. Then, in the “selfie” maneuver, they directed the vehicle to turn off its suction systems and double back around to drive through the cloud of sediment it had just created. The vehicle’s installed sensors measured the concentration of sediment during this “selfie” maneuver, allowing the scientists to monitor the cloud within minutes of the vehicle stirring it up.
For the “drive-by” maneuver, the researchers placed a sensor-laden mooring 50 to 100 meters from the vehicle’s planned tracks. As the vehicle drove along collecting nodules, it created a plume that eventually spread past the mooring after an hour or two. This “drive-by” maneuver enabled the team to monitor the sediment cloud over a longer timescale of several hours, capturing the plume evolution.
Out of steam
Over multiple vehicle runs, Peacock and his team were able to measure and track the evolution of the sediment plume created by the deep-sea-mining vehicle.
“We saw that the vehicle would be driving in clear water, seeing the nodules on the seabed,” Peacock says. “And then suddenly there’s this very sharp sediment cloud coming through when the vehicle enters the plume.”
From the selfie views, the team observed a behavior that was predicted by some of their previous modeling studies: The vehicle stirred up a heavy amount of sediment that was dense enough that, even after some mixing with the surrounding water, it generated a plume that behaved almost as a separate fluid, spreading under its own weight in what’s known as a turbidity current.
“The turbidity current spreads under its own weight for some time, tens of minutes, but as it does so, it’s depositing sediment on the seabed and eventually running out of steam,” Peacock says. “After that, the ocean currents get stronger than the natural spreading, and the sediment transitions to being carried by the ocean currents.”
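For readers unfamiliar with turbidity currents, a textbook gravity-current scaling gives a feel for the speeds involved: a dense layer of thickness h and excess density Δρ spreads with front speed u ≈ Fr·√(g′h), where g′ = gΔρ/ρ is the reduced gravity. The sketch below uses illustrative numbers and is not the team’s model:

```python
import math

g = 9.81        # gravitational acceleration, m/s^2
rho = 1025.0    # ambient seawater density, kg/m^3
d_rho = 2.0     # hypothetical excess density of the sediment-laden plume
h = 2.0         # plume thickness, m (the plume stayed within ~2 m of the bed)
Fr = 1.0        # order-one Froude number for the current's front

g_prime = g * d_rho / rho          # reduced gravity
u_front = Fr * math.sqrt(g_prime * h)
print(f"Front speed ~ {u_front:.2f} m/s")  # ~0.20 m/s for these numbers
```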
By the time the sediment drifted past the mooring, the researchers estimate that 92 to 98 percent of the sediment either settled back down or remained within 2 meters of the seafloor as a low-lying cloud. There is, however, no guarantee that the sediment always stays there rather than drifting further up in the water column. Recent and future studies by the research team are looking into this question, with the goal of consolidating understanding for deep-sea mining sediment plumes.
“Our study clarifies the reality of what the initial sediment disturbance looks like when you have a certain type of nodule mining operation,” Peacock says. “The big takeaway is that there are complex processes like turbidity currents that take place when you do this kind of collection. So, any effort to model a deep-sea-mining operation’s impact will have to capture these processes.”
“Sediment plumes produced by deep-seabed mining are a major concern with regards to environmental impact, as they will spread over potentially large areas beyond the actual site of mining and affect deep-sea life,” says Henko de Stigter, a marine geologist at the Royal Netherlands Institute for Sea Research, who was not involved in the research. “The current paper provides essential insight in the initial development of these plumes.”
This research was supported, in part, by the National Science Foundation, ARPA-E, the 11th Hour Project, the Benioff Ocean Initiative, and Global Sea Mineral Resources. The funders had no role in any aspects of the research analysis, the research team states.
Author : Department of Civil and Environmental Engineering
Researchers reveal how an algae-eating bacterium solves an environmental engineering challenge.
Read more about this article :
Cooperation is a core part of life for many organisms, ranging from microbes to complex multicellular life. It emerges when individuals share resources or partition a task in such a way that each derives a greater benefit when acting together than they could on their own. For example, birds and fish flock to evade predators, slime mold swarms to hunt for food and reproduce, and bacteria form biofilms to resist stress.
Individuals must live in the same “neighborhood” to cooperate. For bacteria, this neighborhood can be as small as tens of microns. But in environments like the ocean, it’s rare for cells with the same genetic makeup to co-occur in the same neighborhood on their own. And this necessity poses a puzzle to scientists: In environments where survival hinges on cooperation, how do bacteria build their neighborhood?
To study this problem, MIT professor Otto X. Cordero and colleagues took inspiration from nature: They developed a model system around a common coastal seawater bacterium that requires cooperation to eat sugars from brown algae. In the system, single cells were initially suspended in seawater too far away from other cells to cooperate. To share resources and grow, the cells had to find a mechanism of creating a neighborhood. “Surprisingly, each cell was able to divide and create its own neighborhood of clones by forming tightly packed clusters,” says Cordero, associate professor in the Department of Civil and Environmental Engineering.
A new paper, published today in Current Biology, demonstrates how an algae-eating bacterium solves the engineering challenge of creating local cell density starting from a single-celled state.
“A key discovery was the importance of phenotypic heterogeneity in supporting this surprising mechanism of clonal cooperation,” says Cordero, lead author of the new paper.
Using a combination of microscopy, transcriptomics, and labeling experiments to profile a cellular metabolic state, the researchers found that cells phenotypically differentiate into a sticky “shell” population and a motile, carbon-storing “core.” The researchers propose that shell cells create the cellular neighborhood needed to sustain cooperation while core cells accumulate stores of carbon that support further clonal reproduction when the multicellular structure ruptures.
This work addresses a key piece in the bigger challenge of understanding the bacterial processes that shape our earth, such as the cycling of carbon from dead organic matter back into food webs and the atmosphere. “Bacteria are fundamentally single cells, but often what they accomplish in nature is done through cooperation. We have much to uncover about what bacteria can accomplish together and how that differs from their capacity as individuals,” adds Cordero.
Co-authors include Julia Schwartzman and Ali Ebrahimi, former postdocs in the Cordero Lab. Other co-authors are Gray Chadwick, a former graduate student at Caltech; Yuya Sato, a senior researcher at Japan’s National Institute of Advanced Industrial Science and Technology; Benjamin Roller, a current postdoc at the University of Vienna; and Victoria Orphan of Caltech.
Funding was provided by the Simons Foundation. Individual authors received support from the Swiss National Science Foundation, Japan Society for the Promotion of Science, the U.S. National Science Foundation, the Kavli Institute of Theoretical Physics, and the National Institutes of Health.
Ed Boyle to step down as director; Mick Follows will take over the directorship in July.
Read more about this article :
After 13 years as director of the MIT-Woods Hole Oceanographic Institution (WHOI) Joint Program in Oceanography/Applied Ocean Science and Engineering, Ed Boyle, professor of ocean geochemistry in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS), is stepping down at the end of June. Professor Mick Follows, who holds joint appointments in EAPS and the Department of Civil and Environmental Engineering, will take on the directorship beginning July 1.
The leadership succession was announced by MIT Vice President for Research Maria Zuber in an email to the MIT-WHOI Joint Program community.
In her letter, Zuber noted that, “under Ed’s leadership, the Joint Program has continued to be recognized as one of the world’s premier graduate programs in oceanography, a national and global asset to education and research in ocean science. Ed’s positive impact on the program will benefit students, faculty, and staff for years to come.”
Boyle received his PhD in oceanography from the MIT-WHOI Joint Program in 1976 and joined the MIT faculty the following year. As a marine geochemist, his research focuses on the oceanic dispersal of anthropogenic emissions and the evolution of the Earth’s climate. Boyle is a member of the National Academy of Sciences and a recipient of the Urey Medal of the European Association of Geochemistry. He assumed the role of director of the Joint Program in 2009.
Follows, who joined the MIT faculty in 2013, has been closely involved with the MIT-WHOI Joint Program for many years, advising students and contributing to program development. In addition to his new position as the program’s director, Follows is lead investigator for both the MIT Darwin Project and the Simons Collaboration on Computational Biogeochemical Modeling of Marine Ecosystems, where he studies the biogeochemical cycles of carbon and nutrients in the ocean.
Follows “is fully invested in the program’s ongoing success, and will make an excellent director,” Zuber wrote in her email.
A distributed sensor network may help researchers identify the physical processes contributing to diminishing sea ice in the planet’s fastest-warming region.
Read more about this article :
Despite its below-freezing temperatures, the Arctic is warming twice as fast as the rest of the planet. As Arctic sea ice melts, fewer bright surfaces are available to reflect sunlight back into space. When fractures open in the ice cover, the water underneath gets exposed. Dark, ice-free water absorbs the sun’s energy, heating the ocean and driving further melting — a vicious cycle. This warming in turn melts glacial ice, contributing to rising sea levels.
Warming climate and rising sea levels endanger the nearly 40 percent of the U.S. population living in coastal areas, the billions of people who depend on the ocean for food and their livelihoods, and species such as polar bears and Arctic foxes. Reduced ice coverage is also making the once-impassable region more accessible, opening up new shipping lanes and ports. Interest in using these emerging trans-Arctic routes for product transit, extraction of natural resources (e.g., oil and gas), and military activity is turning an area traditionally marked by low tension and cooperation into one of global geopolitical competition.
As the Arctic opens up, predicting when and where the sea ice will fracture becomes increasingly important in strategic decision-making. However, huge gaps exist in our understanding of the physical processes contributing to ice breakup. Researchers at MIT Lincoln Laboratory seek to help close these gaps by turning a data-sparse environment into a data-rich one. They envision deploying a distributed set of unattended sensors across the Arctic that will persistently detect and geolocate ice fracturing events. Concurrently, the network will measure various environmental conditions, including water temperature and salinity, wind speed and direction, and ocean currents at different depths. By correlating these fracturing events and environmental conditions, they hope to discover meaningful insights about what is causing the sea ice to break up. Such insights could help predict the future state of Arctic sea ice to inform climate modeling, climate change planning, and policy decision-making at the highest levels.
“We’re trying to study the relationship between ice cracking, climate change, and heat flow in the ocean,” says Andrew March, an assistant leader of Lincoln Laboratory’s Advanced Undersea Systems and Technology Group. “Do cracks in the ice cause warm water to rise and more ice to melt? Do undersea currents and waves cause cracking? Does cracking cause undersea waves? These are the types of questions we aim to investigate.”
Arctic access
In March 2022, Ben Evans and Dave Whelihan, both researchers in March’s group, traveled for 16 hours across three flights to Prudhoe Bay, located on the North Slope of Alaska. From there, they boarded a small specialized aircraft and flew another 90 minutes to a three-and-a-half-mile-long sheet of ice floating 160 nautical miles offshore in the Arctic Ocean. In the weeks before their arrival, the U.S. Navy’s Arctic Submarine Laboratory had transformed this inhospitable ice floe into a temporary operating base called Ice Camp Queenfish, named after the first Sturgeon-class submarine to operate under the ice and the fourth to reach the North Pole. The ice camp featured a 2,500-foot-long runway, a command center, sleeping quarters to accommodate up to 60 personnel, a dining tent, and an extremely limited internet connection.
At Queenfish, for the next four days, Evans and Whelihan joined U.S. Navy, Army, Air Force, Marine Corps, and Coast Guard members, and members of the Royal Canadian Air Force and Navy and United Kingdom Royal Navy, who were participating in Ice Exercise (ICEX) 2022. Over the course of about three weeks, more than 200 personnel stationed at Queenfish, Prudhoe Bay, and aboard two U.S. Navy submarines participated in this biennial exercise. The goals of ICEX 2022 were to assess U.S. operational readiness in the Arctic; increase our country’s experience in the region; advance our understanding of the Arctic environment; and continue building relationships with other services, allies, and partner organizations to ensure a free and peaceful Arctic. The infrastructure provided for ICEX concurrently enables scientists to conduct research in an environment — either in person or by sending their research equipment for exercise organizers to deploy on their behalf — that would be otherwise extremely difficult and expensive to access.
In the Arctic, windchill temperatures can plummet to as low as 60 degrees Fahrenheit below zero, cold enough to freeze exposed skin within minutes. Winds and ocean currents can cause the entire camp to drift beyond the reach of nearby emergency rescue aircraft, and the ice can crack at any moment. To ensure the safety of participants, a team of Navy meteorological specialists continually monitors the ever-changing conditions. The original camp location for ICEX 2022 had to be evacuated and relocated after a massive crack formed in the ice, delaying Evans’ and Whelihan’s trip. Even the newly selected site had a large crack form behind the camp and another crack that necessitated moving a number of tents.
“Such cracking events are only going to increase as the climate warms, so it’s more critical now than ever to understand the physical processes behind them,” Whelihan says. “Such an understanding will require building technology that can persist in the environment despite these incredibly harsh conditions. So, it’s a challenge not only from a scientific perspective but also an engineering one.”
“The weather always gets a vote, dictating what you’re able to do out here,” adds Evans. “The Arctic Submarine Laboratory does a lot of work to construct the camp and make it a safe environment where researchers like us can come to do good science. ICEX is really the only opportunity we have to go onto the sea ice in a place this remote to collect data.”
A legacy of sea ice experiments
Though this trip was Whelihan’s and Evans’ first to the Arctic region, staff from the laboratory’s Advanced Undersea Systems and Technology Group have been conducting experiments at ICEX since 2018. However, because of the Arctic’s remote location and extreme conditions, data collection has rarely been continuous over long periods of time or widespread across large areas. The team now hopes to change that by building low-cost, expendable sensing platforms consisting of co-located devices that can be left unattended for automated, persistent, near-real-time monitoring.
“The laboratory’s extensive expertise in rapid prototyping, seismo-acoustic signal processing, remote sensing, and oceanography make us a natural fit to build this sensor network,” says Evans.
In the months leading up to the Arctic trip, the team collected seismometer data at Firepond, part of the laboratory’s Haystack Observatory site in Westford, Massachusetts. Through this local data collection, they aimed to gain a sense of what anthropogenic (human-induced) noise would look like so they could begin to anticipate the kinds of signatures they might see in the Arctic. They also collected ice melting/fracturing data during a thaw cycle and correlated these data with the weather conditions (air temperature, humidity, and pressure). Through this analysis, they detected an increase in seismic signals as the temperature rose above 32 degrees Fahrenheit — an indication that air temperature and ice cracking may be related.
A sensing network
At ICEX, the team deployed various commercial off-the-shelf sensors and new sensors developed by the laboratory and University of New Hampshire (UNH) to assess their resiliency in the frigid environment and to collect an initial dataset.
“One aspect that differentiates these experiments from those of the past is that we concurrently collected seismo-acoustic data and environmental parameters,” says Evans.
The commercial technologies were seismometers to detect the vibrational energy released when sea ice fractures or collides with other ice floes; a hydrophone (underwater microphone) array to record the acoustic energy created by ice-fracturing events; a sound speed profiler to measure the speed of sound through the water column; and a conductivity, temperature, and depth (CTD) profiler to measure the salinity (related to conductivity), temperature, and pressure (related to depth) throughout the water column. The speed of sound in the ocean primarily depends on these three quantities.
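That dependence is why a CTD cast pins down the sound speed profile. One widely used empirical approximation — Medwin’s formula, stated here from standard underwater-acoustics references rather than from the team’s work — makes the relationship explicit:

```python
def sound_speed_medwin(T, S, z):
    """Approximate speed of sound in seawater (m/s) via Medwin's empirical
    formula: T in degrees C, S in practical salinity units, z in meters.
    Valid roughly for 0-35 C, 0-45 PSU, and depths under 1,000 m."""
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

# Near-freezing surface water vs. a warmer, saltier layer at depth
print(sound_speed_medwin(T=0.5, S=32.0, z=10.0))   # ~1448 m/s
print(sound_speed_medwin(T=4.0, S=34.5, z=40.0))   # ~1467 m/s
```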
To precisely measure the temperature across the entire water column at one location, they deployed an array of transistor-based temperature sensors developed by the laboratory’s Advanced Materials and Microsystems Group in collaboration with the Advanced Functional Fabrics of America Manufacturing Innovation Institute. The small temperature sensors run along the length of a thread-like polymer fiber embedded with multiple conductors. This fiber platform, which can support a broad range of sensors, can be unspooled hundreds of feet below the water’s surface to concurrently measure temperature or other water properties — the fiber deployed in the Arctic also contained accelerometers to measure depth — at many points in the water column. Traditionally, temperature profiling has required moving a device up and down through the water column.
The team also deployed a high-frequency echosounder supplied by Anthony Lyons and Larry Mayer, collaborators at UNH’s Center for Coastal and Ocean Mapping. This active sonar uses acoustic energy to detect internal waves, or waves occurring beneath the ocean’s surface.
“You may think of the ocean as a homogenous body of water, but it’s not,” Evans explains. “Different currents can exist as you go down in depth, much like how you can get different winds when you go up in altitude. The UNH echosounder allows us to see the different currents in the water column, as well as ice roughness when we turn the sensor to look upward.”
“The reason we care about currents is that we believe they will tell us something about how warmer water from the Atlantic Ocean is coming into contact with sea ice,” adds Whelihan. “Not only is that water melting ice but it also has lower salt content, resulting in oceanic layers and affecting how long ice lasts and where it lasts.”
Back home, the team has begun analyzing their data. For the seismic data, this analysis involves distinguishing any ice events from various sources of anthropogenic noise, including generators, snowmobiles, footsteps, and aircraft. Similarly, the researchers know their hydrophone array acoustic data are contaminated by energy from a sound source that another research team participating in ICEX placed in the water. Based on their physics, icequakes — the seismic events that occur when ice cracks — have characteristic signatures that can be used to identify them. One approach is to manually find an icequake and use that signature as a guide for finding other icequakes in the dataset.
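That template-matching approach is essentially a sliding normalized cross-correlation. A minimal sketch with synthetic data — not the laboratory’s actual pipeline:

```python
import numpy as np

def match_template(trace, template):
    """Slide a known icequake waveform along a seismic trace and return
    the normalized cross-correlation at each offset. Scores near 1
    indicate a likely repeat of the template event."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    scores = []
    for i in range(len(trace) - n + 1):
        w = trace[i:i + n]
        std = w.std()
        scores.append(0.0 if std == 0 else float(np.dot(t, (w - w.mean()) / std)))
    return np.array(scores)

# Synthetic data: a tapered wavelet buried in noise at offset 200
rng = np.random.default_rng(1)
template = np.sin(np.linspace(0, 12 * np.pi, 100)) * np.hanning(100)
trace = rng.normal(0, 0.2, 1000)
trace[200:300] += template

print(int(np.argmax(match_template(trace, template))))  # ~200
```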
From their water column profiling sensors, they identified an interesting evolution in the sound speed profile 30 to 40 meters below the ocean surface, related to a mass of colder water moving in later in the day. The group’s physical oceanographer believes this change in the profile is due to water coming up from the Bering Sea, water that initially comes from the Atlantic Ocean. The UNH-supplied echosounder also generated an interesting signal at a similar depth.
“Our supposition is that this result has something to do with the large sound speed variation we detected, either directly because of reflections off that layer or because of plankton, which tend to rise on top of that layer,” explains Evans.
A future predictive capability
Going forward, the team will continue mining their collected data and use these data to begin building algorithms capable of automatically detecting and localizing — and ultimately predicting — ice events correlated with changes in environmental conditions. To complement their experimental data, they have initiated conversations with organizations that model the physical behavior of sea ice, including the National Oceanic and Atmospheric Administration and the National Ice Center. Merging the laboratory’s expertise in sensor design and signal processing with their expertise in ice physics would provide a more complete understanding of how the Arctic is changing.
The laboratory team will also start exploring cost-effective engineering approaches for integrating the sensors into packages hardened for deployment in the harsh environment of the Arctic.
“Until these sensors are truly unattended, the human factor of usability is front and center,” says Whelihan. “Because it’s so cold, equipment can break accidentally. For example, at ICEX 2022, our waterproof enclosure for the seismometers survived, but the enclosure for its power supply, which was made out of a cheaper plastic, shattered in my hand when I went to pick it up.”
The sensor packages will not only need to withstand the frigid environment but also be able to “phone home” over some sort of satellite data link and sustain their power. The team plans to investigate whether waste heat from processing can keep the instruments warm and how energy could be harvested from the Arctic environment.
Before the next ICEX scheduled for 2024, they hope to perform preliminary testing of their sensor packages and concepts in Arctic-like environments. While attending ICEX 2022, they engaged with several other attendees — including the U.S. Navy, Arctic Submarine Laboratory, National Ice Center, and University of Alaska Fairbanks (UAF) — and identified cold room experimentation as one area of potential collaboration. Testing can also be performed at outdoor locations a bit closer to home and more easily accessible, such as the Great Lakes in Michigan and a UAF-maintained site in Utqiagvik (formerly named Barrow), Alaska. In the future, the laboratory team may have an opportunity to accompany U.S. Coast Guard personnel on ice-breaking vessels traveling from Alaska to Greenland. The team is also thinking about possible venues for collecting data far removed from human noise sources.
“Since I’ve told colleagues, friends, and family I was going to the Arctic, I’ve had a lot of interesting conversations about climate change and what we’re doing there and why we’re doing it,” Whelihan says. “People don’t have an intrinsic, automatic understanding of this environment and its impact because it’s so far removed from us. But the Arctic plays a crucial role in helping to keep the global climate in balance, so it’s imperative we understand the processes leading to sea ice fractures.”
This work is funded through Lincoln Laboratory’s internally administered R&D portfolio on climate.
Their model’s predictions should help researchers improve ocean climate simulations and hone the design of offshore structures.
Read more about this article :
Waves break once they swell to a critical height, before cresting and crashing into a spray of droplets and bubbles. These waves can be as large as a surfer’s point break and as small as a gentle ripple rolling to shore. For decades, the dynamics of how and when a wave breaks have been too complex to predict.
Now, MIT engineers have found a new way to model how waves break. The team used machine learning along with data from wave-tank experiments to tweak equations that have traditionally been used to predict wave behavior. Engineers typically rely on such equations to help them design resilient offshore platforms and structures. But until now, the equations have not been able to capture the complexity of breaking waves.
The updated model made more accurate predictions of how and when waves break, the researchers found. For instance, the model estimated a wave’s steepness just before breaking, and its energy and frequency after breaking, more accurately than the conventional wave equations.
Their results, published today in the journal Nature Communications, will help scientists understand how a breaking wave affects the water around it. Knowing precisely how these waves interact can help hone the design of offshore structures. It can also improve predictions for how the ocean interacts with the atmosphere. Having better estimates of how waves break can help scientists predict, for instance, how much carbon dioxide and other atmospheric gases the ocean can absorb.
“Wave breaking is what puts air into the ocean,” says study author Themis Sapsis, an associate professor of mechanical and ocean engineering and an affiliate of the Institute for Data, Systems, and Society at MIT. “It may sound like a detail, but if you multiply its effect over the area of the entire ocean, wave breaking starts becoming fundamentally important to climate prediction.”
The study’s co-authors include lead author and MIT postdoc Debbie Eeltink, Hubert Branger and Christopher Luneau of Aix-Marseille University, Amin Chabchoub of Kyoto University, Jerome Kasparian of the University of Geneva, and T.S. van den Bremer of Delft University of Technology.
Learning tank
To predict the dynamics of a breaking wave, scientists typically take one of two approaches: They either attempt to precisely simulate the wave at the scale of individual molecules of water and air, or they run experiments to try to characterize waves with actual measurements. The first approach is computationally expensive and difficult to simulate even over a small area; the second requires a huge amount of time to run enough experiments to yield statistically significant results.
The MIT team instead borrowed pieces from both approaches to develop a more efficient and accurate model using machine learning. The researchers started with a set of equations that is considered the standard description of wave behavior. They aimed to improve the model by “training” it on data of breaking waves from actual experiments.
“We had a simple model that doesn’t capture wave breaking, and then we had the truth, meaning experiments that involve wave breaking,” Eeltink explains. “Then we wanted to use machine learning to learn the difference between the two.”
The researchers obtained wave breaking data by running experiments in a 40-meter-long tank. The tank was fitted at one end with a paddle which the team used to initiate each wave. The team set the paddle to produce a breaking wave in the middle of the tank. Gauges along the length of the tank measured the water’s height as waves propagated down the tank.
“It takes a lot of time to run these experiments,” Eeltink says. “Between each experiment you have to wait for the water to completely calm down before you launch the next experiment, otherwise they influence each other.”
Safe harbor
In all, the team ran about 250 experiments, the data from which they used to train a type of machine-learning algorithm known as a neural network. Specifically, the algorithm is trained to compare the real waves in experiments with the predicted waves in the simple model, and based on any differences between the two, the algorithm tunes the model to fit reality.
After training the algorithm on their experimental data, the team introduced the model to entirely new data — in this case, measurements from two independent experiments, each run at separate wave tanks with different dimensions. In these tests, they found the updated model made more accurate predictions than the simple, untrained model, for instance making better estimates of a breaking wave’s steepness.
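The training loop can be thought of as residual learning: the network predicts the gap between the simple model and the measurements, and the corrected model adds that gap back. A schematic sketch on synthetic data — the team’s actual network and wave features differ:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (250, 3))              # stand-ins for wave features
truth = X[:, 0] + 0.5 * np.sin(6 * X[:, 1])  # "measured" behavior
simple_model = X[:, 0]                       # model missing the breaking term

# Train a small network on the residual (measurement minus simple model)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(X, truth - simple_model)

corrected = simple_model + net.predict(X)
print(f"RMS error before: {np.sqrt(np.mean((truth - simple_model)**2)):.3f}")
print(f"RMS error after:  {np.sqrt(np.mean((truth - corrected)**2)):.3f}")
```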
The new model also captured an essential property of breaking waves known as the “downshift,” in which the frequency of a wave is shifted to a lower value. The speed of a wave depends on its frequency. For ocean waves, lower frequencies move faster than higher frequencies. Therefore, after the downshift, the wave will move faster. The new model predicts the change in frequency, before and after each breaking wave, which could be especially relevant in preparing for coastal storms.
“When you want to forecast when high waves of a swell would reach a harbor, and you want to leave the harbor before those waves arrive, then if you get the wave frequency wrong, then the speed at which the waves are approaching is wrong,” Eeltink says.
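The physics behind this is the standard deep-water dispersion relation, under which phase speed is inversely proportional to frequency, c = g/(2πf); a downshifted wave therefore travels faster:

```python
import math

def deep_water_phase_speed(freq_hz):
    """Phase speed (m/s) of a deep-water gravity wave from the standard
    dispersion relation c = g / omega = g / (2 * pi * f)."""
    g = 9.81
    return g / (2 * math.pi * freq_hz)

# A breaking-induced downshift from 0.10 Hz to 0.09 Hz speeds the wave up
print(deep_water_phase_speed(0.10))  # ~15.6 m/s
print(deep_water_phase_speed(0.09))  # ~17.3 m/s
```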
The team’s updated wave model is in the form of an open-source code that others could potentially use, for instance in climate simulations of the ocean’s potential to absorb carbon dioxide and other atmospheric gases. The code can also be worked into simulated tests of offshore platforms and coastal structures.
“The number one purpose of this model is to predict what a wave will do,” Sapsis says. “If you don’t model wave breaking right, it would have tremendous implications for how structures behave. With this, you could simulate waves to help design structures better, more efficiently, and without huge safety factors.”
This research is supported, in part, by the Swiss National Science Foundation, and by the U.S. Office of Naval Research.
MIT scientists hope to deploy a fleet of drones to get a better sense of how much carbon the ocean is absorbing, and how much more it can take.
Read more about this article :
Without the ocean, the climate crisis would be even worse than it is. Each year, the ocean absorbs billions of tons of carbon from the atmosphere, preventing warming that greenhouse gas would otherwise cause. Scientists estimate about 25 to 30 percent of all carbon released into the atmosphere by both human and natural sources is absorbed by the ocean.
“But there’s a lot of uncertainty in that number,” says Ryan Woosley, a marine chemist and a principal research scientist in the Department of Earth, Atmospheric and Planetary Sciences (EAPS) at MIT. Different parts of the ocean take in different amounts of carbon depending on many factors, such as the season and the amount of mixing from storms. Current models of the carbon cycle don’t adequately capture this variation.
To close the gap, Woosley and a team of other MIT scientists developed a research proposal for the MIT Climate Grand Challenges competition — an Institute-wide campaign to catalyze and fund innovative research addressing the climate crisis. The team's proposal, “Ocean Vital Signs,” involves sending a fleet of sailing drones to cruise the oceans taking detailed measurements of how much carbon the ocean is really absorbing. Those data would be used to improve the precision of global carbon cycle models and improve researchers’ ability to verify emissions reductions claimed by countries.
“If we start to enact mitigation strategies — either through removing CO2 from the atmosphere or reducing emissions — we need to know where CO2 is going in order to know how effective they are,” says Woosley. Without more precise models, there’s no way to confirm whether observed carbon reductions were thanks to policy and people, or thanks to the ocean.
“So that’s the trillion-dollar question,” says Woosley. “If countries are spending all this money to reduce emissions, is it enough to matter?”
In February, the team’s Climate Grand Challenges proposal was named one of 27 finalists out of the almost 100 entries submitted. From among this list of finalists, MIT will announce in April the selection of five flagship projects to receive further funding and support.
Woosley is leading the team along with Christopher Hill, a principal research engineer in EAPS. The team includes physical and chemical oceanographers, marine microbiologists, biogeochemists, and experts in computational modeling from across the department, in addition to collaborators from the Media Lab and the departments of Mathematics, Aeronautics and Astronautics, and Electrical Engineering and Computer Science.
Today, data on the flux of carbon dioxide between the air and the oceans are collected in a piecemeal way. Research ships intermittently cruise out to gather data. Some commercial ships are also fitted with sensors. But these present a limited view of the entire ocean, and include biases. For instance, commercial ships usually avoid storms, which can increase the turnover of water exposed to the atmosphere and cause a substantial increase in the amount of carbon absorbed by the ocean.
“It’s very difficult for us to get to it and measure that,” says Woosley. “But these drones can.”
If funded, the team’s project would begin by deploying a few drones in a small area to test the technology. The wind-powered drones — made by a California-based company called Saildrone — would autonomously navigate through an area, collecting data on air-sea carbon dioxide flux continuously with solar-powered sensors. This would then scale up to more than 5,000 drone-days’ worth of observations, spread over five years and across all five ocean basins.
Those data would be used to feed neural networks to create more precise maps of how much carbon is absorbed by the oceans, shrinking the uncertainties involved in the models. These models would continue to be verified and improved by new data. “The better the models are, the more we can rely on them,” says Woosley. “But we will always need measurements to verify the models.”
Improved carbon cycle models are relevant beyond climate warming as well. “CO2 is involved in so much of how the world works,” says Woosley. “We're made of carbon, and all the other organisms and ecosystems are as well. What does the perturbation to the carbon cycle do to these ecosystems?”
One of the best understood impacts is ocean acidification. Carbon absorbed by the ocean reacts to form an acid. A more acidic ocean can have dire impacts on marine organisms like coral and oysters, whose calcium carbonate shells and skeletons can dissolve in the lower pH. Since the Industrial Revolution, the ocean has become about 30 percent more acidic on average.
“So while it's great for us that the oceans have been taking up the CO2, it's not great for the oceans,” says Woosley. “Knowing how this uptake affects the health of the ocean is important as well.”
Author : Michaela Jarvis | Department of Mechanical Engineering
A new solution to beach-fouling seaweed, developed by MBA candidate Andrés Bisonó León and Luke Gray ’18, SM ’20, is designed to cut greenhouse gas emissions.
Read more about this article :
Born and raised amid the natural beauty of the Dominican Republic, Andrés Bisonó León feels a deep motivation to help solve a problem that has been threatening the Caribbean island nation’s tourism industry, its economy, and its people.
As Bisonó León discussed with his long-time friend and mentor, the Walter M. May and A. Hazel May Professor of Mechanical Engineering (MechE) Alexander Slocum Sr., ugly mats of toxic sargassum seaweed have been encroaching on the Dominican Republic’s pristine beaches and other beaches in the Caribbean region, and public and private organizations have fought a losing battle using expensive, environmentally damaging methods to clean it up. Slocum, who was on the U.S. Department of Energy's Deepwater Horizon team, has extensive experience with systems that operate in the ocean.
“In the last 10 years,” says Bisonó León, now an MBA candidate in the MIT Sloan School of Management, “sargassum, a toxic seaweed invasion, has cost the Caribbean as much as $120 million a year in cleanup and has meant a 30 to 35 percent tourism reduction, affecting not only the tourism industry, but also the environment, marine life, local economies, and human health.”
One of Bisonó León’s discussions with Slocum took place within earshot of MechE alumnus Luke Gray ’18, SM ’20, who had worked with Slocum on other projects and at the time was about to begin his master’s program.
“Professor Slocum and Andrés happened to be discussing the sargassum problem in Andrés’ home country,” Gray says. “A week later I was on a plane to the DR to collect sargassum samples and survey the problem in Punta Cana. When I returned, my master’s program was underway, and I already had my thesis project!”
Gray also had started a working partnership with Bisonó León, which both say proceeded seamlessly right from the first moment.
“I feel that Luke right away understood the magnitude of the problem and the value we could create in the Dominican Republic and across the Caribbean by teaming up,” Bisonó León says.
Both Bisonó León and Gray also say they felt a responsibility to work toward helping the global environment.
“All of my major projects up until now have involved machines for climate restoration and/or adaptation,” says Gray.
The technologies Bisonó León and Gray arrived at after 18 months of R&D were designed to provide solutions both locally and globally.
Their Littoral Collection Module (LCM) skims sargassum seaweed off the surface of the water with nets that can be mounted on any boat. The device sits across the boat, with two large hoops holding the nets open, one on each side. As the boat travels forward, it cuts through the seaweed, which flows to the sides of the vessel and through the hoops into the nets. Effective at sweeping the seaweed from the water, the device can be employed by anyone with a boat, including local fishermen whose livelihoods have been disrupted by the seaweed’s damaging effect on tourism and the local economy.
The sargassum can then be towed out to sea, where Bisonó León’s and Gray’s second technology can come into play. By pumping the seaweed into very deep water, where it then sinks to the bottom of the ocean, the carbon in the seaweed can be sequestered. Other methods for disposing of the seaweed generally involve putting it into landfills, where it emits greenhouse gases such as methane and carbon dioxide as it breaks down. Although some seaweed can be put to other uses, including as fertilizer, sargassum has been found to contain hard-to-remove toxic substances such as arsenic and heavy metals.
In spring 2020, Bisonó León and Gray formed a company, SOS (Sargassum Ocean Sequestration) Carbon.
Bisonó León says he comes from a long line of entrepreneurs with a deep commitment to social impact. His family has been involved in several different industries; his grandfather and great-uncles opened the first cigar factory in the Dominican Republic in 1903.
Gray says internships with startup companies and the undergraduate projects he did with Slocum developed his interest in entrepreneurship, and his involvement with the sargassum problem only reinforced that inclination. During his master’s program, he says he became “obsessed” with finding a solution.
“Professor Slocum let me think extremely big, and so it was almost inevitable that the distillation of our two years of work would continue in some form, and starting a company happened to be the right path. My master’s experience of taking an essentially untouched problem like sargassum and then one year later designing, building, and sending 15,000 pounds of custom equipment to test for three months on a Dominican Navy ship made me realize I had discovered a recipe I could repeat — and machine design had become my core competency,” Gray says.
During the initial research and development of their technologies, Bisonó León and Gray raised $258,000 from 20 different organizations. Between June and December 2021, they succeeded in removing 3.5 million pounds of sargassum and secured contracts with Grupo Puntacana, which operates several tourist resorts, and with other hotels such as Club Med in Punta Cana. The company subcontracts with the association of fishermen in Punta Cana, employing 15 fishermen who operate LCMs and training 35 others to join as the operation expands.
Their success so far demonstrates “'mens et manus' at work,” says Slocum, referring to MIT's motto, which is Latin for "mind and hand." “Geeks hear about a very real problem that affects very real people who have no other option for their livelihoods, and they respond by inventing a solution so elegant that it can be readily deployed by those most hurt by the problem to address the problem.
“The team was always focused on the numbers, from physics to finance, and did not let hype or doubts deter their determination to rationally solve this huge problem.”
Slocum says he could predict Bisonó León and Gray would work well together “because they started out as good, smart people with complementary skills whose hearts and minds were in the right place.”
“We are working on having a global impact to reduce millions of tons of CO2 per year,” says Bisonó León. “With training from Sloan and cross-disciplinary collaborative spirit, we will be able to further expand environmental and social impact platforms much needed in the Caribbean to be able to drive real change regionally and globally.”
“I hope SOS Carbon can serve as a model and inspire similar entrepreneurial efforts," Gray says.
A Museum of Science, Boston exhibit benefits from oceanographer Paola Malanotte-Rizzoli’s work on the Venetian Lagoon’s MOSE barrier project.
Read more about this article :
Museum exhibits can be a unique way to communicate science concepts and information. Recently, MIT faculty have served as sounding boards for curators at the Museum of Science, Boston, a close neighbor of the MIT campus.
In January, Professor Emerita Paola Malanotte-Rizzoli and Cecil and Ida Green Professor Raffaele Ferrari of the Department of Earth, Atmospheric and Planetary Sciences (EAPS) visited the museum to view the newly opened pilot exhibit, “Resilient Venice: Adapting to Climate Change.”
When Malanotte-Rizzoli was asked to contribute her expertise on the efforts in Venice, Italy, to mitigate flood damage, she was more than willing to offer her knowledge. “I love Venice. It is fun to tell people all of the challenges which you see the lagoon has … how much must be done to preserve, not only the city, but the environment, the islands and buildings,” she says.
The installation is the second Museum of Science exhibit to be developed in recent years in consultation with EAPS scientists. In December 2020, “Arctic Adventure: Exploring with Technology” opened with the help of Cecil and Ida Green Career Development Professor Brent Minchew, who lent his expertise in geophysics and glaciology to the project. But for Malanotte-Rizzoli, the new exhibit hits a little closer to home.
“My house is there,” Malanotte-Rizzoli excitedly pointed out on the exhibit’s aerial view of Venice, which includes a view above St. Mark’s Square and some of the surrounding city.
“Resilient Venice” focuses on Malanotte-Rizzoli’s hometown, a city known for flooding. Built on a group of islands in the Venetian Lagoon, Venice has always experienced flooding, but climate change has brought unprecedented tide levels, causing billions of dollars in damage and even two deaths in the flood of 2019.
The dark exhibit hall is lined with immersive images created by Iconem, a startup whose mission is digital preservation of endangered World Heritage Sites. The firm took detailed 3D scans and images of Venice to put together the displays and video.
The video in which Malanotte-Rizzoli pointed out her home shows the potential sea level rise by 2100 if action isn’t taken. It shows the entrance to St. Mark’s Basilica completely submerged in water; she compares it to the disaster movie “The Day After Tomorrow.”
The MOSE system
Between critiques of the choice of music (“that’s not very Venice-inspired,” joked Ferrari, who is also Italian) and bits of conversation exchanged in Italian, the two scientists do what scientists do: discuss technicalities.
Ferrari pointed to a model of a gate system and asked Malanotte-Rizzoli if the hydraulic jump seen in the model is present in the MOSE system; she confirmed it is not.
This is the part of the exhibit that Malanotte-Rizzoli was consulted on. One of the plans Venice has implemented to address the flooding is the MOSE system — short for Modulo Sperimentale Elettromeccanico, or the Experimental Electromechanical Module. The MOSE is a system of flood barriers designed to protect the city from extremely high tides. Construction began in 2003, and its first successful operation happened on Oct. 3, 2020, when it prevented a tide 53 inches above normal from flooding the city.
The barriers are made of a series of gates, each 66 to 98 feet long and 66 feet wide, which sit in chambers built into the sea floor when not in use, allowing boats and wildlife to travel between the ocean and lagoon. The gates are filled with water to keep them submerged; when activated, air is pumped into them, pushing out the water and allowing them to rise. The entire process takes 30 minutes to complete, and half that time to return to the sea floor.
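The rise-and-fall mechanism comes down to buoyancy bookkeeping: a gate floats once the air pumped into its hollow interior displaces more water, by weight, than the gate itself weighs. A toy calculation with assumed numbers, not actual MOSE specifications:

```python
# Toy buoyancy arithmetic for one hollow gate; all values are assumed.
rho_seawater = 1025.0            # kg/m^3
gate_mass = 300e3                # kg, assumed mass of one steel gate
hull_volume = 20.0 * 5.0 * 4.0   # m^3, rough box for the hollow interior

# The gate rises once the pumped-in air displaces water outweighing the gate.
air_needed = gate_mass / rho_seawater
print(f"Air volume needed to float the gate: {air_needed:.0f} m^3")
print(f"Hollow interior available: {hull_volume:.0f} m^3")
print("Enough interior volume to surface:", hull_volume > air_needed)
```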
The top of the gates in the MOSE come out of the water completely and are individually controlled so that sections can remain open to allow ships to pass through. In the model, the gate remains partially submerged, and as the high-velocity water passes over it into an area of low velocity, it creates a small rise of water before it falls over the edge of the barrier, creating a hydraulic jump.
But Malanotte-Rizzoli joked that only scientists will care about that; otherwise, the model does a good job demonstrating how the MOSE gates rise and fall.
The MOSE system is only one of many plans taken to mitigate the rising water levels in Venice and to protect the lagoon and the surrounding area, and this is an important point for Malanotte-Rizzoli, who worked on the project from 1995 to 2013.
“It is not the MOSE or,” she emphasized. “It is the MOSE and.” Other complementary plans have been implemented to reduce harm to both economic sectors, such as shipping and tourism, as well as the wildlife that live in the lagoons.
Beyond barriers
There’s more to protecting Venice than navigating flooded streets — it’s not just “putting on rainboots,” as Malanotte-Rizzoli put it.
“It’s destroying the walls,” she said, pointing out the corrosive effects of water on a model building, which emphasizes the damage to architecture caused by the unusually high flood levels. “People don’t think about this.” The exhibit also emphasizes the economic costs of businesses lost by having visitors take down and rebuild a flood barrier for a gelato shop with the rising and falling water levels.
Malanotte-Rizzoli gave the exhibit her seal of approval, but the Venice section is only a small portion of what the finished exhibit will look like. The current plan involves expanding it to include a few other World Heritage Sites.
“How do we make people care about a site that they haven’t been to?” asked Julia Tate, the project manager of touring exhibits and exhibit production at the museum. It’s easy to start with a city like Venice, she said, since it’s a popular tourist destination, but it becomes trickier for sites visitors may never have seen, such as Easter Island, which is just as much at risk. The plan is to incorporate a few more sites before turning it into a traveling exhibit that will end by asking visitors to think about climate change in their own towns.
“We want them to think about solutions and how to do better,” said Tate. Hope is the alternative message: It’s not too late to act.
Malanotte-Rizzoli thinks it’s important for Bostonians to see their own city in Venice, as Boston is also at risk from sea level rise. Boston’s history reminds her of her hometown and is one of the reasons she was willing to emigrate; it also makes the need for preservation all the more important.
“Those things that cannot be replaced, they must be respected in the process of preservation,” she said. “Modern things and engineering can be done even in a city which is so fragile, so delicate.”
Author : Mary Beth Gallagher | Department of Mechanical Engineering
MIT ocean and mechanical engineers are using advances in scientific computing to address the ocean’s many challenges, and seize its opportunities.
Read more about this article :
There are few environments as unforgiving as the ocean. Its unpredictable weather and limited means of communication have left large swaths of the ocean unexplored and shrouded in mystery.
“The ocean is a fascinating environment with a number of current challenges like microplastics, algae blooms, coral bleaching, and rising temperatures,” says Wim van Rees, the ABS Career Development Professor at MIT. “At the same time, the ocean holds countless opportunities — from aquaculture to energy harvesting and exploring the many ocean creatures we haven’t discovered yet.”
Ocean engineers and mechanical engineers, like van Rees, are using advances in scientific computing to address the ocean’s many challenges, and seize its opportunities. These researchers are developing technologies to better understand our oceans, and how both organisms and human-made vehicles can move within them, from the micro scale to the macro scale.
Bio-inspired underwater devices
An intricate dance takes place as fish dart through water. Flexible fins flap within currents of water, leaving a trail of eddies in their wake.
“Fish have intricate internal musculature to adapt the precise shape of their bodies and fins. This allows them to propel themselves in many different ways, well beyond what any man-made vehicle can do in terms of maneuverability, agility, or adaptivity,” explains van Rees.
According to van Rees, thanks to advances in additive manufacturing, optimization techniques, and machine learning, we are closer than ever to replicating flexible and morphing fish fins for use in underwater robotics. As such, there is a greater need to understand how these soft fins impact propulsion.
Van Rees and his team are developing and using numerical simulation approaches to explore the design space for underwater devices with added degrees of freedom, for instance fish-like, deformable fins.
These simulations help the team better understand the interplay between the fluid and structural mechanics of fish’s soft, flexible fins as they move through a fluid flow. As a result, they are able to better understand how fin shape deformations can harm or improve swimming performance. “By developing accurate numerical techniques and scalable parallel implementations, we can use supercomputers to resolve what exactly happens at this interface between the flow and the structure,” adds van Rees.
Through combining his simulation algorithms for flexible underwater structures with optimization and machine learning techniques, van Rees aims to develop an automated design tool for a new generation of autonomous underwater devices. This tool could help engineers and designers develop, for example, robotic fins and underwater vehicles that can smartly adapt their shape to better achieve their immediate operational goals — whether it’s swimming faster and more efficiently or performing maneuvering operations.
“We can use this optimization and AI to do inverse design inside the whole parameter space and create smart, adaptable devices from scratch, or use accurate individual simulations to identify the physical principles that determine why one shape performs better than another,” explains van Rees.
Swarming algorithms for robotic vehicles
Like van Rees, Principal Research Scientist Michael Benjamin wants to improve the way vehicles maneuver through the water. In 2006, while a postdoc at MIT, Benjamin launched an open-source software project for an autonomous helm technology he developed. The software, which has been used by companies like Sea Machines, BAE/Riptide, Thales UK, and Rolls Royce, as well as the United States Navy, uses a novel method of multi-objective optimization. This optimization method, developed by Benjamin during his PhD work, enables a vehicle to autonomously choose the heading, speed, depth, and direction it should go in to achieve multiple simultaneous objectives.
Now, Benjamin is taking this technology a step further by developing swarming and obstacle-avoidance algorithms. These algorithms would enable dozens of uncrewed vehicles to communicate with one another and explore a given part of the ocean.
To start, Benjamin is looking at how to best disperse autonomous vehicles in the ocean.
“Let’s suppose you want to launch 50 vehicles in a section of the Sea of Japan. We want to know: Does it make sense to drop all 50 vehicles at one spot, or have a mothership drop them off at certain points throughout a given area?” explains Benjamin.
He and his team have developed algorithms that answer this question. Using swarming technology, each vehicle periodically communicates its location to other vehicles nearby. Benjamin’s software enables these vehicles to disperse in an optimal distribution for the portion of the ocean in which they are operating.
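A toy version of that dispersal behavior (not Benjamin's algorithm, just the general flavor) treats each vehicle as repelling its neighbors until the group spreads evenly over the operating box:

```python
# Toy dispersal by mutual repulsion; parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 0.1, size=(50, 2))  # 50 vehicles dropped near one corner

for _ in range(300):
    diff = pos[:, None, :] - pos[None, :, :]           # pairwise displacement vectors
    dist = np.linalg.norm(diff, axis=-1) + 1e-9        # avoid divide-by-zero on self-pairs
    force = (diff / dist[..., None] ** 3).sum(axis=1)  # inverse-square repulsion
    force = np.clip(force, -500.0, 500.0)              # cap for numerical stability
    pos = np.clip(pos + 2e-4 * force, 0.0, 1.0)        # step, stay in the 1x1 box

nearest = np.sort(np.linalg.norm(pos[:, None] - pos[None, :], axis=-1), axis=1)[:, 1]
print(f"Mean nearest-neighbor spacing after dispersal: {nearest.mean():.3f}")
```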
Central to the success of the swarming vehicles is the ability to avoid collisions. Collision avoidance is complicated by international maritime rules known as COLREGS — or “Collision Regulations.” These rules determine which vehicles have the “right of way” when crossing paths, posing a unique challenge for Benjamin’s swarming algorithms.
The COLREGS are written from the perspective of avoiding another single contact, but Benjamin’s swarming algorithm had to account for multiple unpiloted vehicles trying to avoid colliding with one another.
To tackle this problem, Benjamin and his team created a multi-objective optimization algorithm that ranked specific maneuvers on a scale from zero to 100. A zero would be a direct collision, while 100 would mean the vehicles completely avoid collision.
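In sketch form, the scoring idea looks something like the following: each candidate (heading, speed) maneuver gets a safety score and a mission score, and the helm picks the best weighted combination. The scoring functions are invented placeholders, not the actual COLREGS logic:

```python
# Hedged sketch of 0-100 maneuver scoring; the score functions are toys.

def safety_score(heading_deg, speed, contact_bearing_deg=45.0):
    """0 = collision course with the contact, 100 = well clear (toy)."""
    off = abs(heading_deg - contact_bearing_deg) % 360
    off = min(off, 360 - off)
    return min(100.0, off / 90.0 * 100.0) * (1.0 - 0.1 * speed / 5.0)

def mission_score(heading_deg, speed, goal_bearing_deg=10.0):
    """100 = straight toward the goal at full speed (toy)."""
    off = abs(heading_deg - goal_bearing_deg) % 360
    off = min(off, 360 - off)
    return max(0.0, 100.0 - off) * (speed / 5.0)

# Evaluate a grid of candidate maneuvers and take the best weighted sum.
candidates = [(h, s) for h in range(0, 360, 5) for s in (1.0, 3.0, 5.0)]
best = max(candidates, key=lambda m: 0.6 * safety_score(*m) + 0.4 * mission_score(*m))
print("Chosen maneuver: heading %d deg, speed %.1f m/s" % best)
```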
“Our software is the only marine software where multi-objective optimization is the core mathematical basis for decision-making,” says Benjamin.
While researchers like Benjamin and van Rees use machine learning and multi-objective optimization to address the complexity of vehicles moving through ocean environments, others like Pierre Lermusiaux, the Nam Pyo Suh Professor at MIT, use machine learning to better understand the ocean environment itself.
Improving ocean modeling and predictions
Oceans are perhaps the best example of what’s known as a complex dynamical system. Fluid dynamics, changing tides, weather patterns, and climate change make the ocean an unpredictable environment that is different from one moment to the next. The ever-changing nature of the ocean environment can make forecasting incredibly difficult.
Researchers have been using dynamical system models to make predictions for ocean environments, but as Lermusiaux explains, these models have their limitations.
“You can’t account for every molecule of water in the ocean when developing models. The resolution and accuracy of models, and of ocean measurements, are limited. There could be a model data point every 100 meters, every kilometer, or, if you are looking at climate models of the global ocean, you may have a data point every 10 kilometers or so. That can have a large impact on the accuracy of your prediction,” explains Lermusiaux.
Graduate student Abhinav Gupta and Lermusiaux have developed a new machine-learning framework to help make up for the lack of resolution or accuracy in these models. Their algorithm takes a simple model with low resolution and can fill in the gaps, emulating a more accurate, complex model with a high degree of resolution.
For the first time, Gupta and Lermusiaux’s framework learns and introduces time delays in existing approximate models to improve their predictive capabilities.
“Things in the natural world don’t happen instantaneously; however, all the prevalent models assume things are happening in real time,” says Gupta. “To make an approximate model more accurate, the machine learning and data you are inputting into the equation need to represent the effects of past states on the future prediction.”
The team’s “neural closure model,” which accounts for these delays, could potentially lead to improved predictions for things such as a Loop Current eddy hitting an oil rig in the Gulf of Mexico, or the amount of phytoplankton in a given part of the ocean.
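As a loose illustration of the delay idea, the sketch below feeds a window of past states, rather than only the current one, into a small network that learns a model's missing tendency term. The data and architecture are invented; the actual neural closure model is far more sophisticated.

```python
# Minimal delay-aware "closure" sketch with synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor

t = np.linspace(0, 20, 2000)
x = np.sin(t) + 0.3 * np.sin(0.5 * (t - 2.0))   # signal with a delayed influence

lags = 5                                        # past samples fed to the closure
X = np.column_stack([x[i:len(x) - lags + i] for i in range(lags)])
y = np.gradient(x, t)[lags:]                    # target: the true tendency

closure = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
closure.fit(X, y)
print("Training R^2 of the delayed closure:", round(closure.score(X, y), 3))
```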
As computing technologies such as Gupta and Lermusiaux’s neural closure model continue to improve and advance, researchers can start unlocking more of the ocean’s mysteries and develop solutions to the many challenges our oceans face.
A pioneer of technologies associated with oceans, Milgram shaped oceanography and fluid mechanics education at MIT.
Read more about this article :
Jerome Milgram ’61, PhD ’65, professor emeritus of ocean engineering at MIT, passed away at the age of 83 on Dec. 21 with family by his side. Milgram pioneered ship design, hydrodynamics, and applied physical oceanography.
Jerome, also known as Jerry, was born in Melrose Park, Pennsylvania, on Sept. 23, 1938. His love of sailing began early in life. Milgram received his undergraduate degree from MIT in 1961, where he also served as captain of the sailing team. In 1965, he earned his PhD with a thesis that provided a theoretical and experimental foundation for absorbing plane water waves by means of a moving boundary at one end of a channel.
In 1967, Milgram joined the faculty at MIT, where he would spend the remainder of his career. As the W.I. Koch Professor of Marine Technology, Milgram taught courses such as 2.20 (Marine Hydrodynamics) and 2.25 (Fluid Mechanics) in mechanical engineering in addition to 6.003 (Signals and Systems) in electrical engineering for over 50 years.
Milgram also worked closely with the United States Navy and the Coast Guard. His research focused heavily on ship development, the behavior of oil spills in the marine environment and cleanup technology (for which he holds 12 patents), the behavior of sea waves, the dynamics of underwater vehicles, and other topics. In 1992, he was the design director and chief computer modeler for America3, which won the America’s Cup. Milgram was instrumental to the team’s victory.
Over the course of his career, Milgram had more than 100 publications. A recent project detecting small plant and animal life forms in the ocean by computer-enhanced holography exemplifies the breadth of his interests.
Milgram is a life fellow of the Society of Naval Architects and Marine Engineers and a life member of the National Academy of Engineering. In 2017, the National Academy of Sciences awarded Milgram the Gibbs Brothers Medal for outstanding contributions to naval architecture and marine engineering.
Milgram is survived by his wife, Robin; his stepson and daughter-in-law, Eben and Uromi Manage Goodale; his grandson, David Parakrama Goodale; his sister Linda (Milgram) Becker; his nephew Eric Ring and his wife Melissa Wallen; his late nephew Steven Ring and his wife Mary Ring; and their children, Andrew and Melissa.
In lieu of flowers, donations can be made in Milgram’s name to Oceana, the largest international ocean conservation organization, or to MIT. There will be an online memorial service on Saturday, Jan. 8, at 10 a.m.; see postings on legacy.com for details.
New findings may help researchers hone predictions for where phytoplankton will migrate with climate change.
Read more about this article :
Prochlorococcus are the smallest and most abundant photosynthesizing organisms on the planet. A single Prochlorococcus cell is dwarfed by a human red blood cell, yet globally the microbes number in the octillions and are responsible for a large fraction of the world’s oxygen production as they turn sunlight into energy.
Prochlorococcus can be found in the ocean’s warm surface waters, and their population drops off dramatically in regions closer to the poles. Scientists have assumed that, as with many marine species, Prochlorococcus’ range is set by temperature: The colder the waters, the less likely the microbes are to live there.
But MIT scientists have found that where the microbe lives is not determined primarily by temperature. While Prochlorococcus populations do drop off in colder waters, it’s a relationship with a shared predator, and not temperature, that sets the microbe’s range. These findings, published today in the Proceedings of the National Academy of Sciences, could help scientists predict how the microbes’ populations will shift with climate change.
“People assume that if the ocean warms up, Prochlorococcus will move poleward. And that may be true, but not for the reason they’re predicting,” says study co-author Stephanie Dutkiewicz, senior research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “So, temperature is a bit of a red herring.”
Dutkiewicz’s co-authors on the study are lead author and EAPS Research Scientist Christopher Follett, EAPS Professor Mick Follows, François Ribalet and Virginia Armbrust of the University of Washington, and Emily Zakem and David Caron of the University of Southern California.
Temperature’s collapse
While temperature is thought to set the range of Prochlorococcus and other phytoplankton in the ocean, Follett, Dutkiewicz, and their colleagues noticed a curious dissonance in the data.
The team examined observations from several research cruises that sailed through the northeast Pacific Ocean in 2003, 2016, and 2017. Each vessel traversed different latitudes, sampling waters continuously and measuring concentrations of various species of bacteria and phytoplankton, including Prochlorococcus.
The MIT team used the publicly archived cruise data to map out the locations where Prochlorococcus noticeably decreased or collapsed, along with each location’s ocean temperature. Surprisingly, they found that Prochlorococcus’ collapse occurred in regions of widely varying temperatures, ranging from around 13 to 18 degrees Celsius. Curiously, temperatures at the upper end of this range have been shown in lab experiments to be suitable for Prochlorococcus to grow and thrive.
“Temperature itself was not able to explain where we saw these drop-offs,” Follett says.
Follett was also working out an alternate idea related to Prochlorococcus and nutrient supply. As a byproduct of its photosynthesis, the microbe produces carbohydrate — an essential nutrient for heterotrophic bacteria, which are single-celled organisms that do not photosynthesize but live off the organic matter produced by phytoplankton.
“Somewhere along the way, I wondered, what would happen if this food source Prochlorococcus was producing increased? What if we took that knob and spun it?” Follett says.
In other words, how would the balance of Prochlorococcus and bacteria shift if the bacteria’s food increased as a result of, say, an increase in other carbohydrate-producing phytoplankton? The team also wondered: If the bacteria in question were about the same size as Prochlorococcus, the two would likely share a common grazer, or predator. How would the grazer’s population also shift with a change in carbohydrate supply?
“Then we went to the whiteboard and started writing down equations and solving them for various cases, and realized that as soon as you reach an environment where other species add carbohydrates to the mix, bacteria and grazers grow up and annihilate Prochlorococcus,” Dutkiewicz says.
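The flavor of those whiteboard equations can be captured in a few lines. In ecology this mechanism is known as "apparent competition": each prey persists only while the shared grazer stays below the level that prey's growth can sustain, and extra carbohydrate raises the bacteria's growth rate, and with it the grazer population, past what Prochlorococcus can withstand. All rates below are illustrative, not the study's:

```python
# Reduced shared-grazer ("apparent competition") argument with invented rates.
r_P, a_P = 1.0, 0.5        # Prochlorococcus growth rate / grazing susceptibility
a_B = 0.5                  # bacteria grazing susceptibility

def bacteria_growth(s):
    """Bacteria grow faster as the external carbohydrate supply s rises (toy)."""
    return 0.6 + 0.4 * s

for s in (0.0, 1.0, 2.0):
    z_P = r_P / a_P                 # max grazer density Prochlorococcus tolerates
    z_B = bacteria_growth(s) / a_B  # grazer density the bacteria can sustain
    verdict = "Prochlorococcus persists" if z_P >= z_B else "Prochlorococcus excluded"
    print(f"carbohydrate s={s:.1f}: Z_P*={z_P:.1f}, Z_B*={z_B:.1f} -> {verdict}")
```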
Nutrient shift
To test this idea, the researchers employed simulations of ocean circulation and marine ecosystem interactions. The team ran the MITgcm, a general circulation model that simulates, in this case, the ocean currents and regions of upwelling waters around the world. They overlaid a biogeochemistry model that simulates how nutrients are redistributed in the ocean. To all of this, they linked a complex ecosystem model that simulates the interactions between many different species of bacteria and phytoplankton, including Prochlorococcus.
When they ran the simulations without incorporating a representation of bacteria, they found that Prochlorococcus persisted all the way to the poles, contrary to theory and observations. When they added in the equations outlining the relationship between the microbe, bacteria, and a shared predator, Prochlorococcus’ range shifted away from the poles, matching the observations of the original research cruises.
In particular, the team observed that Prochlorococcus thrived in waters with very low nutrient levels, where it is the dominant source of food for bacteria. These waters also happen to be warm, and Prochlorococcus and bacteria live in balance, along with their shared predator. But in more nutrient-rich environments, such as polar regions, where cold water and nutrients are upwelled from the deep ocean, many more species of phytoplankton can thrive. Bacteria can then feast and grow on more food sources, in turn feeding and growing more of their shared predator. Prochlorococcus, unable to keep up, is quickly decimated.
The results show that a relationship with a shared predator, and not temperature, sets Prochlorococcus’ range. Incorporating this mechanism into models will be crucial in predicting how the microbe — and possibly other marine species — will shift with climate change.
“Prochlorococcus is a big harbinger of changes in the global ocean,” Dutkiewicz says. “If its range expands, that’s a canary — a sign that things have changed in the ocean by a great deal.”
“There are reasons to believe its range will expand with a warming world,” Follett adds. “But we have to understand the physical mechanisms that set these ranges. And predictions just based on temperature will not be correct.”
This research was supported in part by the Simons Collaboration on Computational Biogeochemical Modeling of Marine Ecosystems (CBIOMES), and by NASA.
The 3D maps may help researchers track and predict the ocean’s response to climate change.
Read more about this article :
Life is teeming nearly everywhere in the oceans, except in certain pockets where oxygen naturally plummets and waters become unlivable for most aerobic organisms. These desolate pools are “oxygen-deficient zones,” or ODZs. And though they make up less than 1 percent of the ocean’s total volume, they are a significant source of nitrous oxide, a potent greenhouse gas. Their boundaries can also limit the extent of fisheries and marine ecosystems.
Now MIT scientists have generated the most detailed, three-dimensional “atlas” of the largest ODZs in the world. The new atlas provides high-resolution maps of the two major, oxygen-starved bodies of water in the tropical Pacific. These maps reveal the volume, extent, and varying depths of each ODZ, along with fine-scale features, such as ribbons of oxygenated water that intrude into otherwise depleted zones.
The team used a new method to process over 40 years’ worth of ocean data, comprising nearly 15 million measurements taken by many research cruises and autonomous robots deployed across the tropical Pacific. The researchers compiled and then analyzed this vast and fine-grained dataset to generate maps of oxygen-deficient zones at various depths, similar to the many slices of a three-dimensional scan.
From these maps, the researchers estimated the total volume of the two major ODZs in the tropical Pacific, more precisely than previous efforts. The first zone, which stretches out from the coast of South America, measures about 600,000 cubic kilometers — roughly the volume of water that would fill 240 billion Olympic-sized pools. The second zone, off the coast of Central America, is roughly three times larger.
The atlas serves as a reference for where ODZs lie today. The team hopes scientists can add to this atlas with continued measurements, to better track changes in these zones and predict how they may shift as the climate warms.
“It’s broadly expected that the oceans will lose oxygen as the climate gets warmer. But the situation is more complicated in the tropics where there are large oxygen-deficient zones,” says Jarek Kwiecinski ’21, who developed the atlas along with Andrew Babbin, the Cecil and Ida Green Career Development Professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “It’s important to create a detailed map of these zones so we have a point of comparison for future change.”
The team’s study appears today in the journal Global Biogeochemical Cycles.
Airing out artifacts
Oxygen-deficient zones are large, persistent regions of the ocean that occur naturally, as a consequence of marine microbes gobbling up sinking phytoplankton along with all the available oxygen in the surroundings. These zones happen to lie in regions that miss passing ocean currents, which would normally replenish regions with oxygenated water. As a result, ODZs are locations of relatively permanent, oxygen-depleted waters, and can exist at mid-ocean depths of roughly 35 to 1,000 meters below the surface. For some perspective, the oceans on average run about 4,000 meters deep.
Over the last 40 years, research cruises have explored these regions by dropping bottles down to various depths and hauling up seawater that scientists then measure for oxygen.
“But there are a lot of artifacts that come from a bottle measurement when you’re trying to measure truly zero oxygen,” Babbin says. “All the plastic that we deploy at depth is full of oxygen that can leach out into the sample. When all is said and done, that artificial oxygen inflates the ocean’s true value.”
Rather than rely on measurements from bottle samples, the team looked at data from sensors attached to the outside of the bottles or integrated with robotic platforms that can change their buoyancy to measure water at different depths. These sensors measure a variety of signals, including changes in electrical currents or the intensity of light emitted by a photosensitive dye to estimate the amount of oxygen dissolved in water. In contrast to seawater samples that represent a single discrete depth, the sensors record signals continuously as they descend through the water column.
Scientists have attempted to use these sensor data to estimate the true value of oxygen concentrations in ODZs, but have found it incredibly tricky to convert these signals accurately, particularly at concentrations approaching zero.
“We took a very different approach, using measurements not to look at their true value, but rather how that value changes within the water column,” Kwiecinski says. “That way we can identify anoxic waters, regardless of what a specific sensor says.”
Bottoming out
The team reasoned that, if sensors showed a constant, unchanging value of oxygen in a continuous, vertical section of the ocean, regardless of the true value, then it would likely be a sign that oxygen had bottomed out, and that the section was part of an oxygen-deficient zone.
The researchers brought together nearly 15 million sensor measurements collected over 40 years by various research cruises and robotic floats, and mapped the regions where oxygen did not change with depth.
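In sketch form, the detection criterion reduces to flagging stretches of a profile where the sensor reading stops changing with depth. A minimal illustration with an invented profile and an assumed threshold:

```python
# Flag depth spans where oxygen readings stop changing, regardless of the
# sensor's absolute (possibly biased) value. The profile here is invented.
import numpy as np

depth = np.arange(0, 1000, 10.0)                      # m
oxygen = np.interp(depth, [0, 150, 300, 700, 1000],   # fake sensor profile
                   [210, 40, 5, 5, 60])               # flat segment = ODZ

gradient = np.abs(np.gradient(oxygen, depth))
flat = gradient < 0.01            # assumed threshold for "unchanging"
odz_depths = depth[flat]
if odz_depths.size:
    print(f"Anoxic layer detected from {odz_depths.min():.0f} m "
          f"to {odz_depths.max():.0f} m")
```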
“We can now see how the distribution of anoxic water in the Pacific changes in three dimensions,” Babbin says.
The team mapped the boundaries, volume, and shape of two major ODZs in the tropical Pacific, one in the Northern Hemisphere, and the other in the Southern Hemisphere. They were also able to see fine details within each zone. For instance, oxygen-depleted waters are “thicker,” or more concentrated towards the middle, and appear to thin out toward the edges of each zone.
“We could also see gaps, where it looks like big bites were taken out of anoxic waters at shallow depths,” Babbin says. “There’s some mechanism bringing oxygen into this region, making it oxygenated compared to the water around it.”
Such observations of the tropical Pacific’s oxygen-deficient zones are more detailed than what’s been measured to date.
“How the borders of these ODZs are shaped, and how far they extend, could not be previously resolved,” Babbin says. “Now we have a better idea of how these two zones compare in terms of areal extent and depth.”
“This gives you a sketch of what could be happening,” Kwiecinski says. “There’s a lot more one can do with this data compilation to understand how the ocean’s oxygen supply is controlled.”
This research is supported, in part, by the Simons Foundation.
Author : Michaela Jarvis | Department of Mechanical Engineering
Working directly with oyster farmers, MIT students are developing a robot that can flip heavy, floating bags of oysters, helping the shellfish to grow and stay healthy.
Read more about this article :
When Michelle Kornberg was about to graduate from MIT, she wanted to use her knowledge of mechanical and ocean engineering to make the world a better place. Luckily, she found the perfect senior capstone class project: supporting sustainable seafood by helping aquaculture farmers grow oysters.
“It’s our responsibility to use our skills and opportunities to work on problems that really matter,” says Kornberg, who now works for an aquaculture company called Innovasea. “Food sustainability is incredibly important from an environmental standpoint, of course, but it also matters on a social level. The most vulnerable will be hurt worst by the climate crisis, and I think food sustainability and availability really matters on that front.”
The project undertaken by Kornberg's capstone class, 2.017 (Design of Electromechanical Robotic Systems), came out of conversations between Michael Triantafyllou, who is MIT’s Henry L. and Grace Doherty Professor in Ocean Science and Engineering and director of MIT Sea Grant, and Dan Ward. Ward, a seasoned oyster farmer and marine biologist, owns Ward Aquafarms on Cape Cod and has worked extensively to advance the aquaculture industry by seeking solutions to some of its biggest challenges.
Speaking with Triantafyllou at MIT Sea Grant — part of a network of university-based programs established by the federal government to protect the coastal environment and economy — Ward had explained that each of his thousands of floating mesh oyster bags needs to be turned over about 11 times a year. The flipping allows algae, barnacles, and other “biofouling” organisms that grow on the part of the bag beneath the water’s surface to be exposed to air and light, so they can dry and chip off. If this task is not performed, water flow to the oysters, which is necessary for their growth, is blocked.
The bags are flipped by a farmworker in a kayak, and the task is monotonous, often performed in rough water and bad weather, and ergonomically injurious. “It’s kind of awful, generally speaking,” Ward says, adding that he pays about $3,500 per year to have the bags turned over at each of his two farm sites — and struggles to find workers who want to do the job of flipping bags that can grow to a weight of 60 or 70 pounds just before the oysters are harvested.
Presented with this problem, the capstone class Kornberg was in — composed of six students in mechanical engineering, ocean engineering, and electrical engineering and computer science — brainstormed solutions. Most of the solutions, Kornberg says, involved an autonomous robot that would take over the bag-flipping. It was during that class that the original version of the “Oystamaran,” a catamaran with a flipping mechanism between its two hulls, was born.
Ward’s involvement in the project has been important to its evolution. He says he has reviewed many projects in his work on advisory boards that propose new technologies for aquaculture. Often, they don’t correspond with the actual challenges faced by the industry.
“It was always ‘I already have this remotely operated vehicle; would it be useful to you as an oyster farmer if I strapped on some kind of sensor?’” Ward says. “They try to fit robotics into aquaculture without any industry collaboration, which leads to a robotic product that doesn’t solve any of the issues we experience out on the farm. Having the opportunity to work with MIT Sea Grant to really start from the ground up has been exciting. Their approach has been, ‘What’s the problem, and what’s the best way to solve the problem?’ We do have a real need for robotics in aquaculture, but you have to come at it from the customer-first, not the technology-first, perspective.”
Triantafyllou says that while the task the robot performs is similar to work done by robots in other industries, the “special difficulty” students faced while designing the Oystamaran was its work environment.
“You have a floating device, which must be self-propelled, and which must find these objects in an environment that is not neat,” Triantafyllou says. “It’s a combination of vision and navigation in an environment that changes, with currents, wind, and waves. Very quickly, it becomes a complicated task.”
Kornberg, who had constructed the original central flipping mechanism and the basic structure of the vessel as a staff member at MIT Sea Grant after graduating in May 2020, worked as a lab instructor for the next capstone class related to the project in spring 2021. Andrew Bennett, education administrator at MIT Sea Grant, co-taught that class, in which students designed an Oystamaran version 2.0, which was tested at Ward Aquafarms and managed to flip several rows of bags while being controlled remotely. Next steps will involve making the vessel more autonomous, so it can be launched, navigate autonomously to the oyster bags, flip them, and return to the launching point. A third capstone class related to the project will take place this spring.
Bennett says an ideal project outcome would be, “We have proven the concept, and now somebody in industry says, ‘You know, there’s money to be made in oysters. I think I’ll take over.’ And then we hand it off to them.”
Meanwhile, he says an unexpected challenge arose with getting the Oystamaran to go between tightly packed rows of oyster bags in the center of an array.
“How does a robot shimmy in between things without wrecking something? It’s got to wiggle in somehow, which is a fascinating controls problem,” Bennett says, adding that the problem is a source of excitement, rather than frustration, to him. “I love a new challenge, and I really love when I find a problem that no one expected. Those are the fun ones.”
Triantafyllou calls the Oystamaran “a first for the industry,” explaining that the project has demonstrated that robots can perform extremely useful tasks in the ocean, and will serve as a model for future innovations in aquaculture.
“Just by showing the way, this may be the first of a number of robots,” he says. “It will attract talent to ocean farming, which is a great challenge, and also a benefit for society to have a reliable means of producing food from the ocean.”
New results show North Atlantic hurricanes have increased in frequency over the last 150 years.
Read more about this article :
When forecasting how storms may change in the future, it helps to know something about their past. Judging from historical records dating back to the 1850s, hurricanes in the North Atlantic have become more frequent over the last 150 years.
However, scientists have questioned whether this upward trend is a reflection of reality, or simply an artifact of lopsided record-keeping. If 19th-century storm trackers had access to 21st-century technology, would they have recorded more storms? This inherent uncertainty has kept scientists from relying on storm records, and the patterns within them, for clues to how climate influences storms.
A new MIT study published today in Nature Communications has used climate modeling, rather than storm records, to reconstruct the history of hurricanes and tropical cyclones around the world. The study finds that North Atlantic hurricanes have indeed increased in frequency over the last 150 years, similar to what historical records have shown.
In particular, major hurricanes, and hurricanes in general, are more frequent today than in the past. And those that make landfall appear to have grown more powerful, carrying more destructive potential.
Curiously, while the North Atlantic has seen an overall increase in storm activity, the same trend was not observed in the rest of the world. The study found that the frequency of tropical cyclones globally has not changed significantly in the last 150 years.
“The evidence does point, as the original historical record did, to long-term increases in North Atlantic hurricane activity, but no significant changes in global hurricane activity,” says study author Kerry Emanuel, the Cecil and Ida Green Professor of Atmospheric Science in MIT’s Department of Earth, Atmospheric, and Planetary Sciences. “It certainly will change the interpretation of climate’s effects on hurricanes — that it’s really the regionality of the climate, and that something happened to the North Atlantic that’s different from the rest of the globe. It may have been caused by global warming, which is not necessarily globally uniform.”
Chance encounters
The most comprehensive record of tropical cyclones is compiled in a database known as the International Best Track Archive for Climate Stewardship (IBTrACS). This historical record includes modern measurements from satellites and aircraft that date back to the 1940s. The database’s older records are based on reports from ships and islands that happened to be in a storm’s path. These earlier records date back to 1851, and overall the database shows an increase in North Atlantic storm activity over the last 150 years.
“Nobody disagrees that that’s what the historical record shows,” Emanuel says. “On the other hand, most sensible people don’t really trust the historical record that far back in time.”
Recently, scientists have used a statistical approach to identify storms that the historical record may have missed. To do so, they consulted all the digitally reconstructed shipping routes in the Atlantic over the last 150 years and mapped these routes over modern-day hurricane tracks. They then estimated the chance that a ship would encounter or entirely miss a hurricane’s presence. This analysis found a significant number of early storms were likely missed in the historical record. Accounting for these missed storms, they concluded that there was a chance that storm activity had not changed over the last 150 years.
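A stripped-down Monte Carlo version of that argument, purely illustrative rather than the published analysis, scatters storm tracks and shipping lanes at random and counts how many storms no ship would have encountered:

```python
# Toy estimate of the fraction of storms that ships would have missed.
# All geometry and numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
n_storms, n_routes = 1000, 30
storm_x = rng.uniform(0, 100, n_storms)  # storm-track positions, arbitrary units
route_x = rng.uniform(0, 100, n_routes)  # shipping-lane positions
encounter_radius = 2.0                   # assumed distance for an encounter

missed = np.all(np.abs(storm_x[:, None] - route_x[None, :]) > encounter_radius,
                axis=1)
print(f"Estimated fraction of storms missed: {missed.mean():.0%}")
```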
But Emanuel points out that hurricane paths in the 19th century may have looked different from today’s tracks. What’s more, the scientists may have missed key shipping routes in their analysis, as older routes have not yet been digitized.
“All we know is, if there had been a change (in storm activity), it would not have been detectable, using digitized ship records,” Emanuel says. “So I thought, there’s an opportunity to do better, by not using historical data at all.”
Seeding storms
Instead, he estimated past hurricane activity using dynamical downscaling — a technique that his group developed and has applied over the last 15 years to study climate’s effect on hurricanes. The technique starts with a coarse global climate simulation and embeds within this model a finer-resolution model that simulates features as small as hurricanes. The combined models are then fed with real-world measurements of atmospheric and ocean conditions. Emanuel then scatters the realistic simulation with hurricane “seeds” and runs the simulation forward in time to see which seeds bloom into full-blown storms.
For the new study, Emanuel embedded a hurricane model into a climate “reanalysis” — a type of climate model that combines observations from the past with climate simulations to generate accurate reconstructions of past weather patterns and climate conditions. He used a particular subset of climate reanalyses that only accounts for observations collected from the surface — for instance from ships, which have recorded weather conditions and sea surface temperatures consistently since the 1850s, as opposed to from satellites, which only began systematic monitoring in the 1970s.
“We chose to use this approach to avoid any artificial trends brought about by the introduction of progressively different observations,” Emanuel explains.
He ran an embedded hurricane model on three different climate reanalyses, simulating tropical cyclones around the world over the past 150 years. Across all three models, he observed “unequivocal increases” in North Atlantic hurricane activity.
“There’s been this quite large increase in activity in the Atlantic since the mid-19th century, which I didn’t expect to see,” Emanuel says.
Within this overall rise in storm activity, he also observed a “hurricane drought” — a period during the 1970s and 80s when the number of yearly hurricanes momentarily dropped. This pause in storm activity can also be seen in historical records, and Emanuel’s group proposes a cause: sulfate aerosols, which were byproducts of fossil fuel combustion, likely set off a cascade of climate effects that cooled the North Atlantic and temporarily suppressed hurricane formation.
“The general trend over the last 150 years was increasing storm activity, interrupted by this hurricane drought,” Emanuel notes. “And at this point, we’re more confident of why there was a hurricane drought than why there is an ongoing, long-term increase in activity that began in the 19th century. That is still a mystery, and it bears on the question of how global warming might affect future Atlantic hurricanes.”
This research was supported, in part, by the National Science Foundation.
As climate change brings greater threats to coastal ecosystems, new research can help planners leverage the wave-damping benefits of marsh plants.
Read more about this article :
Marsh plants, which are ubiquitous along the world’s shorelines, can play a major role in mitigating the damage to coastlines as sea levels rise and storm surges increase. Now, a new MIT study provides greater detail about how these protective benefits work under real-world conditions shaped by waves and currents.
The study combined laboratory experiments using simulated plants in a large wave tank along with mathematical modeling. It appears in the journal Physical Review Fluids, in a paper by former MIT visiting doctoral student Xiaoxia Zhang, now a postdoc at Dalian University of Technology, and professor of civil and environmental engineering Heidi Nepf.
It’s already clear that coastal marsh plants provide significant protection from surges and devastating storms. For example, it has been estimated that the damage caused by Hurricane Sandy was reduced by $625 million thanks to the damping of wave energy provided by extensive areas of marsh along the affected coasts. But the new MIT analysis incorporates details of plant morphology, such as the number and spacing of flexible leaves versus stiffer stems, and the complex interactions of currents and waves that may be coming from different directions.
This level of detail could enable coastal restoration planners to determine the area of marsh needed to mitigate expected amounts of storm surge or sea-level rise, and to decide which types of plants to introduce to maximize protection.
“When you go to a marsh, you often will see that the plants are arranged in zones,” says Nepf, who is the Donald and Martha Harleman Professor of Civil and Environmental Engineering. “Along the edge, you tend to have plants that are more flexible, because they are using their flexibility to reduce the wave forces they feel. In the next zone, the plants are a little more rigid and have a few more leaves.”
As the zones progress, the plants become stiffer, leafier, and more effective at absorbing wave energy thanks to their greater leaf area. The new modeling done in this research, which incorporated work with simulated plants in the 24-meter-long wave tank at MIT’s Parsons Lab, can enable coastal planners to take these kinds of details into account when planning protection, mitigation, or restoration projects.
“If you put the stiffest plants at the edge, they might not survive, because they’re feeling very high wave forces. By describing why Mother Nature organizes plants in this way, we can hopefully design a more sustainable restoration,” Nepf says.
Once established, the marsh plants provide a positive feedback cycle that helps to not only stabilize but also build up these delicate coastal lands, Zhang says. “After a few years, the marsh grasses start to trap and hold the sediment, and the elevation gets higher and higher, which might keep up with sea level rise,” she says.
Awareness of the protective effects of marshland has been growing, Nepf says. For example, the Netherlands has been restoring lost marshland outside the dikes that surround much of the nation’s agricultural land, finding that the marsh can protect the dikes from erosion; the marsh and dikes work together much more effectively than the dikes alone at preventing flooding.
But most such efforts so far have been largely empirical, trial-and-error plans, Nepf says. Now, planners could take advantage of this modeling to know just how much marshland, with what types of plants, would be needed to provide the desired level of protection.
It also provides a more quantitative way to estimate the value provided by marshes, she says. “It could allow you to more accurately say, ‘40 meters of marsh will reduce waves this much and therefore will reduce overtopping of your levee by this much.’ Someone could use that to say, ‘I’m going to save this much money over the next 10 years if I reduce flooding by maintaining this marsh.’ It might help generate some political motivation for restoration efforts.”
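To give a sense of how such a quantitative estimate might look, the sketch below applies a classic simplified decay law for waves crossing vegetation, H(x) = H0 / (1 + βx). This is not the Zhang-Nepf model from the paper, and the damping coefficient β, which in reality lumps together stem density, flexibility, leaf area, and current effects, is given a purely illustrative value.

```python
# Hedged sketch of a marsh wave-damping estimate. Uses the simplified decay
# law H(x) = H0 / (1 + beta * x), a standard textbook form for waves crossing
# vegetation; NOT the model developed in the paper. The value of beta below
# is illustrative only.

def wave_height_after_marsh(h0_m: float, beta_per_m: float, width_m: float) -> float:
    """Wave height remaining after crossing width_m meters of marsh."""
    return h0_m / (1.0 + beta_per_m * width_m)

h0 = 1.0      # incident wave height, meters
beta = 0.02   # illustrative damping coefficient, per meter of marsh
for width in (10, 20, 40):
    h = wave_height_after_marsh(h0, beta, width)
    print(f"{width:3d} m of marsh: {h:.2f} m waves "
          f"({100 * (1 - h / h0):.0f}% reduction)")
```

A planner could invert the same relation to ask how wide a marsh must be to keep wave heights, and hence levee overtopping, below a chosen target.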
Nepf herself is already trying to get some of these findings included in coastal planning processes. She serves on a practitioner panel led by Chris Esposito of the Water Institute of the Gulf, which serves the storm-battered Louisiana coastline. “We’d like to get this work into the coastal simulations that are used for large-scale restoration and coastal planning,” she says.
"Understanding the wave damping process in real vegetation wetlands is of critical value, as it is needed in the assessment of the coastal defense value of these wetlands," says Zhan Hu, an associate professor of marine sciences at Sun Yat-Sen University, who was not associated with this work. "The challenge, however, lies in the quantitative representation of the wave damping process, in which many factors are at play, such as plant flexibility, morphology, and coexisting currents."
The new study, Hu says, "neatly combines experimental findings and analytical modeling to reveal the impact of each factor in the wave damping process. ... Overall, this work is a solid step forward toward a more accurate assessment of wave damping capacity of real coastal wetlands, which is needed for science-based design and management of nature-based coastal protection."
The work was partly supported by the National Science Foundation and the China Scholarship Council.
Themistoklis Sapsis tackles engineering problems associated with the unpredictable ocean environment and its effects on ships and other structures.
On his first day of classes at the National Technical University of Athens’ School of Naval Architecture and Marine Engineering, Themistoklis Sapsis had a very satisfying realization.
“I realized that ships and other maritime structures are the only ones that operate at the interface of two different media: air and water,” says Sapsis. “This property alone creates so many challenges in terms of mathematical and computational modeling. And, of course, these media are not calm at all — they are random and often surprisingly unpredictable.”
In other words, Sapsis did not have to choose between his two great passions: huge, ocean-going ships and structures on the one hand, and mathematics on the other. Today, Sapsis, an associate professor of mechanical engineering at MIT, uses analytical and computational methods to try to predict behavior — such as that of ocean waves or instability inside a gas turbine — amid uncertain and occasionally extreme dynamics. His goal is to create designs for structures that are robust and safe even in a broad range of conditions. For example, he may study the loads acting on a ship during a storm, or the flow separation and lift reduction around a helicopter rotor blade during a difficult maneuver.
“These events are real — they often lead to big catastrophes and casualties,” Sapsis says. “My goal is to predict them and develop algorithms that can simulate them quickly. If we achieve this goal, then we could start talking about optimization and design of these systems with consideration of these extreme, rare, but possibly catastrophic events.”
Growing up in Athens, where great seafaring and mathematical traditions date back to ancient times, Sapsis lived in a house “full of machine elements, spare engines, and engineering blueprints,” the tools of his father’s trade as a superintendent engineer in the maritime industry.
His father traveled internationally to oversee major ship repairs, and Sapsis often went along.
“I think what made the biggest impression on me as a child was the size of these vessels and especially the engines. You had to climb five or six flights of stairs to see the whole thing,” he recalls.
Also in the Sapsis home were math and engineering books — “lots of them,” he says. His father insisted that he study math closely, at the same time that the young Sapsis was conducting physics experiments in the basement.
“This back-and-forth transition between dynamical systems — more generally mathematics — and naval architecture” was frequently on his mind, Sapsis says.
In college, Sapsis ended up taking every math class that was offered. He says he had the good fortune to get in touch early on with the most mathematically inclined professor in the School of Naval Architecture and Marine Engineering, who then mentored Sapsis for three years. In his spare time, Sapsis even attended classes in the university’s School of Applied Mathematics.
His undergraduate thesis was on the probabilistic description of dynamical systems subjected to random excitations, a topic important to understanding the motions of, and loads on, large ships. One of Sapsis’ most memorable research breakthroughs occurred while he was working on that thesis.
“I was given a nice problem by my thesis advisor,” Sapsis says. “He warned me that most likely I would not be able to get something new, as this was an old problem and many had tried in the past decades without success.”
Over the next six months, Sapsis went over every step of the methods that were in the academic literature, “again and again,” he says, trying to understand why various approaches failed. He started to discern a path toward deriving a new set of equations that could achieve his goal, but there were technical obstacles.
“Without a lot of hope, as I knew that this was an old problem, but with a lot of curiosity, I began working on the different steps,” Sapsis says. “After a few weeks of work, I realized that the steps were complete, and I had a new set of equations!”
“It was certainly one of my most enthusiastic moments,” Sapsis says, “when I heard my advisor saying, ‘Yes, this is new and it is important!’”
Since that early success, the engineering and architecture problems associated with building for the extreme and unpredictable ocean environment have provided Sapsis with plenty of research problems to solve.
“Naval architecture is one of the oldest professions, with many open problems remaining and many more new ones coming,” he says. “The theoretical tools should not be more complex than the problem itself. However, in this case there are some really challenging physical problems that require the development of fundamentally new mathematics and computational methods. I am always trying to begin with the fundamentals and build the right theoretical and computational tools to, hopefully, come closer to the modeling of certain complex phenomena.”
Sapsis, who joined the MIT faculty in 2013 and was tenured in 2019, says he loves the energy and pace of the Institute, where “there are so many things happening here that you can never feel you have achieved enough — but in a healthy way.”
“I always feel humbled by the amazing achievements of my colleagues and our students and postdocs,” he says. “It is a place filled with pure passion and talent, blended together for a good cause, to solve the world’s hardest problems.”
These days, Sapsis says it is his students who experience the pure excitement of finding solutions to problems in the field.
“My students and postdocs are now the ones who have the pleasure to be the first to find out when a new idea works,” Sapsis says. “I have to admit, however, that I save some problems for myself.”
In fact, Sapsis says he relaxes by “thinking about a nice problem: a high-risk and low-expectations one. I think of a strategy to go about it but know that most likely it will not work. This is something I don’t consider work.”
The results could help scientists unravel the processes underlying plate tectonics.
If the Earth’s oceans were drained completely, they would reveal a massive chain of undersea volcanoes snaking around the planet. This sprawling ocean ridge system is a product of overturning material in the Earth’s interior, where boiling temperatures can melt and loft rocks up through the crust, splitting the sea floor and reshaping the planet’s surface over hundreds of millions of years.
Now geologists at MIT have analyzed thousands of samples of erupted material along ocean ridges and traced back their chemical history to estimate the temperature of the Earth’s interior.
Their analysis shows that the temperature of the Earth’s mantle beneath ocean ridges is relatively consistent, at around 1,350 degrees Celsius — about as hot as a gas range’s blue flame. There are, however, “hotspots” along the ridge that can reach 1,600 degrees Celsius, comparable to the hottest lava.
The team’s results, appearing in the Journal of Geophysical Research: Solid Earth, provide a temperature map of the Earth’s interior around ocean ridges. With this map, scientists can better understand the melting processes that give rise to undersea volcanoes, and how these processes may drive the pace of plate tectonics over time.
“Convection and plate tectonics have been important processes in shaping Earth history,” says lead author Stephanie Brown Krein, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “Knowing the temperature along this whole chain is fundamental to understanding the planet as a heat engine, and how Earth might be different from other planets and able to sustain life.”
Krein’s co-authors include Zachary Molitor, an EAPS graduate student, and Timothy Grove, the R.R. Schrock Professor of Geology at MIT.
A chemical history
The Earth’s interior temperature has played a critical role in shaping the planet’s surface over hundreds of millions of years. But there’s been no way to directly read this temperature tens to hundreds of kilometers below the surface. Scientists have applied indirect means to infer the temperature of the upper mantle — the layer of the Earth just below the crust. But estimates thus far are inconclusive, and scientists disagree about how widely temperatures vary beneath the surface.
For their new study, Krein and her colleagues developed a new algorithm, called ReversePetrogen, that is designed to trace a rock’s chemical history back in time, to identify its original composition of elements and determine the temperature at which the rock initially melted below the surface.
The algorithm is based on years of experiments carried out in Grove’s lab to reproduce and characterize the melting processes of the Earth’s interior. Researchers in the lab have heated up rocks of various compositions, reaching various temperatures and pressures, to observe their chemical evolution. From these experiments, the team has been able to derive equations — and ultimately, the new algorithm — to predict the relationships between a rock’s temperature, pressure, and chemical composition.
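As a toy stand-in for what such an inversion produces, the sketch below applies a commonly cited empirical glass geothermometer (Helz and Thornber, 1987), which relates a glass’s MgO content to its temperature. The published ReversePetrogen algorithm additionally reverses the melt’s chemical evolution before estimating where and how hot the rock first melted, so the numbers here are illustrative only and the sample compositions are hypothetical.

```python
# Toy illustration of turning erupted-glass chemistry into temperature.
# Uses the empirical Helz-Thornber (1987) glass geothermometer,
#     T (deg C) ~ 20.1 * MgO (wt%) + 1014,
# as a stand-in; this is NOT the ReversePetrogen algorithm, which first
# reverses the melt's chemical evolution (e.g., crystallization during
# ascent) before estimating the original melting temperature.

def glass_temperature_c(mgo_wt_pct: float) -> float:
    """Approximate glass temperature from MgO content (empirical fit)."""
    return 20.1 * mgo_wt_pct + 1014.0

# Hypothetical glass analyses: (sample id, MgO in weight percent)
samples = [("ridge-001", 7.8), ("ridge-002", 8.4), ("hotspot-101", 14.5)]
for name, mgo in samples:
    print(f"{name}: ~{glass_temperature_c(mgo):.0f} deg C")
```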
Krein and her colleagues applied their new algorithm to rocks collected along the Earth’s ocean ridges — a system of undersea volcanoes spanning more than 70,000 kilometers. Ocean ridges are regions where tectonic plates are spread apart by the eruption of material from the Earth’s mantle — a process that is driven by underlying temperatures.
“You could effectively make a model of the temperature of the entire interior of the Earth, based partly on the temperature at these ridges,” Krein says. “The question is, what is the data really telling us about the temperature variation in the mantle along the whole chain?”
Mantle map
The data the team analyzed include more than 13,500 samples collected along the length of the ocean ridge system over several decades, by multiple research cruises. Each sample in the dataset is an erupted sea glass — lava that erupted in the ocean and was instantly chilled by the surrounding water into a pristine, preserved form.
Scientists previously identified the chemical compositions of each glass in the dataset. Krein and her colleagues ran each sample’s chemical compositions through their algorithm to determine the temperature at which each glass originally melted in the mantle.
In this way, the team was able to generate a map of mantle temperatures along the entire length of the ocean ridge system. From this map, they observed that much of the mantle is relatively homogeneous, with an average temperature of around 1,350 degrees Celsius. There are, however, “hotspots,” or regions along the ridge where temperatures in the mantle appear significantly hotter, at around 1,600 degrees Celsius.
“People think of hotspots as regions in the mantle where it’s hotter, and where material may be melting more, and potentially rising faster, and we don’t exactly know why, or how much hotter they are, or what the role of composition is at hotspots,” Krein says. “Some of these hotspots are on the ridge, and now we may get a sense of what the hotspot variation is globally using this new technique. That tells us something fundamental about the temperature of the Earth now, and now we can think of how it’s changed over time.”
Krein adds: “Understanding these dynamics will help us better determine how continents grew and evolved on Earth, and when subduction and plate tectonics started — which are critical for complex life.”
This research was supported, in part, by the National Science Foundation.
Interest is growing in mining the ocean for valuable metals. A new study helps gauge the extent of the impact.
In certain parts of the deep ocean, scattered across the seafloor, lie baseball-sized rocks layered with minerals accumulated over millions of years. A region of the central Pacific, called the Clarion Clipperton Fracture Zone (CCFZ), is estimated to contain vast reserves of these rocks, known as “polymetallic nodules,” that are rich in nickel and cobalt — minerals that are commonly mined on land for the production of lithium-ion batteries in electric vehicles, laptops, and mobile phones.
As demand for these batteries rises, efforts are moving forward to mine the ocean for these mineral-rich nodules. Such deep-sea-mining schemes propose sending down tractor-sized vehicles to vacuum up nodules and send them to the surface, where a ship would clean them and discharge any unwanted sediment back into the ocean. But the impacts of deep-sea mining — such as the effect of discharged sediment on marine ecosystems and how these impacts compare to traditional land-based mining — are currently unknown.
Now oceanographers at MIT, the Scripps Institution of Oceanography, and elsewhere have carried out an experiment at sea for the first time to study the turbulent sediment plume that mining vessels would potentially release back into the ocean. Based on their observations, they developed a model that makes realistic predictions of how a sediment plume generated by mining operations would be transported through the ocean.
The model predicts the size, concentration, and evolution of sediment plumes under various marine and mining conditions. These predictions, the researchers say, can now be used by biologists and environmental regulators to gauge whether and to what extent such plumes would impact surrounding sea life.
“There is a lot of speculation about [deep-sea-mining’s] environmental impact,” says Thomas Peacock, professor of mechanical engineering at MIT. “Our study is the first of its kind on these midwater plumes, and can be a major contributor to international discussion and the development of regulations over the next two years.”
The team’s study appears today in Communications Earth & Environment.
Peacock’s co-authors at MIT include lead author Carlos Muñoz-Royo, Raphael Ouillon, Chinmay Kulkarni, Patrick Haley, Chris Mirabito, Rohit Supekar, Andrew Rzeznik, Eric Adams, Cindy Wang, and Pierre Lermusiaux, along with collaborators at Scripps, the U.S. Geological Survey, and researchers in Belgium and South Korea.
Out to sea
Current deep-sea-mining proposals are expected to generate two types of sediment plumes in the ocean: “collector plumes” that vehicles generate on the seafloor as they drive around collecting nodules 4,500 meters below the surface; and possibly “midwater plumes” that are discharged through pipes that descend 1,000 meters or more into the ocean’s aphotic zone, where sunlight rarely penetrates.
In their new study, Peacock and his colleagues focused on the midwater plume and how the sediment would disperse once discharged from a pipe.
“The science of the plume dynamics for this scenario is well-founded, and our goal was to clearly establish the dynamic regime for such plumes to properly inform discussions,” says Peacock, who is the director of MIT’s Environmental Dynamics Laboratory.
To pin down these dynamics, the team went out to sea. In 2018, the researchers boarded the research vessel Sally Ride and set sail 50 kilometers off the coast of Southern California. They brought with them equipment designed to discharge sediment 60 meters below the ocean’s surface.
“Using foundational scientific principles from fluid dynamics, we designed the system so that it fully reproduced a commercial-scale plume, without having to go down to 1,000 meters or sail out several days to the middle of the CCFZ,” Peacock says.
Over one week, the team ran a total of six plume experiments, using novel sensor systems such as a Phased Array Doppler Sonar (PADS) and an epsilometer developed by Scripps scientists to monitor where the plumes traveled and how they evolved in shape and concentration. The collected data revealed that the sediment, when initially pumped out of a pipe, was a highly turbulent cloud of suspended particles that mixed rapidly with the surrounding ocean water.
“There was speculation this sediment would form large aggregates in the plume that would settle relatively quickly to the deep ocean,” Peacock says. “But we found the discharge is so turbulent that it breaks the sediment up into its finest constituent pieces, and thereafter it becomes dilute so quickly that the sediment then doesn’t have a chance to stick together.”
Dilution
The team had previously developed a model to predict the dynamics of a plume that would be discharged into the ocean. When they fed the experiment’s initial conditions into the model, it produced the same behavior that the team observed at sea, proving the model could accurately predict plume dynamics within the vicinity of the discharge.
The researchers used these results to provide the correct input for simulations of ocean dynamics to see how far currents would carry the initially released plume.
“In a commercial operation, the ship is always discharging new sediment. But at the same time the background turbulence of the ocean is always mixing things. So you reach a balance. There’s a natural dilution process that occurs in the ocean that sets the scale of these plumes,” Peacock says. “What is key to determining the extent of the plumes is the strength of the ocean turbulence, the amount of sediment that gets discharged, and the environmental threshold level at which there is impact.”
Based on their findings, the researchers have developed formulae to calculate the scale of a plume depending on a given environmental threshold. For instance, if regulators determine that a certain concentration of sediments could be detrimental to surrounding sea life, the formula can be used to calculate how far a plume above that concentration would extend, and what volume of ocean water would be impacted over the course of a 20-year nodule mining operation.
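The paper’s formulae are not reproduced here, but the sketch below shows the shape of such a threshold calculation under an assumed dilution law, C(x) = C0 / (1 + x/L)^2, where L is a mixing length set by ocean turbulence. The functional form, the mixing length, and all concentrations below are illustrative assumptions, not the study’s values.

```python
# Hedged sketch of a threshold-based plume-extent calculation. Assumes the
# sediment concentration dilutes with distance from the discharge as
#     C(x) = C0 / (1 + x / L)**2,
# where L is a mixing length set by ocean turbulence. The dilution law and
# every number below are illustrative assumptions, not the paper's formulae.
import math

def plume_extent_m(c0: float, threshold: float, mixing_length_m: float) -> float:
    """Distance (m) at which concentration first falls below the threshold."""
    if threshold >= c0:
        return 0.0  # already below threshold at the discharge point
    # Solve C0 / (1 + x/L)**2 = threshold for x.
    return mixing_length_m * (math.sqrt(c0 / threshold) - 1.0)

# Example: 500 mg/L at discharge, 0.1 mg/L impact threshold, 50 m mixing length.
print(f"Plume extent: {plume_extent_m(500.0, 0.1, 50.0):.0f} m")
```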
“At the heart of the environmental question surrounding deep-sea mining is the extent of sediment plumes,” Peacock says. “It’s a multiscale problem, from micron-scale sediments, to turbulent flows, to ocean currents over thousands of kilometers. It’s a big jigsaw puzzle, and we are uniquely equipped to work on that problem and provide answers founded in science and data.”
The team is now working on collector plumes, having recently returned from several weeks at sea to perform the first environmental monitoring of a nodule collector vehicle in the deep ocean in over 40 years.
This research was supported in part by the MIT Environmental Solutions Initiative, the UC Ship Time Program, the MIT Policy Lab, the 11th Hour Project of the Schmidt Family Foundation, the Benioff Ocean Initiative, and Fundación Bancaria “la Caixa.”