- Single-cell database to propel biological studies
- Deriving a theory of defects
- Integrating optical components into existing chip designs
- Gauging the effects of water scarcity on an irrigated planet
- How to bend and stretch a diamond
- For nuclear weapons reduction, a way to verify without revealing
- From blank verse to blockchain
- Machine-learning system processes sounds like humans do
Posted: 19 Apr 2018 01:20 PM PDT
A team at Whitehead Institute and MIT has harnessed single-cell technologies to analyze over 65,000 cells from the regenerative planarian flatworm, Schmidtea mediterranea, revealing the complete suite of active genes (or "transcriptome") for practically every type of cell in a complete organism. This transcriptome atlas represents a treasure trove of biological information on planarians, which are the subject of intense study in part because of their unique ability to regrow lost or damaged body parts. As described in the April 19 advance online issue of the journal Science, this new, publicly available resource has already fueled important discoveries, including the identification of novel planarian cell types, the characterization of key transition states as cells mature from one type to another, and the identity of new genes that could impart positional cues from muscle cells — a critical component of tissue regeneration.
"We're really at the beginning of an amazing era," says senior author Peter Reddien, a member of Whitehead Institute, professor of biology at MIT, and investigator with the Howard Hughes Medical Institute. "Just as genome sequences became indispensable resources for studying the biology of countless organisms, analyzing the transcriptomes of every cell type will become another fundamental tool — not just for planarians, but for many different organisms."
The ability to systematically reveal which genes in the genome are active within an individual cell flows from a critical technology known as single-cell RNA sequencing. Recent advances in the technique have dramatically reduced the per-cell cost, making it feasible for a single laboratory to analyze a suitably large number of cells to capture the cell type diversity present in complex, multi-cellular organisms.
Reddien has kept a careful eye on the technology from its earliest days because he believed it offered an ideal way to unravel planarian biology. "Planarians are relatively simple, so it would be theoretically possible for us to capture every cell type. Yet they still have a sufficiently large number of cells — including types we know little or even nothing about," he explains. "And because of the unusual aspects of planarian biology — essentially, adults maintain developmental information and progenitor cells that in other organisms might be present transiently only in embryos — we could capture information about mature cells, progenitor cells, and information guiding cell decisions by sampling just one stage, the adult."
Two and a half years ago, Reddien and his colleagues — led by first author Christopher Fincher, a graduate student in Reddien's laboratory — set out to apply single-cell RNA sequencing systematically to planarians. The group isolated individual cells from five regions of the animal and gathered data from a total of 66,783 cells. The results include transcriptomes for rare cell types, such as those that comprise on the order of 10 cells out of an adult animal that consists of roughly 500,000 to 1 million cells.
In addition, the researchers uncovered some cell types that have yet to be described in planarians, as well as cell types common to many organisms, making the atlas a valuable tool across the scientific community. "We identified many cells that were present widely throughout the animal, but had not been previously identified. This surprising finding highlights the great value of this approach in identifying new cells, a method that could be applied widely to many understudied organisms," Fincher says.
"One main important aspect of our transcriptome atlas is its utility for the scientific community," Reddien says. "Because many of the cell types present in planarians emerged long ago in evolution, similar cells still exist today in various organisms across the planet. That means these cell types and the genes active within them can be studied using this resource."
The Whitehead team also conducted some preliminary analyses of their atlas, which they've dubbed "Planarian Digiworm." For example, they were able to discern in the transcriptome data a variety of transition states that reflect the progression of stem cells into more specialized, differentiated cell types. Some of these cellular transition states have been previously analyzed in painstaking detail, thereby providing an important validation of the team's approach.
In addition, Reddien and his colleagues knew from their own prior, extensive research that there is positional information encoded in adult planarian muscle — information that is required not only for the general maintenance of adult tissues but also for the regeneration of lost or damaged tissue. Based on the activity pattern of known genes, they could determine roughly which positions the cells had occupied in the intact animal, and then sort through those cells' transcriptomes to identify new genes that are candidates for transmitting positional information.
"There are an unlimited number of directions that can now be taken with these data," Reddien says. "We plan to extend our initial work, using further single-cell analyses, and also to mine the transcriptome atlas for addressing important questions in regenerative biology. We hope many other investigators find this to be a very valuable resource, too."
This work was supported by the National Institutes of Health, the Howard Hughes Medical Institute, and the Eleanor Schwartz Charitable Foundation.
Posted: 19 Apr 2018 12:45 PM PDT
"I only recently decided on the area to which I would dedicate decades of my life," confides Mingda Li PhD '15, who has just been appointed assistant professor in the Department of Nuclear Science and Engineering. "I could not commit until I became mentally mature enough to make real contributions."
The area Li today calls his own, and where he is indeed generating significant advances, lies at the intersection of quantum physics and engineering. His research characterizing complex defects in materials has the potential to break through efficiency barriers in a wide range of energy applications.
In five papers published in 2017, including two in Nano Letters, Li and his co-authors described a new approach to understanding a common type of material defect called a crystal dislocation, proposing a theoretical new particle named a "dislon" to help capture the mechanism underlying dislocations.
"These defects show up everywhere — in metals, semiconductors, insulators," says Li. Caused by stress, they emerge naturally in crystals, disrupting the precise lattice arrangement of atoms, and affecting a wide range of properties in materials, including electrical and thermal behavior.
Previous attempts had failed to precisely delineate the mechanism for these dislocations. Li, conducting postdoctoral research with advisors Gang Chen, the Soderberg Professor and head of the Department of Mechanical Engineering, and the late Institute Professor Emerita Mildred S. Dresselhaus, drew on quantum field theory to devise a mathematical approach for explaining dislocations. His quantum dislocation framework, based on hundreds of pages of derivations, can determine how dislocations change a material's properties.
"We came up with one equation to compute any properties caused by dislocation — electrical, optical, magnetic, thermal, even superconducting," says Li.
With his innovative approach, Li believes it will be possible to transform dislocations from mere defects into a new material tuning dimension. "We will be able to tailor them to improve the performance of many kinds of materials, including those used in thermoelectric technologies, nuclear reactor claddings, solar panels, and semiconductor microelectronics," says Li.
This accomplishment, which has vaulted Li to prominence in the field, comes after a long journey during which Li sometimes struggled to find his bearings. Growing up in Tsingtao, China, Li felt at an early age "a great passion for mathematics," and for computer science in particular. "I adored Bill Gates, and with access to a personal computer at school, taught myself programming," he says.
He avidly read comics, especially the popular Doraemon manga series, named for a robot cat who travels back in time to protect a boy. "The boy was really unlucky, criticized by tyrants," recalls Li. "I wished I had my own robot, but then I finally decided to become the Doraemon."
With his move to a boarding high school in Beijing, Li began to explore disciplines with great intensity. He entertained a life in mathematics, but gave that up before college. "I needed true genius and intuition, and realized I had that only 5 percent of the time." So he turned to physics and engineering. "I had an eagerness to learn and create something new, to make me feel excited and happy," he says.
At Tsinghua University, he took up high energy and laser physics, but then turned to nuclear science, whose promise of unlimited energy intrigued him. After encountering MIT transfer students, Li decided to pursue doctoral studies in nuclear science and engineering in the U.S. with "the world's leading experts" at MIT.
Finding the right path
At NSE, Li initially worked on questions of X-ray scattering. "I was interested in particle dynamics at a nanometer scale, trying to understand how there's something beyond material structure that can influence properties."
After a project in high end electron microscopy fell through, Li felt stuck. "I didn't have a good thesis topic, and had no idea what to do next," he recalls.
In search of direction, Li approached his advisor, Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering, for advice. "He just told me to do great science," recalls Li. "This made me feel both anxious and immensely free, because I needed to design an interesting project from the ground up, finding suitable collaborators and resources, which I finally did."
For his dissertation, Li began studying topological materials, "semiconductors that behave weirdly," using spectroscopic and other methods. "These materials live in a quantum world, and they act very different from traditional materials, especially in terms of electron and thermal transport," he says. For his postdoctoral research, he "was thinking bigger and crazier," he says, with studies of topological materials culminating in his pathbreaking dislon framework.
Today, Li is extending this research, working with materials in his own lab to see how defects might improve the performance of technologies society depends on. "People want to build transmission lines without heat loss," says Li, citing one example. "We need to learn how to tune material properties in the right direction."
He has a long-term ambition to develop a comparable framework for analyzing amorphous materials like cement. "If we want to build structures that last forever, we need to understand their behaviors better."
Most of all, he relishes the freedom of his new academic life, and the growing MIT network that sustains his research passions. "I have so many talented colleagues, friends, and students, who talk and get excited together every day," says Li. "I really treasure this community."
Posted: 19 Apr 2018 12:00 PM PDT
Two and a half years ago, a team of researchers led by groups at MIT, the University of California at Berkeley, and Boston University announced a milestone: the fabrication of a working microprocessor, built using only existing manufacturing processes, that integrated electronic and optical components on the same chip.
The researchers' approach, however, required that the chip's electrical components be built from the same layer of silicon as its optical components. That meant relying on an older chip technology in which the silicon layers for the electronics were thick enough for optics.
In the latest issue of Nature, a team of 18 researchers, led by the same MIT, Berkeley, and BU groups, reports another breakthrough: a technique for assembling on-chip optics and electronics separately, which enables the use of more modern transistor technologies. Again, the technique requires only existing manufacturing processes.
"The most promising thing about this work is that you can optimize your photonics independently from your electronics," says Amir Atabaki, a research scientist at MIT's Research Laboratory of Electronics and one of three first authors on the new paper. "We have different silicon electronic technologies, and if we can just add photonics to them, it'd be a great capability for future communications and computing chips. For example, now we could imagine a microprocessor manufacturer or a GPU manufacturer like Intel or Nvidia saying, 'This is very nice. We can now have photonic input and output for our microprocessor or GPU.' And they don't have to change much in their process to get the performance boost of on-chip optics."
Moving from electrical communication to optical communication is attractive to chip manufacturers because it could significantly increase chips' speed and reduce power consumption, an advantage that will grow in importance as chips' transistor count continues to rise: The Semiconductor Industry Association has estimated that at current rates of increase, computers' energy requirements will exceed the world's total power output by 2040.
The integration of optical — or "photonic" — and electronic components on the same chip reduces power consumption still further. Optical communications devices are on the market today, but they consume too much power and generate too much heat to be integrated into an electronic chip such as a microprocessor. A commercial modulator — the device that encodes digital information onto a light signal — consumes between 10 and 100 times as much power as the modulators built into the researchers' new chip.
It also takes up 10 to 20 times as much chip space. That's because the integration of electronics and photonics on the same chip enables Atabaki and his colleagues to use a more space-efficient modulator design, based on a photonic device called a ring resonator.
"We have access to photonic architectures that you can't normally use without integrated electronics," Atabaki explains. "For example, today there is no commercial optical transceiver that uses optical resonators, because you need considerable electronics capability to control and stabilize that resonator."
Atabaki's co-first-authors on the Nature paper are Sajjad Moazeni, a PhD student at Berkeley, and Fabio Pavanello, who was a postdoc at the University of Colorado at Boulder when the work was done. The senior authors are Rajeev Ram, a professor of electrical engineering and computer science at MIT; Vladimir Stojanovic, an associate professor of electrical engineering and computer sciences at Berkeley; and Milos Popovic, an assistant professor of electrical and computer engineering at Boston University. They're joined by 12 other researchers at MIT, Berkeley, Boston University, the University of Colorado, the State University of New York at Albany, and Ayar Labs, an integrated-photonics startup that Ram, Stojanovic, and Popovic helped found.
In addition to millions of transistors for executing computations, the researchers' new chip includes all the components necessary for optical communication: modulators; waveguides, which steer light across the chip; resonators, which separate out different wavelengths of light, each of which can carry different data; and photodetectors, which translate incoming light signals back into electrical signals.
Silicon — which is the basis of most modern computer chips — must be fabricated on top of a layer of glass to yield useful optical components. The difference between the refractive indices of the silicon and the glass — the degrees to which the materials bend light — is what confines light to the silicon optical components.
The earlier work on integrated photonics, which was also led by Ram, Stojanovic, and Popovic, involved a process called wafer bonding, in which a single, large crystal of silicon is fused to a layer of glass deposited atop a separate chip. The new work, in enabling the direct deposition of silicon — with varying thickness — on top of glass, must make do with so-called polysilicon, which consists of many small crystals of silicon.
Single-crystal silicon is useful for both optics and electronics, but in polysilicon, there's a tradeoff between optical and electrical efficiency. Large-crystal polysilicon is efficient at conducting electricity, but the large crystals tend to scatter light, lowering the optical efficiency. Small-crystal polysilicon scatters light less, but it's not as good a conductor.
Using the manufacturing facilities at SUNY-Albany's Colleges for Nanoscale Sciences and Engineering, the researchers tried out a series of recipes for polysilicon deposition, varying the type of raw silicon used, processing temperatures and times, until they found one that offered a good tradeoff between electronic and optical properties.
"I think we must have gone through more than 50 silicon wafers before finding a material that was just right," Atabaki says.
Posted: 19 Apr 2018 11:50 AM PDT
Growing global food demand, climate change, and climate policies favoring bioenergy production are expected to increase pressures on water resources around the world. Many analysts predict that water shortages will constrain the ability of farmers to expand irrigated cropland, which would be critical to ramping up production of both food and bioenergy crops. If true, bioenergy production and food consumption would decline amid rising food prices and pressures to convert forests to rain-fed farmland. Now a team of researchers at the MIT Joint Program on the Science and Policy of Global Change has put this prediction to the test.
To assess the likely impacts of future limited water resources on bioenergy production, food consumption and prices, land-use change and the global economy, the MIT researchers have conducted a study that explicitly represents irrigated land and water scarcity. Appearing in the Australian Journal of Agricultural and Resource Economics, the study is the first to include an estimation of how irrigation management and systems may respond to changes in water availability in a global economy-wide model that represents agriculture, energy and land-use change.
Combining the MIT Integrated Global System Modeling (IGSM) framework with a water resource system (WRS) component that enables analyses at the scale of river basins, the model represents additional irrigable land in 282 river basins across the globe. Using the IGSM-WRS model, the researchers assessed the costs of expanding production in these areas through upgrades such as improving irrigation efficiency, lining canals to limit water loss, and expanding water storage capacity.
They found that explicitly representing irrigated land (i.e., distinguishing it from rain-fed land, which produces lower yields) had little impact on their projections of global food consumption and prices, bioenergy production, and the rate of deforestation under water scarcity. The impacts are minimal because in response to shortages, water can be used more efficiently through the aforementioned upgrades, and regions with relatively less water scarcity can expand agricultural production for export to more arid regions.
Moreover, the researchers determined that changes in water availability for agriculture of plus or minus 20 percent had little impact on global food prices, bioenergy production, land-use change and the global economy.
"Many previous economy-wide studies do not include a representation of water constraints, and those that do fail to consider changes in irrigation systems (e.g., construction of more dams or improvements in irrigation efficiency) in response to water shortages," says MIT Joint Program Principal Research Scientist Niven Winchester, the study's lead author. "When these responses are included, we find that water shortages have smaller impacts than estimated in other studies."
Despite the small global impacts, the researchers observed that explicitly representing irrigated land under water scarcity as well as changes in water availability for agriculture can have significant impact at the regional level. In places where rainfall is relatively low and/or population growth is projected to outpace irrigation capacity and efficiency improvements, water shortages are more likely to limit irrigated cropland expansion, leading to lower crop production in those areas.
The study's findings highlight the importance of improvements in irrigation efficiency and international trade in agricultural commodities. The research may also be used to identify regions with a high potential to be severely influenced by future water shortages.
The study was primarily funded by BP.
Posted: 19 Apr 2018 10:59 AM PDT
Diamond is well-known as the strongest of all natural materials, and with that strength comes another tightly linked property: brittleness. But now, an international team of researchers from MIT, Hong Kong, Singapore, and Korea has found that when grown in extremely tiny, needle-like shapes, diamond can bend and stretch, much like rubber, and snap back to its original shape.
The surprising finding is being reported this week in the journal Science, in a paper by senior author Ming Dao, a principal research scientist in MIT's Department of Materials Science and Engineering; MIT postdoc Daniel Bernoulli; senior author Subra Suresh, former MIT dean of engineering and now president of Singapore's Nanyang Technological University; graduate students Amit Banerjee and Hongti Zhang at City University of Hong Kong; and seven others from CUHK and institutions in Ulsan, South Korea.
Experiment (left) and simulation (right) of a diamond nanoneedle being bent by the side surface of a diamond tip, showing ultralarge and reversible elastic deformation.
The results, the researchers say, could open the door to a variety of diamond-based devices for applications such as sensing, data storage, actuation, biocompatible in vivo imaging, optoelectronics, and drug delivery. For example, diamond has been explored as a possible biocompatible carrier for delivering drugs into cancer cells.
The team showed that the narrow diamond needles, similar in shape to the rubber tips on the end of some toothbrushes but just a few hundred nanometers (billionths of a meter) across, could flex and stretch by as much as 9 percent without breaking, then return to their original configuration, Dao says.
Ordinary diamond in bulk form, Bernoulli says, has a limit of well below 1 percent stretch. "It was very surprising to see the amount of elastic deformation the nanoscale diamond could sustain," he says.
"We developed a unique nanomechanical approach to precisely control and quantify the ultralarge elastic strain distributed in the nanodiamond samples," says Yang Lu, senior co-author and associate professor of mechanical and biomedical engineering at CUHK. Putting crystalline materials such as diamond under ultralarge elastic strains, as happens when these pieces flex, can change their mechanical properties as well as thermal, optical, magnetic, electrical, electronic, and chemical reaction properties in significant ways, and could be used to design materials for specific applications through "elastic strain engineering," the team says.
Experiment (left) and simulation (right) of a diamond nanoneedle being bent to fracture by the side surface of a diamond tip, showing ultralarge elastic deformation (around 9 percent maximum tensile strain).
The team measured the bending of the diamond needles, which were grown through a chemical vapor deposition process and then etched to their final shape, by observing them in a scanning electron microscope while pressing down on the needles with a standard nanoindenter diamond tip (essentially the corner of a cube). Following the experimental tests using this system, the team did many detailed simulations to interpret the results and was able to determine precisely how much stress and strain the diamond needles could accommodate without breaking.
The researchers also developed a computer model of the nonlinear elastic deformation for the actual geometry of the diamond needle, and found that the maximum tensile strain of the nanoscale diamond was as high as 9 percent. The computer model also predicted that the corresponding maximum local stress was close to the known ideal tensile strength of diamond — i.e. the theoretical limit achievable by defect-free diamond.
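The reported figures are roughly consistent with a back-of-the-envelope linear-elasticity check. Assuming a Young's modulus of about 1,100 GPa for diamond (a commonly cited value; the real response at such large strains is nonlinear, which is why the team used a detailed computer model), a 9 percent strain implies a stress in the neighborhood of 100 GPa — the regime of diamond's theoretical strength:

```python
# Rough linear-elasticity estimate of the stress in the bent nanoneedle.
# Assumes E ~ 1100 GPa for diamond (a commonly cited value, not from the
# paper); at 9% strain the true response is nonlinear, so this is only
# an order-of-magnitude sanity check.
E_GPA = 1100.0   # assumed Young's modulus of diamond, in GPa
strain = 0.09    # maximum tensile strain reported in the study

stress_gpa = E_GPA * strain
print(f"Linear estimate of local stress: {stress_gpa:.0f} GPa")  # prints "Linear estimate of local stress: 99 GPa"
```

The ~99 GPa estimate lands near the ideal tensile strength of defect-free diamond, in line with the paper's statement that the maximum local stress approached the theoretical limit.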
When the entire diamond needle was made of one crystal, failure occurred at a tensile strain as high as 9 percent. Until this critical level was reached, the deformation could be completely reversed if the probe was retracted from the needle and the specimen was unloaded. If the tiny needle was made of many grains of diamond, the team showed that they could still achieve unusually large strains. However, the maximum strain achieved by the polycrystalline diamond needle was less than one-half that of the single crystalline diamond needle.
Yonggang Huang, a professor of civil and environmental engineering and mechanical engineering at Northwestern University, who was not involved in this research, agrees with the researchers' assessment of the potential impact of this work. "The surprise finding of ultralarge elastic deformation in a hard and brittle material — diamond — opens up unprecedented possibilities for tuning its optical, optomechanical, magnetic, phononic, and catalytic properties through elastic strain engineering," he says.
Huang adds, "When elastic strains exceed 1 percent, significant material property changes are expected through quantum mechanical calculations. With controlled elastic strains between 0 and 9 percent in diamond, we expect to see some surprising property changes."
The team also included Muk-Fung Yuen, Jiabin Liu, Jian Lu, Wenjun Zhang, and Yang Lu at the City University of Hong Kong; and Jichen Dong and Feng Ding at the Institute for Basic Science, in South Korea. The work was funded by the Research Grants Council of the Hong Kong Special Administrative Region, the Singapore-MIT Alliance for Research and Technology (SMART), Nanyang Technological University Singapore, and the National Natural Science Foundation of China.
Posted: 19 Apr 2018 09:30 AM PDT
In past negotiations aimed at reducing the arsenals of the world's nuclear superpowers, chiefly the U.S. and Russia, a major sticking point has been the verification process: How do you prove that real bombs and nuclear devices — not just replicas — have been destroyed, without revealing closely held secrets about the design of those weapons?
Now, researchers at MIT have come up with a clever solution, which in effect serves as a physics-based version of the cryptographic keys used in computer encryption systems. In fact, they've come up with two entirely different versions of such a system, to show that there may be a variety of options to choose from if any one is found to have drawbacks. Their findings are reported in two papers, one in Nature Communications and the other in the Proceedings of the National Academy of Sciences, with MIT assistant professor of nuclear science and engineering Areg Danagoulian as senior author of both.
Because of the difficulties in proving that a nuclear warhead is real and contains actual nuclear fuel (typically highly enriched plutonium), past treaties have instead focused on the much larger and harder-to-fake delivery systems: intercontinental ballistic missiles, cruise missiles, and bombers. Arms reduction treaties such as START, which reduced the number of delivery systems on each side by 80 percent in the 1990s, resulted in the destruction of hundreds of missiles and planes, including 365 huge B-52 bombers chopped into pieces by a giant guillotine-like device in the Arizona desert.
But to avert the dangers of future proliferation — for example, if rogue nations or terrorists gained control of nuclear warheads — actually disposing of the bombs themselves and their fuel should be a goal of future treaties, Danagoulian says. So, a way of verifying such destruction could be a key to making such agreements possible. Danagoulian says his team, which included graduate student Jayson Vavrek, postdoc Brian Henderson, and recent graduate Jake Hecla '17, have found just such a method, in two different variations.
"How do you verify what's in a black box without looking inside? People have tried many different concepts," Danagoulian says. But these efforts tend to suffer from the same problem: If they reveal enough information to be effective, they reveal too much to be politically acceptable.
To get around that, the new method is a physical analog of data encryption, in which data is typically manipulated using a specific set of large numbers, known as the key. The resulting data are essentially rendered into gibberish, indecipherable without the necessary key. However, it is still possible to tell whether or not two sets of data are identical, because after encryption they would still be identical, transformed into exactly the same gibberish. Someone viewing the data would have no knowledge of their content, but could still be certain that the two datasets were the same.
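The analogy can be made concrete with a toy sketch (purely illustrative — this is ordinary software scrambling, not the physical mechanism the team built, and all names and data below are invented): two datasets scrambled with the same secret key produce identical outputs if and only if the inputs match, so an observer can verify equality without learning the contents.

```python
import hashlib
from itertools import cycle

def scramble(data: bytes, key: bytes) -> bytes:
    """Toy scrambling: XOR the data with a keystream derived from the key.
    Illustrative only; not production cryptography."""
    keystream = hashlib.sha256(key).digest()
    # Repeat the short keystream to cover the data (fine for a toy example).
    return bytes(b ^ k for b, k in zip(data, cycle(keystream)))

# The "key" is known only to the weapon's owner, like the foil's isotope mix.
key = b"secret proportions known only to the owner"

real_warhead = b"isotope mix + pit geometry"
claimed_item = b"isotope mix + pit geometry"            # identical contents
fake_item    = b"dummy with matching gamma signature"   # different contents

# The inspector sees only scrambled outputs, never the plaintext:
assert scramble(real_warhead, key) == scramble(claimed_item, key)  # match
assert scramble(real_warhead, key) != scramble(fake_item, key)     # mismatch
print("identical inputs scramble identically; contents stay hidden")
```

The physical system plays the same trick with isotopes instead of bytes: the scrambled "output" reveals nothing about the warhead, yet a mismatch between a declared item and the reference is still detectable.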
That's the principle Danagoulian and his team applied, in physical form, with the warhead verification system — doing it "not through computation, but through physics," he says. "You can hack electronics, but you can't hack physics."
A nuclear warhead has two essential characteristics: the mix of heavy elements and isotopes that makes up its nuclear "fuel," and the dimensions of the hollow sphere, called a pit, in which that nuclear material is typically configured. These details are considered top-secret information within all the nations that possess such weapons.
Just measuring the radiation emitted by a supposed warhead isn't enough to prove it's real, Danagoulian says. It could be a fake containing weapon-irrelevant materials that give off exactly the same radiation signature as a real bomb. Measurements based on isotope-sensitive resonant processes can probe the bomb's internal characteristics and reveal both the isotope mix and the shape, proving its reality, but that gives away all the secrets. So Danagoulian and his team introduced another piece to the puzzle: a physical "key" containing a mix of the same isotopes, but in proportions that are unknown to the inspection crew and that thus scramble the information about the weapon itself.
Think of it this way: It's as though the isotopes were represented by colors, and the key was a filter placed over the target, with areas that balance each color on the target with its exact complementary color, just like a photographic negative, so that when lined up the colors cancel out perfectly and everything just looks black. But if the target itself has a different color pattern, the mismatch would be glaringly obvious – revealing a "fake" target.
In the case of the neutron-based concept, it's the mix of the heavy isotopes that's matched, rather than colors, but the effect is the same. The country that produced the bomb would produce the matching "filter," in this case called a cryptographic reciprocal or cryptographic foil. The warhead to be verified, which can be concealed within a black box to prevent any visual inspection, is lined up with the reciprocal or foil. The combination is then measured using a beam of neutrons and a detector that registers the isotope-specific resonant signatures. The resulting neutron data can be rendered as an image that appears essentially blank if the warhead is real, but shows details of the warhead if it's not. (In the alternative version, the beam consists of photons, the "filter" is a cryptographic foil, and the output is a spectrum rather than an image, but the essential principle is the same.) These tests satisfy the requirements of a zero-knowledge proof, in which an honest prover can demonstrate compliance without revealing anything more.
There's a further disincentive to cheating built into the neutron-based system. Because the template is a perfect complement of the warhead itself, trying to pass off a dummy instead of the real thing would actually do the very thing that nations are trying to avoid: it would reveal some of the secret details of the warhead's composition and configuration (just as a photographic negative lined up with a non-matching positive would reveal the outlines of the image).
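The complement-key logic above can be sketched in a few lines of toy code. This is only an illustration of the arithmetic, not the actual physics: the isotope signature is reduced to a list of numbers per position, and the names (`make_key`, `looks_blank`, the signature values) are hypothetical.

```python
# Toy illustration of the complement-key idea (not the actual physics).
# Each position in the object is summarized as one isotope "signal" value;
# the key is built as the exact complement of the genuine warhead, so
# key + genuine sums to a constant everywhere and the combined
# measurement looks uniformly "blank".

def make_key(genuine, level):
    """Complement of the genuine signature: key[i] = level - genuine[i]."""
    return [level - g for g in genuine]

def combined_measurement(target, key):
    """What the inspector sees: target and key superimposed."""
    return [t + k for t, k in zip(target, key)]

def looks_blank(measurement, tol=1e-9):
    """Blank means no contrast: every position reads the same value."""
    return max(measurement) - min(measurement) <= tol

genuine = [0.7, 0.2, 0.9, 0.4]   # hypothetical per-position signature
fake    = [0.7, 0.2, 0.1, 0.4]   # differs from the genuine item at one position

key = make_key(genuine, level=1.0)

print(looks_blank(combined_measurement(genuine, key)))  # True: reads blank
print(looks_blank(combined_measurement(fake, key)))     # False: mismatch shows
```

Note that a failed check exposes exactly where the fake differs from the template, which is the built-in disincentive described below.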
Danagoulian, who grew up in Armenia when it was part of the Soviet Union before emigrating to the U.S. for college (he earned his bachelor's at MIT in 1999 and his PhD at the University of Illinois at Urbana-Champaign in 2006), says he remembers vividly the Cold War days when both the U.S.S.R. and the U.S. had thousands of nuclear missiles perpetually at the ready, aimed at each other's cities. After the fall of the Soviet Union, he says, there was a huge amount of fissile material suitable for bomb-making left in Russia and its former satellites. This material "measured in tens of tons – which could be used for making thousands, if not tens of thousands," of nuclear bombs, he says. Those memories provided a strong motivation to find ways of using his knowledge of physics to facilitate a reduction in the amount of such material and in the number of nuclear weapons at the ready around the world, he says.
The team has verified the neutron concept through extensive simulations and now hopes to prove that it works through tests with actual fissile materials, in collaboration with a national laboratory that can provide such materials. The photon concept has been the focus of a proof-of-concept experiment carried out at MIT and is described in the PNAS publication.
Karl van Bibber, professor and co-chair of the Department of Nuclear Engineering at the University of California at Berkeley, says that an earlier paper from this team that outlined the concept "attracted much attention when it appeared, but as a theoretical work one could rightly reserve judgment regarding its feasibility in practice." This new paper, however, "goes far as a first scientific demonstration of the technique, particularly as the experiment was performed with the simplest and least favorable photon source available, ... simple enough for this methodology to gain currency in an actual verification program."
Thus, van Bibber says, "Danagoulian and team have passed a major bar ... The challenge up next will be tests with higher fidelity surrogates for warheads and ultimately real systems."
If a system does someday get adopted and helps bring about significant reductions in the amount of nuclear weapons in the world, Danagoulian says, "everyone will be better off. There will be less of this waiting around, waiting to be stolen, accidentally dropped or smuggled somewhere. We hope this will make a dent in the problem."
Posted: 19 Apr 2018 09:20 AM PDT
The founder of a startup at the cutting edge of computer science and cloud computing, Ryan Robinson '17 says his MIT 21E joint degree in the humanities and engineering has helped him understand the human dimensions of the world's greatest challenges.
"In other words," he says, "I wanted to do everything."
He didn't realize at the time that "everything" would include poetry.
As a sophomore, Robinson marveled while Professor Howard Eiland analyzed Shakespeare's Sonnet 121 in 21L.004 (Reading Poetry). Methodically, expertly, Eiland examined each line of the classic poem for historical resonance, allusions to other works, and etymological significance. Exposure to such mastery was inspiring, Robinson says.
"I realized that the world is more complex than I had ever imagined and there's a beauty behind that complexity if you are willing to look for it," he says, "It was at that moment that I saw myself as an MIT literature major."
Robinson himself says: "I use the skills I learned as a humanities and engineering major every day to move between the worlds of business and technology."
Story prepared by MIT SHASS Communications
Posted: 19 Apr 2018 08:59 AM PDT
Using a machine-learning system known as a deep neural network, MIT researchers have created the first model that can replicate human performance on auditory tasks such as identifying a musical genre.
This model, which consists of many layers of information-processing units that can be trained on huge volumes of data to perform specific tasks, was used by the researchers to shed light on how the human brain may be performing the same tasks.
"What these models give us, for the first time, is machine systems that can perform sensory tasks that matter to humans and that do so at human levels," says Josh McDermott, the Frederick A. and Carole J. Middleton Assistant Professor of Neuroscience in the Department of Brain and Cognitive Sciences at MIT and the senior author of the study. "Historically, this type of sensory processing has been difficult to understand, in part because we haven't really had a very clear theoretical foundation and a good way to develop models of what might be going on."
The study, which appears in the April 19 issue of Neuron, also offers evidence that the human auditory cortex is arranged in a hierarchical organization, much like the visual cortex. In this type of arrangement, sensory information passes through successive stages of processing, with basic information processed earlier and more advanced features such as word meaning extracted in later stages.
MIT graduate student Alexander Kell and Stanford University Assistant Professor Daniel Yamins are the paper's lead authors. Other authors are former MIT visiting student Erica Shook and former MIT postdoc Sam Norman-Haignere.
Modeling the brain
When deep neural networks were first developed in the 1980s, neuroscientists hoped that such systems could be used to model the human brain. However, computers from that era were not powerful enough to build models large enough to perform real-world tasks such as object recognition or speech recognition.
Over the past five years, advances in computing power and neural network technology have made it possible to use neural networks to perform difficult real-world tasks, and they have become the standard approach in many engineering applications. In parallel, some neuroscientists have revisited the possibility that these systems might be used to model the human brain.
"That's been an exciting opportunity for neuroscience, in that we can actually create systems that can do some of the things people can do, and we can then interrogate the models and compare them to the brain," Kell says.
The MIT researchers trained their neural network to perform two auditory tasks, one involving speech and the other involving music. For the speech task, the researchers gave the model thousands of two-second recordings of a person talking. The task was to identify the word in the middle of the clip. For the music task, the model was asked to identify the genre of a two-second clip of music. Each clip also included background noise to make the task more realistic (and more difficult).
After many thousands of examples, the model learned to perform the task just as accurately as a human listener.
"The idea is over time the model gets better and better at the task," Kell says. "The hope is that it's learning something general, so if you present a new sound that the model has never heard before, it will do well, and in practice that is often the case."
The model also tended to make mistakes on the same clips on which humans made the most mistakes.
The processing units that make up a neural network can be combined in a variety of ways, forming different architectures that affect the performance of the model.
The MIT team discovered that the best model for these two tasks was one that divided the processing into two sets of stages. The first set of stages was shared between tasks, but after that, it split into two branches for further analysis — one branch for the speech task, and one for the musical genre task.
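The shared-trunk, two-branch layout the team converged on can be sketched as a minimal forward pass. This is a schematic only: the layer widths and the word/genre class counts below are placeholder values, not those of the published model, and the untrained random weights stand in for learned ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Early stages shared by both tasks.
W1 = rng.standard_normal((64, 128)) * 0.1
W2 = rng.standard_normal((128, 128)) * 0.1

# After the split, each task gets its own branch.
W_speech = rng.standard_normal((128, 500)) * 0.1  # word classes (placeholder count)
W_music  = rng.standard_normal((128, 40)) * 0.1   # genre classes (placeholder count)

def forward(x):
    h = relu(x @ W1)    # shared processing, used by both tasks
    h = relu(h @ W2)
    word_logits  = h @ W_speech   # speech branch
    genre_logits = h @ W_music    # music-genre branch
    return word_logits, genre_logits

x = rng.standard_normal(64)       # stand-in for features of a 2-second clip
word_logits, genre_logits = forward(x)
print(word_logits.shape, genre_logits.shape)  # (500,) (40,)
```

The design choice the sketch captures is that early computation is generic and reused, while only the later, task-specific stages diverge.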
Evidence for hierarchy
The researchers then used their model to explore a longstanding question about the structure of the auditory cortex: whether it is organized hierarchically.
In a hierarchical system, a series of brain regions performs different types of computation on sensory information as it flows through the system. It has been well documented that the visual cortex has this type of organization. Earlier regions, known as the primary visual cortex, respond to simple features such as color or orientation. Later stages enable more complex tasks such as object recognition.
However, it has been difficult to test whether this type of organization also exists in the auditory cortex, in part because there haven't been good models that can replicate human auditory behavior.
"We thought that if we could construct a model that could do some of the same things that people do, we might then be able to compare different stages of the model to different parts of the brain and get some evidence for whether those parts of the brain might be hierarchically organized," McDermott says.
The researchers found that in their model, basic features of sound such as frequency are easier to extract in the early stages. As information is processed and moves farther along the network, it becomes harder to extract frequency but easier to extract higher-level information such as words.
To see if the model stages might replicate how the human auditory cortex processes sound information, the researchers used functional magnetic resonance imaging (fMRI) to measure different regions of auditory cortex as the brain processes real-world sounds. They then compared the brain responses to the responses in the model when it processed the same sounds.
They found that the middle stages of the model corresponded best to activity in the primary auditory cortex, and later stages corresponded best to activity outside of the primary cortex. This provides evidence that the auditory cortex might be arranged in a hierarchical fashion, similar to the visual cortex, the researchers say.
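The model-to-brain comparison can be sketched as a best-matching-stage search: for each brain region, find the model stage whose response profile across sounds correlates best with the measured fMRI responses. The data below are random stand-ins and the function names are ours, so the printed pairings are arbitrary; only the procedure is meaningful.

```python
import numpy as np

rng = np.random.default_rng(1)

def corr(a, b):
    """Pearson correlation between two response profiles."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

n_sounds = 50
# Hypothetical stand-ins: per-sound summary response of each model stage
# and of each measured brain region (real data would come from fMRI).
stage_responses  = {f"stage_{i}": rng.standard_normal(n_sounds) for i in range(1, 6)}
region_responses = {"primary_A1": rng.standard_normal(n_sounds),
                    "non_primary": rng.standard_normal(n_sounds)}

# For each region, pick the model stage that predicts it best.
best_stage = {}
for region, r in region_responses.items():
    best_stage[region] = max(stage_responses,
                             key=lambda s: corr(stage_responses[s], r))
    print(region, "->", best_stage[region])
```

With real recordings, the study's finding corresponds to middle stages matching primary auditory cortex and later stages matching non-primary regions.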
"What we see very clearly is a distinction between primary auditory cortex and everything else," McDermott says.
Alex Huth, an assistant professor of neuroscience and computer science at the University of Texas at Austin, says the paper is exciting in part because it offers convincing evidence that the early part of the auditory cortex performs generic sound processing while the higher auditory cortex performs more specialized tasks.
"This is one of the ongoing mysteries in auditory neuroscience: What distinguishes the early auditory cortex from the higher auditory cortex? This is the first paper I've seen that has a computational hypothesis for that," says Huth, who was not involved in the research.
The authors now plan to develop models that can perform other types of auditory tasks, such as determining the location from which a particular sound came, to explore whether these tasks can be done by the pathways identified in this model or if they require separate pathways, which could then be investigated in the brain.
The research was funded by the National Institutes of Health, the National Science Foundation, a Department of Energy Computational Science Graduate Fellowship, and a McDonnell Scholar Award.
You are subscribed to email updates from MIT News.