#Education Articles University

AI, the law, and our future

Posted: 18 Jan 2019 11:20 AM PST

Scientists and policymakers converged at MIT on Tuesday to discuss one of the hardest problems in artificial intelligence: How to govern it.

The first MIT AI Policy Congress featured seven panel discussions sprawling across a variety of AI applications, and 25 speakers — including two former White House chiefs of staff, former cabinet secretaries, homeland security and defense policy chiefs, industry and civil society leaders, and leading researchers.

Their shared focus: how to harness the opportunities that AI is creating — across areas including transportation and safety, medicine, labor, criminal justice, and national security — while vigorously confronting challenges, including the potential for social bias, the need for transparency, and missteps that could stall AI innovation while exacerbating social problems in the United States and around the world.

"When it comes to AI in areas of public trust, the era of moving fast and breaking everything is over," said R. David Edelman, director of the Project on Technology, the Economy, and National Security (TENS) at the MIT Internet Policy Research Initiative (IPRI), and a former special assistant to the president for economic and technology policy in the Obama White House.

Added Edelman: "There is simply too much at stake for all of us not to have a say."

Daniel Weitzner, founding director of IPRI and a principal research scientist at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), said a key objective of the dialogue was to help policy analysts feel confident about their ability to actively shape the effects of AI on society.

"I hope the policymakers come away with a clear sense that AI technology is not some immovable object, but rather that the right interaction between computer science, government, and society at large will help shape the development of new technology to address society's needs," Weitzner said at the close of the event.

The MIT AI Policy Congress was organized by IPRI, alongside a two-day meeting of the Organization for Economic Cooperation and Development (OECD), the Paris-based intergovernmental association, which is developing AI policy recommendations for 35 countries around the world. As part of the event, OECD experts took part in a half-day, hands-on training session in machine learning, as they trained and tested a neural network under the guidance of Hal Abelson, the Class of 1922 Professor of Computer Science and Engineering at MIT.

Tuesday's forum also began with a primer on the state of the art in AI from Antonio Torralba, a professor in CSAIL and the Department of Electrical Engineering and Computer Science (EECS), and director of the MIT Quest for Intelligence. Noting that "there are so many things going on" in AI, Torralba quipped: "It's very difficult to know what the future is, but it's even harder to know what the present is."

A new "commitment to address ethical issues"

Tuesday's event, co-hosted by the IPRI and the MIT Quest for Intelligence, was held at a time when AI is receiving a significant amount of media attention — and an unprecedented level of financial investment and institutional support.

For its part, MIT announced in October 2018 that it was founding the MIT Stephen A. Schwarzman College of Computing, supported by a $350 million gift from Stephen Schwarzman, which will serve as an interdisciplinary nexus of research and education in computer science, data science, AI, and related fields. The college will also address policy and ethical issues relating to computing.

"Here at MIT, we are at a unique moment with the impending launch of the new MIT Schwarzman College of Computing," Weitzner noted. "The commitment to address policy and ethical issues in computing will result in new AI research, and curriculum to train students to develop new technology to meet society's needs."

Other institutions are making an expanded commitment to AI as well — including the OECD.

"Things are evolving quite quickly," said Andrew Wyckoff, director for science, technology, and innovation at the OECD. "We need to begin to try to get ahead of that."

Wyckoff added that AI was a "top three" policy priority for the OECD in 2019-2020, and said the organization was forming a "policy observatory" to produce realistic assessments of AI's impact, including the issue of automation replacing jobs.

"There's a lot of fear out there about [workers] being displaced," said Wyckoff. "We need to look at this and see what is reality, versus what is fear."

A fair amount of that worry stems more from fear than from reality, said Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy and a professor at the MIT Sloan School of Management, during a panel discussion on manufacturing and labor.

Compared to the range of skills needed in most jobs, "Today what machine learning can do is much more narrow," Brynjolfsson said. "I think that's going to be the status quo for a number of years."

Brynjolfsson noted that his own research on the subject, evaluating the full range of specific tasks used in a wide variety of jobs, shows that automation tends to replace some but not all of those tasks.

"In not a single one of those occupations did machine learning run the table" of tasks, Brynjolfsson said. "You're not just going to be able to plug in a machine very often." However, he noted, the fact that computers can usurp certain tasks means that "reinvention and redesign" will be necessary for many jobs. Still, as Brynjolfsson emphasized, "That process is going to play out over years, if not decades."

A varied policy landscape

One major idea underscored at the event is that AI policymaking could unfold quite differently from industry to industry. For autonomous vehicles — perhaps the most widely-touted application of AI — U.S. states have significant rulemaking power, and laws could vary greatly across state lines.

In a panel discussion on AI and transportation, Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and the director of CSAIL, remarked that she sees transportation "as one of the main targets and one of the main points of adoption for AI technologies in the present and near future."

Rus suggested that the use of autonomous vehicles in some low-speed, less complex environments might be possible within five years or so, but she also made clear that autonomous vehicles fare less well in more complicated, higher-speed situations, and struggle in bad weather.

Partly for those reasons, many autonomous vehicles are expected to feature systems that let drivers take over the controls. But as Rus noted, that "depends on people's ability to take over instantaneously," while studies currently show that it takes drivers about nine seconds to assume control of their vehicles.

The transportation panel discussion also touched on the use of AI in nautical and aerial systems. In the latter case, "you can't look into your AI co-pilot's eyes and judge their confidence," said John-Paul Clarke, the vice president of strategic technologies at United Technologies, regarding the complex dynamics of human-machine interfaces.

In other industries, fundamental AI challenges involve access to data, a point emphasized by both Torralba and Regina Barzilay, an MIT professor in both CSAIL and EECS. During a panel on health care, Barzilay presented on one aspect of her research, which uses machine learning to analyze mammogram results for better early detection of cancer. In Barzilay's view, key technical challenges in her work that could be addressed by AI policy include access to more data and testing across populations — both of which can help refine automated detection tools.

The matter of how best to create access to patient data, however, led to some lively subsequent exchanges. Tom Price, former secretary of health and human services in the Trump administration, suggested that "de-identified data is absolutely the key" to further progress, while some MIT researchers in the audience suggested that it is virtually impossible to create totally anonymous patient data.

Jason Furman, a professor of the practice of economic policy at the Harvard Kennedy School and a former chair of the Council of Economic Advisors in the Obama White House, addressed the concern that insurers would deny coverage to people based on AI-generated predictions about which people would most likely develop diseases later in life. Furman suggested that the best solution for this lies outside the AI domain: preventing denial of care based on pre-existing conditions, an element of the Affordable Care Act.

But overall, Furman added, "the real problem with artificial intelligence is we don't have enough of it."

For his part, Weitzner suggested that, in lieu of perfectly anonymous medical data, "we should agree on what are the permissible uses and the impermissible uses" of data, since "the right way of enabling innovation and taking privacy seriously is taking accountability seriously."

Public accountability

For that matter, the accountability of organizations constituted another touchstone of Tuesday's discussions, especially in a panel on law enforcement and AI.

"Government entities need to be transparent about what they're doing with respect to AI," said Jim Baker, Harvard Law School lecturer and former general counsel of the FBI. "I think that's obvious."

Carol Rose, executive director of the American Civil Liberties Union's Massachusetts chapter, warned against over-use of AI tools in law enforcement.

"I think AI has tremendous promise, but it really depends if the data scientists and law enforcement work together," Rose said, suggesting that a certain amount of "junk science" had already made its way into tools being marketed to law-enforcement officials. Rose also cited Joy Buolamwini of the MIT Media Lab as a leader in the evaluation of such AI tools; Buolamwini founded the Algorithmic Justice League, a group scrutinizing the use of facial recognition technologies.

"Sometimes I worry we have an AI hammer looking for a nail," Rose said.

All told, as Edelman noted in closing remarks, the policy world consists of "very different bodies of law," and policymakers will need to ask themselves to what extent general regulations are meaningful, or if AI policy issues are best addressed in more specific ways — whether in medicine, criminal justice, or transportation.  

"Our goal is to see the interconnection among these fields … but as we do, let's also ask ourselves if 'AI governance' is the right frame at all — it might just be that in the near future, all governance deals with AI issues, one way or another," Edelman said.

Weitzner concluded the conference with a call for governments to continue engagement with the computer science and artificial intelligence technical communities. "The technologies that are shaping the world's future are being developed today. We have the opportunity to be sure that they serve society's needs if we keep up this dialogue as a way of informing technical design and cross-disciplinary research."

Enhanced NMR reveals chemical structures in a fraction of the time

Posted: 18 Jan 2019 10:59 AM PST

MIT researchers have developed a way to dramatically enhance the sensitivity of nuclear magnetic resonance spectroscopy (NMR), a technique used to study the structure and composition of many kinds of molecules, including proteins linked to Alzheimer's and other diseases.

Using this new method, scientists should be able to analyze in mere minutes structures that would previously have taken years to decipher, says Robert Griffin, the Arthur Amos Noyes Professor of Chemistry. The new approach, which relies on short pulses of microwave power, could allow researchers to determine structures for many complex proteins that have been difficult to study until now.

"This technique should open extensive new areas of chemical, biological, materials, and medical science which are presently inaccessible," says Griffin, the senior author of the study.

MIT postdoc Kong Ooi Tan is the lead author of the paper, which appears in Science Advances on Jan. 18. Former MIT postdocs Chen Yang and Guinevere Mathies, and Ralph Weber of Bruker BioSpin Corporation, are also authors of the paper.

Enhanced sensitivity

Traditional NMR uses the magnetic properties of atomic nuclei to reveal the structures of the molecules containing those nuclei. By using a strong magnetic field that interacts with the nuclear spins of hydrogen and other isotopically labeled atoms such as carbon or nitrogen, NMR measures a trait known as chemical shift for these nuclei. Those shifts are unique for each atom and thus serve as fingerprints, which can be further exploited to reveal how those atoms are connected.

The sensitivity of NMR depends on the atoms' polarization — a measure of the difference between the populations of "up" and "down" nuclear spins in each spin ensemble. The greater the polarization, the greater the sensitivity that can be achieved. Typically, researchers try to increase the polarization of their samples by applying a stronger magnetic field, up to 35 tesla.
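To see why such enhancements matter, it helps to know how small the equilibrium polarization is in the first place. The following back-of-the-envelope sketch uses standard spin-1/2 physics (a Boltzmann distribution over the two spin states); the numbers are textbook constants, not figures from the article:

```python
import math

# Physical constants (SI units)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
K_B = 1.380649e-23       # Boltzmann constant, J/K
GAMMA_1H = 267.522e6     # proton gyromagnetic ratio, rad/(s*T)

def thermal_polarization(b_field_tesla: float, temp_kelvin: float) -> float:
    """Equilibrium polarization P = tanh(hbar*gamma*B / (2*k*T))
    for an ensemble of spin-1/2 nuclei (here: protons)."""
    x = HBAR * GAMMA_1H * b_field_tesla / (2 * K_B * temp_kelvin)
    return math.tanh(x)

# Even at a 35 T field and room temperature, proton polarization is
# on the order of 1e-4 -- which is why 100-200x enhancement techniques
# such as DNP make such a difference in sensitivity.
p = thermal_polarization(35.0, 300.0)
print(f"{p:.2e}")
```

Because the polarization grows with field strength and shrinks with temperature, the same formula shows why researchers push toward stronger magnets.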

Another approach, which Griffin and Richard Temkin of MIT's Plasma Science and Fusion Center have been developing over the past 25 years, further enhances the polarization using a technique called dynamic nuclear polarization (DNP). This technique involves transferring polarization from the unpaired electrons of free radicals to hydrogen, carbon, nitrogen, or phosphorus nuclei in the sample being studied. This increases the polarization and makes it easier to discover the molecule's structural features.

DNP is usually performed by continuously irradiating the sample with high-frequency microwaves, using an instrument called a gyrotron. This improves NMR sensitivity by about 100-fold. However, this method requires a great deal of power and doesn't work well at higher magnetic fields that could offer even greater resolution improvements.

To overcome that problem, the MIT team came up with a way to deliver short pulses of microwave radiation, instead of continuous microwave exposure. By delivering these pulses at a specific frequency, they were able to enhance polarization by a factor of up to 200. This is similar to the improvement achieved with traditional DNP, but it requires only 7 percent of the power, and unlike traditional DNP, it can be implemented at higher magnetic fields.

"We can transfer the polarization in a very efficient way, through efficient use of microwave irradiation," Tan says. "With continuous-wave irradiation, you just blast microwave power, and you have no control over phases or pulse length."

Saving time

With this improvement in sensitivity, samples that would previously have taken nearly 110 years to analyze could be studied in a single day, the researchers say. In the Science Advances paper, they demonstrated the technique by using it to analyze standard test molecules such as a glycerol-water mixture, but they now plan to use it on more complex molecules.

One major area of interest is the amyloid beta protein that accumulates in the brains of Alzheimer's patients. The researchers also plan to study a variety of membrane-bound proteins, such as ion channels and rhodopsins, which are light-sensitive proteins found in bacterial membranes as well as the human retina. Because the sensitivity is so great, this method can yield useful data from a much smaller sample size, which could make it easier to study proteins that are difficult to obtain in large quantities.

The research was funded by the National Institute of Biomedical Imaging and Bioengineering, the Swiss National Science Foundation, and the German Research Foundation.

MIT Press to co-publish new open-access Quantitative Science Studies journal

Posted: 18 Jan 2019 09:10 AM PST

The International Society for Scientometrics and Informetrics (ISSI) has announced the launch of a new journal, Quantitative Science Studies (QSS). QSS is owned by ISSI, the primary scholarly and professional society for scientometrics and informetrics, and will be published jointly with the MIT Press in compliance with fair open access principles.

QSS will be a journal run for and by the scientometric community. The initial editorial board will be made up entirely of the former editorial board of the Journal of Informetrics (JOI), an Elsevier-owned journal. The transition of the editorial board from JOI to QSS was initiated by the unanimous resignation, on Jan. 10, of all members of the JOI editorial board. The editorial board members maintain that scholarly journals should be owned by the scholarly community rather than by commercial publishers; that journals should be open access; and that publishers should make citation data freely available. The members of the board were dissatisfied with Elsevier for not meeting these expectations, and they therefore resigned their positions.

The content for QSS will be open access and therefore freely available for readers worldwide. Funding for establishing and marketing the new journal has been provided in part by the MIT Libraries. To ensure access for authors, the MIT Press will charge a comparatively low per-article charge, which will be fully covered by the Technische Informationsbibliothek (TIB) - Leibniz Information Centre for Science and Technology for the first three years of operation, with support of the Communication, Information, Media Centre of the University of Konstanz. The funds from TIB will be managed by the Fair Open Access Alliance to ensure that the journal is operating under fair open access principles. The MIT Press is also a full participant in the I4OC initiative, which promotes unrestricted availability of scholarly citation data.

A recognized leader in open access book and journals publishing, the MIT Press has partnered with the MIT Libraries on several open access projects, including Strong Ideas, a new hybrid open-access/print book series; and with the MIT Media Lab to launch the Knowledge Futures Group to support the development of new open access publishing platforms and tools.

"The MIT Press is very pleased to be the ISSI's publishing partner for QSS," says Nick Lindsay, director of journals and open access at the MIT Press. "Both organizations share many goals around extending the reach and availability of scholarship and QSS will undoubtedly quickly become a central node for scientometrics research."

QSS is now open and accepting submissions. Please consult the journal's website for more details.

Democratizing artificial intelligence in health care

Posted: 18 Jan 2019 09:00 AM PST

An artificial intelligence program that's better than human doctors at recommending treatment for sepsis may soon enter clinical trials in London. The machine learning model is part of a new way of practicing medicine that mines electronic medical-record data for more effective ways of diagnosing and treating difficult medical problems, including sepsis, a blood infection that kills an estimated 6 million people worldwide each year. 

The discovery of a promising treatment strategy for sepsis didn't come about the regular way, through lengthy, carefully-controlled experiments. Instead, it emerged during a free-wheeling hackathon in London in 2015. 

In a competition bringing together engineers and health care professionals, one team hit on a better way to treat sepsis patients in the intensive-care unit, using MIT's open-access MIMIC database. One team member, Matthieu Komorowski, would go on to work with the MIT researchers who oversee MIMIC to develop a reinforcement learning model that predicted higher survival rates for patients given lower doses of IV fluids and higher doses of blood vessel-constricting drugs. The researchers published their findings this fall in Nature Medicine.
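The published model is far more sophisticated than anything that fits here, but the core idea of learning a dose policy from logged data can be sketched with tabular Q-learning over synthetic transitions. Every state, action, reward, and number below is invented for illustration; only the direction of the finding (lower fluids and higher vasopressors scoring better) is borrowed from the article.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical discretized actions: (IV fluid dose level, vasopressor dose level)
ACTIONS = [(f, v) for f in range(3) for v in range(3)]

def synthetic_episode():
    """Generate a fake logged ICU trajectory. Purely illustrative: the
    synthetic reward favors lower fluids and higher vasopressors, echoing
    the direction of the published finding, not its data."""
    state = random.choice(["stable", "deteriorating"])
    trajectory = []
    for _ in range(5):
        action = random.choice(ACTIONS)
        fluid, vaso = action
        reward = (2 - fluid) + vaso + random.gauss(0, 0.5)
        next_state = random.choice(["stable", "deteriorating"])
        trajectory.append((state, action, reward, next_state))
        state = next_state
    return trajectory

# Tabular Q-learning over logged (offline) transitions
Q = defaultdict(float)
ALPHA, GAMMA = 0.1, 0.9
for _ in range(2000):
    for s, a, r, s2 in synthetic_episode():
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

# The learned greedy policy prefers the lowest fluid / highest vasopressor
# combination, because that is how the synthetic reward was constructed.
best = max(ACTIONS, key=lambda a: Q[("deteriorating", a)])
print(best)
```

Real work of this kind learns from recorded patient trajectories rather than a simulator, which is what makes databases like MIMIC so valuable.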

The paper is part of a stream of research to come out of the "datathons" pioneered by Leo Celi, a researcher at MIT and staff physician at Beth Israel Deaconess Medical Center. Celi held the first datathon in January 2014 to spark collaboration among Boston-area nurses, doctors, pharmacists, and data scientists. Five years later, a datathon now happens once a month somewhere in the world.

Following months of preparation, participants gather at a sponsoring hospital or university for the weekend to comb through MIMIC or a local database in search of better ways to diagnose and treat critical care patients. Many go on to publish their work, and in a new milestone for the program, the authors of the reinforcement learning paper are now preparing their sepsis-treatment model for clinical trials at two hospitals affiliated with Imperial College London.

As a young doctor, Celi was troubled by the wide variation he saw in patient care. The optimal treatment for the average patient often seemed ill-suited for the patients he encountered. By the 2000s, Celi could see how powerful new tools for analyzing electronic medical-record data could personalize care for patients. He left his job as a doctor to study for a dual master's in public health and biomedical informatics at Harvard University and MIT respectively. 

Joining MIT's Institute for Medical Engineering and Science after graduation, he identified two main barriers to a data revolution in health care: medical professionals and engineers rarely interacted, and most hospitals, worried about liability, wanted to keep their patient data — everything from lab tests to doctors' notes — out of reach. 

Celi thought a hackathon-style challenge could break down those barriers. The doctors would brainstorm questions and answer them with the help of the data scientists and the MIMIC database. In the process, their work would demonstrate to hospital administrators the value of their untapped archives. Eventually, Celi hoped that hospitals in developing countries would be inspired to create their own databases, too. Researchers unable to afford clinical trials could understand their own patient populations and treat them better, democratizing the creation and validation of new knowledge.

"Research doesn't have to be expensive clinical trials," he says. "A database of patient health records contains the results of millions of mini experiments involving your patients. Suddenly you have several lab notebooks you can analyze and learn from."

So far, a number of sponsoring hospitals — in London, Madrid, Tarragona, Paris, Sao Paulo, and Beijing — have embarked on plans to build their own version of MIMIC, which took MIT's Roger Mark and Beth Israel seven years to develop. Today the process is much quicker thanks to tools the MIMIC team has developed and shared with others to standardize and de-identify their patient data. 
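The article does not show the MIMIC team's actual tools, but two techniques commonly used in de-identification pipelines of this kind can be sketched: replacing identifiers with stable salted hashes, and shifting each patient's dates by a consistent random offset so that clinically meaningful intervals survive. All names and values below are invented for illustration.

```python
import hashlib
import random
from datetime import date, timedelta

SALT = "example-salt"  # in practice: a secret, per-deployment value

def pseudonymize(patient_id: str) -> str:
    """Replace a real identifier with a stable, non-reversible token."""
    return hashlib.sha256((SALT + patient_id).encode()).hexdigest()[:12]

def make_date_shifter(patient_id: str):
    """Shift all of one patient's dates by the same random offset,
    preserving intervals (e.g. length of stay) while hiding real dates."""
    rng = random.Random(pseudonymize(patient_id))
    offset = timedelta(days=rng.randint(-3650, 3650))
    return lambda d: d + offset

shift = make_date_shifter("patient-123")
admit, discharge = date(2019, 1, 3), date(2019, 1, 10)
# Length of stay survives de-identification even though the dates change:
print(pseudonymize("patient-123"), (shift(discharge) - shift(admit)).days)
```

Real pipelines also have to scrub free-text notes for names, addresses, and other identifiers, which is considerably harder than handling structured fields like these.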

Celi and his team stay in touch with their foreign collaborators long after the datathons by hosting researchers at MIT, and reconnecting with them at datathons around the globe. "We're creating regional networks — in Europe, Asia and South America — so they can help each other," says Celi. "It's a way of scaling and sustaining the project."

Humanitas Research Hospital, Italy's largest, is hosting the next one — the Milan Critical Care Datathon Feb. 1-3 — and Giovanni Angelotti and Pierandrea Morandini, recent exchange students to MIT, are helping to put it on. "Most of the time clinicians and engineers speak different languages, but these events promote interaction and build trust," Morandini says. "It's not like at a conference where someone is talking and you take notes. You have to build a project and carry it to the end. There are no experiences like this in the field."

The pace of the events has picked up with tools like Jupyter Notebook, Google Colab, and GitHub letting teams dive into the data instantly and collaborate for months after, shortening the time to publication. Celi and his team now teach a semester-long course at MIT, HST.953 (Collaborative Data Science in Medicine), modeled after the datathons, creating a second pipeline for this kind of research.

Beyond standardizing patient care and making AI in health care accessible to all, Celi and his colleagues see another benefit of the datathons: their built-in peer-review process could prevent more flawed research from being published. They outlined their case in a 2016 piece in Science Translational Medicine.  

"We tend to celebrate the story that gets told — not the code or the data," says study co-author Tom Pollard, an MIT researcher who is part of the MIMIC team. "But it's the code and data that are essential for evaluating whether the story is true, and the research legitimate."