#math.columbia (University)

15 Years of Multiverse Mania

Posted: 29 Jan 2018 06:57 PM PST

Today is the 15th anniversary of the event that kicked off the Multiverse Mania that continues to this day, recently taking the form of a concerted attack on conventional notions of science. 2018 has seen an acceleration of this attack, with the latest example appearing this weekend.

On January 29, 2003, Kachru, Kallosh, Linde and Trivedi submitted a paper to the arXiv that outlined a construction of a supposed model of a metastable string theory state with all moduli fixed. Ever since the first explosion of interest in string theory unification in 1984-5, it had been clear that a big problem with using string theory to get anything that looks like known physics was the so-called “moduli problem”. If you try to use 10d superstring theory to describe our universe, you need to somehow hide six of the dimensions, and the best way to do that seemed to be to argue that superstring theory allowed compactification on an unobservably small approximately Calabi-Yau manifold. Such manifolds, however, come in families labeled by “moduli” parameters, which can be thought of as describing the size and shape of the Calabi-Yau. These moduli will show up as zero mass fields generating new long-range forces unless some dynamical mechanism can be found to fix their values. It was such a mechanism that KKLT claimed to have found. I won’t even try to describe the complex KKLT proposal, which was aptly described by Lenny Susskind as a “Rube Goldberg mechanism”.

What string theorists had been hoping for was a moduli stabilization mechanism that would pick out specific moduli field values, getting rid of the unwanted dozens of new long-range forces and providing a way to make physical predictions. While the KKLT mechanism got rid of the unwanted forces, it had been observed three years earlier by Bousso and Polchinski, working with just parts of the Rube Goldberg mechanism, that this sort of thing led to not one specific value of the moduli fields, but an exponentially large number of possibilities. They had noted that this could allow an anthropic solution to the cosmological constant problem, and the KKLT fixing of all the moduli provided a model that accomplished this (without the long range forces).
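The combinatorics behind the Bousso-Polchinski observation can be illustrated with a toy sketch. Everything numerical here is made up for illustration (the charges, the number of fluxes, the bare vacuum energy); the point is only that integer flux quanta over $J$ independent charges give a number of vacua exponential in $J$, with a small fraction landing near zero effective cosmological constant:

```python
# Toy sketch of Bousso-Polchinski-style vacuum counting (illustrative only;
# the charges q_i, the flux range, and LAMBDA_0 are invented numbers, not
# taken from any real compactification). The effective cosmological constant
# is modeled as Lambda = -Lambda_0 + sum_i (n_i * q_i)^2 over integer flux
# quanta n_i. With J fluxes the count grows like (N_MAX+1)^J.
import itertools

LAMBDA_0 = 100.0                    # bare (negative) vacuum energy, arbitrary units
CHARGES = [3.1, 5.7, 7.3, 9.4]      # hypothetical flux charges q_i
N_MAX = 5                           # each flux quantum n_i ranges over 0..N_MAX

vacua = []
for quanta in itertools.product(range(N_MAX + 1), repeat=len(CHARGES)):
    lam = -LAMBDA_0 + sum((n * q) ** 2 for n, q in zip(quanta, CHARGES))
    vacua.append(lam)

total = len(vacua)                  # (N_MAX+1)^J vacua, exponential in J
near_zero = [lam for lam in vacua if abs(lam) < 5.0]
print(f"{total} vacua, of which {len(near_zero)} have |Lambda| < 5")
```

Scaling this toy model up to hundreds of fluxes is what produces estimates like $10^{500}$; the anthropic move is then to argue that we find ourselves in one of the rare near-zero vacua.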

KKLT did not mention anthropics and the multiverse, but less than a month later Lenny Susskind published The Anthropic Landscape of String Theory, a call to arms for anthropics and a founding document of Multiverse Mania. He immediately went to work on writing a book-length version of string theory multiverse propaganda aimed at the public, The Cosmic Landscape, which was published in 2005. Less than a month after Susskind’s manifesto, Michael Douglas published a statistical analysis of supposed string/M-theory vacua, and at some point the estimated number $10^{500}$ of vacua started appearing based on this sort of calculation.

I didn’t notice KKLT when it appeared, but did notice the Susskind arXiv article. I had just finished writing the first version of my book, and remember that my reaction to the Susskind article was roughly “Wow, if people like Susskind are arguing in effect that you can’t predict anything with string theory, that’s going to pull the plug on the subject.” The book took a while to find a publisher, and by the time it was published I had tacked on a chapter about the multiverse problem. I started this blog in March 2004, and recently looked back at some of the earliest postings, noticing that a huge amount of time was spent arguing with people about KKLT and its implications for the predictivity of string theory. It seemed clear to me from looking at the calculations people were doing that this kind of thing could not ever lead to a prediction of anything. I won’t go over those arguments, but claim that my point of view has held up well (no prediction of anything has ever emerged from such calculations, for reasons that are obvious if you start looking at them).

Back in 2003-4 I never would have believed that the subject would end up in the state it finds itself in now. With the LHC results removing the last remaining hope for observational evidence relevant to string theory unification, what we’ve been seeing the last few years has been a concerted campaign to avoid admitting failure by the destructive tactic of trying to change the usual conception of testable science. Two examples of this from last week were discussed here, and today there’s a third effort along the same lines, Quantum Multiverses, by Hartle. Unlike the others, this one includes material on the interpretation of quantum mechanics that one may or may not agree with, but that is of no relevance to the fundamental problem of not having a predictive theory that can be tested.

I’m wasting far too much time discussing the obvious problems with articles like this, to no perceptible effect. Hartle, like the others, completely ignores the actual arguments against his position (he lists some references, describing them as “known to the author (but not necessarily read carefully by the author)”). In a section on “A FAQ for discussion” we find arguments that include

  • The cosmological multiverse is falsifiable, because maybe you’ll falsify quantum mechanics.
  • The cosmological multiverse is testable: “by experiment if a very large spacetime volume could be prepared with an initial quantum state from which galaxies, stars, life etc would emerge over billions of years.” Not surprisingly, no indication is given of how we will produce such a state or any theory that would describe what would happen if we did.
  • The theory of evolution is just like the theory of the cosmological multiverse.

Both the absurdity and the danger of this last argument are all too clear.

By the way, for a while earlier this year the arXiv started allowing trackbacks again to this blog, but then this stopped again. The origin of the ban seems to have been in the story described here and my early criticism of the string theory multiverse. I have no idea what their current justification for the ban is.

Update: A good place to look for information about the current state of string landscape calculations is at the website for this workshop. The idea that the problems of this subject can be solved by “modern techniques in data science” seems to me absurd, but for a different point of view, look at the slides of Michael Douglas. For something more sensible, try the talk by Frederik Denef, which describes some of the fundamental intractable problems:

  • There is no complete theory: only some non-perturbative corrections are known, with no systematic understanding of them.
  • Dine-Seiberg Problem: When corrections can be computed, they are not important, and when they are important, they cannot be computed.
  • Measure Problem: Whenever a landscape measure is strongly predictive, it is wrong, and when it's not, we don't know if it's right.
  • Tractability Problem: Whenever a low energy property is selective enough to single out a few vacua, finding these vacua is intractable.

Denef does make some very interesting comments about where modern techniques in data science might actually be useful: dealing not with the landscape of string vacua, but with the huge landscape of string theory papers (e.g. the 15,000 papers that refer to the Maldacena paper). He argues:

For obvious reasons, besides time constraints, incentives to write papers are much stronger for research scientists than to read them. So printed stacks pile up unread, PDFs remain ambitiously open until we reboot our laptops, recursive reference-backtracking gets sidetracked by the deluge of micro-distractions puncturing our days. This, plus the sheer volume of disorganized pages of important results, leads to loss of access to crucial knowledge, to repeated duplication of efforts, and to many other inefficiencies. Worst of all, it becomes increasingly harder for young brilliant minds to stand on the shoulders of giants, and thus to make revolutionary new discoveries. It seems inevitable that we will have to outsource the tedious task of parsing the literature, in search for relevant results, insights, questions and inspiration, to the Machines.