Hi, I don’t usually use Fluther, but, just for you, Ivan… In Metaversalism, the universe is conceived as existing simultaneously in all possible contexts that could have generated it.
This is similar to how, in quantum mechanics, a quantum state receives a contribution to its amplitude along every possible trajectory that could have led to it. In quantum mechanics, we cannot say that a given particle came from only one unique place – it came from all possible places it could have come from. Likewise, we cannot say that the universe had only one origin or only one purpose. Some share of the responsibility for generating the world we see around us is accounted for by every possible origin, every possible purpose.
However, just as, for a quantum particle, we can ask which classical trajectory for getting to its present state was most probable, similarly, for the universe as a whole, we can ask, what origin/purpose is most probable? Which has the largest “share” of responsibility for the universe that we see?
As for this, I agree with Tyson that we do not know for sure. You express confidence that, given enough time, we would eventually figure it out. I do not think we can ever figure it out with certainty. However, continued study of the possible mechanisms by which universes can be generated, and of general principles of science and philosophy, may eventually lead us to find an objective way to evaluate these probabilities.
I say “may” because these questions might remain forever inscrutable no matter how much thought or energy is devoted to them. How can one objectively define a probability density function over an uncountably large space of possible “origin scenarios,” or over larger landscapes (like the landscape of possible Calabi-Yau manifolds in string theory) within which our universe might be embedded?
In quantum mechanics, probability amplitudes are determined by the constraint of the Schrödinger equation (plus boundary conditions). But can quantum-mechanical principles even be applied outside of the context of our particular spacetime manifold? And, are complex amplitudes sufficient for this, or do we also need to consider quaternions, octonions, noncommutative geometries and the like? Mathematicians and theoretical physicists are already exploring questions of this nature today, and it is possible that they will eventually find answers. But I am not sure.
Even if, to simplify things, we suppose that continuous spaces are just a convenient mathematical fiction, and restrict our attention to discrete computational models, there is still the problem that every kind of model can be simulated in an infinite variety of others, and there may be no unique unambiguous way to assign weights to possible computational models, even in the discrete case.
In Metaversalism, I have proposed that there might be a unique “fixed point” probability distribution, in which the probability of a given model is related to the probability of each other possible model in which it is embedded, so that perhaps the probability function could be defined, at least in principle, by solving a set of constraints over the graph of embeddings (in rough analogy to how Google’s PageRank ranks web pages). But so far, this is still only a conjecture, and not yet a concrete theory of how exactly to define such a function.
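To make the PageRank analogy a bit more concrete, here is a toy sketch (my own illustration, not a worked-out version of the actual Metaversalism proposal) of such a fixed-point distribution. It assumes a small, hypothetical “embedding graph” in which `embeds[i]` lists the models within which model i can be embedded; each model passes its weight along to its hosts, and, as in PageRank, a damping term guarantees that the iteration converges to a unique fixed point:

```python
def fixed_point_distribution(embeds, damping=0.85, tol=1e-12):
    """Iterate to a fixed-point probability distribution over models.

    embeds[i] is the list of models in which model i can be embedded.
    The damping factor, as in PageRank, mixes in a uniform distribution
    so that a unique fixed point exists even for disconnected graphs.
    """
    n = len(embeds)
    p = [1.0 / n] * n  # start from the uniform distribution
    while True:
        p_next = [(1 - damping) / n] * n
        for i, hosts in enumerate(embeds):
            if hosts:
                # A model's weight is split evenly among its host models.
                share = damping * p[i] / len(hosts)
                for j in hosts:
                    p_next[j] += share
            else:
                # A model embedded nowhere spreads its weight uniformly.
                for j in range(n):
                    p_next[j] += damping * p[i] / n
        if sum(abs(a - b) for a, b in zip(p_next, p)) < tol:
            return p_next
        p = p_next

# Four entirely hypothetical models; e.g. model 0 can be embedded
# in models 1 and 2, model 1 only in model 2, and so on.
probs = fixed_point_distribution([[1, 2], [2], [3], [1]])
```

Of course, the hard part that the conjecture leaves open is precisely what this sketch takes for granted: a finite, well-defined graph of embeddings and an agreed-upon rule for splitting the weights.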
Perhaps it is the case that probability itself is only an illusion, an artifact of how our consciousness works. Maybe the primary reality is that every possible universe (state, scenario, …) exists with exactly equal strength/reality. I.e., maybe the panorama of all that exists is really just a blank slate, filled with uniform, random, structureless static; white noise. Maybe all that we are is an interpretation of the static, an answer to the abstract question “If we assigned something called a ‘probability function’ to the uniform static of possible universes in such-and-such a way, what would beings living in the high-probability regions of that space perceive?” But then, we wonder, who is asking the question, and why do they flesh out “such-and-such a way” in the particular way that they do? Then we are led to consider the probabilities of different possible answers to these questions, and we are back to the question of how to objectively determine the probabilities again. (Or maybe there are no real probabilities at all there either, just uniform static at every level of questioning? Does that even make sense?)
These difficult and circular-seeming chains of reasoning are what make me worry that these questions will remain forever inscrutable, no matter how much thought we apply to them. Even if we turned the entire universe into a vast supercomputer and dedicated 100% of its resources to trying to answer these questions for the next 10 or 100 billion years, it still might just come up with bupkis.