The complete question was:
If you were to look at the universe as an organism, was the early universe a simpler organism than the present-day organism? Is the total complexity of the universe growing, shrinking or staying the same? And how do you measure that?
Physicist: Absolutely. The total complexity of the universe is increasing, due to the inevitable march of entropy (or information), which is exactly the measure of complexity. A more intuitive way to talk about complexity and entropy is: can you predict what you’ll see next? If you look at part of a checkerboard, you can probably guess what the whole thing looks like, so the board is predictable and has low entropy. In the early universe matter was distributed pretty uniformly, almost all of it was hydrogen, almost everything was the same temperature, and there were no complex chemicals of any kind (going back far enough, everything was ionized). So if you’d seen one part of the universe, you’d pretty much seen all of it.
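To make the “can you predict what you’ll see next?” idea concrete, here’s a tiny sketch (my own toy example, not anything from the original answer): estimating the Shannon entropy of a checkerboard-style pattern versus coin flips from block frequencies. The function and the sequences are made up purely for illustration.

```python
# Toy illustration (my own sketch, not from the post): estimate the Shannon
# entropy of 2-symbol blocks for a checkerboard-style pattern vs. coin flips.
# The predictable pattern comes out with far less entropy than the random one.
import math
import random
from collections import Counter

def block_entropy(seq, k=2):
    """Shannon entropy (bits) of the distribution of length-k blocks in seq."""
    blocks = [tuple(seq[i:i + k]) for i in range(len(seq) - k + 1)]
    counts = Counter(blocks)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

checkerboard = [0, 1] * 5000                              # 0101... : totally predictable
coin_flips = [random.randint(0, 1) for _ in range(10000)]  # genuinely random

print(block_entropy(checkerboard))  # ~1.0 bit  (only the blocks 01 and 10 ever occur)
print(block_entropy(coin_flips))    # ~2.0 bits (all four blocks 00, 01, 10, 11 occur)
```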
Nowadays the universe is full of a wide variety of different elements with very complicated ways of combining, and matter shows up hot, cold, as plasma, as proteins, in stars, in clouds, and not at all. The amount of data it would take to accurately describe the universe as it is now utterly dwarfs the amount it would take to describe the early universe. On an atom-by-atom basis, in the early universe you could grab an atom at random and feel fairly confident that: it’s hydrogen, it’s ionized, it’s about “yay” far away from the other nearby hydrogen, etc. Today you’d probably be right if you guessed “hydrogen” (about 3/4 of the universe’s mass is still hydrogen), but you’d have a really hard time predicting anything beyond that.
Oddly enough, life is surprisingly uncomplex compared to, say, dirt or sea water. If you look at a single cell in your body, you’ve already got a pretty good idea of what you’ll see everywhere else in your body. Admittedly, we are more complex than single-celled life, but most of that is a symptom of being physically bigger.
Let’s pretend that we live in a deterministic world for a minute (my next point may or may not apply when you start dealing with quantum uncertainty… I certainly can’t come close to intuiting how that math would play out). The universe at the beginning of time has a certain complexity C. Now let’s say that the complexity of the laws governing the physical world is V; then it’s hard for me to see why the complexity of the universe isn’t capped from above at C+V. Especially when you say that complexity is the question “how easy is it to predict what happens next?”. It seems that knowledge sufficient to replicate the initial universe (C), coupled with knowledge sufficient to know the laws of the universe (V), would be all you need to predict anything you want at any time (T). So although the complexity of the universe might increase at the very beginning of time, it would fairly quickly hit the upper limit once the added complexity matches the complexity of the laws of physics.
Any clarification on this point?
All entropy is secretly “conditional entropy”. So when you talk about the entropy of a gas, for example, what you really mean is the entropy of the gas, given that: the container exists, that it doesn’t have holes, that the gas is inside, that it is located at this place, at this time, etc., etc. Almost all of the “givens” are assumed because they’re obvious. What you’re talking about is “the entropy of the universe, given the universe”, which is zero. I didn’t want to be overly specific, but what I should have said is “the entropy of the universe, given what it is possible for one person to know”.
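In symbols (this is just the standard chain rule for Shannon entropy, not anything special to this post), conditioning a thing on itself leaves nothing left to be uncertain about, which is why “the entropy of the universe, given the universe” comes out to zero:

```latex
H(X \mid Y) = H(X, Y) - H(Y)
\qquad\Longrightarrow\qquad
H(U \mid U) = H(U) - H(U) = 0 .
```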
Similarly, you can talk about the entropy of Shakespeare, which you measure by reading it slowly and attempting to predict the next letter, then the next, and so on. The better you can predict, the lower the entropy. However, if you already have a complete copy of all of the Bard’s work (did he make’th up that name himself?), then there will be no uncertainty and zero entropy.
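As a rough sketch (mine, not the post’s method): just counting single-letter frequencies gives an upper bound of roughly four bits per letter of English, while Shannon’s actual next-letter guessing experiment, which uses all the predictability a human reader brings, lands around one bit per letter. The sample string below is a stand-in, not a real measurement.

```python
# Crude stand-in (my own sketch) for the "predict the next letter" game:
# entropy of the single-letter frequency distribution of a text.
import math
from collections import Counter

def letter_entropy(text):
    """Entropy (bits per character) of the single-character frequency distribution."""
    chars = [c for c in text.lower() if c.isalpha() or c == " "]
    counts = Counter(chars)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

sample = "to be or not to be that is the question"  # stand-in for a full play
print(letter_entropy(sample))  # a few bits per letter from frequencies alone;
                               # a text you already have a complete copy of has
                               # zero conditional entropy, no matter what this says
```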
But eventually the entropy of the universe will hit a cap and then begin decreasing, right? Once everything is tied up in black holes slowly shedding Hawking radiation, and then, a long time after, it’s just a bunch of subatomic particles separated by vast stretches of space? This seems like a very low entropy system, as it is fairly easy to describe. Does this violate the second law of thermodynamics? Because I thought that any closed system constantly works toward a state of maximum entropy and then remains at that level, or just keeps increasing at an ever-slowing rate. Or would this just be an exception to the rule? Or am I getting something wrong?
I was hoping no one would pick up on that. The very short answer is: although the particles in the heat-dead universe seem just as random as the particles in the very early universe, they’re not.
Particles floating about in the end of the universe each have a very good reason for being where they are: trillions of years of history and interactions that all add up to them being where they are. The state of each particle is correlated (technically: entangled) with a gargantuan set of other particles. Two isolated particles have many possible states, but after they’ve interacted they can be in entangled states, and there are just a hell of a lot more entangled than non-entangled states. More available states = more entropy.
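A quick parameter count (my own back-of-the-envelope, not anything from the post) shows why “a hell of a lot more entangled than non-entangled states” holds even in the simplest case of two qubits: a general pure state has more free parameters than a product (unentangled) state, so the unentangled states form a vanishingly thin slice of all the possibilities.

```latex
\underbrace{2\cdot 4 - 2}_{\text{any two-qubit pure state}} = 6 \ \text{real parameters}
\qquad\text{vs.}\qquad
\underbrace{(2\cdot 2 - 2) + (2\cdot 2 - 2)}_{\text{product states only}} = 4 \ \text{real parameters.}
```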
This particular view of the increase of entropy with time (perhaps even defining the arrow of time) falls out of Boltzmann’s “Stosszahlansatz” (molecular chaos): the assumption that particles are uncorrelated before they interact.
I have been reading up on entangled states and maybe I’m just understanding it wrong. What I’ve gathered is: after two quantum states interact they create a state of quantum entanglement, and once this happens, and the two quantum states are separated by some amount of space, the condition of one of the states can be known by measuring the state of the other. This seems to reduce the number of bits of information it takes to describe the system, since once one state is known, so is the other. Thus after quantum entanglement only one state needs to be described, whereas before quantum entanglement both states need to be described. It seems then, that as the amount of quantum entanglement increases, the entropy decreases. What’s the flaw in my reasoning?
Hmm, or I guess that only counts after one of the states is measured? So that beforehand each particle in the entangled pair is in a superposition of all its possible states? Does that mean an entangled pair has maximum entropy until measured, then collapses to a state of less entropy? But still, to describe a system you would have to measure it anyway, so in measuring the heat-dead universe for description would it not collapse into its low entropy state? The way it seems to be described online is: using quantum entanglement you can describe a quantum system with two cbits, carrying the extra information in the collapsed state of one of the entangled pair, whereas without using quantum entanglement, describing the system precisely with only cbits would require an infinite number of bits. So ultimately it seems to take less information to describe an entangled state.
That’s another one of the problems with the Copenhagen interpretation: measurement decreases entropy. I was skirting the issue, but by “universe” I mean the total sum of every possible universe (the Many Worlds interpretation). MWI really cleans up a lot of issues.
The “1 qbit = 2 cbits” that you’re referring to only applies when two parties share a pair of entangled qbits. (post on that here) Qbits, entangled or not, always have infinitely more states (and thus information) than cbits, we can just never have access to the vast, vast majority of it. So, when you’re talking about entropy you have the classical entropy, and also the quantum (Von Neumann) entropy.
To illustrate the point (and obscure it a little more too), two particles in a single maximally entangled state actually have zero Von Neumann entropy, and maximum classical entropy (for the first measurement, because the second is “determined” by the first).
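If you want to see that numerically, here’s a short sketch (my own, using numpy; nothing in it comes from the post itself): for the Bell state, the von Neumann entropy of the pair as a whole is zero, while either particle on its own looks like a fair coin.

```python
# Numerical check (my own sketch): von Neumann entropy of a Bell pair vs.
# the reduced state of one of its particles.
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits: -sum(p log2 p) over the eigenvalues of a density matrix."""
    eigs = np.linalg.eigvalsh(rho)
    eigs = eigs[eigs > 1e-12]                 # drop numerical zeros
    return float(-np.sum(eigs * np.log2(eigs)))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)
rho_pair = np.outer(bell, bell.conj())        # density matrix of the whole pair

# Partial trace over the second qubit -> reduced state of the first qubit alone.
rho_pair_4 = rho_pair.reshape(2, 2, 2, 2)     # indices: (a, b, a', b')
rho_one = np.trace(rho_pair_4, axis1=1, axis2=3)

print(von_neumann_entropy(rho_pair))  # ~0.0 : the pair as a whole is a pure state
print(von_neumann_entropy(rho_one))   # ~1.0 : either particle alone looks maximally random
```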
Entangled particles are two particles that share their states with each other, and there are more shared than independent states. So if you allow for unknown entangled states, then you’re really cooking with gas (entropically speaking).
Concerning the many worlds interpretation: can our own lives have a correlation to how things work on the quantum level? For example, in our own lives we have many directions we can choose at so many moments in time, but we would be collapsing the wave functions of certain realities by the decisions we make, and our life unfolds accordingly. We could have chosen other paths in life; they were available, and they could then exist in parallel realities that we did not choose, but another “us” would be experiencing them in other realities. The idea becomes astounding when we think about how many decisions we could possibly make in even one minute and how many different beings are making decisions. It becomes synonymous with exploring the Mandelbrot set. Is that how it works on the quantum level according to the theory? Would there be parallel realities where the different particles are behaving differently, so there would be so many different realities where the particles behaved in so many various ways and in different combinations of ways as well? Can they even “choose”? Or is that dependent on the observers? Is there anything wrong with making these analogies?
@Ravilochana dasa
Your analogy MAY be right, but here’s the deal: quantum effects usually do not show themselves at big enough scales, because the more mass an object has, the less quantum freedom/unpredictability it can demonstrate. Even if the freedom is there at the lowest level, the massiveness of the higher levels constrains it very much. Imagine a gas: it can flow freely wherever it wants. But the atmosphere of a planet still doesn’t escape, for all the freedom of a gas, because of the planet’s strong gravitational pull.
So, in the same way, a person’s destiny may, while potentially “free”, still be very well constrained by massive “objects” (for lack of a better word) that grab a person’s mind and force it in some direction… desire for survival, desire for intimacy, desire for play and creativity, desire for recognition… Perhaps those objects are what psychology studies!