• 0 Posts
  • 24 Comments
Joined 2 months ago
Cake day: March 16th, 2025

  • That’s a classical ambiguity, not a quantum ambiguity. It would be like if I placed a camera that recorded when cars arrived, but I only told you that it detected a car and at what time, with no other information, not even the footage, and then asked you to derive which car came first. You can’t, because that’s not enough information.

    The issue here isn’t a quantum mechanical one but due to the resolution of your detector. In principle if it was precise enough, because the radiation emanates from different points, you could figure out which one is first because there would be non-overlapping differences. This is just a practical issue due to the low resolution of the measuring device, and not a quantum mechanical ambiguity that couldn’t be resolved with a more precise measuring apparatus.

    A more quantum mechanical example: apply the H operator twice in a row, measure, and then ask for the value of the qubit after the first application. It would be in a superposition of states that describes both possibilities symmetrically, so the wavefunction you derive from its forwards-in-time evolution is not enough to tell you anything about its observables at all; and if you try to measure it at the midpoint, you also alter the outcome at the final point, no matter how precise the measuring device is.
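
    A minimal numpy sketch of that example (plain matrices; the variable names are my own illustration):

    ```python
    import numpy as np

    # Hadamard gate and the |0> state as plain vectors/matrices
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    ket0 = np.array([1.0, 0.0])

    mid = H @ ket0      # after the first H: (|0> + |1>)/sqrt(2)
    final = H @ mid     # after the second H: back to |0> exactly (H*H = I)

    print(np.round(mid, 3))    # [0.707 0.707] -- equal amplitude on both outcomes
    print(np.round(final, 3))  # [1. 0.] -- a measurement now always yields |0>
    ```

    The forward-evolved state at the midpoint weights both outcomes identically, so it alone fixes no definite value there, and a projective measurement at the midpoint would destroy the interference that returns the qubit to |0⟩ at the end.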


  • Let’s say the initial state is at time t=x, the final state is at time t=z, and the state we’re interested in is at time t=y where x < y < z.

    In classical mechanics you condition on the initial known state at t=x and evolve it up to the state you’re interested in at t=y. This works because the initial state is a sufficient constraint in order to guarantee only one possible outcome in classical mechanics, and so you don’t need to know the final state ahead of time at t=z.

    This does not work in quantum mechanics because evolving time in a single direction gives you ambiguities due to the uncertainty principle. In quantum mechanics you have to condition on the known initial state at t=x and the known final state at t=z, and then evolve the initial state forwards in time from t=x to t=y and the final state backwards in time from t=z to t=y where they meet.

    Both directions together provide sufficient constraints to give you a value for the observable.

    I can’t explain it in more detail than that without giving you the mathematics. What you are asking is ultimately a mathematical question, and so it demands a mathematical answer; a sketch of that mathematics is below.
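
    As a sketch of that mathematics (this is the standard two-state-vector/weak-value expression; U, A, ψ, φ are just my labels): condition on the prepared state |ψ⟩ at t=x and the found state ⟨φ| at t=z, and the value of an observable A at the intermediate time t=y is

    ```latex
    A_w(y) = \frac{\langle \phi |\, U(z \to y)\, A\, U(y \to x)\, | \psi \rangle}
                  {\langle \phi |\, U(z \to x)\, | \psi \rangle}
    ```

    where U(b → a) denotes the unitary evolution from time a to time b. Both the forwards-evolved and the backwards-evolved states enter; neither alone fixes the value.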


  • I am not that good with abstract language. It helps to put it into more logical terms.

    It sounds like what you are saying is that you begin with a superposition of states like (1/√2)(|0⟩ + |1⟩), which we could achieve with the H operator applied to |0⟩, and then you make that the cause of something else, which we would achieve with the CX operator, giving us (1/√2)(|00⟩ + |11⟩), and then measure it. We can call these t=0 starting in the |00⟩ state, then t=1 where we apply the H operator to the least significant qubit, and then t=2 which is the CX operator with the control on the least significant qubit.

    I can’t answer it for the two cats literally because they are made up of a gorillion particles and computing it for all of them would be computationally impossible. But in this simple case you would just compute the weak values, which requires you to also condition on the final state, which in this case could be |00⟩ or |11⟩. For each observable, say the one at t=x, you take the Hermitian transpose of the final state, multiply it by the reversed unitary evolution from t=2 back to t=x, multiply that by the observable, then by the forwards-in-time evolution from t=0 up to t=x applied to the initial state; finally, you normalize the whole thing by dividing by the Hermitian transpose of the final state times the full evolution from t=0 to t=2 times the initial state.
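
    A minimal numpy sketch of that recipe, which reproduces the two tables below (the matrix conventions and helper names are mine; |ab⟩ has a as the most significant qubit):

    ```python
    import numpy as np

    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

    U1 = np.kron(I2, H)            # t=0 -> t=1: H on the least significant qubit
    CX = np.array([[1, 0, 0, 0],   # t=1 -> t=2: CX, control = least significant,
                   [0, 0, 0, 1],   # target = most significant (|ab> -> |a^b, b>)
                   [0, 0, 1, 0],
                   [0, 1, 0, 0]], dtype=complex)

    psi = np.zeros(4, dtype=complex); psi[0] = 1   # initial state |00>

    def weak_value(A, phi, U_after, U_before):
        """<phi| U_after A U_before |psi> / <phi| U_after U_before |psi>."""
        num = phi.conj() @ U_after @ A @ U_before @ psi
        den = phi.conj() @ U_after @ U_before @ psi
        return num / den

    for label, idx in (("|00>", 0), ("|11>", 3)):
        phi = np.zeros(4, dtype=complex); phi[idx] = 1
        print(f"postselected on {label}:")
        # (U_after, U_before) pairs for an observable sitting at t=0, 1, 2
        steps = [(CX @ U1, np.eye(4)), (CX, U1), (np.eye(4), CX @ U1)]
        for t, (Ua, Ub) in enumerate(steps):
            vals = [weak_value(np.kron(P, I2), phi, Ua, Ub) for P in (X, Y, Z)]   # most significant
            vals += [weak_value(np.kron(I2, P), phi, Ua, Ub) for P in (X, Y, Z)]  # least significant
            print(f"  t={t}:", np.round(vals, 3))   # prints (X,Y,Z);(X,Y,Z) per time step
    ```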

    In the case where the measured state at t=3 is |00⟩ we get for the observables (most significant followed by least significant)…

    • t=0: (0,0,+1);(+1,+i,+1)
    • t=1: (0,0,+1);(+1,-i,+1)
    • t=2: (0,0,+1);(0,0,+1)

    In the case where the measured state at t=3 is |11⟩ we get for the observables…

    • t=0: (0,0,+1);(-1,-i,+1)
    • t=1: (0,0,+1);(+1,+i,-1)
    • t=2: (0,0,-1);(0,0,-1)

    The values |0⟩ and |1⟩ just mean that the Z observable has a value of +1 or -1, so if we just look at the values of the Z observables we can rewrite this as something a bit more readable.

    • |00⟩ → |00⟩ → |00⟩
    • |00⟩ → |01⟩ → |11⟩

    Even though the initial conditions both began at |00⟩, they have different values on their other observables, which then plays a role in subsequent interactions. The least significant qubit in the case where the final state is |00⟩ begins with a different sign on its Y observable than in the case where the outcome is |11⟩. That causes the H operator to have a different impact: in one case it flips the least significant qubit and in the other it does not. If it gets flipped, then, since it is the control for the CX operator, it will flip the most significant qubit as well; if not, it won’t.

    Notice how there is also no t=3, because t=3 is when we measure, and the algorithm guarantees that the values are always in the state you will measure before you measure them. So your measurement does reveal what is really there.

    If we say |0⟩ = no sleepy gas is released and the cat is awake, and |1⟩ = sleepy gas is released and the cat go sleepy time, then in the case where both cats are observed to be awake when you opened the box, at t=1: |00⟩ meaning the first one’s sleepy gas didn’t get released, and so at t=2: |00⟩ it doesn’t cause the other one’s to get released. In the case where both cats are observed to be asleep when you open the box, then t=1: |01⟩ meaning the first one’s did get released, and at t=2: |11⟩ that causes the second’s to be released.

    When you compute this algorithm you find that the values of the observables are always set locally. Whenever two particles interact such that they become entangled, then they will form correlations for their observables in that moment and not later when you measure them, and you can even figure out what those values specifically are.

    To borrow an analogy I heard from the physicist Emily Adlam, causality in quantum mechanics is akin to filling out a Sudoku puzzle. The global rules and some “known” values constrain the puzzle so that you are only capable of filling in very specific values, and so the “known” values plus the rules determine the rest of the values. If you are given the initial and final conditions as your “known” values, plus the laws of quantum mechanics as the global rules constraining the system, then there is only one way you can fill in these numbers, those being the values for the observables.


  • “Free will” usually refers to the belief that your decisions cannot be reduced to the laws of physics (e.g. people who say “do you really think your thoughts are just a bunch of chemical reactions in the brain???”), either because they can’t be reduced at all or that they operate according to their own independent logic. I see no reason to believe that and no evidence for it.

    Some people try to bring up randomness, but even if the universe is random, that doesn’t get you to free will. Imagine if the state forced you to accept a job for life when you turn 18, and they pick it with a random number generator. Is that free will? Of course not. Randomness is not relevant to free will. I think the confusion comes from the fact that we have two parallel debates, “free will vs determinism” and “randomness vs determinism,” and people think they’re related, but in reality the term “determinism” means something different in each context.

    In the “free will vs determinism” debate we are talking about nomological determinism, which is the idea that reality is reducible to the laws of physics and nothing more. Even if those laws may be random, it would still be incompatible with the philosophical notion of “free will” because it would still be ultimately the probabilistic mathematical laws that govern the chemical reactions in your brain that cause you to make decisions.

    In the “randomness vs determinism” debate we are instead talking about absolute determinism, sometimes also called Laplacian determinism, which is the idea that if you fully know the initial state of the universe you could predict the future with absolute certainty.

    These are two separate discussions and shouldn’t be confused with one another.


  • In a sense it is deterministic. It’s just that when most people think of determinism, they think of conditioning on the initial state, and that this provides sufficient constraints to predict all future states. In quantum mechanics, conditioning on the initial state does not provide sufficient constraints to predict all future states and leads to ambiguities. However, if you condition on both the initial state and the final state, you appear to get deterministic values for all of the observables. It seems to be deterministic, just not forwards-in-time deterministic, but “all-at-once” deterministic. Laplace’s demon would just need to know the very initial conditions of the universe and the very final conditions.





  • Many Worlds is an incredibly bizarre point of view.

    Quantum mechanics has two fundamental postulates: the Schrodinger equation and the Born rule. It’s impossible to get rid of the Born rule in quantum mechanics, as shown by Gleason’s Theorem; it’s an inevitable consequence of the structure of the theory. But Schrodinger’s equation implies that systems can undergo unitary evolution in certain contexts, whereas the Born rule implies systems can undergo non-unitary evolution in other contexts.

    If we just take this as true at face value, then it means the wave function is not fundamental because it can only model unitary evolution, hence why you need the measurement update hack to skip over non-unitary transformations. It is only a convenient shorthand for when you are solely dealing with unitary evolution. The density matrix is then more fundamental because it is a complete description which can model both unitary and non-unitary transformations without the need for measurement update, “collapse,” and does so continuously and linearly.

    However, MWI proponents have a weird unexplained bias against the Born rule and love for unitary evolution, so they insist the Born rule must actually just be due to some error in measurement, and that everything actually evolves unitarily. This is trivially false if you just take quantum mechanics at face value. The mathematics at face value unequivocally tells you that both kinds of evolution can occur under different contexts.

    MWI tries to escape this by pointing out that because it’s contextual, i.e. “perspectival,” you can imagine a kind of universal perspective where everything is unitary. For example, in the Wigner’s friend scenario, the friend would describe the particle as undergoing non-unitary evolution, while Wigner, from his “outside” perspective, would describe the system as still evolving unitarily. Hence, you can imagine a cosmic, godlike perspective outside of everything, and from it, everything would always remain unitary.

    The problem with this is Hilbert space isn’t a background space like Minkowski space where you can apply a perspective transformation to something independent of any physical object, which is possible with background spaces because they are defined independently of the relevant objects. Hilbert space is a constructed space which is defined dependently upon the relevant objects. Two different objects described with two different wave functions would be elements of different Hilbert spaces.

    That means perspective transformations are only possible to the perspective of other objects within your defined Hilbert space, you cannot adopt a “view from nowhere” like you can with a background space, so there is just nothing in the mathematics of quantum mechanics that could ever allow you to mathematically derive this cosmic perspective of the universal wave function. You could not even define it, because, again, a Hilbert space is defined in terms of the objects it contains, and so a Hilbert space containing the whole universe would require knowing the whole universe to even define it.

    The issue is that this “universal wave function” is neither mathematically definable nor derivable, so it can only be postulated, along with its mathematical properties, as a matter of fiat. Every single paper on MWI just postulates it entirely by fiat and defines by fiat what its mathematical properties are. Because the Born rule is inevitable from the logical structure of quantum theory, these postulated properties always include something basically equivalent to the Born rule, just in a more roundabout fashion.

    None of this plays any empirical role in the real world. The only point of the universal wave function is so that whenever you perceive non-unitary evolution, you can clasp your hands together and pray, “I know from the viewpoint of the great universal wave function above that is watching over us all, it is still unitary!” If you believe this, it still doesn’t play any role in how you would carry out quantum mechanics, because you don’t have access to it, so you still have to treat it as if from your perspective it’s non-unitary.


  • pcalau12i@lemmy.world to Science Memes@mander.xyz: “Observer” (13 days ago)

    Yes, they are both particles and waves, but “collapse” is purely a mathematical trick and isn’t something that physically occurs. Quantum theory is a statistical theory, and like all statistical theories, you model the evolution of the system statistically up until the point you want to make a prediction for. But state vector notation (the “wave function”) is just a mathematical convenience that works when you are dealing with a system in a pure state that is only subject to Schrodinger evolution. It doesn’t work when a system undergoes decoherence, which follows the Born rule, and the Born rule says to compute the square magnitude of the state vector. But if you compute the square magnitude of the state vector, you get a new vector that is no longer a valid state vector.

    Conveniently, whenever a system is subject to decoherence/Born evolution, that happens to be a situation where you can acquire new physical information about the system, whereas whenever it is subject to Schrodinger evolution, that corresponds to a situation where you cannot. People thus do this mathematical trick where, whenever a system undergoes decoherence/Born evolution, they pause their statistical simulation, grab the new information provided about the system, and plug it back into the state vector, which allows them to reduce one probability amplitude to 1 and the rest to 0, giving a valid state vector again, and then they press play on their statistical simulation and carry on from there.

    This works, yes, but you can also pause a classical statistical simulation, grab new information from real-world measurements, and plug it in as well, unpause the simulation, and you would also see a sudden “jump” in the mathematics, but this is because you went around the statistical machinery itself into the real world to collect new information to plug into the computation. It doesn’t represent anything actually physically occurring to the system.

    And, again, it’s ultimately just a mathematical trick because it’s easier to model a system in a pure state because you can model it with the state vector, but the state vector (the “wave function”) is simply not fundamental in quantum mechanics and this is a mistake people often make and get confused by. You can evolve a state vector according to Schrodinger evolution only as long as it is in a pure state, the moment decoherence/Born evolution gets involved, you cannot model it with the state vector anymore, and so people use this mathematical trick to basically hop over having to compute what happens during decoherence, and then delude themselves into thinking that this “hop” was something that happened in physical reality.

    If you want to evolve a state vector according to the Schrodinger equation, you just compute U(t)ψ. But if you instead represent it in density matrix form, you would evolve it according to the Schrodinger equation by computing U(t)ψψ†U(t)†. It obviously gets more complicated, so state vector form is simpler than density matrix form, and people want to stick to state vector form; but state vector form simply cannot model decoherence/Born evolution, and so this requires you to carry out the “collapse” trick to stay in that notation. If you instead just model the system in density matrix form, you don’t have to leave the statistical machinery midway through your calculations to update it with real information from the real world; you can keep computing the evolution of the statistics until the very end.

    What you find is that decoherence/Born evolution is not a sudden process but a continuous and linear one, computed with the Kraus operators as ρ(t) = Σᵢ Kᵢ(t) ρ Kᵢ(t)†; it takes time to occur and cannot be faster than the quantum speed limit.
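
    As a toy illustration of that continuity (a hedged sketch; the phase-damping channel and the strength parameter p are my own minimal example, not anything from the post above):

    ```python
    import numpy as np

    # A qubit in the superposition (|0> + |1>)/sqrt(2), as a density matrix.
    psi = np.array([1.0, 1.0]) / np.sqrt(2)
    rho = np.outer(psi, psi.conj())   # pure-state density matrix |psi><psi|

    def dephase(rho, p):
        """Phase-damping channel: rho -> K0 rho K0^dagger + K1 rho K1^dagger."""
        K0 = np.array([[1, 0], [0, np.sqrt(1 - p)]])
        K1 = np.array([[0, 0], [0, np.sqrt(p)]])
        return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

    # As p grows continuously from 0 to 1, the off-diagonal coherence
    # shrinks smoothly: 0.500 -> 0.354 -> 0.000.
    for p in (0.0, 0.5, 1.0):
        print(f"p={p}: off-diagonal = {dephase(rho, p)[0, 1]:.3f}")
    ```

    Nowhere in the density-matrix picture does a discontinuous “collapse” appear; the loss of coherence is gradual and linear in ρ.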

    While particles can show up anywhere in the universe in quantum mechanics, that is corrected for in quantum field theory. A particle’s probability of showing up somewhere doesn’t extend beyond its light cone when you introduce relativistic constraints.


  • pcalau12i@lemmy.world to Facepalm@lemmy.world: "No nation is older than 250" (edited, 23 days ago)

    While the US is pretty old as a state, most societies have a direct continuation from one state to the next. It’s not like when France overthrew its monarchy they stopped being France or seeing themselves as French. So they may see their continuous history as much older than the current state, with the Kingdom of France going back to 987.

    The US doesn’t have a continuous history prior to 1776 because Americans mostly came from Britain but renounce their British heritage, and they settled in North America but also renounce the heritage of the local peoples there. So the average American sees their entire history as starting in 1776, maybe a little further back to include the initial colonies, and that’s about it.


  • pcalau12i@lemmy.world to Science Memes@mander.xyz: “shrimp colour drama” (edited, 2 months ago)

    There is no way to “establish whether or not there is an objective reality.” It’s a philosophical position. Either you take the reality which we observe and study as part of the material sciences to be objective reality, or you don’t believe it’s objective reality and think it is all somehow invented in the “mind.” You cannot prove or disprove either position, because even if you take the latter, no evidence I present could change your mind: to be presented evidence would only mean for that evidence to appear in the mind, and thus it wouldn’t prove anything. The best argument we can make is that taking the reality we observe to indeed be reality is just philosophically simpler, but that requires you to philosophically value simplicity, and which philosophical principles we should value cannot be proven with science either.



  • On the contrary, this breaks semi-classical gravity’s usage of quantum mechanics. The predictions the approximation makes are not compatible with our observations of how quantum mechanics works, and scientists are working on an experiment that can disprove the hypothesis. ( https://doi.org/10.1103/PhysRevLett.133.180201 )

    The paper is interesting and in the right direction but is just a proposal. It needs to actually be performed, because the results can finally point in the right direction rather than just guessing at what the right direction is.

    I’m afraid you’ve got that precisely backwards. Falsifiability is the core of science

    No, it’s a justification for pseudoscience by allowing anyone to invent anything out of whole cloth based on absolutely nothing at all and call it “science.”

    as it is the method by which factually-deficient hypotheses are discarded

    Except it’s precisely used to justify them.

    If there is no contradiction between the theory and experimental practice then either all false theories have been discarded or we have overlooked an experiment that could prove otherwise.

    Those two, or the third case: that we just haven’t yet conducted the experiment that would contradict current theories (still talking about GR/QFT here specifically).

    That’s distinctly false. The Higgs Boson was only proposed in 1964 and wasn’t measured 'til just 13 years ago.

    I am obviously not defending that position, and you know for a fact that it is a position that has gained a lot of steam recently; you’re just trying to annoyingly turn it around on me to make it seem like I am defending a position I am not by stating something rather obvious.

    Because we still have falsifiable hypotheses to test.

    And this is exactly why you’re a promoter of pseudoscience: if a theory is “falsifiable” it’s “science” and “needs to be tested,” even if it’s literally based on nothing and there is no good reason anyone should take it seriously. If I claim there is a magical teapot orbiting Saturn that is the cause of some of its currently not well-understood weather patterns, and that if you just built a specialized 20 billion dollar telescope with a special lens on it and pointed it at specific coordinates you’d discover the proof, then technically you can falsify this claim, so by your logic it’s “science” and therefore we should go out of our way to investigate it. I don’t get why it is so difficult to just accept that there is more to a reasonable scientific proposal than that it technically can be falsified. That is obviously not a sufficient criterion at all, and treating it as one just allows a ton of factually-deficient hypotheses based on nothing to be taken seriously.

    Whatever bullshit nonsense or mysticism someone makes up, as long as there is technically some way to conduct an experiment to falsify it, you will say that’s “science.” Popper has been complete poison to the academic discourse. In the past I would have to argue against laymen mystics, the equivalent of the modern day “quantum healing” types. But these days I don’t even care about those mystics, because we have much more problematic mystics: those in academia who promote nonsense like “quantum immortality” and “quantum consciousness” or whatever new “multiverse” theory someone came up with based on pure sophistry, and they pass this off as genuine science, and we are expected to take it seriously because “erm, it technically can be falsified.”

    Although, my magic teapot analogy isn’t even good because the analogy says the teapot is proposed to explain not well-understood weather patterns, so it is proposed to explain an actual problem we haven’t solved. A more accurate analogy would be for a person to claim that they believe the hexagon cloud on Saturn should actually be a triangle. Why? No reason, they just feel it should be a triangle, because triangles seem more natural to them. According to you, again, this is technically still science because technically their theory can indeed be falsified by building the special telescope and pointing it at those coordinates.

    It’s impossible to combat pseudoscience mentality in the public and to combat things like quantum mysticism when some of the #1 promoters of quantum mysticism these days are academics themselves. Half the time when I see a completely absurd headline saying that quantum mechanics proves material reality doesn’t exist and “everything is consciousness,” or that quantum mechanics proves we’re immortal, or that quantum mechanics proves we live inside of a multiverse or a simulation, I click the article to see the source and no, it doesn’t go back to a Deepak Chopra sophist, it goes back to “legitimate” publications by an actual academic with credentials in the field who is taken seriously because “falsifiability.”

    How am I supposed to then tell the laymen the article they’re reading is bologna? I can’t, because they don’t understand quantum physics, so they wouldn’t even have the tools to understand it if I explained to them why it’s wrong, so they just trust it because it’s written by someone with “credentials.” Mysticism in academia is way more serious than mysticism among laymen, because even otherwise reasonable laymen who view science positively will end up believing in mysticism if it is backed by an academic.

    We have, actually. The list of unsolved problems in physics on Wikipedia is like 15 pages long and we’re developing new experiments to address those questions constantly.

    Why are you intentionally being intellectually dishonest? We have been talking about a very specific theory and a very specific field of research this whole time, and you are trying to deflect to science generally. I am sorry I even engaged with you at all; you are not in any way intellectually honest in the slightest and are constantly trying to misrepresent everything I say to “own” me, pretending my position is something that it is not.

    By criticizing a small handful of pseudoproblems in science you are now trying to dishonestly pretend I am claiming there are no genuinely unsolved problems, because you don’t want to actually address my point. You are just a hack, and I am blocking you after this post for such a ridiculously dishonest attempt to smear me rather than address what I actually said.

    Likewise, there’s no reason to assume that the universe is not acting the way we’d like it to except where contradicted by observable evidence.

    We should just assume the universe is behaving exactly the way we observe it to behave based on the evidence.

    What we “like” is irrelevant. We should just observe the evidence and accept that is how the universe works until additional evidence shows otherwise.

    If the laws of physics can “break down” then they aren’t “laws”, merely approximations that are only accurate under a limited range of conditions.

    Plenty of laws of physics are only applicable to certain conditions, like the ideal gas law. Although, that’s not the impression I got from this conversation of how you were using “break down” in the first place, as we were talking about semi-classical gravity, where you have singularities at black holes, and you were using “break down” in that sense. There is no change in the laws of physics at black holes in semi-classical gravity; the singularity arises from the very structure of the theory and is not in contradiction with it, i.e. its fundamental principles don’t suddenly change at a black hole. The singularity at the black hole is a result of its underlying principles.

    The fact that the universe continues to exist despite the flaws in our theories proves that there must be a set of rules which are applicable in all cases.

    You want them to apply to cases that have not even been demonstrated to be physically possible to probe, so you have not demonstrated that there is an actual “case” at all. And before you dishonestly turn my statement around to misrepresent me, as you love to do: I am not claiming it is physically impossible to probe either. I am saying quite the opposite: that we should try to probe the areas that seem to not make much sense in our current theories. We should be trying to probe quantum effects and gravitational effects at the same time to see how they behave, because that’s how we could actually make progress if semi-classical gravity is indeed wrong.

    We shouldn’t be constantly inventing fake “theories” based on literally nothing that are technically falsifiable, then acting surprised when they are falsified, and then slightly tweaking them so they are no longer falsified by the previous experiment but are still technically falsifiable by a future experiment. This would be like if you pointed the expensive telescope at Saturn and did not see the magical teapot, so I just changed my mind and said the teapot is actually orbiting Neptune, so we need a bigger telescope before the theory can be falsified again!

    I could play this game forever and keep tweaking my nonsensical claim every time it is falsified, and according to you this is science! What I am saying is this is not science because science is not just falsifiability. There are tons of genuinely unsolved problems in science, but there are also a small number of “problems” which are poorly motivated, like the “fine-tuning problem” which is also not a genuine scientific problem.

    Really, 99.9% of the stuff in physics is perfectly fine. Most people in the field are actually working on practical problems and not nonsense like “quantum consciousness” or whatever. The handful of people I am criticizing are a small minority, but they have a huge impact on public discourse and public understanding of science, as they tend to be very vocal.

    And if the rules can change, then our theories will have to be updated to describe those changes and the conditions where they occur.

    Obviously.


  • I understand that in semi-classical gravity the curvature of spacetime is based on the expectation value of the stress-energy tensor, and so a massive object in a superposition of two possible locations would curve spacetime as if the object were at the midpoint of the two locations, but when the state vector is reduced it would suddenly shift to one of those two points. While this does seem weird, no one has ever physically demonstrated that measuring this is actually possible, so until there is such a demonstration, there isn’t actually a contradiction between theory and experimental practice. All we can say is “that seems weird,” but that’s not a scientific argument against it.
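
    For reference, that prescription is the standard semi-classical Einstein equation, where spacetime curvature couples to the quantum expectation value of the stress-energy tensor (textbook form, not something specific to the proposal linked above):

    ```latex
    G_{\mu\nu} = \frac{8 \pi G}{c^{4}} \, \langle \psi | \hat{T}_{\mu\nu} | \psi \rangle
    ```

    A superposition of two locations enters only through the expectation value, hence the curvature sourced “as if” the mass sat at the midpoint; a reduction of the state vector changes that expectation value discontinuously.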

    You say it diverges from reality but… how do you know that? No experiment has ever demonstrated this. It could be that this is just how reality works, or it could be that it’s not physically possible to probe this in the first place, in which case it is just a nonphysical quirk that comes from computing something nonphysical. We can’t say for certain it is wrong until someone actually conducts an experiment to probe it, and if we find it is wrong, then not only would we rule out semi-classical gravity, but we would have the data needed to actually replace it with something.

    This is my issue with “fundamental physics” these days in general: they do not actually demonstrate any contradiction between theory and experimental practice. The desire to unify quantum mechanics and general relativity is just based on some preconceptions that information shouldn’t be destroyed and gravity should be quantizable like every other force, but how do you know that with certainty? You did not derive this from experimental observation, because semi-classical gravity is currently compatible with all experimental observations. It is more that one begins with a preconception of how they think reality should work and says the theory is wrong because it does not fit those preconceptions. Yes, certain aspects of semi-classical gravity are “weird,” but that’s not a scientific argument against it.

    Because of the influence of Karl Popper, people think science = falsifiability, so new theories are then constructed not based on experimental evidence but by trying to better fit our preconceptions, while also being made falsifiable because that is “science.” When they are falsified by an experiment that just reconfirms the predictions of semi-classical gravity, they are tweaked a bit so the theory is no longer falsified by that experiment but remains technically falsifiable, and this goes on ad infinitum. You end up with decades of doing this, and what do you have? String Theory, which is only applicable to an anti-de Sitter space, a universe we don’t actually live in? Or Loop Quantum Gravity, which can’t even reproduce Einstein’s field equations?

    Popper has been a detrimental influence on the sciences. Science is not falsifiability. Science is about continually updating our models to resolve contradictions between theory and experimental practice. If there is no contradiction between theory and experimental practice, then there is no justification to update the model. I have seen a mentality growing more popular these days which is that “fundamental physics hasn’t made progress in nearly a century.” But my response to this is: why should it make progress? We have not encountered a contradiction between experimental practice and theory, so all this “research” into things like String Theory is just guesswork; there is no reason to expect it to actually go anywhere.

    The same is also true of the so-called “measurement problem,” which, as physicists like Carlo Rovelli and Francois-Igor Pris have pointed out, only arises because we have certain metaphysical preconceptions about how reality should work, which, when applied to quantum theory, lead to absurdities; and so people often conclude the theory must be wrong somehow, that it’s “incomplete,” that it needs to be replaced by something like an objective collapse theory or a multiverse theory or something similar. Yet this is not a scientific criticism; the theory is in no contradiction with the experimental evidence. We should just get rid of our preconceptions about how reality should work and accept how reality does work. As Bohr said: stop telling God what to do.

    There is no reason to assume the universe acts the way we’d like it to. Maybe the laws of physics really are just convoluted and break down at black holes. While yes, maybe one day we will discover a theory where it does not break down, it is anti-scientific to begin with an a priori assumption that this must necessarily be the case. It could be that the next breakthrough in fundamental physics even makes the mathematics more convoluted! You cannot just begin with a starting point prior to investigation that this is how nature works, you have to derive that a posteriori through investigation, and currently this is what our best theory derived from investigation states. It may be wrong, but there is no justification in claiming it is wrong without showing a contradiction between theory and experimental practice.

    This is my issue here. The desire to replace semi-classical gravity with something else, the measurement problem, the desire to unify all forces of nature into a “theory of everything,” trying to solve the “fine-tuning problem”: these are all ultimately pseudoproblems because they do not derive from any contradiction between experimental practice and theory. They are not genuine scientific problems. I am not even against people looking into these, because who knows, maybe they will stumble across something interesting; but the issue with treating them all as genuine “problems” is that when they go “unsolved” for a century, it makes it look like there is a “crisis in fundamental physics.” There just isn’t. In fact, it’s quite the opposite: every experimental test reconfirms our current best theories, which is the exact opposite of a “crisis.” People pretend like we have a “crisis” because our current theories are too good!


  • If I am not mistaken, information loss inside of a black hole comes out of semi-classical gravity. If these symmetries are tied to the assumption that the laws of physics don’t change and the symmetries break down in semi-classical gravity, then does that mean in semi-classical gravity the laws of physics change? Is there a particular example of that in the theory you could provide so I can understand?

    I don’t disagree that information is conserved in general relativity and quantum mechanics taken separately, but when you put them together it is not conserved, and my concern is that I don’t understand why we must therefore conclude that this is necessarily wrong, and that it can’t just be that information conservation only holds true for limiting cases where you aren’t considering how gravitational effects and interference effects operate together simultaneously. I mean, energy conservation breaks down when we consider galactic scales as well, in the case of cosmic redshift.

    Yes, we can experimentally verify these laws of conservation, because in practice we can only ever observe gravitational effects and interference effects separately, as a limiting case, and thus far there hasn’t been an experiment demonstrating the plausibility of viewing them simultaneously and how they act upon each other. In semi-classical gravity these “weird” aspects like information loss in a black hole only arise when we actually consider them together, which is not something we have observed yet in a lab, so I don’t see the basis of thinking it is wrong.

    You seem to suggest that thinking it is wrong implies the laws of physics change, but I’m not really sure what is meant by this. Is semi-classical gravity not a self-consistent mathematical framework?


  • I still don’t really understand why the information just can’t be destroyed. It seems like we’re starting from an assumption that it shouldn’t be destroyed despite it being so in semi-classical gravity, and then trying to think of alternative theories which could preserve it such as on the boundary or in its charge/mass/spin. Maybe that’s correct but it seems like speculation, and it’s not speculation based on any actual contradiction between theory and practice, i.e. not because semi-classical gravity has actually made an incorrect prediction in an experiment we can go out and verify, but only because we have certain preconceptions as to how nature should work which aren’t compatible with it. So it doesn’t really come across to me as a scientific “problem” but more of a metaphysical one.


  • pcalau12i@lemmy.world to Science Memes@mander.xyz: “Multiverse” (edited, 2 months ago)

    There’s still a pattern in the results, so by one means or another we want to explain the results. Just calling it nondeterministic, if I understand right, would be just saying you can’t predict it from prior observations. So, whatever language you use to describe this puzzling situation, the puzzling situation thus far remains.

    I mean nondeterministic in a more fundamental sense, that it is just genuinely random and there is no possibility of predicting the outcome because nothing in nature actually pre-determines the outcome.

    A priori?

    Through rigorous experimental observation; it’s probably the most well-tested finding in the history of science.

    Or because it best fits with Relativity? It sounds about as strong as saying, “we know time is universal.” It’s obvious, has to be true, but apparently not how the universe functions.

    So we can never believe anything? We might as well deny the earth is round, because people once thought time was absolute and now we know it’s relative, so we might as well not believe in anything at all! Completely and utterly absurd. You sound just like the creationists who try to undermine belief in scientific findings because “science is always changing,” as if that’s a bad thing or a reason to doubt it.

    We should believe what the evidence shows us. We changed our mind about the nature of time because we discovered new evidence showing the previous intuition was wrong, not because some random dude on lemmy dot com decided their personal guesses are better than what the scientific evidence overwhelmingly demonstrates.

    If you think it’s wrong show evidence that it is wrong. Don’t hit me with this sophistry BS and insult my intelligence. I do not appreciate it.

    Maybe you are right that special relativity is wrong, but show me an experiment where Lorentz invariance is violated. Then I will take you seriously. Otherwise, I will not.