

This is my semi-educated guess as to what's going on here. Beware that semi-educated guesses are sometimes worse than uneducated ones, since they are more plausible and not necessarily more correct.

The Alcubierre warp drive requires both positive and negative curvatures. Positive curvatures are "easy" to make, since mass and energy bend space positively. Negative curvatures are typically said to require negative energy to produce, but the full story is thankfully more complex.

The universe, despite being filled with mass and energy, is roughly flat. That means that the natural bending of spacetime, without any stuff inside it, has to be negative - otherwise all the mass would make it extra positive. If you believe the consequence of vacuum energy in QED, that the fields filling the universe are VERY heavy, even when no particles are around, then you must believe the natural curvature to be very negative indeed.

So, it stands to reason, if you want to bend the universe backwards, take the weights off of it - really off of it, in this case using metal plates to forbid certain vacuum fluctuations, and let it unbend itself.

> Beware that semi-educated guesses are sometimes worse than uneducated ones, since they are more plausible and not necessarily more correct.
This is one of the most insightful comments I've ever read on the internet.

This is why we shouldn't take medical advice from people who have read a few Wikipedia articles. This is why years of experience in e.g. software are necessary to even begin to understand how deeply hidden security issues can be. This is why self-driving cars still elude us, but will be so much safer than human drivers when they finally do arrive.

I was thinking the opposite point: sometimes the experienced doctor or researcher made a mistake, and the guy who posted something on Reddit is correct. Usually not, but sometimes.

Generally speaking, you should trust someone who's professional or experienced more than some random stranger. Especially if it's not one person but a large group of experienced professionals (as with most of the controversial "anti-science" issues today).

But it's actually important to "do your own research", and not just listen to authority. Provided you actually know how, i.e. you can find accurate sources and distinguish fact from fiction. Because even the most qualified, unbiased, genius authorities are occasionally wrong. The big issue today is that a lot of people don't know how to research and distinguish lies, but (although it's probably impossible for most) finding a way to teach someone how to find the truth will always be more effective than having some authority tell them the truth.

Key example: doctors often miss symptoms and diagnoses that a patient can discover on their own. It's not that the doctor is stupid or unqualified, in fact they have way more medical knowledge than you. But the doctor only has a series of questions and maybe a few tests to make a diagnosis, whereas you have the full experience of symptoms and an internal understanding of your "normal". IMO people actually should "play Internet doctor" and research their symptoms, not to reach any 100% conclusions, but to suggest diagnoses and tests to their real doctor.


> IMO people actually should "play Internet doctor" and research their symptoms, not to reach any 100% conclusions, but to suggest diagnoses and tests to their real doctor.
I agree, and do this. But I never tell the doctor "I suspect that I have X"; rather I tell the doctor "I've observed J, K, and L under Foo conditions" and let the doctor decide where to go from there. In short: we can learn to give the doctors better information, but we cannot replace their breadth of knowledge.

Yeah, sometimes people say "doctor, I have X, I'm trying to get diagnosed with X". You're biasing the doctor to give you the diagnosis so that you don't drop them, and you're also biasing them to ignore your symptoms, because you're just reciting them to get the diagnosis and maybe you just want certain medication.

Even if you don't outright say it, you should never convince yourself you have a diagnosis. The human body is extremely complex and some particularly weird issues need patient research, doctor's research, and experimentation to solve.

What you say is often true. But never say never.

I recently had an appointment with a specialist for my son. My son has ADHD but was never diagnosed within Kaiser, and so Kaiser won't prescribe medication.

The appointment started with, "My son has ADHD, was first diagnosed at 7 by _____, here are volumes of paperwork demonstrating that he had this diagnosis, has continued to have symptoms, and documenting what we have done for him. He wants to go back on medicine despite the side effects from last time. Here is more documentation that he continues to have symptoms."

There really ARE times when you're coming with enough background that the diagnosis is never in doubt.

For a less recent but more trivial example, I once showed up to the doctor and said, "I broke this bone in my hand while playing volleyball, what should I do about it?" Again, the diagnosis was never in doubt.


If you manage to hit the exact diagnostic questions they were trained for, that's the same as saying you have X, actually worse, because you'll make them think they had the idea that you have X, when in reality you saw the list of symptoms, thought "I guess I have those," and then repeated them to the poor unsuspecting doctor.

>This is why we shouldn't take medical advice from people who have read a few Wikipedia articles.

>sometimes the experienced doctor or researcher made a mistake, and the guy who posted something on Reddit is correct

Both of those points are true, and we can reconcile them thusly:

Broadly, the doctor is the fully educated, while the layman-interested-in-subject is making semi-educated guesses. However this relationship inverts on narrow subjects, like a particular rare disease: the doctor ends up making a semi-educated guess (based on his general medical education), whereas a layman with intense interest in this particular rare disease can easily be the fully educated one - again, on this particular narrow subject.

> I was thinking the opposite point: sometimes the experienced doctor or researcher made a mistake, and the guy who posted something on Reddit is correct. Usually not, but sometimes.

If a researcher made a mistake and if 50 people on reddit said 50 different things, at random, one might be correct by chance.

Being correct once means nothing unless you are correct most of the time, but by then you are the researcher in the scenario above.

>Generally speaking, you should trust someone who's professional or experienced more than some random stranger. Especially if it's not one person but a large group of experienced professionals

Except if the large group is mostly a handful of experts with non-scientific private interests (say, to publish papers, to get grants, or because they're paid by companies with a product to sell), plus a large mass of professionals who seldom or never question the information they receive and just follow everything they read from their research and protocol-setting brethren.

When non-reproducible papers with BS mistakes get top citation counts and are held as gospel without any researcher or peer reviewer bothering to verify them for decades, does anyone really think the average "professional doctor" really questions what they're told to follow?

> But it's actually important to "do your own research", and not just listen to authority. Provided you actually know how, i.e. you can find accurate sources and distinguish fact from fiction.

And to know if you're only seeing narrative-approved sources, when contra-narrative sources have been suppressed.

Lots of folks got fooled by Iraq WMD because you couldn't say anything to the contrary without being fired. Other examples, some more recent, are available.

> IMO people actually should "play Internet doctor" and research their symptoms

You should probably not play internet doctor if you're a hypochondriac or prone to anxiety. The least probable cause of almost any symptoms is cancer or another deadly condition and you can probably find a gruesome description of someone's experience online somewhere. If your nervous system can't grasp that, stay away from googling your symptoms!


> IMO people actually should "play Internet doctor" and research their symptoms, not to reach any 100% conclusions, but to suggest diagnoses and tests to their real doctor.
I do this regularly, and having a hard-sciences background, it's usually educated and well-thought-out research. Unfortunately some doctors can get quite upset if you dare to suggest a diagnosis, especially if it's an evidently correct one they didn't think about. You really have to be very careful, allude to things, stress key symptoms and try to make them reach the same conclusion without explicitly telling them how to do their job.

> some doctors can get quite upset if you dare to suggest a diagnosis
A well known idiom is that when the client says he has a problem, he's right. When the client tells you what the cause is, he's wrong.

Don't tell the doctor your diagnosis. You didn't go through a decade of medical training. Tell the doctor what you've observed, let him conclude the diagnosis.


> When the client tells you what the cause is, he's wrong.
Using a software analogy: I've seen plenty of bug reports where the reporter did some research on his own that turned out to be decisive in resolving the issue. You have training and inside knowledge of the system that's failing, but you see plenty of issues every day and cannot dedicate to each one the time it would deserve. The reporter instead is motivated to reach a solution and has plenty of time to research it properly. Discarding a potentially fundamental input just because it comes from someone supposedly untrained would be a shame.

>Tell the doctor what you've observed, let him conclude the diagnosis.

Lots of anecdotal stories of people doing just that and having doctor after doctor failing them.

Indeed. My hypothyroidism was ultimately diagnosed by my mother's work colleague who recognized the (frankly quite textbook) symptoms because her dog had it.

Doctors are like everyone else: 90% of them are kinda shit at their job.

> This is one of the most insightful comments I've read on the internet ever.

On the other hand, if I'm in the market for a guess, I'll probably go for the educated guess rather than the uneducated one, because even though I recognize it's not necessarily more correct, it probably is less wrong.


Sometimes it's better to know you don't know than to think you know. If it's midnight and you're standing at the edge of a cliff you're better off thinking that you have no idea what's ahead of you than if you listen to a guy with "better than average night vision" telling you that there's a sidewalk ahead.


If you are, as you say, "in the market for a guess", that would imply that you are actively looking at different guesses and comparing the credentials of the guessers. Then you would _probably_ be right to choose e.g. the guess from the virologist over the proctologist or the mechanic when you need a guess on how the Covid situation will unfold. But an "educated guess" in itself could be worse: you wouldn't base any major decisions on a mechanic presenting the "uneducated guess" that "the virus will die down in a month", but if a crackpot chiropractor presenting themselves as a "certified doctor with expertise in everything concerning the body" says the same thing, you might be worse off, because you'd conclude that this "educated guess" is worth basing your travel plans on.


The logic parent is mentioning only works for linear, smooth functions. Any non-linearity and discrete behaviour and "slightly better" is just noise.

He's just saying n=1.

Medical advice on the internet can be n=1, but reading enough to get to n=100+, while still limited, can be quite valuable. I can't ask 100 doctors what's wrong with me, but I can ask my question to hundreds in an online forum. They might lead me astray, but so can a single doctor's misdiagnosis. A doctor can prescribe treatment that helps or harms, but I can ask a forum of people with the same problem I have whether I should take his advice. I can't, however, ask 100 of his colleagues or peers if I should trust his advice, and that's why we end up with so many charlatans in medicine. But he's a doctor. Of course you can trust him! I don't know about the rest of you, but I've found a fair number of doctors to be mostly useless.

>This is why self-driving cars still elude us, but will be so much safer than human drivers when they finally do arrive.

Isn't that just a by definition thing?

My Tesla can already drive itself, I don't trust it and I don't think it's safe. When I get one that I trust and is safe then it will be safer than me, and by extension other humans.

The problem is that the current generation of algorithms are brittle.


Yes, but it had the lowest CPU and memory footprint! There's also something to be said about the COTS hardware requirements and ease of installation.


The way I state the above point is: "The smartest people in the room are the most skilled at convincing people (including themselves) of a narrative that sounds logical but is wrong".

I am afraid that many times when talking about educated guesses we mean schooled guesses. The problem is that, more than once, we learn per Mark Twain's famous quote that schooling has a way of interfering with education :-)

Especially for subtle but critical concepts:

"Researchers misunderstand confidence intervals and standard error bars"

https://pubmed.ncbi.nlm.nih.gov/16392994/


You might be interested in Kahneman's body of work, especially 'Thinking, Fast and Slow'.

As a trained physicist, I think you are getting this relatively right. Some of the issues at play are simply unknown according to the current state of theoretical physics.

Our quantum field theories naively predict an infinite energy density all throughout spacetime. A naive guess at regularizing this leads to absurd predictions 120 orders of magnitude off [1]. So there has to be something else happening here. One of the things that supersymmetry brings to the table is that it makes this energy density exactly zero (but then it gets complicated because you need to break supersymmetry...).

The thing is though, without gravity the absolute energy density of a quantum field is just some unobservable number. But how to reconcile gravity and quantum field theory is an unsolved problem to begin with, so it's completely unclear what that naive infinite number actually means. We are working without a firm theoretical (or empirical for that matter) foundation here.

However, the Casimir effect is remarkable exactly because it appears that the local energy density between two conducting plates is lower than in the ambient space. This is an energy difference that can and has been measured [2].

It is entirely plausible that it interacts with gravity in the right way to cause an effective negative energy density. This has been speculated by serious physicists for a long time. However: the Casimir effect is exceptionally weak. Spacetime is exceptionally hard to bend. So the idea of an actual warp bubble from Casimir effect geometry seems extremely far-fetched [3] right now.

How stiff is spacetime? LIGO, the gravitational wave observatory, measures changes to a 4000 m distance that are smaller than an atomic nucleus; at its full design sensitivity it will detect changes that are many times smaller than a proton. This is a feat of experiment that is the more astonishing the more you know. If you had told me you wanted to build something like this, I would have assumed that it's likely impossible. Now, the first signal that could be detected by LIGO was a black hole merger that radiated away the energy equivalent of three solar masses. The peak power output of this event was larger than the combined energy output of all visible stars in the observable universe.

How small is the Casimir effect? Micronewton-per-meter force gradients at micrometer distances [4]. Again, we are talking about an effect so subtle that it has only been measured reliably in recent decades.
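To put a rough number on "exceptionally weak", here is my own illustrative estimate using the textbook ideal-parallel-plate formula (a sketch, not a reproduction of the measurement in [4]):

    # Ideal-parallel-plate Casimir pressure: P(d) = pi^2 * hbar * c / (240 * d^4)
    # (standard idealized result; real experiments must correct for geometry,
    # finite conductivity, temperature, etc.)
    import math

    hbar = 1.054571817e-34   # reduced Planck constant, J*s
    c = 2.99792458e8         # speed of light, m/s

    def casimir_pressure(d):
        """Attractive pressure (Pa) between ideal conducting plates at separation d (m)."""
        return math.pi**2 * hbar * c / (240 * d**4)

    for d in (1e-6, 100e-9):                      # 1 micrometre and 100 nm gaps
        print(f"gap {d*1e9:6.0f} nm -> {casimir_pressure(d):.2e} Pa")
    # ~1.3e-3 Pa at 1 um and ~13 Pa at 100 nm, versus ~1e5 Pa of ordinary
    # atmospheric pressure: a genuinely tiny effect.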

So these are the orders of magnitude involved: White et al. claim they can use an effect so small as to be barely perceptible to manipulate spacetime in such a way as to create interesting effects, when it takes the energy output of the entire universe to generate a ripple in spacetime strong enough to be detectable (a few galaxies over) by the most sensitive instrument ever devised. So I am sceptical.

[1] https://en.wikipedia.org/wiki/Cosmological_constant_problem#...

[2] https://en.wikipedia.org/wiki/Casimir_effect

[3] My physicist intuition says: far-fetched but not impossible. One of the heuristics for looking at this result, though, is this: the lead author is Harold White, one of the people who really pushed the EMDrive. The EMDrive is not far-fetched but simply impossible (to a physicist the EMDrive is in exactly the same category as perpetuum mobiles). Worse, the original EMDrive paper by Harold White, despite supposedly showing evidence overthrowing three centuries of established physics, did not attempt to get published in a physics journal (which would have required extraordinary proof for such an extraordinary claim). Finally, it involved some theoretical speculation that was just abject nonsense. These things mean I will not spend serious time on understanding and analyzing the details of this latest work.

[4] https://www.nature.com/articles/ncomms2842

Good Post.

I'll just put my back-of-the-envelope calculation here to insert some optimism. While the energy emitted is immense, the event was so far away (1.6e9 light-years) that if we plug in the solid angle subtended by the detector (~4000 m × 4000 m), I'm only seeing about ~2 kJ of energy passing through LIGO (rough arithmetic sketched below, after the caveats).

Caveats:

I assume the gravitational wave pattern is spherical; probably wrong, but I'm not sure what the radiation pattern looks like for these events and where we lie in relation to it.
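Here is that estimate written out, under the same isotropic-emission assumption flagged in the caveat (distance and detector cross-section taken from the comment above; treat it as a sanity check, not a precise figure):

    import math

    c = 2.99792458e8        # m/s
    M_sun = 1.989e30        # kg
    ly = 9.4607e15          # metres per light-year

    E_radiated = 3 * M_sun * c**2                 # ~5.4e47 J in gravitational waves
    r = 1.6e9 * ly                                # distance quoted above
    fluence = E_radiated / (4 * math.pi * r**2)   # J/m^2, assuming isotropic emission

    area = 4000.0 * 4000.0                        # crude ~4 km x 4 km cross-section
    print(f"fluence ~{fluence:.1e} J/m^2, energy through that area ~{fluence*area:.0f} J")
    # -> roughly 3e3 J, i.e. a few kJ, consistent with the ~2 kJ ballpark above.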

Seems like White should get a little more credit; he's going well outside established science, and accepting a much higher risk of failure than most are comfortable with. Seems like he's throwing ideas at the wall in the hopes that maybe something sticks. More of us should do that, or at least, support those of us willing to see if something did stick.

The tone of this paper seemed less like an academic assertion of an effect, and more like "hey, found something cool, someone should look into this, I gotta get back to what I was doing."

I'm also not going to blame the guy for being interested in space-age propulsion systems.

If he was doing what you're saying that would do him credit. But I see it more as indulging in wishful thinking and gaming the funding system by promising vaguely plausible nonsense and avoiding the tough questions that would come with seriously engaging with the physics community.

A lot of the writing falls woefully short of being a serious effort at science. It's more "wouldn't it be cool if" science fiction.

The entire "attempt" to link the EMDrive to Bohmian quantum mechanics is just a grand amalgamation of conceptual confusion.


I read the new paper and I'm familiar with reading physics articles. It strikes me almost as GPT-3 generated in places. It certainly would not be accepted in its current form by any peer review process in a high impact journal. I'm sure the intent and ambition is absolutely sincere and there is serious research here, but it's presented in a very unclear way IMHO. I think the paper could be rewritten to be more focused.


Peer review scores are uncorrelated with citation count over time. So forgive me if I discard peer reviews and other anecdata.


What the heck are peer review scores? And why would you expect them to be correlated with citation count? High quality research isn't the same thing as highly citable research. Peer review is supposed to check plausibility and maintain minimum quality standards. Peer review isn't a sign that something is good. But if something cannot even get past peer review, it's a sure sign that it's pretty bad.

I know that video. This is also different from pure peer review because it involves an editorial decision/recommendation. I am sure >95% of the papers rejected from NeurIPS would pass peer review elsewhere.

So this isn't about getting published in peer reviewed venues, this is about getting published in a particular, selective, high visibility venue.

So as far as I can see none of this has any bearing on anything you replied to.


I remember reading the suggestion that one possible interpretation of Dirac's equations was that "empty" space is actually a sea of negative energy.

> using metal plates to forbid certain vacuum fluctuations

Just curious, why can't fluctuations happen inside matter?

Isn't matter 99% vacuum anyway?

They can and do, and are an important part of intermolecular interactions (van der Waals forces have a substantial Casimir effect component).

Matter just ordinarily isn't structured to produce a strong version of an interesting effect - which when you get down to it is essentially what all human technology is based on.

It's quite possible there are some weird molecular structures (if this paper is correct) which could also induce natural warp-field structures out of their crystal lattice.


Going down this route, it strikes me as more feasible to find lattice effects or nano-scale effects facilitating cold fusion than warp fields. After all fusion itself is at least provably existing, just difficult to arrange.

While I understand your reservations, I think you are really underselling the power of an Ansatz.

An Ansatz reduces the search space of possible solutions because it adds constraints that make it possible to get to solutions much faster.

For example, without these constrained searches we would never have developed analytical solutions to the differential equations that are pervasive across science, engineering, and finance, among others.

> The universe, despite being filled with mass and energy, is roughly flat.

No, it isn't. It is spatially flat (if we choose appropriate coordinates), but the spacetime of the universe is not flat, because it is expanding.

The rest of your reasoning is wrong because it starts from this false premise.


"Negative curvature" isn't quite the right term. (For one thing, curvature in 4-dimensional spacetime cannot be described by a single number.) The Alcubierre drive requires "exotic matter", i.e., matter that violates the weak energy condition. One way of phrasing that condition is that the matter will have negative average energy density as measured by certain observers.

> Which kind negative curvature does the drive need?

The issue with your reasoning is not what kind of curvature the drive needs, but the idea that there is a "natural" bending of spacetime at all. There is no such thing.

The biggest issue - based on your explanation - seems to be the need for negative energy...

I might have a non-educated theory that could explain negative energy; could you tell me if this is potentially correct or am I completely wrong here?

Disclaimer: I haven't read the article (it doesn't load), and I am (probably) lacking even the basics of the required knowledge.

Let's assume the law of energy conservation applies to the universe: if you assume the total energy in the universe is constant, the negative energy is a relative imbalance instead of an absolute one, caused by the universe expanding at a different speed relative to the "things" it contains.

As the universe is "expanding" constantly, the "energy amount per expanding area" gets "less dense", so the "things" that are not "expanding" at the same speed get "more dense" in comparison. This imbalance creates different types of energy perturbation. From the point of view of the things that do not expand as fast, the energy levels relative to yours seem to evolve in a "negative way".

It's like looking at a stationary train from a train that is moving: from the point of view of the moving train, the train standing still seems to go backwards.

Does this make any sense at all, or am I completely missing the mark here?

Updated: reworded some things and provided extra context.


Energy conservation follows from Noether's theorem and time-translation symmetry. This doesn't hold in general for an arbitrary spacetime. In fact, it holds for a very specific subset (static spacetimes), of which our universe is not one.
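For reference, the standard textbook way this is usually stated (my addition, not the commenter's): a conserved energy along geodesics exists when the spacetime admits a timelike Killing vector field \xi^{\mu}, and an expanding FRW universe has no such vector, so no globally conserved energy of this kind is available:

    E \;=\; -\,g_{\mu\nu}\,\xi^{\mu}\,u^{\nu}, \qquad
    \nabla_{(\mu}\xi_{\nu)} = 0 \;\Rightarrow\; \frac{dE}{d\tau} = 0 .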

As another complete layman just pooling my intuition:

Relatively more or less dense "things" (to each other) will always generate positive curvature of spacetime. What you want is relatively less dense area than vacuum in another region of space, since vacuum is (for now) what we constitute as (somewhat) flat.

That's what the Casimir effect tries to accomplish: define a small enough region of spacetime where some larger quantum fluctuations don't have enough space to happen, thus lowering its energy below vacuum.

Since the idea is to define a small enough space-like interval in spacetime to curb some energies off, could defining a small enough time-like interval help the same purpose, multiplying the former?

Thanks, this makes a bit more sense to me now, but let's figure out if I understood it correctly:

The Casimir effect is about creating a spacetime "corset" so to speak, that is so tight that even the regular quantum fluctuations don't have "enough room to pass", so they "almost freeze", and when they "come out on the other end", they could for example break or delay entanglement?

On the other hand I think your assumption about the time aspect might be correct as we are talking about spacetime after all. The biggest issue might be that measuring time deltas might be harder than space deltas?

I don't think the Casimir effect is utilizing some medium flowing between the plates. It's not about a quantum fluctuation freezing as it passes through the passage between the plates. It's rather that the quantum fluctuation comes into existence and disappears at a random point in a vacuum, and how often depends on the energy of the vacuum. When you put two plates really close together, there is not enough space for the quantum fluctuation to materialize, as its wavelength is bigger than the gap. That way you limit the number of those fluctuations materializing between the plates, and because there are fewer of them, there should be less energy than in unobstructed space, even vacuum. Thus negative energy, because it's less than the zero we assume for vacuum.
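That picture matches the standard idealized calculation (my addition, for perfectly conducting parallel plates a distance d apart): only modes whose half-wavelength fits in the gap survive, and the regularized zero-point energy per unit plate area comes out below the free-vacuum value:

    \frac{E(d)}{A} \;=\; -\,\frac{\pi^{2}\hbar c}{720\,d^{3}},
    \qquad\text{with allowed modes } k_{z} = \frac{n\pi}{d},\; n = 1, 2, \dots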

Regarding working with time-interval, it might as well just be the opposite: minimize space-interval and maximize time-interval for maximum effect. It's just wild guesses at my side. But minimizing time-interval should be trivially doable by just slinging the plates past each other at high speed.

LOL, looks like one gets seriously downvoted for mental freewheeling on HN... I did not see that coming, especially since I added a disclaimer.

Anyway, thanks for taking the time to try to explain this to a total layman, but I'll refrain from engaging in subject matters that I've never heard about before; lesson learned...

Juicy bit, cut from the end of the abstract:

> a toy model consisting of a 1µm diameter sphere centrally located in a 4µm diameter cylinder was analyzed to show a three-dimensional Casimir energy density that correlates well with the Alcubierre warp metric requirements. This qualitative correlation would suggest that chip-scale experiments might be explored to attempt to measure tiny signatures illustrative of the presence of the conjectured phenomenon: a real, albeit humble, warp bubble.


For someone used to working at nanoscale, 1µm seems enormous! There are entire free-living organisms that could fit inside that warp bubble.


A warp bubble full of DNA could be an okay form of "FTL" communication.

Think about the potential for mischief or warfare it would enable. You could warp a tiny blob of some potent toxin right into someone's body.

Fascinating.


There is a macroscale non-warp solution for that already available in the form of normal bullets. No need to make it more complex by adding microscopic warp particles.


With normal bullets the shooter has to be somewhere in the target's past light cone. With an FTL bullet the shooter could be outside the target's past light cone, which makes arranging an alibi easier.

Bullets are loud and require you to transport a gun into the area and find a spot where you can snipe at your target, both of which could be very difficult in, for instance, an urban environment. You probably would get caught.

The poison warp bubble is much cooler and stealthier. And as someone else mentioned, you aren't restricted by the lightcone.

Also it could make wars even more obsolete than nukes make them. Yay :) But with all consequences - we should immediately start to cut down on bureaucracy! ;)

And in case this article is not some pre-April-1 joke: how the heck can all you people here be so calm and go "meh, HFT will eat this"?


I think we can already get that effect (tiny stuff into people some distance away) just using blow darts?


Cute. But over the distances where this even remotely makes sense your problem is targeting!


The CIA made an electric dart gun (instead of blobs) in like 1960, if not earlier, with an effective range of 100 meters.


I have to save this posting somewhere. So many interesting ideas in a single place!


Or imagine a gray goo scenario where an advanced civilization shoots self-replicating, nanotech-based microorganisms at FTL, wrapped in this bubble, to 'sanitize' a potential threat planet.


1 micron and 4 microns, not 1 meter and 4 meters. @dang, is this an HN bug?

It looks like the article includes the Greek letters as images, not as Unicode characters, so they got lost while copying.

It should be 1µm and 4µm instead.


They must have very little faith in Unicode support and correct encodings being used if something as mundane as µ needs an image haha


    consisting of a 1 <span class="img-inline"><img src="/articles/epjc/abs/2021/07/10052_2021_Article_9484/10052_2021_Article_9484_tex_eq1.png"></span> m diameter sphere
nah. it's just a strange rendering decision to use an image for a standard extended-ascii character: μ


The original paper was likely LaTeX and they rendered all equations as images. You'd have to special case equations that can be rendered directly in HTML.


Hm, I seem to be able to copy and paste the mus produced by pdflatex or by using pandoc to convert from LaTeX to HTML. Perhaps they're using something bespoke.


It would be funny if the signs and magnitudes worked out to allow the 1 micron sphere to levitate inside a 4 micron hole.

Any physicists wanna throw some cold water on our hopes and dreams here? Compared to most papers with far-out results, there didn't seem to be any obvious caveats here. What's up with worldline numerics? Could there be any issues hiding in the numerical methods there? How established are those methods in the field?

Also, to be clear, is it correct that they haven't actually built such a device, this result is still just in simulation?


FTL travel implies violation of causality, no matter how you slice it. How sacred causality is to you depends of course on your personal world view, but to me it is inconceivable how causality could be violated, so I won't be getting my hopes up about this at all, even though I can't comment on the technicalities of this paper.


Causality isn't nearly as central to physics as it may be to your worldview. Lawrence Krauss and others who have commented on the matter have argued that FTL travel would mostly require a re-evaluation of our perspectives rather than a re-evaluation of physics.

Hmm, similar to how heavier than air flight was thought to be impossible for humans?

Interesting how the will of a few can shape humanity. I love it.


Probably a pretty good analogy! Even now, it's not known whether time has directionality in any way that is meaningful in physics. FTL travel would resolve the "Arrow of Time" question (https://en.wikipedia.org/wiki/Arrow_of_time) but this wouldn't invalidate much/any of physics because the arrow is essentially non-observable at the micro level

He discusses it extensively in The Physics of Star Trek.

Also, supposed locally FTL travel for subatomic particles is pretty different from locally non-FTL but globally FTL travel a la Star Trek, Alcubierre drives, etc

>> Causality isn't nearly as central to physics as it may be to your worldview

No, but locality seems to be a sacred cow in physics. They'll latch on to just about anything to explain violation of the Bell inequalities, but nobody is willing to part with locality.


Generally I'm suspicious whenever anyone seems to imply that an entire field is barking up the wrong tree

At least at quantum level there are experiments where the outcome depends on things that did _not_ happen (according to our intuitive understanding of causality). Mostly in weak measurement research.

So I'd say while causality is real, there are still some surprises to be expected.


Warp in this case, I believe, refers to bending spacetime or traversing something similar to a wormhole; it's not talking about Star Trek and tachyons and FTL. Hence the discussion points about exotic matter that exudes negative mass, which is what is required to stabilize mathematical models of Einstein-Rosen bridges.


Doesn't matter: if you make it from here to the moon in less than a second, whether using a warp bubble or something else, then there is a frame of reference in which you arrive before you leave.


You'd need negative mass either way. Bending spacetime is the most plausible implementation of FTL travel a la Star Trek; doesn't matter a whole lot if you're using the negative matter to keep the wormhole open or create an Alcubierre drive-style spacetime wave on which to surf


There may be a way around that with hard determinism. At least that is one of the explanations of delayed-choice quantum eraser experiments.


Not a physicist, but some friends who are almost-barely physicists have pointed out that the concept of worldline numerics comes from string theory, which itself may or may not be generally bogus depending on who you're asking.

As far as I understand the main problem of string theory is that it has been carefully constructed to fit all the physics we know about, and has not actually predicted anything we didn't already know about that we could experimentally verify. So in a practical sense string theory has been quite useless.

If indeed it is true that string theory predicts this effect, and the effect can be experimentally verified (afaik this was dubious at best) then that would be a huge boon for string theory.

> As far as I understand the main problem of string theory is that it has been carefully constructed to fit all the physics we know about.

Doesn't string theory have a lot fewer tunable parameters than the standard model?

> and has not actually predicted anything we didn't already know about that we could experimentally verify

I don't like that argument for a couple of reasons.

First, it depends on the timing of when we discover things. If the experimentalists had taken longer to discover some things, a theory that we criticize today as not predicting anything we don't already know about would instead be celebrated for its predictions, had it just been posed earlier.

I think a better thing to look at is if a theory can correctly calculate things that weren't built into the theory. I remember reading [1] that string theory correctly calculated some things concerning black holes that none of the people who had developed string theory to that point knew about.

Second, I don't like it from a philosophical point of view. It suggests that if say some pre-Newton gravity theorist had just included enough epicycles, science should have rejected Newtonian gravity because it would not have explained anything that the epicycle theory didn't.

[1] Probably either in Brian Greene's "The Elegant Universe" or Lawrence Krauss' "The Greatest Story Ever Told--So Far". Possibly Greene's "The Fabric of the Cosmos", but that seems unlikely. I distinctly remember reading it, and my copy of "The Fabric of the Cosmos" is an audiobook. The other two are Kindle books.

> I think a better thing to look at is if a theory can correctly calculate things that weren't built into the theory. I remember reading [1] that string theory correctly calculated some things concerning black holes that none of the people who had developed string theory to that point knew about.

Well if that's true, then string theory definitely isn't useless, so I'd be interested in the specifics. This prediction from string theory wasn't explained by any of the accepted physical models?

> Second, I don't like it from a philosophical point of view. It suggests that if say some pre-Newton gravity theorist had just included enough epicycles, science should have rejected Newtonian gravity because it would not have explained anything that the epicycle theory didn't.

Well yeah, but why not reject Newtonian gravity if everything observable can be predicted by the epicyclical view? The goal is not to generate pretty models, the goal is to make accurate predictions of the physical universe, ideally through the simplest model we can come up with just to guard against overfitting.

Anyway, it's not that I (or anyone, I don't have any physics authority certainly :P), am rejecting string theory, it's just that as far as I understand string theory hasn't had any real use to our physical models. I certainly don't have an alternative to string theory.


Well, string theory has actually predicted supersymmetry, which so far seems not to exist (though it is always possible to say it exists only at some energy higher than we can detect right now).


As I understand it, supersymmetry was taken as a concept and baked into the strings concept, to come up with superstrings. So it's not really a prediction as much as it is baked-in by the prevailing notions of the time.


It's possible to test string theory experimentally; it's just that the amount of energy required is far too big for any foreseeable future. With a brute-force approach, at least.

The issue seems to be that the Alcubierre drive requires things like external negative energy, which is 'theoretically possible' but has never been detected and probably should have been already if it existed. Also, even if it worked, the negative energy requirement is something like the mass-energy of Jupiter on the high end or the moon on the low end.

This is an hour-long interview with Dr. Miguel Alcubierre, who wrote the paper describing the Alcubierre drive.

https://www.youtube.com/watch?v=JafY92PhgKU

For a shorter 10 minute video, see this PBS Spacetime video

https://www.youtube.com/watch?v=94ed4v_T6YM

> The issue seems to be that the alcubierre drive requires things like external negative energy

Yes that's traditionally been the stumbling block. But the whole point of the linked article is that they have predicted a way to meet this requirement:

> a micro/nano-scale structure has been discovered that predicts negative energy density distribution that closely matches requirements for the Alcubierre metric

Obviously it's still a very very long way from a practical application re. Alcubierre (if such a thing is possible), but it's certainly an intriguing result if correct.

From a quick flick through the paper, the above commenter seems to be correct in saying that they haven't yet completed a practical experiment to confirm. So nothing more than a simulated result at this stage.

Yes again it seems like one of those 'theoretically possible but never seen in a lab' kind of requirements, just like negative energy.

The PBS spacetime video mentions you can 'do away' with the negative energy requirements at quantum scales, but those aren't really productive scales for useful space flight. Maybe for very tiny micro exploring robots? I'm not sure how such a device would usefully communicate with us though.


Cynical prediction: a nano-scale implementation could be a step towards a cable that can transmit signals faster than light... which would only be useful for HFT


I have a drunken plan involving sending neutrinos through the earth as a form of communication. Friends inform me that it has only three problems: producing the neutrinos; modulating the signal; and detecting it at the other end. I shall continue pottering.


Atomic bombs produce large bursts of neutrinos, which should be sufficient for the Super-Kamiokande detector in Japan. Amplitude modulation is possible through Dial-a-Yield and MIRV warheads can convey multiple bytes. Environmental impact may rival Bitcoin though..

That's surprisingly similar to my build-a-laser-to-traverse-the-core-and-then-beam-financial-data-through-it plan.

Current state of research: my remote cousins in New Zealand and I have made a "planet sandwich" by laying bread on the floor of our antipodal dwellings.

Basic loop: Price of X rises to Y at time T2, time-traveling trade packet is transmitted to T0, past-X is purchased at price W increasing demand of X and therefore increasing price to Z at T1.

If T1 == T2 and Z == Y: the loop is stable. If T1 > T2 and Z >= Y: the loop is stable and not paradoxical. Every other option produces interesting results depending on the trading algorithm.

Hypothesis: all unstable loop scenarios will converge on Z == Y == W, negating the sending of the signal altogether. Or from another perspective, the superposition-timelines where Z == Y == W does not occur destructively interfere with each other.
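Just to make the case analysis explicit, here is a throwaway sketch of the same classification (variable names follow the comment above; this is whimsy, not a market model):

    def classify_loop(T1, T2, Y, Z):
        """Label the causal-loop cases described above.

        T2: time the price reaches Y; T1: time the echo-trade pushes it to Z."""
        if T1 == T2 and Z == Y:
            return "stable (self-consistent loop)"
        if T1 > T2 and Z >= Y:
            return "stable and not paradoxical"
        return "unstable -> outcome depends on the trading algorithm"

    print(classify_loop(T1=10, T2=10, Y=100.0, Z=100.0))   # stable
    print(classify_loop(T1=12, T2=10, Y=100.0, Z=105.0))   # stable, not paradoxical
    print(classify_loop(T1=8,  T2=10, Y=100.0, Z=95.0))    # "interesting"/unstable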


Reminds me of the "portals" (quantum spacial entanglement devices) in Peter F Hamilton's Salvation Sequence, which as well as people, vehicles and equipment, are used for electrical and communications cables.

There's nothing which specifically stops you from having wormholes that don't violate relativity but do allow you to bypass annoying things like solid matter, but stabilizing them is something you also need negative energy to do.

Replacing all radio communications with point to point lasers via wormhole-on-a-chip devices would be a heck of a communications revolution (probably also an answer to the Fermi paradox).

> probably also an answer to the Fermi paradox

I'm curious, why is that? Because the way I understood the Fermi paradox was that highly advanced civilizations should be visible from their energy requirements alone (e.g. via Dyson Swarms), not specifically in the way they communicate.


Radio-wave-wise, it would explain why SETI hears nothing: the radio age would have lasted about 100 years before vanishing completely.

The assumption that ET communicates via radio waves has always seemed unusually anthropocentric compared to the rest of SETI attitudes.

If it's not sending lasers through wormholes, it will be something else – but it seems the height of arrogance to assume that an advanced civilization would communicate via radio waves just due to our own familiarity with them.

You don't see this elsewhere – SETI is always keen to downplay "little gray men" (they might not even take physical form! Maybe we can't conceive of them!) or "carbon based lifeforms" (maybe they're made of silica!). But for some reason there's less questioning of any assumptions about their communications media. I wonder if this is due to SETI betting the bank on radio waves.


Depends on how easy it is to project a wormhole endpoint elsewhere in the universe. If it can be done at >= C then sensitive radio receivers don't have much value.

I suppose an optimist would say that success, even at a tiny scale, would be a step in the right direction. But you're right, it could well be possible that the effect doesn't scale up.

I think gram-scale probes are definitely being considered -- though not with Alcubierre drives obviously lol. Breakthrough Starshot think they can transmit back from Alpha Centauri at 2.6-15 baud per watt by using their light sail as a laser reflector [0]. Pretty crazy.

[0] https://en.wikipedia.org/wiki/Breakthrough_Starshot#Laser_da...

> 'theoretically possible but never seen in a lab'

Not unlike gravitational waves and the Higgs Boson up until relatively recently...


Negative energy is "theoretically possible" in the same way that you can take the formula for compound interest and substitute in an imaginary or complex interest rate: it will generate results, but the whole premise is profoundly suspect and would never occur in reality.

Just need a tiny, stable black hole and some way of containing it. And then some way of scaling up the Casimir effect away from nanoscale electron microscopy.

Maybe you could create a collider scale facility and warp particles? From what I can understand, there's nothing wrong in principle, it's just that the conditions are currently absurdly beyond our reach.


There's lots of things wrong with the naivest formulation, and plenty still wrong with the most sophisticated constructions: for example, you cannot steer your ship, as that requires violating causality locally, and also you'd blast your destination with a shower of infinitely blue-shifted radiation.

The Reddit post didn't link directly to the study. Reactions there range from "omg Star Trek" to "nothing to see here, it's all math/computer models."

I'll admit that I lack sufficient math and physics education to fully appreciate the study, but after reading the actual abstract I must say that it certainly seems more grounded than either of those two extreme reactions.

There's a lot of suggestive and uncertain language being used, like "to be qualitatively quite similar", "correlates well with", or "correlation would suggest", and I'm uncertain whether I should interpret the rhetoric as conservative excitement sprinkled with self-doubt, or as the physics PhD researcher version of ostentatious nerd flexing with a bit of CYA in case future research/results don't pan out.

I'm looking forward to reading the comments of the European physics and engineering PhDs in the morning.

Same here, though they say this in the paper, about the experimental confirmation they propose: "...To be clear, this would not be some simple analogue or proxy representation of a space warp phenomenon, rather it would be a genuine implementation of the idea in physical fact with observable consequences in the laboratory – just not in the dramatic form of a craft bound for a distant stellar destination."

It seems that they're proposing a real, achievable-with-current-manufacturing-skill experimental contraption that they think will be able to create a real, not analogous, not hypothetical, honest to god warp bubble. So I'm waiting for a real physicist to wake up, come and crush my dreams on this one.

A bit tangential, but is it typical to mention your funding source multiple times in the abstract for an academic paper in this field?

Actually, to be frank, mentioning multiple times that they are DARPA-funded in the abstract makes me wonder if they are aware that their claims stretch the suspension of disbelief, and if perhaps they'd like to emphasize their association with a big official organization to reinforce their credibility. Which is not at all a good argument against them, but it gives me a little bad smell. Maybe I'm just jaded and they'd like to preemptively offer thanks to DARPA, though.

Something like this :

https://en.m.wikipedia.org/wiki/Great_Oil_Sniffer_Hoax

"The Great Oil Sniffer Hoax was a 1979 scandal involving French oil company Elf Aquitaine. The company spent millions of dollars to develop a new gravity wave-based oil detection system, which was later revealed to be a scam. Elf lost over $150 million to the hoax. In France, the scandal is known as the "Avions Renifleurs" ("Sniffer Aircraft")."


The best part about this is that it should be experimentally testable, and the paper seems to focus on that aspect. Bravo!

A sort of toy solution for an arrangement of spacetime valid in general relativity allows faster-than-light travel, but it requires all sorts of things that range from "probably impossible" to "this doesn't exist".

Some experimentalists playing around with the Casimir effect (very real), which has to do with things that are very close together being pushed together in a vacuum due to quantum effects of the vacuum, found something in a simulation that looks quite like the toy solution that allows faster-than-light travel, involving things a few millionths of a meter across.

They figure you could actually build this contraption which is essentially just a little ball in a little tube and test its effects on electrons to confirm that they really can create tiny warp fields and study them… or they could have just made mistakes or found a way to invalidate some physics.

It doesn't seem like there's a lot you could do with a micro-scale warp field, as the effect it relies on really depends on scale, but it is an interesting possibility.

Like most cases of "huh, that's weird" it's probably nothing, but it might be something, definitely worth the research funding to build the little device, but hold off on your vacation plans to Alpha Centauri.

I wonder if we could discover new physics/applications or build a warp drive just by brute-forcing/mass experimentation or simulation.

What if our universe is simulation by an "alien" species who wants to achieve FTL travel but are out of ideas on how to do it; but, they can run obscenely massive simulations. And the simulation stops as soon as someone inside the universe achieves FTL travel.


Aside from the general crankery of this musing, you realize that us being able to discover FTL travel would depend on the underlying physics being simulated deeply and accurately enough in the first place, and probably with some headroom?


It doesn't matter if the simulation is kind of sketchy. Because we can find holes in it, and exploit those to move around. To us simple simulated beings, we achieved FTL and we can conquer the simulated universe. The aliens on the next level will probably find out that the simulation broke. Or a dilemma: maybe we can't find FTL because the simulation was already patched from previous iterations.

If a civilization is capable of simulating universes, most likely it knows whether FTL is actually possible or not in its own universe.

But it could also be that even the original universe has too many free/independent parameters and the original civilization is as scientifically agnostic as us about whether FTL is actually technologically possible, so they could say "we thought of everything we could and still cannot prove or disprove FTL; let's create a totally different species and let them have a go, maybe they come up with radically different ideas. But it's better not to create them in our own universe or they might be smarter than us and turn against us, so let's sandbox them in their own simulated universe".

They could probably use "A.I." like AlphaGo to try and build structures within their own universe limitations that create Alcubierre drives, but it could also be that that also failed and after searching for aeons, they just raised their tentacles and said "that's it, we're stuck, the only thing left is to try the simulated universe approach".


Even if the FTL aspect turns out to be impossible, I think it's still an interesting hypothetical alternative to chemical and electric thrusters. More realistically I'd bet it's even further off than terrestrial/lunar mass drivers to solve the payload problem but it's an interesting thought nonetheless.


To your point, even if the theoretical drive is feasible but limited to sub C speeds, a drive that requires zero propulsion mass and where the contents of the bubble don't feel any acceleration would be absolutely huuuuuge for manned space flight.


yeah 100%. I do wonder about the energy requirements though because intuitively they don't totally make sense. I still don't understand why it couldn't be used as an infinite energy generator. Is it really just that the ship would move without gaining any momentum since it's the space around it moving instead of the ship itself?


If you had a tiny warp bubble you could use it to transmit information faster than light.


An Alcubierre drive is a theorized device for faster-than-light travel that kind of doesn't technically violate any known physical laws. Instead of accelerating a ship beyond the speed of light (which we think should take infinite energy), it instead manipulates spacetime itself (contracting the spacetime in front of the ship and expanding the spacetime behind the ship) in order to achieve apparent faster-than-light travel. But just because it can be theorized doesn't mean that such a drive is possible to build, particularly with regard to the requirement for negative energy density. This paper appears to suggest that one candidate for providing negative energy density might be achievable.
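For the curious, the line element Alcubierre wrote down is usually quoted as follows (units with c = 1; v_s(t) is the bubble's coordinate velocity, r_s the distance from the bubble centre, and f a smooth shape function equal to 1 inside the bubble and falling to 0 far away):

    ds^{2} \;=\; -\,dt^{2} \;+\; \bigl(dx - v_{s}(t)\, f(r_{s})\, dt\bigr)^{2} \;+\; dy^{2} \;+\; dz^{2}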


Thanks for the link. Of the three major concerns raised by Sabine about halfway through the article, my (uneducated) take is that this work addresses the first one: production of an apparent negative energy distribution.


The author list is half a joke. Some dude who makes clean room equipment in his garage and an official representative of NASA's crank group. I don't understand how this crap gets funded.

> Some dude who makes clean room equipment in his garage

Why is this a problem? One absolutely can make clean room equipment in one's garage. Maybe not for 10nm processes, but absolutely anything exceeding 80s standards. It's even not that expensive.


Nothing wrong with making clean room equipment in your garage. It just doesn't inspire confidence in their ability to contribute to cutting edge gravity research. In other words, the author list and the paper as a whole fails the sniff test miserably.

What? Because a professional scientist has a hobby of fabricating chips in their garage, they're somehow less qualified to "contribute to cutting edge gravity research?"

What are your hobbies?

For sure, but isn't this already the most important leap we needed? That we can imagine that it's at least theoretically possible?

In all this excitement hopefully we don't forget the horde of luddites that's the modern physics community that couldn't imagine the possibility of faster than light travel. Like I get it, it's absurd to consider FTL in regular space time, but when a normal person asks this question all they care about is whether they can realistically imagine going from one star to another in meaningful timespans. Pretty much every physicist I talked to online and offline would scoff and call the concept "absurd" (which is technically true in regular space time). I even believed them for a while. They'd shut down any discussion or imagination before it was even possible. Even now you see the same backwards thinking instead of being excited at least by the theoretical possibility.

Even last year when I brought up the Alcubierre drive to a physics PhD he wasn't aware of it, and couldn't imagine this being anything other than a joke (even after showing him the Wikipedia page).

Like a bunch of science fiction writers could think of something called "warp drive" and it took decades for some dude in Mexico to imagine a theoretical potential method to do this?

The last great physicist in my opinion is Carl Sagan; the entire field, just like most sciences, has rotted to the core into completely unimaginative folks who cannot fathom a world that's only marginally different from what's currently around them.

Richard Rhodes spends a lot of time in his "The Making of the Atomic Bomb" trying to figure out whether the original inspiration for the atom bomb might have come from H.G. Wells. We've had great imagination in our science fiction; only our scientists have lost whatever mojo used to be there (and, I suppose, a gun to the head in the name of Nazism) to draw inspiration from it.

> Richard Rhodes spends a lot of time in his "The Making of the Atomic Bomb" trying to figure out whether the original inspiration for the atom bomb might have come from H.G. Wells.

Nonsense. Szilard thought chain reactions were possible and worked for years to prove it. The bomb was an immediate and obvious consequence.

Science fiction writers don't push physics or science forward. Physics and science push science fiction forward. As we discover more, what we can imagine also grows.

> Like a bunch of science fiction writers could think of something called "warp drive" and it took decades for some dude in Mexico to imagine a theoretical potential method to do this?

People 5,000 years ago could imagine flying. But that doesn't help you with flying at all. Just because someone wrote a cute story about how nice it would be to fly doesn't mean you're any closer to doing it.

Regarding H.G. Wells, Leo Szilard, and the atomic bomb, is this letter something other than exactly what it looks like? https://library.ucsd.edu/dc/object/bb58377715/_1.pdf

More broadly, I don't buy for a second that science fiction doesn't feed back into the minds of scientists and engineers working on the next big thing. It would be strange if it didn't, and I think the burden of proof is on you to show that scientists and engineers are universally exercising the sort of perfect mental hygiene it would take to isolate themselves from the baseless speculations of writers.

> More broadly, I don't buy for a second that science fiction doesn't feed back into the minds of scientists and engineers working on the next big thing. It would be strange if it didn't, and I think the burden of proof is on you to show that scientists and engineers are universally exercising the sort of perfect mental hygiene it would take to isolate themselves from the baseless speculations of writers.

You don't need to isolate yourself. The baseless speculations of writers just don't help in doing science. If you pick any discovery, you'll see that it was a consequence of a lot of work and intuition about a particular area of math, physics, engineering, or biology. Scientists aren't waiting for the next Star Trek episode hoping to find some ideas. The reason is simple: the ideas in science fiction don't work. Almost always what we get is totally different from what was predicted and usually more amazing, and it's shaped by the math/physics/engineering/biology that led us there, not by some outcome we want to match in science fiction. It's just a cognitive bias that you remember good SF predictions and forget bad ones.

This whole Wells atomic bomb business is a prime example. Szilard couldn't possibly get any ideas from Wells, because Szilard's actual contribution has nothing to do with what Wells wrote. He discovered chain reactions, and how he did so is well documented: by analogy to chemical chain reactions. Literally nothing in Wells would ever help you discover nuclear chain reactions. Nor is it obvious that what Wells wrote requires chain reactions; you can think of other fanciful mechanisms.


As a working physicist I would agree with you for the most part. Science fiction doesn't help because it doesn't contain ideas that work beyond a surface-plausibility level. However, you can still be inspired by it in indirect ways. I find Paul Feyerabend's work interesting in that regard. In "Against Method" he argues that most breakthrough scientific discoveries can't be cleanly attributed to the scientific method.

I literally posted a link to what appears to be a letter written by Szilard himself (please correct me if I'm wrong), directly citing Wells' story as inspiration in an account of his own relationship with the genesis of the atomic bomb.

I can't help but notice that you seem to have completely ignored it in the construction of your response. Perhaps you didn't read my comment very thoroughly.

I read it. You should read it carefully too. He says he read a book. He never says it was an inspiration.

Just because I do AI research and tell someone I watched Star Trek, does that mean I'm saying that my latest NeurIPS paper is inspired by Star Trek? No way.

The suggestion that he'd mention it in the very first sentence of an account of how he arrived at the idea for the atomic bomb, when in fact it was totally irrelevant at the time, is absurd on its face; and your analogy is terrible. A better analogy might be somebody asking you how you developed your "latest NeurIPS paper" and you responding by describing a Star Trek episode in which a (superficially) similar technology is used, before even mentioning your own work. And later, perhaps, recounting how you exclaimed "Star Trek, here we come!" when you had your key insight.

You've overextended your argument, and you know it. There is no use continuing this conversation if you're going to go to such absurd lengths to deny what's obvious.


Dude's hell-bent on insisting that we have the greatest scientific system today; not sure there's much to say to convince him otherwise.

> the ideas in science fiction don't work

Like robots, flying cars, space stations, tablets, electric cars, AI...?

Sci-fi lets us set impossible-looking goals until somebody comes up with something similar and practical. I'd give sci-fi more credit for shaping our current world. Writers dream, science builds, which I think is a beautiful symbiosis.

Szilard knew Wells very closely and definitely read Wells' work, which talks about atom bombs dictating the world order in the fifties. He says himself that it didn't exactly seed the thought, but I personally find that hard to believe. This doesn't take anything away from him, if that's somehow what offends you.

The correct analogy is how long it takes to go from someone imagining a flying machine whose wings don't move to an actual plane. They called it "warp drive," for heaven's sake.

Also comparing the lack of progress today to the lack of progress in medieval times is not the flex you think it is.

> The correct analogy is how long it takes to go from someone imagining a flying machine whose wings don't move to an actual plane. They called it "warp drive," for heaven's sake.

No way. The idea that your wings don't have to move wasn't new; it's ancient. People were building paper planes thousands of years ago. If all it took to fly was not moving your wings, people would have been flying around thousands of years ago.

Making a flying machine takes far more than that. You need to understand the lift equation, how airfoils work, how and why you can and can't control wings and attitude, etc. The Wright brothers didn't strap wings to a bicycle, they spent years working on the physics and engineering of flight, including building wind tunnels.
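
To make the "lift equation" point concrete, the modern textbook form (the Wright brothers reportedly worked with an equivalent expression built around an empirical Smeaton coefficient) is roughly

    L = (1/2) \rho v^2 S C_L

where \rho is air density, v airspeed, S wing area, and C_L a lift coefficient that depends on airfoil shape and angle of attack. Everything hard about early flight hides inside C_L and its drag counterpart, which is exactly what the wind-tunnel work was for.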

From the outside, at a high level, without understanding the physics or engineering of these systems, it's easy to say "Oh, it's just X". Like looking at the solution to a chess problem and saying "Of course, I would have seen that". You wouldn't have. As evidenced by the fact that no one did. For millennia.

> Also comparing the lack of progress today to the lack of progress in medieval times is not the flex you think it is.

I never said there's a lack of progress today. Scientific progress is amazing today, far faster today than at any point in history.


I doubt that people were building paper planes thousands of years ago, given that cheap, universally available paper is a pretty recent invention.

>Scientific progress is amazing today, far faster today than at any point in history

What're you basing this on/how're you defining the growth rate here? Not rhetorical, would be interested to see your data since it seems quite a common argument to hold that it's slowing in lots of areas

> What're you basing this on/how're you defining the growth rate here? Not rhetorical, would be interested to see your data since it seems quite a common argument to hold that it's slowing in lots of areas

One kind of metric to look at is published papers, patents filed, money invested in science, and total citations. All of them are increasing a lot. But these are terrible and unconvincing; you could see the numbers go up even if we were just spinning our wheels.

The value of science and engineering should really be measured in terms of how much easier they make our lives. If you look at that, it's hard to find a metric that doesn't show that scientific progress is healthy and increasing. Moore's law is still going. The cost of solar per watt is down like 100x in 30 years. The cost of batteries is down 50x in 30 years. The cost of sequencing a genome is down 10,000x in 20 years. Productivity per worker doubled in 30 years. 30 years ago digital cameras were super low resolution; now they're amazing. 20 years ago computer vision could barely detect a person walking in front of a car, and it was state-of-the-art research; it's now so reliable the new infrastructure bill makes it mandatory for new cars.

I picked examples from all sorts of areas of the economy and human life for a reason: none of these are down to one discovery. They required countless advances from material science, to basic physics, even the mathematics, engineering, etc.

Everything is far cheaper to make today and people are far more productive compared to 30 years ago, and it's just incomparable compared to 60 years ago.

But I get it. It doesn't feel that way. That's not a science problem. That's a politics problem. The gains from all of these improvements at a societal level are mostly going to the ultra-rich sadly, because people vote against their own best interests routinely.
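
As a rough sanity check on what those "down 100x in 30 years" style figures imply as annual rates (taking the numbers above at face value, not independently verified), a quick back-of-the-envelope:

    # Convert "X-fold change over N years" into an implied compound annual rate.
    # The figures are the ones claimed above, taken at face value.
    claims = {
        "solar cost per watt":    (1 / 100, 30),     # ~100x cheaper over ~30 years
        "battery cost":           (1 / 50, 30),      # ~50x cheaper over ~30 years
        "genome sequencing cost": (1 / 10_000, 20),  # ~10,000x cheaper over ~20 years
    }
    for name, (factor, years) in claims.items():
        annual = factor ** (1 / years) - 1           # compound annual rate of change
        print(f"{name}: {annual:+.1%} per year")

The claimed declines work out to roughly 12-14% per year for solar and batteries and about 37% per year for sequencing, which is the sense in which the progress compounds rather than being a one-off.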

You're confusing engineering and technology with fundamental research.

Digital cameras and batteries aren't in the same league as game changing concepts like quantum theory and relativity.

Game changers don't just mean you can make stuff cheaper, they mean you can imagine completely new kinds of stuff that were literally unthinkable before the game changed.

Before you can improve batteries you have to invent the concept of a battery. Which means having some basic understanding of electricity. Before you can improve computer vision you have to invent the concept of a computer. Which requires inventing a theory of computability.

And so on.

The point is there really hasn't been a lot happening at the game changer level for a long time now. Refinement is fine, but it's unwise to confuse it with fundamentals.

> You're confusing engineering and technology with fundamental research.

This pretty much gives away that you aren't a scientist. The vast majority of fundamental research opens up new ground in highly specialized areas. It slowly trickles out as improvements that don't seem like "game changing concepts" to you, but they required game-changing concepts at a low level to get things done. That's scientific progress, and that's the game changer.

> Game changers don't just mean you can make stuff cheaper, they mean you can imagine completely new kinds of stuff that were literally unthinkable before the game changed.

And I don't think you've ever dealt with transitioning science from the lab to industry. The game changer is the cost and availability. There are plenty of amazing things that don't matter in real life because they aren't practical. They aren't game changers.

> Before you can improve batteries you have to invent the concept of a battery. Which means having some basic understanding of electricity. Before you can improve computer vision you have to invent the concept of a computer. Which requires inventing a theory of computability.

You definitely don't need computability to invent a computer. And you've got the discovery of the battery exactly backward. First Volta made a battery by trying to replace frog parts with paper and brine. Then we could go back and understand electricity; that was Volta's real lasting contribution. Before we had batteries electricity wasn't understood at all.

> The point is there really hasn't been a lot happening at the game changer level for a long time now. Refinement is fine, but it's unwise to confuse it with fundamentals.

This is nonsense. Who are you to decide what is or isn't fundamental? Why are scientists and engineers supposed to bow to your aesthetic sense?

No. All that matters is results. And the result is a 3x productivity increase in 50 years. And all of those other things I showed you: hundreds-of-x improvements in all sorts of practical engineering areas that make daily life far better. What matters are all of the incremental gains, because they enable technological revolutions.

>how airfoils work

I don't think anyone actually understands how they work.


Yeah, but you try really hard to fly and eventually succeed, whereas otherwise you would never try and consequently never fly.

There's a lot of science that was discovered by people who had the ability to devote their time and energy to discovery for its own sake.

There's no reason academia and research labs should have a monopoly on such opportunities these days. All it really takes is burn time, dedication and intelligence.

Why don't some of us just spend our time on these kinds of things? Far as I can tell, the equipment expense is well within the budget of a co-op of well paid tech folks.


Absolutely. I'm a biologist myself but I NOped out of academia with my PhD. Just making my money now before I establish my own garage lab to study connectomics.


I'm a physicist, and I think FTL is easy to imagine. In fact, I think that if it's possible, it'll be here sooner than we think, like maybe four or five years ago.


I was making a back-handed reference to the idea that FTL travel, and erasing the distinction between past and future, are similar ideas. ;-)

> In all this excitement hopefully we don't forget the horde of luddites that's the modern physics community that couldn't imagine the possibility of faster than light travel

There's a kind of funny example of that involving Kip Thorne, or rather Kip Thorne's students. He ran into his friend Carl Sagan at some event--I think it was something like they arrived at the airport at the same time for some conference they were both attending and ended up sharing a cab [1].

At the time, Sagan was writing his novel "Contact", needed FTL travel in it, and was hand-waving away how that would be done. Sagan was aware that one of Thorne's pet peeves was science fiction that hand-waved away things like that without at least trying to come up with something that might be plausible, or at least wasn't known to be impossible.

Sagan carefully broached the subject of "Contact" and that he was hand waving FTL in it, and asked if Thorne could come up with some justification for Sagan's FTL. That set Thorne onto researching wormholes for travel and he found solutions that didn't appear to be definitely physically impossible. Thorne wrote a guide to wormhole travel for Sagan to use to make "Contact" less hand-wavy.

Thorne then put a problem on the final exam for Caltech's Ph 236, General Relativity, that set up the conditions for a pair of connected wormholes and asked the students to calculate what would happen.

Most students correctly calculated what you would get from the pair of connected wormholes, but Thorne was disappointed that none of them had noticed that this setup seems to give a way to achieve FTL travel.

[1] I'm going by memories of reading drafts of a book Thorne was working on 40 years ago, so details are a little fuzzy. He was writing a book for the general public that was going to be about contemporary physicists. Each chapter was going to feature an interview by Thorne of a contemporary physicist and an explanation of their work. Thorne had his notes, interviews, and draft chapters on his account on the VAX of the Caltech High Energy Physics department, and it was world-readable, so it was widely read by the other people with accounts on the machine. It's too bad it never came out. My recollection is it looked like it was going to be very interesting.


Alcubierre was the first person to connect this idea to a spacetime metric which obeys the equations of general relativity, with actual math.


If you don't need the self-consistent mathematical theory behind it you should call it the Campbell warp drive, since John W. Campbell laid out the mechanism in 1957. Indeed, you could even call it the Brown warp drive, after the first use of the term "space-warp drive" by Fredric Brown in 1949.


You know, you're getting downvoted a lot, but there's a sense in which I agree in part and disagree in part. On the disagreement side, what is the purpose of being caught up in this exuberance for something widely considered to be possible only if something else impossible becomes possible? On the agreement side, the sciences have become unimaginative because they have become so competitive and careerist. In that context, people are encouraged to sell snake oil even just to tread water in the academic system. (It's much easier than actually doing serious honest work. And a more reliable career path!) In that context, there must be a sufficiently strong repressive skepticism, or else you just end up with unscientific fields full of unrigorous snake oil: like nutrition, nanotech, aging research, etc.

> In all this excitement hopefully we don't forget the horde of luddites that's the modern physics community that couldn't imagine the possibility of faster than light travel.

https://doi.org/10.1007%2Fs10701-011-9539-2 - cited 57 times

https://doi.org/10.1088/1361-6382/aafcea - cited 47 times

https://doi.org/10.1103/PhysRevD.56.2100 - cited 123 times

Each of these papers proposes a separate method of faster-than-light travel. Where's the lack of imagination?

> Like I get it, it's absurd to consider FTL in regular space time, but when a normal person asks this question all they care about is whether they can realistically imagine going from one star to another in meaningful timespans. Pretty much every physicist I talked to online and offline would scoff and call the concept "absurd" (which is technically true in regular space time). I even believed them for a while. They'd shut down any discussion or imagination before it was even possible. Even now you see the same backwards thinking instead of being excited at least by the theoretical possibility. Even last year when I brought up the Alcubierre drive to a physics PhD he wasn't aware of it, and couldn't imagine this being anything other than a joke (even after showing him the Wikipedia page).

What's the difference between "regular" and "non-regular" spacetime? Also, maybe the physicists you talked to worked in other subfields of physics and didn't have expertise in this topic? Imagine going up to a professor in computer networking and asking what they think about the P=NP problem.

> Like a bunch of science fiction writers could think of something called "warp drive" and it took decades for some dude in Mexico to imagine a theoretical potential method to do this?

A bunch of science fiction writers also thought up something called "transporters", and we have no way of making them real. Another group came up with PADDs, and we have them now. You're cherry-picking one data point and generalizing from it.

> The last great physicist in my opinion is Carl Sagan; the entire field, just like most sciences, has rotted to the core into completely unimaginative folks who cannot fathom a world that's only marginally different from what's currently around them.

Roger Penrose is still alive, and has a Nobel Prize in physics. We also took a picture of a black hole two years ago. Isn't that a great scientific achievement as well? It seems as though you're mistaking Sagan's scientific outreach efforts for actual scholarship on the subject.

I suppose what I'm trying to say is that I see the opposite of what you do in academia. For me, it seems to be still working well. Sure, there's still weird politics and worrying about being scooped, but credible ideas won't be shot down, while anything that doesn't make sense will be ripped to shreds.

Your saying a random computer scientist doesn't need to have an opinion on P=NP is the fundamental problem here. Everyone specializes in one narrow thing and is incentivized and programmed to think of the world only through it.

Carl Sagan was not just great at outreach but also at doing good science. You can watch his Cosmos show decades later, ask what exactly we've discovered that's new since then, and draw a blank. Taking a photo of a black hole is great, but if that's all you have as progress for all of humanity in 50 years, then draw your own conclusions.

Penrose and Hawking are alright, but I didn't mention Penrose thanks to him becoming a crackpot in recent times. The point is such folks are few and far between. I'm not convinced we have gotten dumber, but it's definitely true that we have been programmed into complacency in the imagination department.


Is the notion that we could create a craft that travels faster than the speed of light more absurd, less absurd, or equally as absurd as the notion that we could create a perpetual motion machine?


From my current understanding, it's less absurd. Again, the pedantic ones would say this is still not FTL, but who cares? We are not breaking the laws of physics, and we get from point A to point B fast.


No matter how we do it, we end up violating causality. Even if we can only create micron-sized warp drives, there are still some huge implications. We could probably build a machine that executes nondeterministic algorithms, for example — doesn't matter that P≠NP if we nevertheless have a way to solve such problems in P time.
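
A toy sketch of the construction being gestured at here, purely hypothetical: put a candidate solution on the causal loop, verify it, and re-send it unchanged only if it checks out, so the only self-consistent message is a correct answer. No such primitive exists, of course; the simulation below brute-forces the fixed point just to show the logic, whereas the speculative claim is that a real causality-violating channel would enforce consistency on its own.

    from itertools import product

    def satisfies(assignment, clauses):
        # CNF check: each clause is a list of signed, 1-based variable indices.
        return all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
                   for clause in clauses)

    def loop_step(candidate, clauses, n_vars):
        # What the machine on the loop does: keep a verified candidate,
        # otherwise perturb the message before re-sending it.
        if satisfies(candidate, clauses):
            return candidate
        as_int = (sum(b << i for i, b in enumerate(candidate)) + 1) % (1 << n_vars)
        return tuple(bool((as_int >> i) & 1) for i in range(n_vars))

    def find_fixed_point(clauses, n_vars):
        # Simulation stand-in for the self-consistency a real loop would enforce.
        for bits in product([False, True], repeat=n_vars):
            if loop_step(bits, clauses, n_vars) == bits:
                return bits
        return None

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(find_fixed_point([[1, 2], [-1, 3], [-2, -3]], 3))

The only fixed points of loop_step are satisfying assignments (perturbing the message by +1 mod 2^n can never map it to itself), so the only thing the loop can self-consistently carry is a correct answer. For an unsatisfiable formula there is no classical fixed point at all, which is where the speculation gets murkier.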

I've also wondered about the implications of a causality-violating miniature warp field, and the hypothetical application I found most intriguing is the possibility of it allowing information to be sent back in time from the future.

Of course it would have the limitation that you could only send information back to a point after the "time machine" had been switched on, and you could only communicate to entities in the future who knew of your time machine's existence and design, but that's enough of a pinhole to send lottery numbers back through.

This has probably been considered before in fiction, where presumably the author thought about the other practical problems, like the time machine being on a rotating and Sun-orbiting Earth, meaning that the spatial distance between the time machine's locations on subsequent days can be considerable (relative to the reference frame of the Sun). I don't know what the engineering consequences of that are.

> This has probably been considered before in fiction

"Thrice Upon a Time" by James P. Hogon, 1980. [1] It plays with interesting implications. In the book, sending a message to the past changed that version of the past.

My understanding is that temporal communication in our universe is likely to be different. You couldn't change your past, so somehow things would line up no matter what you did.

But perhaps you could impact events you had not yet measured, since that impact, and your after-sending sampling of them, would leave a lot of room for impact without contradicting any information you had at the time you sent the message.

For instance, you send a message back to your colleague to buy a stock yesterday, and you haven't seen him since last Tuesday. So nothing weird would have to happen to achieve consistency. Presumably your friend bought the stock and knew not to tell you until tomorrow, to give you time to have sent that message back to him. If the colleague tried to tell you earlier, something would stop him - because obviously he wasn't able to before you sent the message.

A good way for the colleague to risk their life, since a serious accident would be the best way for time to prevent a determined colleague from changing the universe!

That means things in the time loop (from received time to sent time) could get very weird to achieve consistency.

[1] https://en.wikipedia.org/wiki/Thrice_Upon_a_Time

> For instance, you send a message back to your colleague to buy a stock yesterday, and you haven't seen him since last Tuesday. So nothing weird would have to happen to achieve consistency. Presumably your friend bought the stock and knew not to tell you until tomorrow, to give you time to have sent that message back to him. If the colleague tried to tell you earlier, something would stop him - because obviously he wasn't able to before you sent the message.

I just don't see how this works in the extremely simple case where you send the message to your past self. Using the colleague just tries to sidestep this. What physically happens if you attempt to send a message to yourself that you don't remember receiving in your past? Does the system just consistently malfunction? Does it turn out that you had an accident between receiving and sending that caused memory loss?


There are many people who feel many ways about these things; the most obvious way to get answers is to wait for a time machine and do some tests :)


Waiting for a time machine seems like a remarkably inefficient use of the machine

Why can't you just suddenly remember it?

It's not like you always, at any moment, remember every single moment of your life. You send it back, and then you remember it. You couldn't have remembered it before you sent it back, but as soon as you do, you remember reading it.

Makes perfect sense to me. I see no reason why the memory wouldn't spontaneously form.

I think about this a lot.

It definitely steps around P=NP. You could try a large number of costly experiments and "send back" the working solution. The parallel experiments collapse into one.

You can send canary messages back to yourself to know if you're in danger, assuming it's you on the other side.

But perhaps the signals from the future are actually from an advanced adversary, such as an AI, and they're telling you to make moves that will lead to your demise (or the rise of the adversary).

The future adversary knows your actions and can send crafted messages to nudge you to the desired outcome. Perhaps similarly to the "P=NP" parallel explorations in the future from the past, the future has parallel simulations of your past behavior to optimize the future outcome.

It breaks the rules, but it'd be fun fiction.


There were times when physics PhDs would chuckle and call going faster than sound absurd. The equations that we had back then also predicted all kinds of infinities in pressure, etc.

> There were times when physics PhDs would chuckle and call going faster than sound absurd.

Oh come on now, that's obvious bullshit. The term "sound barrier" doesn't predate the 20th century and was only popularized during WWII. Any physicist claiming it was a fundamental limitation would've needed to be completely ignorant of the existence of artillery. Who were these supposed physicists, and what specifically did they claim?

Prandtl–Glauert singularity

https://en.wikipedia.org/wiki/Prandtl–Glauert_singularity

> The Prandtl–Glauert singularity is a theoretical construct in flow physics, often incorrectly used to explain vapor cones in transonic flows. It is the prediction by the Prandtl–Glauert transformation that infinite pressures would be experienced by an aircraft as it approaches the speed of sound. Because it is invalid to apply the transformation at these speeds, the predicted singularity does not emerge.

> The incorrect association is related to the early-20th-century misconception of the impenetrability of the sound barrier.

I learnt about it from Destin's video: https://youtu.be/p1PgNbgWSyY?t=816
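
For context, the divergence being referred to is just the compressibility correction factor in the linearized, subsonic-only theory:

    C_p = C_{p,0} / \sqrt{1 - M^2}

i.e. the incompressible pressure coefficient scaled by 1/sqrt(1 - M^2), which blows up as the Mach number M approaches 1. As the Wikipedia excerpt says, that infinity marks the breakdown of the linearized approximation, not a physical barrier.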

Anyone using that formula must have either (1) known that the formula was inaccurate at that airspeed, or (2) denied the existence of supersonic shells and bullets.

Likewise today, we know something is wrong with the combination {relativity, quantum mechanics}, because if our formulas for both were completely true, the universe wouldn't exist.

The supersonic munitions of the era were rather difficult to ignore, so to claim PhDs would've laughed at the idea you need better evidence than merely that the best formula they had at the time was not good enough.

> There were times when physics PhDs would chuckle and call going faster than sound absurd.

This never happened. I have no idea how this myth entered the popular consciousness, but it's nonsense.

There was a superstition among pilots that going past the speed of sound was inherently dangerous, because of the many accidents when planes did go that fast.

Prandtl–Glauert singularity

https://en.wikipedia.org/wiki/Prandtl–Glauert_singularity

> The Prandtl–Glauert singularity is a theoretical construct in flow physics, often incorrectly used to explain vapor cones in transonic flows. It is the prediction by the Prandtl–Glauert transformation that infinite pressures would be experienced by an aircraft as it approaches the speed of sound. Because it is invalid to apply the transformation at these speeds, the predicted singularity does not emerge.

> The incorrect association is related to the early-20th-century misconception of the impenetrability of the sound barrier.

I learnt about it from Destin's video: https://youtu.be/p1PgNbgWSyY?t=816

This is a theoretical result; they haven't done an experiment yet.

The device must be constructed inside a Casimir cavity, i.e. a special cavity that is so small that some weird quantum effects are not negligible. So it will be very difficult to scale this up, or to make a similar thing outside the walls. Being very optimistic, this could be useful for constructing an FTL hollow "wire", in which you could send small things inside Alcubierre bubbles inside the wire. (Can I call it "hyperloop"?) I don't think it's possible to use this to travel to another star, unless you construct a loooooong tube connecting both.
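
For a sense of the scales involved, the textbook idealized result for two perfectly conducting parallel plates at separation d (zero temperature, quoted from memory) is an energy per unit area of

    E/A = -\pi^2 \hbar c / (720 d^3)

with a corresponding attractive pressure of \pi^2 \hbar c / (240 d^4), which is only on the order of a millipascal even at d = 1 micrometre and only becomes large as the gap shrinks toward the nanoscale. That's why everything here lives at the micro/nano scale and why scaling it up to anything ship-sized is the hard part.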


As I understand it there is no proposed warp bubble geometry that is mathematically sound and is capable of crossing the lightspeed barrier; they all require an initial condition of already being superluminal.

A thought I had a long time ago on the Alcubierre drive: it violates everything we know about space-time and causality to travel in any way (and thus transfer information) faster than the speed of light, but what if you could use this to travel at or near the speed of light?

If we could build one we might even run into a causality protection principle effect where if you try to exceed 'c' something bad happens or something rapidly goes to infinity, but you can warp around at 'c' just fine.

That would let us easily get around the solar system and would enable star travel in 5-10 years.

> This qualitative correlation would suggest that chip-scale experiments might be explored to attempt to measure tiny signatures illustrative of the presence of the conjectured phenomenon: a real, albeit humble, warp bubble.

what a delivery. that's a punchline i'd like to be an author of.

Scotty, did you install the new NVIDIA warp card?

Och, Cap'n, they're still sayin' delivery is delayed due to supply chain disruption!


Finally a post in which I understand absolutely nothing, with the feeble exception of the word 'warp'. I can lay off all pretense, skip the attempt to incorporate this into my model of the world, and blissfully skate right past.

Looking at the proposed array of negative energy generators, I am struck by the awkwardness of using the sphere-in-a-cylinder arrangement.

Wouldn't a cylinder-in-a-cylinder arrangement work just as well (and be easier to make)?


To my knowledge an Alcubierre effect requires negative energy. If someone's invented something that behaves like negative energy, why isn't that the headline? Warp drive is cool, but it's a second-order consequence of what would be a physics revolution greater than relativity.


This is what makes me think the warp drive is a no go here. The interactions between these surfaces are already described by QM without needing to appeal to negative energy.


The Casimir effect is established physics that leads to negative energy densities on small scales.

This is a very big deal if the math is right and it's possible to build.

A practical negative vacuum energy device, even on a small scale, that you can hook up in the lab and experiment with?


It looks like, with effort, it could be done in a (very advanced) hobbyist setting - I didn't see any exotic materials when I scanned the paper, though I might have missed something.


Just resin coated in silver, although you'll need a 3D printer with extraordinary resolution.


What applications might there be for microscopic volumes of negative energy density? Could we use them in MEMS devices to create novel types of sensors?

Source: https://news.ycombinator.com/context?id=29470324
