Tuesday, July 9, 2019

Are nuclear decay rates as wavering as astrophysical neutrinos?

Should a measured radon decay rate change periodically, mad is the scientist who would assert an influence of solar and cosmic neutrinos on beta decay... 

... before having scrutinized more mundane causes!  

It is well-known that a radioactive substance follows a fixed exponential decay, no matter what you do to it. The fact has been set in stone since 1930 when the “father” of nuclear physics Ernest Rutherford, together with James Chadwick and Charles Ellis, concluded in their definitive Radiations from Radioactive Substances that “the rate of transformation…is a constant under all conditions.”  
But this is no longer the view of a pair of physicists in the US. Ephraim Fischbach and Jere Jenkins of Purdue University in Indiana are claiming that, far from being fixed, certain decay “constants” are influenced by the Sun. It is a claim that is drawing mixed reactions from others in the physics community, not least because it implies that decades of established science is flawed.
02 Oct 2008




An analysis of 85,000 measurements of gamma radiation associated with the decay of radon and its progeny in a sealed container located in the yard of the Geological Survey of Israel (GSI) in Jerusalem, between February 15, 2007 and November 7, 2016, reveals variations in both time of day and time of year with amplitudes of 4% and 2%, respectively. The phase of maximum of the annual oscillation occurs in June, suggestive of a galactic influence. Measurements made at midnight show strong evidence of an influence of solar rotation, but measurements made at noon do not. We find several pairs of oscillations with frequencies separated by 1 year⁻¹, indicative of an influence of rotation that is oblique with respect to the normal to the ecliptic, notably a pair at approximately 12.7 year⁻¹ and 13.7 year⁻¹ that match the synodic and sidereal rotation frequencies of the solar radiative zone as determined by helioseismology. 
Another notable pair (approximately 11.4 year⁻¹ and 12.4 year⁻¹) may correspond to an obliquely rotating inward extension of the radiative zone. [Thus these results may have implications concerning solar structure.] We also find a triplet of oscillations with approximate frequencies 7.4 year⁻¹, 8.4 year⁻¹ and 9.4 year⁻¹ which, in view of the fact that the principal oscillation in Super-Kamiokande measurements is at 9.4 year⁻¹, may have their origin in an obliquely rotating core. We propose, as a hypothesis to be tested, that neutrinos can stimulate beta decays and that, when this occurs, the secondary products of the decay tend to travel in the same direction as the stimulating neutrino. The effective cross-section of this process is estimated to be of order 10⁻¹⁸ cm². [This influence of neutrinos on radioactive material may be large enough to be open to experimental tests of force and torque as suggested in Sturrock, Fischbach, Javorsek et al...  
To be cautious, one might also consider the possibility that neutrinos may stimulate alpha decays, but – to the best of our knowledge – there is no evidence of variability in nuclear processes that involve only alpha decays... Our current analysis of GSI data is restricted to measurements of gamma radiation that has its origin in beta decays, not in alpha decays. For these reasons, we here confine our hypothesis to a possible influence of neutrinos on beta decays. If future experiments yield evidence of an intrinsic variability of alpha decays, it will be necessary to revise this hypothesis.]
The striking diurnal asymmetry appears to be attributable to a geometrical asymmetry in the experiment... Night-time data show a number of curious “pulses” of duration 1 - 3 days... We have been unable to identify any similar features in any record of known solar phenomena. This raises the possibility that they may have their origin in cosmic neutrinos that have passed through or near the Sun and are traveling away from the Sun.  
If the association of neutrino and beta-decay oscillations with solar rotation proves valid, one will need to understand theoretically the mechanism that leads to this association. One promising theoretical approach seems to be the Resonant Spin Flavor Precession mechanism by which neutrinos of one flavor, traveling through a plasma permeated by magnetic field, can change to a different flavor ({Voloshin 1986,} Akhmedov, 1988...)
P.A. Sturrock, G. Steinitz, E. Fischbach (Submitted on 8 May 2017)
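
As an aside, the kind of periodicity these authors look for can be probed with a standard spectral tool. Below is a minimal sketch (in no way the Sturrock et al. pipeline) that injects a 2% annual modulation into synthetic daily gamma-count rates and recovers it with a Lomb-Scargle periodogram; the count level, noise model and use of astropy are my own assumptions for illustration.

# A minimal sketch (not the Sturrock et al. pipeline): inject a 2% annual
# modulation into synthetic daily gamma-count rates and recover it with a
# Lomb-Scargle periodogram. Count level and noise model are assumptions.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
t = np.arange(0, 9.7 * 365.25)                        # ~9.7 years of daily measurements (days)
annual = 1.0 / 365.25                                 # one cycle per year, in day^-1
rate = 1.0e4 * (1.0 + 0.02 * np.cos(2 * np.pi * annual * (t - 150)))   # maximum near June
counts = rng.poisson(rate)                            # counting noise

freq = np.linspace(0.1, 16.0, 4000) / 365.25          # scan 0.1 to 16 cycles per year
power = LombScargle(t, counts).power(freq)
print(f"strongest periodicity: {freq[np.argmax(power)] * 365.25:.2f} cycles per year")

The same scan pushed up to the 12-14 year⁻¹ region is what would, in principle, expose rotation-like frequency pairs; distinguishing such features from environmental modulations is exactly the debate taken up in the rebuttal quoted below.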



Here is the latest rebuttal by S. Pommé, based on a detailed analysis of the largest amount of data communicated to him so far:
The instabilities in the radon measurements at the Geological Survey of Israel are obviously related to solar irradiance and rainfall and cannot be ascribed to neutrinos. Besides the sensitivity of the electronics, free movement of radon gas inside the tank along temperature gradients is the most likely mechanism behind the diurnal and seasonal decay rate changes in the small gas volumes monitored by the detectors. 
The observation of “neutrino-induced decay” appears to be an illusion fed by confirmation bias. The experiment has not been conducted with sufficient care to eliminate environmental influences and counter-evidence has been systematically ignored to maintain the claim of new physical discoveries. The evidence does not suggest that radioactive decay is triggered by neutrinos. The subsequently emitted radiation is not aligned with neutrino flux. There are no cyclic deviations from the exponential decay law. Ensuing inference about solar dynamics is unfounded.
Acknowledgements: The authors thank Dr. Gideon Steinitz for kindly providing the additional data set for the three counters recorded between 2007 and 2011.
S. Pommé (Received: 5 December 2018 / Accepted: 11 January 2019)


A nice review article on the different aspects of neutrino oscillations to celebrate 50 years of discoveries thanks to solar neutrinos

“If the oscillation length is large. . . from the point of view of detection possibilities an ideal object is the Sun.” This statement from Pontecorvo’s 1967 paper [1] published before release of the first Homestake experiment results [2] can be considered as the starting point for the solar neutrino studies of new physics. 
Observation of the deficit of signal in the Homestake experiment was the first indication of the existence of oscillations. This result triggered vast experimental [3] and theoretical developments in neutrino physics. On the theoretical side, various non-standard properties of neutrinos have been introduced and new effects in the propagation of neutrinos have been proposed. These include: 
1. Neutrino spin-precession in the magnetic fields of the Sun due to large magnetic moments of neutrinos [4, 5]: electromagnetic properties of neutrinos have been studied in detail. 
2. Neutrino decays: Among various possibilities (radiative, 3ν decay, etc.) the decay into light scalar, e.g., Majoron, is less restricted [6, 7]. 
3. The MSW effect: The resonance flavor conversion inside the Sun required neutrino mass splitting in the range ∆m² = (10⁻⁷ − 10⁻⁴) eV² and mixing sin²2θ > 10⁻³ [8–13]. This was the first correct estimation of the neutrino mass and mixing intervals. With more information added, three regions of ∆m² and sin²2θ have been identified: the so-called SMA, LMA and LOW solutions. 
4. “Just-so” solution: vacuum oscillations with nearly maximal mixing and oscillation length comparable with distance between the Sun and the Earth have been proposed [14]. 
5. Oscillations and flavor conversion due to non-standard neutrino interactions of massless neutrinos [8, 9, 15, 16]. 
6. Resonant spin-flavor precession [17, 18], which employs matter effect on neutrino spin precession in the magnetic fields. The effect is similar to the MSW conversion. 
7. Oscillation and conversion in matter due to violation of the equivalence principle [19], Lorentz violating interactions [20], etc. 
In turn, these proposals led to detailed elaboration of theory of neutrino propagation in different media as well as to model-building which explains non-standard neutrino properties. 
Studies of the solar neutrinos and results of KamLAND experiment [21–23] led to establishing the LMA MSW solution as the solution of the solar neutrino problem. Other proposed effects are not realized as the main explanation of the data. Still they can be present and show up in solar neutrinos as sub-leading effects. Their searches allow us to get bounds on corresponding neutrino parameters. Thus, the Sun can be used as a source of neutrinos for exploration of non-standard neutrino properties. In this review we summarize implications of results from the solar neutrino studies for neutrino physics, the role of solar neutrinos in establishing the 3ν mixing paradigm, in searches for new physics beyond the standard model. 
(Submitted on 19 Jul 2015 (v1), last revised 16 Jan 2017 (this version, v4))
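
For readers who want to put numbers on the parameters quoted above, here is a minimal sketch of the textbook two-flavor survival probability P(νe→νe) = 1 − sin²2θ · sin²(1.27 Δm²[eV²] L[km] / E[GeV]); the LMA-ballpark values and the KamLAND-like 180 km baseline below are assumptions for illustration, and real solar analyses of course include MSW matter effects and spectral averaging.

# Two-flavor vacuum oscillation sketch (illustration only; real solar-neutrino
# analyses include MSW matter effects and averaging over the spectrum).
import numpy as np

def survival(E_GeV, L_km, dm2_eV2, sin2_2theta):
    # P(nu_e -> nu_e) = 1 - sin^2(2 theta) * sin^2(1.27 * dm^2 * L / E)
    return 1.0 - sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

dm2, s22 = 7.5e-5, 0.85        # LMA-ballpark parameters (assumed)
for E_MeV in (2.0, 4.0, 8.0):
    p = survival(E_MeV * 1e-3, 180.0, dm2, s22)       # ~180 km, KamLAND-like baseline
    print(f"E = {E_MeV:.0f} MeV  ->  P(nu_e survival) = {p:.2f}")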

Friday, August 24, 2018

When you are in the field you know...

What does "I expect supersymmetry at the LHC" mean?

Now, you will invariably find people who said that due to naturalness new particles beyond the Higgs boson must be found at the LHC, but it’s hardly newsworthy unless it’s a core belief of the community involved. The community was never behind any statements of “must.” Also, the existence of bets that supersymmetry would be found at the LHC is far from proof that people felt it must be found, just as the existence of bets on Germany winning the 2018 World Cup of soccer was not proof that the placer of the bet thought it was guaranteed Germany would win. 
Well, but a few very good people seem to have said that they were sure new physics or supersymmetry would be found. Among those very good people who said it, I am sure that they said it with an ever-present, unsaid, implicit softening background assumption that we all understood. A full version of what was meant and understood by such statements in the context of supersymmetry has usually been (maybe even always been), “If a minimal version of supersymmetry is correct and the Higgs potential is not finetuned to more than a few percent I fully expect that superpartners will show up at the LHC unless we are unlucky and the kinematics turn out to be too challenging, such as small mass splittings that we can’t trigger on very easily, etc.” But you really don’t want to say all those words every time you say, “I expect supersymmetry at the LHC.” When you’re in the field, you know. When you overhear or read it from the outside, you can easily misunderstand. There are an infinite number of “short hands” like that when you speak in a disciplinary field, and if you have to speak so carefully every time so that a robot can give it meaning then efficient communication really becomes impossible.
Naturalness, Supersymmetry, and Predictions for New Physics at the LHC, James D. Wells, 2018-07-03


Since when has minimal supersymmetry been under pressure from the simplest interpretations of naturalness?

... it was recognized very early on, and especially after LEP-2 [an e⁺e⁻ collider at CERN which looked for the Higgs boson and new physics without success, ending its run in 2000.] did not find superpartners, that minimal supersymmetry was under pressure from the simplest interpretations of naturalness. It had not found superpartners, which many expected, nor had it found the Higgs boson. To most who thought about such things, not having found the Higgs boson was the bigger worry. After all, LEP 2 did not get to very impressive energies at all to find superpartners, but the correlating scale of superpartners to the lower limit of 114 GeV for the Higgs mass was a cause for concern. [In supersymmetry, the Higgs boson mass is a function of superpartner masses — the higher the superpartner masses the higher the Higgs mass, in general. LEP’s mₕ > 114 GeV required superpartner masses to be unnaturally heavy in the eyes of some.]
In the face of null results from LEP 2 there were many directions to go: identify forms of supersymmetry that would give mₕ > 114 GeV without straining naturalness (e.g., adding an additional singlet Higgs boson, etc.), abandon supersymmetry (but other new physics ideas suffer from similar naturalness concerns), or abandon rigid naturalness criteria altogether. Since naturalness is not a hard-core data-driven criterion, that’s a direction that several of us pursued, well before the LHC turned on. This now sometimes goes under the name of split supersymmetry. Its hallmark is to put more emphasis on data requirements and less emphasis on extra-empirical concerns. 
A prediction of abandoning naturalness in this approach was that there was no special reason to see supersymmetry at the LHC, but there were interesting new enhanced reasons to expect to see dark matter through annihilations in the center of the galaxy, and the electric dipole moment of the electron might be within reach of experiment in the not-so-distant future, to name two examples...
Id. 
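
The bracketed remark above about the Higgs mass pulling superpartner masses upward can be made semi-quantitative with the leading one-loop formula mₕ² ≈ m_Z² cos²2β + (3 m_t⁴)/(4π² v²) ln(M_S²/m_t²), with v ≈ 174 GeV and stop mixing neglected. The sketch below, under exactly those simplifications (it is leading-log only and overshoots full two-loop calculations), just shows the logarithmic growth of mₕ with the stop scale M_S that made the LEP bound of 114 GeV uncomfortable for the most natural spectra.

# Rough one-loop, no-mixing estimate of the MSSM Higgs mass versus the stop
# mass scale M_S. Leading-log only, so it overshoots full calculations; it is
# used here only to illustrate how raising m_h pulls M_S upward.
import numpy as np

mt, mZ, v = 173.0, 91.19, 174.0          # GeV (v = 174 GeV convention)

def mh(MS, tan_beta=10.0):
    cos2b2 = np.cos(2.0 * np.arctan(tan_beta)) ** 2
    tree = mZ**2 * cos2b2
    loop = 3.0 * mt**4 / (4.0 * np.pi**2 * v**2) * np.log(MS**2 / mt**2)
    return np.sqrt(tree + loop)

for MS in (300.0, 500.0, 1000.0, 2000.0):
    print(f"M_S = {MS:6.0f} GeV  ->  m_h ~ {mh(MS):5.1f} GeV")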

Friday, December 29, 2017

About Little Green Men, White Dwarfs and Pulsars or excitement over Christmas time

Celebrating the 50th anniversary of the discovery of pulsars, and thus of neutron stars


In the winter of 1967 Cambridge radio astronomers discovered a new type of radio source of such an artificial-seeming nature that for a few weeks some members of the group had to seriously consider whether they had discovered an extraterrestrial intelligence. Although their investigations led them to a natural explanation (they had discovered pulsars), they had discussed the implications if it was indeed an artificial source: how to verify such a conclusion and how to announce it, and whether such a discovery might be dangerous. In this they presaged many of the components of the SETI Detection Protocols and the proposed Reply Protocols which have been used to guide the responses of groups dealing with the detection of an extraterrestrial intelligence. These Protocols were only established some twenty-five years later in the 1990s and 2000s... 
In July 1967 a new low-frequency radio telescope started working at the Lord’s Bridge station of the Mullard Radio Astronomy Observatory (MRAO) of the University of Cambridge. Antony Hewish had led the design and construction of this novel telescope, a collection of wooden poles with wires strung between them, built to discover more of the newly found quasars and measure their sizes, by watching them flicker as the Interplanetary Medium passed in front of them. Covering two hectares, this was the largest telescope then working at this long (4-m) wavelength.

(Submitted on 4 Feb 2013)

Our method of utilising scintillation for the quantitative measurement of angular sizes demanded repeated observations so that every source could be studied at many different solar elongations. In fact we surveyed the entire range of accessible sky at intervals of one week. To maintain a continuous assessment of the survey we arranged to plot the positions of scintillating radio sources on a sky-chart, as each record was analysed, and to add points as the observations were repeated at weekly intervals. In this way genuine sources could be distinguished from electrical interference since the latter would be unlikely to recur with the same celestial coordinates. It is greatly to Jocelyn Bell’s credit that she was able to keep up with the flow of paper from the four recorders.  
One day around the middle of August 1967 Jocelyn showed me a record indicating fluctuating signals that could have been a faint source undergoing scintillation when observed in the antisolar direction. This was unusual since strong scintillation rarely occurs in this direction and we first thought that the signals might be electrical interference. So we continued the routine survey. By the end of September the source had been detected on several occasions, although it was not always present, and I suspected that we had located a flare star, perhaps similar to the M-type dwarfs under investigation by Lovell. But the source also exhibited apparent shifts of right ascension of up to 90 seconds which was evidence against a celestial origin. We installed a high-speed recorder to study the nature of the fluctuating signals but met with no success as the source intensity faded below our detection limit. During October this recorder was required for pre-arranged observations of another source, 3C 273, to check certain aspects of scintillation theory, and it was not until November 28th that we obtained the first evidence that our mysterious source was emitting regular pulses of radiation at intervals of just greater than one second. I could not believe that any natural source would radiate in this fashion and I immediately consulted astronomical colleagues at other observatories to enquire whether they had any equipment in operation which might possibly generate electrical interference at a sidereal time near 19h 19m. 
In early December the source increased in intensity and the pulses were clearly visible above the noise. Knowing that the signals were pulsed enabled me to ascertain their electrical phase and I reanalysed the routine survey records. This showed that the right ascension was constant. The apparent variations had been caused by the changing intensity of the source. Still sceptical, I arranged a device to display accurate time marks at one second intervals broadcast from the MSF Rugby time service and on December 11th began daily timing measurements. To my astonishment the readings fell in a regular pattern, to within the observational uncertainty of 0.1s, showing that the pulsed source kept time to better than 1 part in 10⁶. Meanwhile my colleagues Pilkington, and Scott and Collins, found by quite independent methods that the signal exhibited a rapidly sweeping frequency of about −5 MHz s⁻¹. This showed that the duration of each pulse, at one particular radio frequency, was approximately 16 ms.  
Having found no satisfactory terrestrial explanation for the pulses we now began to believe that they could only be generated by some source far beyond the solar system, and the short duration of each pulse suggested that the radiator could not be larger than a small planet. We had to face the possibility that the signals were, indeed, generated on a planet circling some distant star, and that they were artificial. I knew that timing measurements, if continued for a few weeks, would reveal any orbital motion of the source as a Doppler shift, and I felt compelled to maintain a curtain of silence until this result was known with some certainty. Without doubt, those weeks in December 1967 were the most exciting in my life.  
It turned out that the Doppler shift was precisely that due to the motion of the Earth alone, and we began to seek explanations involving dwarf stars, or the hypothetical neutron stars. My friends in the library at the optical observatory were surprised to see a radio astronomer taking so keen an interest in books on stellar evolution. I finally decided that the gravitational oscillation of an entire star provided a possible mechanism for explaining the periodic emission of radio pulses, and that the fundamental frequency obtainable from white dwarf stars was too low. I suggested that a higher order mode was needed in the case of a white dwarf, or that a neutron star of the lowest allowed density, vibrating in the fundamental mode, might give the required periodicity. We also estimated the distance of the source on the assumption that the frequency sweep was caused by pulse dispersion in the interstellar plasma, and obtained a value of 65 parsec, a typical stellar distance. 
While I was preparing a coherent account of this rather hectic research, in January 1968, Jocelyn Bell was scrutinising all our sky-survey recordings with her typical persistence and diligence and she produced a list of possible additional pulsar positions. These were observed again for evidence of pulsed radiation and before submitting our paper for publication, on February 8th, we were confident that three additional pulsars existed although their parameters were then only crudely known. I well remember the morning when Jocelyn came into my room with a recording of a possible pulsar that she had made during the previous night at a right ascension 09 h 50 m . When we spread the chart over the floor and placed a metre rule against it a periodicity of 0.25s was just discernible. This was confirmed later when the receiver was adjusted to a narrower bandwidth, and the rapidity of this pulsar made explanations involving white dwarf stars increasingly difficult. 
The months that followed the announcement of our discovery were busy ones for observers and theoreticians alike, as radio telescopes all over the world turned towards the first pulsars and information flooded in at a phenomenal rate. It was Gold (8) who first suggested that the rotation of neutron stars provided the simplest and most flexible mechanism to explain the pulsar clock, and his prediction that the pulse period should increase with time soon received dramatic confirmation with the discovery of the pulsar in the Crab Nebula (9, 10). Further impressive support for the neutron star hypothesis was the detection of pulsed light from the star which had previously been identified as the remnant of the original explosion. This, according to theories of stellar evolution, is precisely where a young neutron star should be created. Gold also showed that the loss of rotational energy, calculated from the increase of period for a neutron star model, was exactly that needed to power the observed synchrotron light from the nebula
PULSARS AND HIGH DENSITY PHYSICS Nobel Lecture, December 12, 1974 by ANTONY HEWISH
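
Hewish's 65 parsec figure can be reproduced from the numbers he quotes. With the cold-plasma dispersion delay t ≈ 4.15×10³ s × DM × (ν/MHz)⁻², a drift rate of −5 MHz s⁻¹ near the array's 81.5 MHz operating frequency fixes the dispersion measure, and an assumed mean interstellar electron density then gives the distance; the 81.5 MHz value and the ~0.2 cm⁻³ density are my additions, not numbers taken from the lecture.

# Back-of-the-envelope reconstruction of the ~65 pc distance from the quoted
# -5 MHz/s frequency sweep. The 81.5 MHz observing frequency and the assumed
# mean electron density are additions, not numbers taken from Hewish's text.
nu = 81.5                  # observing frequency, MHz
sweep = -5.0               # measured drift rate, MHz per second
k = 4.149e3                # dispersion constant: delay[s] = k * DM * nu[MHz]**-2

# delay(nu) = k*DM/nu**2  =>  d(delay)/d(nu) = -2*k*DM/nu**3 = 1/sweep
DM = -(nu**3) / (2.0 * k * sweep)        # dispersion measure, pc cm^-3  (~13)
n_e = 0.2                                # assumed mean electron density, cm^-3
print(f"DM ~ {DM:.1f} pc/cm^3  ->  distance ~ {DM / n_e:.0f} pc")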

I had sole responsibility for operating the telescope and analyzing the data, with supervision from Tony Hewish. We operated it with four beams simultaneously, and scanned all the sky between declinations +50° and −10° once every four days. The output appeared on four 3-track pen recorders, and between them they produced 96 feet of chart paper every day. The charts were analyzed by hand by me. We decided initially not to computerize the output because until we were familiar with the behavior of our telescope and receivers we thought it better to inspect the data visually, and because a human can recognize signals of different character whereas it is difficult to program a computer to do so. 
After the first few hundred feet of chart analysis I could recognize the scintillating sources, and I could recognize interference. (Radio telescopes are very sensitive instruments, and it takes little radio interference from nearby on earth to swamp the cosmic signals; unfortunately, this is a feature of all radio astronomy.) Six or eight weeks after starting the survey I became aware that on occasions there was a bit of "scruff" on the records, which did not look exactly like a scintillating source, and yet did not look exactly like man-made interference either. Furthermore I realized that this scruff had been seen before on the same part of the records - from the same patch of sky (right ascension 1919). 
The source was transiting during the night - a time when interplanetary scintillation should be at a minimum, and one idea we had was that it was a point source. Whatever it was, we decided that it deserved closer inspection, and that this would involve making faster chart recordings as it transited... 
A few days after that at the end of November '67 I got it on the fast recording. As the chart flowed under the pen I could see that the signal was a series of pulses, and my suspicion that they were equally spaced was confirmed as soon as I got the chart off the recorder. They were 1⅓ seconds apart. I contacted Tony Hewish who was teaching in an undergraduate laboratory in Cambridge, and his first reaction was that they must be manmade. This was a very sensible response in the circumstances, but due to a truly remarkable depth of ignorance I did not see why they could not be from a star. However he was interested enough to come out to the observatory at transit-time the next day and fortunately (because pulsars rarely perform to order) the pulses appeared again. This is where our problems really started. Tony checked back through the recordings and established that this thing, whatever it was, kept accurately to sidereal time. But pulses 1⅓ seconds apart seemed suspiciously manmade. Besides 1⅓ seconds was far too fast a pulsation rate for anything as large as a star. It could not be anything earth-bound because it kept sidereal time (unless it was other astronomers). We considered and eliminated radar reflected off the moon into our telescope, satellites in peculiar orbits, and anomalous effects caused by a large, corrugated metal building just to the south of the 4½ acre telescope. 
Then Scott and Collins observed the pulsations with another telescope with its own receivers, which eliminated instrumental effects. John Pilkington measured the dispersion of the signal which established that the source was well outside the solar system but inside the galaxy. So were these pulsations man-made, but made by man from another civilization? If this were the case then the pulses should show Doppler shifts as the little green men on their planet orbited their sun. Tony Hewish started accurate measurements of the pulse period to investigate this; all they showed was that the earth was in orbital motion about the sun. 
Meanwhile I was continuing with routine chart analysis, which was falling even further behind because of all the special pulsar observations. Just before Christmas I went to see Tony Hewish about something and walked into a high-level conference about how to present these results. We did not really believe that we had picked up signals from another civilization, but obviously the idea had crossed our minds and we had no proof that it was an entirely natural radio emission. It is an interesting problem - if one thinks one may have detected life elsewhere in the universe how does one announce the results responsibly? Who does one tell first? We did not solve the problem that afternoon, and I went home that evening very cross: here was I trying to get a Ph.D. out of a new technique, and some silly lot of little green men had to choose my aerial and my frequency to communicate with us. However, fortified by some supper I returned to the lab that evening to do some more chart analysis. Shortly before the lab closed for the night I was analyzing a recording of a completely different part of the sky, and in amongst a strong, heavily modulated signal from Cassiopeia A at lower culmination (at 1133) I thought I saw some scruff. I rapidly checked through previous recordings of that part of the sky, and on occasions there was scruff there. I had to get out of the lab before it locked for the night, knowing that the scruff would transit in the early hours of the morning. 
So a few hours later I went out to the observatory. It was very cold, and something in our telescope-receiver system suffered drastic loss of gain in cold weather. Of course this was how it was! But by flicking switches, swearing at it, breathing on it I got it to work properly for 5 minutes - the right 5 minutes on the right beam setting. This scruff too then showed itself to be a series of pulses, this time 1.2 seconds apart. I left the recording on Tony's desk and went off, much happier, for Christmas. It was very unlikely that two lots of little green men would both choose the same, improbable frequency, and the same time, to try signalling to the same planet Earth. 
Over Christmas Tony Hewish kindly kept the survey running for me, put fresh paper in the chart recorders, ink in the ink wells, and piled the charts, unanalyzed, on my desk. When I returned after the holiday I could not immediately find him, so settled down to do some chart analysis. Soon, on the one piece of chart, an hour or so apart in right ascension I saw two more lots of scruff, 0834 and 0950. It was another fortnight or so before 1133 was confirmed, and soon after that the third and fourth, 0834 and 0950 were also. Meanwhile I had checked back through all my previous records (amounting to several miles) to see if there were any other bits of scruff that I had missed...  
At the end of January the paper announcing the first pulsar was submitted to Nature. This was based on a total of only 3 hours' observation of the source, which was little enough. I feel that comments that we kept the discovery secret too long are wide of the mark. At about the same time I stopped making observations and handed over to the next generation of research students, so that I could concentrate on chart analysis, studying the scintillations and writing up my thesis. 
A few days before the paper was published Tony Hewish gave a seminar in Cambridge to announce the results. Every astronomer in Cambridge, so it seemed, came to that seminar, and their interest and excitement gave me a first appreciation of the revolution we had started. Professor Hoyle was there and I remember his comments at the end. He started by saying that this was the first he had heard of these stars, and therefore he had not thought about it a lot, but that he thought these must be supernova remnants rather than white dwarfs. Considering the hydrodynamics and neutrino opacity calculations he must have done in his head, that is a remarkable observation!
... It has been suggested that I should have had a part in the Nobel Prize awarded to Tony Hewish for the discovery of pulsars. There are several comments that I would like to make on this: First, demarcation disputes between supervisor and student are always difficult, probably impossible to resolve. Secondly, it is the supervisor who has the final responsibility for the success or failure of the project. We hear of cases where a supervisor blames his student for a failure, but we know that it is largely the fault of the supervisor. It seems only fair to me that he should benefit from the successes, too. Thirdly, I believe it would demean Nobel Prizes if they were awarded to research students, except in very exceptional cases, and I do not believe this is one of them. Finally, I am not myself upset about it - after all, I am in good company, am I not?
By S. Jocelyn Bell Burnell
8th Texas Symposium on Relativistic Astrophysics 
13-17 Dec 1976. Boston, Mass.




Monday, December 4, 2017

Cosmic rays are as mischievous as the Monkey King / 宇宙射线像孙悟空一样恶作剧

A tribute to current Chinese science and technology and a wink to its literary classic The Journey to the West

Thanks to the Chinese satellite Wukong (aka Monkey King), also known as ...
The DArk Matter Particle Explorer (DAMPE), a high energy cosmic ray and γ-ray detector in space, has recently reported the new measurement of the total electron plus positron flux between 25 GeV and 4.6 TeV. A spectral softening at ∼0.9 TeV and a tentative peak at ∼1.4 TeV have been reported. We study the physical implications of the DAMPE data in this work... Both the astrophysical models and the exotic DM annihilation/decay scenarios are examined. Our findings are summarized as follows. 
The spectral softening at ∼ 0.9 TeV suggests a cutoff (or break) of the background electron spectrum, which is expected to be due to either the discreteness of cosmic-ray (CR) source distributions in both space and time, or the maximum energies of electron acceleration at the sources. The DAMPE data enables a much improved determination of the cutoff energy of the background electron spectrum, which is about 3 TeV assuming an exponential form, compared with the pre-DAMPE data. 
Both the annihilation and decay scenarios of the simplified DM models to account for the sub-TeV electron/positron excesses are severely constrained by the CMB and/or γ-ray observations. Additional tuning of such models, through, e.g., velocity-dependent annihilation, is required to reconcile them with those constraints.
The tentative peak at ∼ 1.4 TeV suggested by DAMPE implies that the sources should be close enough to the Earth (≲0.3 kpc) and inject nearly monochromatic electrons into the Galaxy. We find that the cold and ultra-relativistic e⁺e⁻ wind from pulsars is a possible source of such a structure. Our analysis further shows that the pulsar should be middle-aged, relatively slowly rotating, mildly magnetized, and isolated in a density cavity (e.g., the local bubble).
An alternative explanation of the peak is the DM annihilation in a nearby clump or a local density enhanced region. The distance of the clump or size of the overdensity region needs to be ≲0.3 kpc. The required parameters of the DM clump or over-density are relatively extreme compared with those of numerical simulations, if the annihilation cross section is assumed to be 3×10⁻²⁶ cm³ s⁻¹. Specifically, a DM clump as massive as 10⁷⁻¹⁰ M☉ or a local density enhancement of 17−35 times the canonical local density is required to fit the data if the annihilation product is a pair of e⁺e⁻. Moderate enhancement of the annihilation cross section would be helpful to relax the tension between the model requirement and the N-body simulations of the CDM structure formation. The DM clump model or local density enhancement model is found to be consistent with the Fermi-LAT γ-ray observations. 
The expected anisotropies from either the pulsar model or the DM clump model are consistent with the recent measurements by Fermi-LAT. Future observations by e.g., CTA, will be able to detect such anisotropies and test different models. 
DAMPE will keep on operating for a few more years. More precise measurements of the total e⁺+e⁻ spectrum extending to higher energies will be available in the near future. Whether there are more structures in the high energy window, which can critically distinguish the pulsar model from the DM one, is particularly interesting. With more and more precise measurements, we expect to significantly improve our understanding of the origin of CR electrons.

The total e⁺+e⁻ fluxes (right) for a model with two nearby pulsars

Fluxes of the total e⁺+e⁻, from the sum of the continuous background and the DM annihilation from a nearby clump. This panel is for DM annihilation into all flavor leptons with universal couplings. Three distances of the clump, as labelled in the plot, are considered.


(Submitted on 29 Nov 2017)
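
The requirement that the 1.4 TeV source be "close enough to the Earth" is, at heart, an energy-loss argument: TeV electrons cool quickly by synchrotron and inverse-Compton losses, so they cannot diffuse far. The order-of-magnitude sketch below uses typical Galactic values for the loss rate and the diffusion coefficient (assumptions of mine, not the parameters fitted in the paper); the paper's tighter ≲0.3 kpc bound comes from additionally demanding that the peak stay narrow.

# Order-of-magnitude propagation range of 1.4 TeV cosmic-ray electrons.
# The loss-rate coefficient b and the diffusion coefficient D(E) are assumed
# typical Galactic values, not the parameters fitted in the quoted paper.
import numpy as np

E = 1400.0                       # electron energy, GeV
b = 1.0e-16                      # dE/dt = -b * E^2, with b in GeV^-1 s^-1
D0, delta = 3.3e28, 0.33         # D(E) = D0 * (E / 4 GeV)^delta, in cm^2/s

t_cool = 1.0 / (b * E)                       # cooling time, s
D = D0 * (E / 4.0) ** delta                  # diffusion coefficient, cm^2/s
reach_kpc = np.sqrt(4.0 * D * t_cool) / 3.086e21
print(f"cooling time ~ {t_cool / 3.15e7:.1e} yr, diffusion range ~ {reach_kpc:.2f} kpc")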

update 12/06/2017:

If the spectral feature comes from dark matter, what can we learn from the former about the latter?

We performed a model-independent analysis of particle dark matter explanations of the peak in the DAMPE electron spectrum and whether they can simultaneously satisfy constraints from other DM searches. We assumed that the signal originated from DM annihilation in a nearby subhalo with an enhanced density of DM. To account for the inevitable energy loss, we assumed a DM mass of about 1.5 TeV, which is slightly greater than the location of the observed peak. Rather than working in a specific UV-complete model, we investigated all renormalizable interactions between SM leptons, DM of spin 0 and 1/2, and mediators of spin 0 and 1... 
We found that 10 of 20 possible combinations of operators are helicity or velocity suppressed and cannot explain the DAMPE signal. Of the remaining combinations, PandaX strongly constrains the unsuppressed scattering cross sections in three models and LEP strongly constrains the mass of the mediator in the other 7. The remaining candidates are (1) a spin 0 mediator coupled to scalar DM, (2) a spin 0 mediator pseudoscalar-coupled to fermionic DM, and (3) a spin 1 mediator vector-coupled to Dirac DM. LEP constraints on four-fermion operators force the mediator mass to be heavy, ~2 TeV, in all of these scenarios.
(Submitted on 30 Nov 2017 (v1), last revised 5 Dec 2017 (this version, v2))


O cosmic rays! from where art thou? 

Nearby sources may contribute to cosmic-ray electron (CRE) structures at high energies. Recently, the first DAMPE results on the CRE flux hinted at a narrow excess at energy ~1.4 TeV. We show that in general a spectral structure with a narrow width appears in two scenarios: I) "Spectrum broadening" for the continuous sources with a delta-function-like injection spectrum. In this scenario, a finite width can develop after propagation through the Galaxy, which can reveal the distance of the source. Well-motivated sources include mini-spikes and subhalos formed by dark matter (DM) particles χs which annihilate directly into e+e- pairs. II) "Phase-space shrinking" for burst-like sources with a power-law-like injection spectrum. The spectrum after propagation can shrink at a cooling-related cutoff energy and form a sharp spectral peak. The peak can be more prominent due to the energy-dependent diffusion. In this scenario, the width of the excess constrains both the power index and the distance of the source. Possible such sources are pulsar wind nebulae (PWNe) and supernova remnants (SNRs). We analyse the DAMPE excess and find that the continuous DM sources should be fairly close within ~0.3 kpc, and the annihilation cross sections are close to the thermal value. For the burst-like source, the narrow width of the excess suggests that the injection spectrum must be hard with power index significantly less than two, the distance is within ~(3-4) kpc, and the age of the source is ~0.16 Myr. In both scenarios, large anisotropies in the CRE flux are predicted. We identify possible candidates of mini-spike (PWN) sources in the current Fermi-LAT 3FGL (ATNF) catalog. The diffuse gamma-rays from these sources can be well below the Galactic diffuse gamma-ray backgrounds and less constrained by the Fermi-LAT data, if they are located in the low Galactic latitude regions... 
The current experiments have entered the multi-TeV region where the CRE spectrum is unlikely to be smooth. We have proposed generic scenarios of the origins of the CRE structures and analysed the nature of sources responsible for the possible DAMPE excess. The predictions of these scenarios are highly testable in the near future with more accurate data.
(Submitted on 30 Nov 2017)

Monday, October 16, 2017

Which neutron star merger with gold-plated nuclear waste mushroom at 130 million light years?

[A new exciting, another boring usual] astrophysical event?


The discovery, announced Monday at a news conference and in scientific reports written by some 3,500 researchers, solves a long-standing mystery about the origin of these heavy elements — which are found in everything from wedding rings to cellphones to nuclear weapons. 
It's also a dramatic demonstration of how astrophysics is being transformed by humanity's newfound ability to detect gravitational waves, ripples in the fabric of space-time that are created when massive objects spin around each other and finally collide. 
"It's so beautiful. It's so beautiful it makes me want to cry. It's the fulfillment of dozens, hundreds, thousands of people's efforts, but it's also the fulfillment of an idea suddenly becoming real," says Peter Saulson of Syracuse University, who has spent more than three decades working on the detection of gravitational waves...
What all the images showed was a brand-new point of light that started out blueish and then faded to red. This didn't completely match what theorists thought colliding neutron stars should look like — but it was all close enough that Daniel Kasen, a theoretical astrophysicist at the University of California, Berkeley, found the whole experience a little weird. 
"Even though this was an event that had never been seen before in human history, what it looked like was deeply familiar because it resembled very closely the predictions we had been making," Kasen says. "Before these observations, what happened when two neutron stars merged was basically just a figment of theorists' imaginations and their computer simulations." 
He spent late nights watching the data come in and says the colliding stars spewed out a big cloud of debris. 
"That debris is strange stuff. It's gold and platinum, but it's mixed in with what you'd call just regular radioactive waste, and there's this big radioactive waste cloud that just starts mushrooming out from the merger site," Kasen says. "It starts out small, about the size of a small city, but it's moving so fast — a few tenths of the speed of light — that after a day it's a cloud the size of the solar system." 
According to his estimates, this neutron star collision produced around 200 Earth masses of pure gold, and maybe 500 Earth masses of platinum. "It's a ridiculously huge amount on human scales," Kasen says...
October 16, 2017, 10:01 AM ET
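
Kasen's numbers are easy to check: debris moving at, say, two tenths of the speed of light covers about 0.2 × 3×10⁵ km/s × 86 400 s ≈ 5×10⁹ km in a day, roughly 35 astronomical units, indeed comparable to the size of the planetary solar system.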


LIGO, with the world’s first two gravitational observatories, detected the waves from two merging neutron stars, 130 million light years from Earth, on August 17th... VIRGO, with the third detector, allows scientists to triangulate and determine roughly where mergers have occurred. They saw only a very weak signal, but that was extremely important, because it told the scientists that the merger must have occurred in a small region of the sky where VIRGO has a relative blind spot...
The merger was detected for more than a full minute… to be compared with black holes whose mergers can be detected for less than a second. It’s not exactly clear yet what happened at the end, however! Did the merged neutron stars form a black hole or a neutron star? The jury is out.
If there’s anything disappointing about this news, it’s this: almost everything that was observed by all these different experiments was predicted in advance. Sometimes it’s more important and useful when some of your predictions fail completely, because then you realize how much you have to learn. Apparently our understanding of gravity, of neutron stars, and of their mergers, and of all sorts of sources of electromagnetic radiation that are produced in those mergers, is even better than we might have thought. But fortunately there are a few new puzzles. The X-rays were late; the gamma rays were dim…
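
The "minute versus second" contrast above is captured by the leading-order (Newtonian) chirp formula for the time a binary spends above a gravitational-wave frequency f, τ = (5/256) (GM_c/c³)^(−5/3) (πf)^(−8/3), where M_c is the chirp mass. The sketch below evaluates it for a GW170817-like and a GW150914-like chirp mass; the chirp-mass values and the 25 Hz entry frequency are assumptions for illustration.

# Newtonian (leading-order) time to coalescence from a given GW frequency:
# tau = (5/256) * (G*Mc/c^3)**(-5/3) * (pi*f)**(-8/3).
# Chirp masses and the 25 Hz entry frequency are illustrative assumptions.
import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

def time_in_band(chirp_mass_msun, f_low_hz):
    tm = G * chirp_mass_msun * Msun / c**3            # chirp mass expressed in seconds
    return (5.0 / 256.0) * tm ** (-5.0 / 3.0) * (np.pi * f_low_hz) ** (-8.0 / 3.0)

print(f"neutron-star binary (Mc ~ 1.2 Msun): {time_in_band(1.2, 25.0):6.1f} s in band")
print(f"black-hole binary   (Mc ~  28 Msun): {time_in_band(28.0, 25.0):6.2f} s in band")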



Thursday, August 31, 2017

Ondes gravitationnelles et résonances d'orages africains [Gravitational waves and African thunderstorm resonances] / Gravitational wave signals and unexpectedly strong Schumann resonance transients correlated noise

Raphaël Enthoven, Jacques Perry-Salkow

(The validation of) A great discovery requires a genuinely independent analysis of data


To date, the LIGO collaboration has detected three gravitational wave (GW) events appearing in both its Hanford and Livingston detectors. In this article we reexamine the LIGO data with regard to correlations between the two detectors. With special focus on GW150914, we report correlations in the detector noise which, at the time of the event, happen to be maximized for the same time lag as that found for the event itself. Specifically, we analyze correlations in the calibration lines in the vicinity of 35 Hz as well as the residual noise in the data after subtraction of the best-fit theoretical templates. The residual noise for the other two events, GW151226 and GW170104, exhibits similar behavior. A clear distinction between signal and noise therefore remains to be established in order to determine the contribution of gravitational waves to the detected signals

(Submitted on 13 Jun 2017 (v1), last revised 9 Aug 2017 (this version, v2))
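
Stripped of the LIGO specifics, the operation at the heart of this claim is a lag scan: cross-correlate the two detectors' residual time series for a range of relative time shifts and see where the correlation peaks. Here is a generic sketch on synthetic data (the common injected component, sampling rate and 7 ms shift are assumptions used only to show the method, not anything derived from LIGO data).

# Generic lag scan on synthetic data: two noisy records share a common
# band-limited component, one delayed by 7 ms; find the lag that maximizes
# their correlation. Not LIGO's nor Creswell et al.'s actual pipeline.
import numpy as np

fs = 4096                                    # sample rate, Hz
rng = np.random.default_rng(1)
n = 4 * fs
common = np.convolve(rng.normal(size=n), np.ones(8) / 8.0, mode="same")   # shared component
shift = int(round(0.007 * fs))               # 7 ms in samples
h = common + 0.5 * rng.normal(size=n)                     # detector 1
l = np.roll(common, shift) + 0.5 * rng.normal(size=n)     # detector 2, delayed copy

lags = np.arange(-80, 81)
corr = [np.corrcoef(h, np.roll(l, -k))[0, 1] for k in lags]
best = lags[int(np.argmax(corr))]
print(f"correlation maximized at a lag of {best / fs * 1e3:.2f} ms")      # ~7 ms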


A debate about how to sift the astrophysical wheat from the terrestrial chaff


Recent claims in a preprint by Creswell et al. of puzzling correlations in LIGO data have broadened interest in understanding the publicly available LIGO data around the times of the detected gravitational-wave events. We see that the features presented in Creswell et al. arose from misunderstandings of public data products. The LIGO Scientific Collaboration and Virgo Collaboration (LVC) have full confidence in our published results, and we are preparing a paper in which we will provide more details about LIGO detector noise properties and the data analysis techniques used by the LVC to detect gravitational-wave signals and infer their waveforms.

News from LIGO Scientific Collaboration
undated (between 7 July and 1 August 2017)
In our view, if we are to conclude reliably that this signal is due to a genuine astrophysical event, apart from chance-correlations, there should be no correlation between the "residual" time records from LIGO's two detectors in Hanford and Livingston. The residual records are defined as the difference between the cleaned records and the best GW template found by LIGO. Residual records should thus be dominated by noise, and they should show no correlations between Hanford and Livingston. Our investigation revealed that these residuals are, in fact, strongly correlated. Moreover, the time delay for these correlations coincides with the 6.9 ms time delay found for the putative GW signal itself...
During a two-week period at the beginning of August, we had a number of "unofficial" seminars and informal discussions with colleagues participating in the LIGO collaboration... Given the media hype surrounding our recent publication, these meetings began with some measure of scepticism on both sides. The atmosphere improved dramatically as our meetings progressed. 
The focus of these meetings was on the detailed presentation and lively critical discussion of the data analysis methods adopted by the two groups. While there was unofficial agreement on a number of important topics - such as the desirability of better public access to LIGO data and codes - we emphasize that no consensus view emerged on fundamental issues related to data analysis and interpretation.
In view of unsubstantiated claims of errors in our calculations, we appreciated the opportunity to go through our respective codes together - line by line when necessary - until agreement was reached. This check did not lead to revisions in the results of calculations reported in versions 1 and 2 of arXiv:1706.04191 or in the version of our paper published in JCAP. It did result in changes to the codes used by our visitors.
There are a number of in-principle issues on which we disagree with LIGO's approach. Given the importance of LIGO's claims, we believe that it is essential to establish the correlation between Hanford and Livingston signals and to determine the shape of these signals without employing templates. Before such comparisons can be made, the quality of data cleaning (which necessarily includes the removal of non-Gaussian and non-stationary instrumental "foreground" effects) must be demonstrated by showing that the residuals consist only of uncorrelated Gaussian noise. We believe that suitable cleaning is a mandatory prerequisite for any meaningful comparisons with specific astrophysical models of GW events. This is why we are concerned, for example, about the pronounced "phase lock" in the LIGO data.
James Creswell, Sebastian von Hausegger, Andrew D. Jackson, Hao Liu, Pavel Naselsky
August 21, 2017


Disentangling the man-made detectors from the Earth-shaped one


As the LIGO detectors are extremely sensitive instruments they are prone to many sources of noise that need to be identified and removed from the data. An impressive amount of effort was undertaken by the LIGO collaboration to ensure that the GW150914 signal was really the first detection of gravitational waves with all transient noise backgrounds being under good control [4, 5, 6]. 

It was claimed, however, in a recent publication [7] that the residual noise of the GW150914 event in LIGO’s two widely separated detectors exhibits correlations that are maximized for the same 7 ms time lag as that found for the gravitational-wave signal itself. Thus questions on the integrity and reliability of the gravitational-wave detection were raised and informally discussed [8, 9]. It seems that at the present time it is not quite clear whether there is something unexplained in LIGO noise that may be of genuine interest. It was argued that even assuming that the claims of [7] about correlated noise are true, it would not affect the 5-sigma confidence associated with GW150914 [8]. Nevertheless, in this case it will be interesting to find out the origin of this correlated noise.
Correlated magnetic fields from Schumann resonances constitute a well-known potential source of correlated noise in gravitational-wave detectors [11, 12, 13]... Schumann resonances are global electromagnetic resonances in the Earth-ionosphere cavity [14, 15]. The electromagnetic waves in the extremely low frequency (ELF) range (3 Hz to 3 kHz) are mostly confined in this spherical cavity and their propagation is characterized by very low attenuation which in the 5 Hz to 60 Hz frequency range is of the order of 0.5-1 dB/Mm. Schumann resonances are eigenfrequencies of the Earth-ionosphere cavity. They are constantly excited by lightning discharges around the globe. While individual lightning signals below 100 Hz are very weak, thanks to the very low attenuation, related ELF electromagnetic waves can be propagated a number of times around the globe, constructively interfere for wavelengths comparable with the Earth’s circumference and create standing waves in the cavity.

Note that there exists some day-night variation of the resonance frequencies, and some catastrophic events, like a nuclear explosion, simultaneously lower all the resonance frequencies by about 0.5 Hz due to lowering of the effective ionosphere height [16]. Interestingly, frequency decrease of comparable magnitude of the first Schumann resonance, caused by the extremely intense cosmic gamma-ray flare, was reported in [17]. Usually eight distinct Schumann resonances are reliably detected in the frequency range from 7 Hz to 52 Hz. However five more were detected thanks to particularly intense lightning discharges, thus extending the frequency range up to 90 Hz [18].
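
A quick numerical aside (mine, not the quoted paper's): for an ideal, lossless Earth-ionosphere cavity the eigenfrequencies would be f_n = c/(2πa) √(n(n+1)), with a the Earth's radius; the real, lossy cavity sits noticeably below these values, at the observed ≈7.8, 14.1, 20.3 Hz and so on.

# Ideal (lossless) Schumann eigenfrequencies, f_n = c / (2*pi*a) * sqrt(n(n+1)).
# The real, lossy Earth-ionosphere cavity resonates lower (~7.8, 14.1, 20.3 Hz, ...).
import numpy as np

c = 2.998e8            # speed of light, m/s
a = 6.371e6            # mean Earth radius, m
for n in range(1, 6):
    print(f"n = {n}: ideal f = {c / (2 * np.pi * a) * np.sqrt(n * (n + 1)):4.1f} Hz")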

...  For short-duration gravitational-wave transients, like the three gravitational-wave signals observed by LIGO, Schumann resonances are not considered significant noise sources because the magnetic field amplitudes induced by even strong remote lightning strikes usually are of the order of a picotesla, too small to produce strong signals in the LIGO gravitational-wave channel [4].

Interestingly enough, the Schumann resonances make the Earth a natural gravitational-wave detector, albeit not very sensitive [20]. As the Earth is positively charged with respect to the ionosphere, a static electric field, the so-called fair-weather field, is present in the Earth-ionosphere cavity. In the presence of this background electric field, an infalling gravitational wave of suitable frequency resonantly excites the Schumann eigenmodes, most effectively the second Schumann resonance [20]. Unfortunately, it is not practical to turn Earth into a gravitational-wave detector. Because of the weakness of the fair-weather field (about 100 V/m) and the low value of the quality factor (from 2 to 6) of the Earth-ionosphere resonant cavity, the sensitivity of such a detector will be many orders of magnitude smaller than the sensitivity of the modern gravitational-wave detectors.

However, a recent study of short-duration magnetic-field transients that were coincident in low-noise magnetometers in Poland and Colorado revealed that there were about 2.3 coincident events per day where the amplitudes of the pulses exceeded 200 pT, strong enough to induce a gravitational-wave-like signal in the LIGO gravitational-wave channel of the same amplitude as in the GW150914 event [21]...

The main source of the Schumann ELF waves is negative cloud-to-ground lightning discharges with a typical charge moment change of about 6 C km. On Earth, storm cells, mostly in the tropics, generate about 50 such discharges per second.

The so-called Q-bursts are stronger, positive cloud-to-ground atmospheric discharges with charge moment changes of the order of 1000 C km. ELF pulses excited by Q-bursts propagate around the world. At very far distances only the low-frequency components of the ELF pulse will be clearly visible, because the higher-frequency components experience more attenuation than the lower-frequency components...

In [22] Earth’s lightning hotspots are revealed in detail using 16 years of space-based Lightning Imaging Sensor observations. Information about locations of these lightning hotspots allows us to calculate time lags between arrivals of the ELF transients from these locations to the LIGO-Livingston (latitude 30.563°, longitude −90.774°) and LIGO-Hanford (latitude 46.455°, longitude −119.408°) gravitational-wave detectors...

We have taken Earth’s lightning hotspots from [22] with lightning flash rate densities more than about 100 fl km⁻² yr⁻¹ and calculated the expected time lags between ELF transient arrivals from these locations to the LIGO detectors... Note that the observed group velocity for short ELF field transients depends on the upper frequency limit of the receiver [21]. For the magnetometers used in [21] this frequency limit was 300 Hz corresponding to the quoted group velocity of about 0.88c. For the LIGO detectors the coupling of magnetic field to differential arm motion decreases by an order of magnitude for 30 Hz compared to 10 Hz [4]. Thus for the LIGO detectors, as ELF transient receivers, the more appropriate upper frequency limit is about 30 Hz, not 300 Hz. According to (2), low frequencies propagate with smaller velocities 0.75c-0.8c. Therefore the inferred time lags in Table 1 might be underestimated by about 15%...

If the strong lightnings and Q-bursts indeed contribute to the LIGO detectors' correlated noise then the distribution of lightning hotspots around the globe can lead to some regularities in this correlated noise. Namely, extremely low frequency transients due to lightnings in Africa will be characterized by 5-7 ms time lags between the LIGO-Hanford and LIGO-Livingston detectors. Asian lightnings lead to time lags which have about the same magnitude but the opposite sign. Lightnings in North and South America should lead to positive time lags of about 11-13 ms, greater than the light propagation time between the LIGO-Hanford and LIGO-Livingston detectors. 

(Submitted on 27 Jul 2017)
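
The time lags quoted in the last paragraph are easy to check with a little spherical trigonometry: take the great-circle distance from a storm region to each LIGO site (the site coordinates are the ones quoted above), divide by the ELF group velocity, and subtract. In the sketch below the 0.8c group velocity and the rough hotspot coordinates (Congo basin, Himalayan foothills) are my assumptions for illustration; the African case indeed lands in the 5-7 ms range, with the Asian one of comparable size and opposite sign.

# Expected Hanford-minus-Livingston arrival-time lag of an ELF transient from
# a given lightning region, assuming great-circle propagation at 0.8c.
# The hotspot coordinates are rough illustrative values.
import numpy as np

R = 6.371e6                                  # Earth radius, m
v = 0.8 * 2.998e8                            # assumed ELF group velocity, m/s

def great_circle(lat1, lon1, lat2, lon2):
    p1, p2, dl = np.radians(lat1), np.radians(lat2), np.radians(lon2 - lon1)
    cosang = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(dl)
    return R * np.arccos(np.clip(cosang, -1.0, 1.0))

hanford = (46.455, -119.408)
livingston = (30.563, -90.774)
hotspots = {"Congo basin (approx.)": (0.0, 23.0),
            "Himalayan foothills (approx.)": (33.0, 72.0)}

for name, (lat, lon) in hotspots.items():
    lag = (great_circle(lat, lon, *hanford) - great_circle(lat, lon, *livingston)) / v
    print(f"{name}: lag = {lag * 1e3:+.1f} ms")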

Wednesday, February 22, 2017

{Bohmian mechanics, is} [subtle, malicious] (?)

Here is my post, consisting as usual of quotes from scientific articles fully available online, with selected parts underlined (or emphasized in bold) in order to sketch a draft response to the question in its title. This time, I was mostly inspired by reading this post at another blog named Elliptic Composability.


Inconclusive Bohmian positions in the macroscopic way ...
Bohmian mechanics differs deeply from standard quantum mechanics. In particular, in Bohmian mechanics particles, here called Bohmian particles, follow continuous trajectories; hence in Bohmian mechanics there is a natural concept of time-correlation for particles’ positions. This led M. Correggi and G. Morchio [1] and more recently Kiukas and Werner [2] to conclude that Bohmian mechanics “can’t violate any Bell inequality”, hence is disproved by experiments. However, the Bohmian community maintains its claim that Bohmian mechanics makes the same predictions as standard quantum mechanics (at least as long as only position measurements are considered, arguing that, at the end of the day, all measurements result in position measurement, e.g. pointer’s positions).  
Here we clarify this debate. First, we recall why two-time position correlation is in tension with Bell inequality violation. Next, we show that this is actually not at odds with standard quantum mechanics because of some subtleties. For this purpose we do not go for full generality, but illustrate our point on an explicit and rather simple example based on a two-particle interferometer, partly already experimentally demonstrated and certainly entirely experimentally feasible (with photons, but also feasible at the cost of additional technical complications with massive particles). The subtleties are illustrated by explicitly coupling the particles to macroscopic systems, called pointers, that measure the particles’ positions. Finally, we raise questions about Bohmian positions, about macroscopic systems and about the large difference in appreciation of Bohmian mechanics by the philosophers’ and physicists’ communities... 
Part of the attraction of Bohmian mechanics lies then in the assumption that • Assumption H : Position measurements merely reveal in which (spatially separated and non-overlapping) mode the Bohmian particle actually is.   
A Bohmian particle and its pilot wave arrive at a Beam-Splitter (BS) from the left in mode “in”. The pilot wave emerges both in modes 1 and 2, as the quantum state in standard quantum theory. However, the Bohmian particle emerges either in mode 1 or in mode 2, depending on its precise initial position. As Bohmian trajectories can’t cross each other, if the initial position is in the lower half of mode “in”, then the Bohmian particle exits the BS in mode 1, else in mode 2.

Two Bohmian particles are spread over 4 modes. The quantum state is entangled... hence the two particles are either in modes 1 and 4, or in modes 2 and 3. Alice applies a phase x on mode 1 and Bob a phase y on mode 4. Accordingly, after the two beam-splitters, the correlations between the detectors allow Alice and Bob to violate a Bell inequality... Alice's first “measurement”, with phase x, can be undone because in Bohmian mechanics there is no collapse of the wavefunction. Hence, after having applied the phase −x after her second beam-splitter, Alice can perform a second “measurement” with phase x′.
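The quoted text only states that the detector correlations violate a Bell inequality. Assuming the standard two-particle interferometer correlation E(x, y) = cos(x + y) between the ±1-valued outcomes (my assumption, for illustration), a short sketch shows phase settings reaching the Tsirelson bound 2√2 > 2:

```python
import math

# Assumed correlation law for the entangled two-particle interferometer:
# E(x, y) = cos(x + y) for the ±1-valued detector outcomes.

def E(x, y):
    return math.cos(x + y)

def chsh(x0, x1, y0, y1):
    """CHSH combination S = E(x0,y0) + E(x0,y1) + E(x1,y0) - E(x1,y1)."""
    return E(x0, y0) + E(x0, y1) + E(x1, y0) - E(x1, y1)

# Phase settings that saturate the quantum (Tsirelson) bound 2*sqrt(2),
# well above the local-hidden-variable bound of 2.
x0, x1 = 0.0, math.pi / 2
y0, y1 = -math.pi / 4, math.pi / 4
print(chsh(x0, x1, y0, y1))   # ≈ 2.828 > 2
```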

... There is no doubt that according to Bohmian mechanics there is a well-defined joint probability distribution for Alice's particle at two times and Bob's particle: P(rA, r′A, rB|x, x′, y), where rA denotes Alice's particle after the first beam-splitter and r′A after the third beam-splitter of {the last figure above}... But here comes the puzzle. According to Assumption H, if rA ∈ “1”, then any position measurement performed by Alice in-between the first and second beam-splitter would necessarily result in a = 1. Similarly, rA ∈ “2” implies a = 2. And so on: Alice's position measurement after the third beam-splitter is determined by r′A, and Bob's measurement is determined by rB. Hence, it seems that one obtains a joint probability distribution for both of Alice's measurement results and for Bob's: P(a, a′, b|x, x′, y). 
But such a joint probability distribution implies that Alice doesn’t have to make any choice (she merely makes both choices, one after the other), and in such a situation there can’t be any Bell inequality violation.
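The step from such a joint distribution to “no Bell violation” is a Fine-type argument; the following sketch (with Bob given two settings, as a CHSH test requires) simply enumerates every deterministic joint assignment of outcomes and confirms that the CHSH combination never exceeds the local bound of 2:

```python
import itertools

# If outcomes a (for setting x) and a' (for setting x') are jointly
# defined in every run, together with Bob's outcomes b0, b1 for his two
# settings, then the CHSH combination evaluated run by run is at most 2.

best = 0.0
for a, a_prime, b0, b1 in itertools.product([-1, 1], repeat=4):
    s = a * b0 + a * b1 + a_prime * b0 - a_prime * b1
    best = max(best, s)
print(best)   # 2: the local bound, so no violation is possible
```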
... Let's have a closer look at the probability distribution that lies at the bottom of our puzzle: P(rA, r′A, rB|x, x′, y)... now comes the catch... the Bohmian particles' positions are assumed to be “hidden”... they have to be hidden in order to avoid signalling in Bohmian mechanics. ... this implies that Bohmian particles are postulated to exist “only” to immediately add that they are ultimately not fully accessible... Consequently, defining a joint probability for the measurement outcomes a, a′ and b in the natural way: 
P(a, a′, b|x, x′, y) ≡ P(rA ∈ “a”, r′A ∈ “a′”, rB ∈ “b” | x, x′, y)    (10) 
can be done mathematically, but can't have a physical meaning, as P(a, a′, b|x, x′, y) would be signaling.
In summary, it is the identification (10) that confused the authors of [1, 2] and led them to wrongly conclude that Bohmian mechanics can’t predict violations of Bell inequalities in experiments involving only position measurements. Note that the identification (10) follows from the assumption H, hence assumption H is wrong. Every introduction to Bohmian mechanics should emphasize this. Indeed, assumption H is very natural and appealing, but wrong and confusing.

To elaborate on this, let's add an explicit position measurement after the first beam-splitter on Alice's side. The fact is that, both according to standard quantum theory and according to Bohmian mechanics, this position measurement perturbs the quantum state (hence the pilot wave) in such a way that the second measurement, labelled x′ in Fig. 4, no longer shares the correlation (9) with the first measurement, see [4, 5]...

From all we have seen so far, one should, first of all, recognize that Bohmian mechanics is deeply consistent and provides a nice and explicit existence proof of a deterministic nonlocal hidden-variable model. Moreover, the ontology of Bohmian mechanics is pretty straightforward: the set of Bohmian positions is the real stuff. This is especially attractive to philosophers. Understandably so. But what about physicists mostly interested in research? What new physics did Bohmian mechanics teach us in the last 60 years? Here, I believe it fair to answer: not enough! Understandably disappointing... 
This is unfortunate, because Bohmian mechanics could inspire courageous ideas to test quantum physics. 
Nicolas Gisin (Submitted on 2 Sep 2015)


Probably surrealistic Bohm Trajectories in the microscopic world?

... we maintain that Bohmian Mechanics [BM] is not needed to have the Schrödinger equation "embedded into a physical theory". Standard quantum theory has already clarified the significance of Schrödinger's wave function as a tool used by theoreticians to arrive at probabilistic predictions. It is quite unnecessary, and indeed dangerous, to attribute any additional "real" meaning to the psi-function. The semantic difference between "inconsistent" and "surrealistic" is not the issue. It is the purpose of our paper to show clearly that the interpretation of the Bohm trajectory - as the real retrodicted history of the atom observed on the screen - is implausible, because this trajectory can be macroscopically at variance with the detected, actual way through the interferometer. And yes, we do have a framework to talk about path detection; it is based upon the local interaction of the atom with the photons inside a resonator, described by standard quantum theory with its short-range interactions only. Perhaps it is true that it is "generally conceded that... [a measurement]... requires a... device which is more or less macroscopic," but our paper disproves this notion, because it clearly shows that one degree of freedom per detector is quite sufficient. That is the progress represented by the quantum-optical which-way detectors. And certainly, it is irrelevant for all practical purposes whether "somebody looks" or not; what matters is only that the which-way information is stored somewhere, so that the path through the interferometer can be known, in principle.

Nowhere did we claim that BM makes predictions that differ from those of standard quantum mechanics. The whole point of the experimentum crucis is to demonstrate that one cannot attribute reality to the Bohm trajectories, where reality is meant in the phenomenological sense. One must not forget that physics is an experimental science dealing with phenomena. If the trajectories of BM have no relation to the phenomena, in particular to the detected path of the particle, then their reality remains metaphysical, just like the reality of the ether of Maxwellian electrodynamics. Of course, the "very existence" of the Bohm trajectory is a mathematical statement to which nobody objects. We do not deny the possibility that some imaginary parameters possess a "hidden reality" endowed with the assumed power of exerting "gespenstische Fernwirkungen" ("spooky actions at a distance", Einstein). But a physical theory should carefully avoid such concepts of no phenomenological consequence.  
B.-G. Englert, M. O. Scully, G. Süssmann, and H. Walther
received October 12, 1993
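To see why a single degree of freedom per detector can suffice to store which-way information, here is a toy calculation of my own (not the micromaser scheme of Englert, Scully, Süssmann and Walther): the path through the interferometer is one qubit, the detector is a second qubit, and a CNOT writes the path into the detector; tracing out the detector removes the path coherence, whether or not "somebody looks".

```python
import numpy as np

# Toy which-way detector with a single degree of freedom (one qubit).
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Path in an equal superposition of the two arms, detector in |0>.
path = (ket0 + ket1) / np.sqrt(2)
psi = np.kron(path, ket0)          # ordering: path qubit, then detector qubit

# CNOT: flip the detector qubit iff the path qubit is |1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
psi = CNOT @ psi                   # (|0,0> + |1,1>)/sqrt(2)

# Reduced density matrix of the path qubit after tracing out the detector.
rho = np.outer(psi, psi).reshape(2, 2, 2, 2)
rho_path = np.trace(rho, axis1=1, axis2=3)
print(rho_path)
# Off-diagonal (coherence) terms are zero: the interference pattern is
# gone as soon as the which-way information is stored in that single
# degree of freedom, independently of anyone reading it out.
```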