On the total amount of non-baryonic dark matter in the Universe

by Johan Hidding
for Actueel Onderzoek


Today most professional astronomers are convinced that a large portion of the gravitationally active substance in the Universe (henceforth: matter) is not composed of the stuff you and I are made of (baryonic matter). Observations in the 1930s by Zwicky, and later in the 20th century by others, suggested there is more to our galaxy and to others far away than meets the eye. Up to ten times the amount of matter that can be accounted for by the stars in a galaxy is needed to explain its dynamical properties (see the other pages of this class). Thus dark matter was born. However, some people oppose this idea. For example, the followers of Milgrom's MOND think it is not the amount of matter but the dynamics that causes the gap in our bookkeeping. It would be interesting to see whether there is more independent evidence for the existence of dark matter. I try to find it in early cosmology.


Einstein and de Sitter discussing life, the Universe and everything
Albert Einstein and Willem de Sitter discussing the Universe. In 1932 they published a joint paper on the Einstein-de Sitter universe, a model with flat geometry containing matter as the only significant substance.


Perhaps the best way to explain the Big Bang is by giving a chronological overview of the discoveries that shaped the way we think about the universe today.

1915: Albert Einstein publishes his theory of General Relativity. Knowing that his theory would not yield a static solution for the Universe as a whole, he introduces the cosmological constant Λ (in 1917) to solve this problem.
1922: Alexander Friedmann finds a solution to GR predicting an expanding Universe.
1927: Based on GR, Georges Lemaître predicts that the Universe must have started small (a "primaeval atom") and expanded outwards. Because Lemaître was a priest, the theory was not warmly received in the scientific world.
original plot of Hubble expansion
Original plot of the Hubble expansion as published by Hubble in 1929. The errors in the distances are very large, and the distances themselves were later shown to be systematically underestimated by a factor of 7.
1929: Edwin Hubble publishes his paper titled "A Relation between Distance and Radial Velocity among Extra-Galactic Nebulae". The paper showed that almost all galaxies recede from ours with velocities proportional to their distances, implying a cosmological expansion. Einstein realises his mistake in adding Λ to his theory, later calling it the biggest blunder of his career.
1948: George Gamow and co-workers predict the presence of a background radiation produced shortly after the beginning; Robert Dicke later arrives at the same prediction independently.
1948: Fred Hoyle, Thomas Gold and Hermann Bondi publish the Steady State theory. They resented the idea of a Universe with a finite age and, along with many other professional cosmologists, did not believe in what Hoyle whimsically called the "Big Bang".
1965: Arno Penzias and Robert Wilson serendipitously discover the CMB. Dicke et al. show that the CMB could be a relic of the Big Bang as predicted by Gamow.
CMB image produced with the COBE satellite; CMB spectrum also by COBE
CMB map and black body spectrum measured with the COBE satellite. The spectrum shows theory and data, but the error margins fall within the thickness of the line. (Images from NASA)
1990: First data from the COBE satellite are published, showing that the CMB is isotropic to 1 part in 10,000 and has a perfect black body spectrum, corresponding to a temperature of 2.7 K. This is the definitive proof (as far as science goes) of the Big Bang theory. There is no other black body spectrum in nature or in the laboratory as perfect as the CMB's; it could only have been produced in the Universe back in its infancy.
1998: Big Bang nucleosynthesis and large-scale structure point to missing energy in the Universe. Riess et al. and Perlmutter et al. simultaneously and independently publish supernova data showing that the Universe's expansion is accelerating. Dark energy is added to the inventory of cosmological substances to explain the effect. Ironically, this puts Λ back into Einstein's equations.
2003: WMAP has its first data release, setting the cosmological parameters on firm ground.

Basics of cosmology

For completeness, I'll give an introduction to the basics of cosmology. It is not meant to be comprehensive, so don't worry if you don't understand all of it; it is meant to give a feeling for what I'm talking about later on. So how does this cosmology, developed in the previous century, work? It all starts with Einstein's field equation (again, don't panic):

\[G^{\mu\nu} = -\frac{8\pi G}{c^4}T^{\mu\nu}\]

The left side describes the curvature of space, the right side the energy content. It basically boils down to:

”Space tells matter how to move. Matter tells space how to curve.”
John Archibald Wheeler

A major difference with Newtonian gravity is that, as well as matter, pressure also gravitates. The field equation is actually ten equations crammed into one tensor, so to solve them one needs simplifying assumptions. The assumption we make is that there is no preferred location in the Universe: it is the same everywhere (homogeneity), and everywhere you look it is the same (isotropy). We are not special.

These two assumptions together are known as the cosmological principle. Using it, only three solutions remain for the curvature of the whole Universe; these are expressed in the Robertson-Walker metric.

Flat, positive and negative curvature
From bottom to top: flat, spherical and hyperbolic curvature of a 2D plane.

A metric tells you how to measure a distance in a coordinate system. For example, in 2D flat (Euclidean) space the distance between two points is given by the Pythagorean theorem:

\[{\rm d}s^2 = {\rm d}x^2 + {\rm d}y^2\]

Adding a third spatial dimension and a fourth (time) dimension, we get the four-dimensional space-time of special relativity, in which events are separated by the so-called Minkowski metric:

\[{\rm d}s^2 = -c^2 {\rm d}t^2 + {\rm d}x^2 + {\rm d}y^2 + {\rm d}z^2\]

The three forms of the Robertson-Walker metric correspond to three different geometries of space: Euclidean (flat or no curvature, κ = 0), spherical (positive curvature, κ = +1) and hyperbolic (negative curvature, κ = −1).

\[{\rm d}s^2 = -c^2{\rm d}t^2 + a(t)^2[{\rm d}r^2 + S_k(r)^2{\rm d}\Omega^2]\]


\[S_k(r) = \left\{\begin{array}{ll} R_0\ \sin(r/R_0) & \kappa = +1\\r & \kappa = 0\\ R_0\ \sinh(r/R_0) & \kappa = -1 \end{array} \right.\]

R0 is the radius of curvature of the Universe at present and a(t) is the scale factor. The scale factor is defined to be 1 today and is thought to have been near zero at the moment of the Big Bang. The expansion of the Universe is often expressed through a as a function of time. We see the galaxies receding from us, so we can measure the rate at which the Universe expands:

\[\frac{\dot{a}}{a} = H_0 \simeq 72\ {\rm km\ s^{-1}\ Mpc^{-1}}\]

H0 is the Hubble constant, first measured by Edwin Hubble in 1929. What we'd like to know is how a evolves with time. This is expressed by the Friedmann equation, named after Alexander Friedmann, who first derived it from Einstein's field equations.

\[\left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3c^2}\epsilon(t) - \frac{\kappa c^2}{R_0^2} \frac{1}{a(t)^2} + \frac{\Lambda}{3}\]

Here ε is the energy density of the Universe and Λ is the cosmological constant. The expression is not dissimilar to the field equation of GR: on the left side is the rate of expansion of the Universe, on the right side its energy content. Pressure forces are also hidden in this equation. Now it is important to realise that different energy components of the Universe have different pressures. This is parametrised by the value of w in the equation of state of each component.

\[P = w\epsilon\]
Radiation: w = 1/3
Matter: w = 0
Dark energy: -1 ≤ w < -1/3
Cosmological constant: w = -1

A universe with a certain composition will behave differently from a universe with another composition. Most important is still the total density. If the density of the Universe is higher than a critical density, the Universe is closed, meaning it will eventually collapse in a Big Crunch. If the density is lower, there is not enough gravity to hold the Universe together and it will expand forever. If the density equals the critical density, the Universe has flat curvature. The next image shows different solutions to the Friedmann equation illustrating these (and some more exotic) scenarios.

scale factor plotted against time
Scale factor plotted against time, for different scenarios.
density of different components plotted against time for the concordance model
Density of the components plotted against time in the case of a concordance universe. The time axis is given in units of the Hubble time, H₀⁻¹ ∼ 14 Gyr.
We can rewrite the Friedmann equation into a friendlier form using Ω values. These are the densities of each component normalised to the critical density ρc.
\[\Omega \equiv \frac{\rho}{\rho_c} = \frac{8\pi G \rho}{3 H^2}\]
\[\Omega_i = \Omega_{i, 0}\ a^{-3(1+w_i)}\]
\[H^2(a) = H_0^2 \left[\Omega_v + \Omega_m a^{-3} + \Omega_r a^{-4} - (\Omega - 1)a^{-2} \right]\]
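As a quick numerical sketch of this last equation, we can evaluate H(a) directly. The density parameters below are illustrative concordance-like values of my own choosing, not measurements quoted in this text:

```python
import math

H0 = 72.0  # km/s/Mpc, the value used in the text

def hubble(a, omega_v=0.7, omega_m=0.3, omega_r=8e-5):
    """H(a) from the Friedmann equation in Omega form."""
    omega = omega_v + omega_m + omega_r          # total density parameter
    h2 = H0**2 * (omega_v
                  + omega_m * a**-3
                  + omega_r * a**-4
                  - (omega - 1.0) * a**-2)       # curvature term
    return math.sqrt(h2)

print(hubble(1.0))   # today: the curvature term cancels and we recover H0
print(hubble(0.5))   # when the Universe was half its size it expanded faster
```

Note that at a = 1 the curvature term cancels against the sum of the Ω's, so H(1) = H0 regardless of the composition.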

Big Bang

History of the Universe (credit: Fermilab)
History of the Universe in one concise picture. (With kind permission from Fermilab)

The picture on the left gives a nice description of events happening at the time shortly after the Big Bang.

Quantum gravity epoch
This is the first Planck time (10⁻⁴³ s). Within the framework of quantum mechanics it is meaningless to say anything about this moment.
Grand unification epoch
In this epoch the electroweak and strong forces still behave as one GUT force. The precise physics of this force is yet unknown, but many theories have been proposed.
Electroweak epoch
The strong force has split off because the Universe cooled down due to its expansion. The electromagnetic force and the weak force are still unified. This is where the Standard Model of physics rules, and physicists know pretty well what went on. Quarks are still free particles; this ends after 10⁻¹⁰ s, when they form protons, anti-protons and (anti-)neutrons. At some point there must be an asymmetry between the physics ruling normal particles and that ruling anti-particles, allowing particles to survive where anti-particles do less so. Otherwise they would annihilate each other and nothing would be left but radiation. From the baryon-to-photon ratio η ∼ 10⁻¹⁰ it is estimated that one in every ten billion particles survived. This process is called baryogenesis.
Radiation dominated era
Radiation was the dominant component in the Universe: the density of radiation dominated the way the Universe expanded. At the end of this era, when the Universe is ∼ 1 s old, proton-neutron freeze-out happens and primordial nucleosynthesis takes place.
Matter dominated era
Matter takes over, because it thins out less rapidly (∝ a⁻³) than radiation (∝ a⁻⁴); this is what their equations of state tell you. Matter becomes the dominant factor in the expansion of the Universe. After 370,000 years the Universe has cooled enough for atoms to form and becomes transparent to electromagnetic radiation (photons). This is the moment photons and baryons decouple, resulting in a separate photon gas and a baryon gas. This photon gas is what we still detect as the Cosmic Microwave Background.
Lambda dominated era
According to current theories, the Universe has just (on a cosmological timescale) become Lambda dominated. (It is not in the picture, because in 1989 they didn't know about this.)

Cosmic Microwave Background

The CMB is a nearly isotropic microwave remnant of the processes that happened shortly after the Big Bang; some cosmologists call it God's fingerprint. It has a near perfect black body spectrum with an effective temperature of 2.7 K. It was released when the Universe cooled down to a temperature of ∼ 10⁵ K, which is near the ionisation temperature of hydrogen (which is more like 1.5 × 10⁵ K; however, atoms are kept ionised somewhat below that temperature by the tail end of the Planck spectrum, because there are so many photons). Atoms formed and the free electrons disappeared. This made the Universe transparent: matter and radiation decoupled. We can calculate when this decoupling happened.

\[T^4 \propto \epsilon_{\rm rad} \propto a^{-4}\]
\[a = \frac{T_{\rm now}}{T_{\rm then}} \sim 2.7\ \times\ 10^{-5}\]
\[H_0 t \sim a \rightarrow t \sim H_0^{-1} a = 3.7\ \times\ 10^5\ {\rm yr}\]


\[H_0^{-1} = [72\ {\rm km}\ {\rm s}^{-1}\ {\rm Mpc}^{-1}]^{-1} \sim 14\ {\rm Gyr}\]

The exact time of decoupling depends on the expansion rate of the Universe, which I simply assumed to be linear here. If you look back at the scale-factor figure above, you can see that the expansion history depends on the contents of the Universe.
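The estimate is easy to reproduce numerically; this sketch just restates the arithmetic of the last four equations, with all input numbers as used in the text:

```python
# Back-of-the-envelope estimate of the decoupling time.
T_now = 2.7        # K, CMB temperature today
T_then = 1.0e5     # K, assumed temperature at decoupling

a_dec = T_now / T_then                        # scale factor at decoupling

Mpc_in_km = 3.086e19                          # kilometres per megaparsec
sec_per_yr = 3.156e7
t_hubble = (Mpc_in_km / 72.0) / sec_per_yr    # Hubble time in years, ~14 Gyr

t_dec = t_hubble * a_dec                      # linear-expansion approximation
print(f"a = {a_dec:.1e}, t = {t_dec:.1e} yr")
```

This reproduces the ∼ 3.7 × 10⁵ yr quoted above.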

CMB Anisotropies

WMAP 2006 all sky image
WMAP all-sky image of the CMB after the third-year data release. (From NASA)

As noted before, the CMB is not completely isotropic. The first thing you see when observing it is a dipole: a Doppler shift of the CMB due to the peculiar motion of the Earth within the frame of the CMB. This motion is composed of the Earth's orbit around the Sun, the Sun's orbit around the Milky Way, the Milky Way's motion towards the Andromeda galaxy, the motion of the Local Group towards the Virgo cluster, and finally the Virgo cluster's motion towards the Great Attractor in the local supercluster of galaxies. Once these are corrected for, some fluctuations in the CMB still remain, but they are of the order of 10⁻⁵ K. Nonetheless they are very important in determining the cosmological parameters of the Universe.

To draw any conclusions from these anisotropies, we first have to study the cause of these fluctuations. Either the part of the Universe that forms a hotspot on the CMB is hotter by one part in a hundred thousand or the radiation is redshifted by gravity. It seems both effects play a role.

Quantum Fluctuations

As the uncertainty principle allows for tiny quantum fluctuations, the naturally untenable position of a young Universe in perfect isotropy is perturbed. The differences in gravity make matter contract a little. But to be attracted to some overdensity one needs to feel its field, which propagates at the speed of light. Anything further away than c times the age of the Universe is out of causal contact; anything within causal contact is said to be within the particle horizon. There is a small complication here, because during a photon's travel space itself expands, so to calculate the horizon distance we need to integrate.

\[d_{\rm hor} = c \int_0^{t_0} \frac{{\rm d}t}{a(t)}\]

In a sense, travel was more effective when the Universe was young and tiny. Back to perturbations. The largest perturbations at the time of recombination would be expected to have a size of one particle horizon. Photons leaving from some overdensity have to climb out of its gravitational potential well and are redshifted slightly. This is called the Sachs-Wolfe effect. It is not the only effect playing a role, but it is the dominant one.
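To see how the horizon integral behaves, here is a toy numerical check for a matter-dominated universe, where a(t) = (t/t₀)^(2/3) and the integral can be done analytically, giving d_hor = 3ct₀. The choice of expansion history is my own illustration; the text does not specify one:

```python
c, t0 = 1.0, 1.0           # work in units where c = t0 = 1

def a(t):                  # matter-dominated scale factor
    return (t / t0) ** (2 / 3)

# Midpoint rule: the integrand diverges at t = 0 but is integrable there.
N = 1_000_000
dt = t0 / N
d_hor = sum(c * dt / a((i + 0.5) * dt) for i in range(N))
print(d_hor)               # close to the analytic answer 3*c*t0 = 3
```

The horizon is three times the naive ct₀ precisely because, as the text says, "travel was more effective when the Universe was young and tiny".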

Now we'd like to know what size these fluctuations have on our sky. For this we have to know how large some distance at high redshift (early in the Universe) looks to us. In any earthly (static, Euclidean) situation the distance to an object can be measured with d = l/δθ, where l is the known length of a rod and δθ is the angle between its end points. This leads to the definition of the angular-diameter distance

\[d_A \equiv \frac{l}{\delta\theta}.\]

In an expanding or curved (or both) universe this distance is not equal to the proper distance of an object. It depends on the way the Universe expanded since the light we receive from the distant object was emitted; again, this depends on the composition of the Universe. Back to the perturbations. We know the characteristic size of the largest perturbations to be one particle horizon, which is a function of time and composition. We know how the angular size of anything that happened back then depends on the time since decoupling and on the composition. We know the time of decoupling given a certain composition. So if we know the angular size of the largest perturbations from the WMAP measurements, we can determine the total amount of matter in the Universe (Ωm). This information is found in the so-called angular power spectrum of the CMB, which shows the amplitude of the fluctuations against their angular size. We're looking at the largest-scale fluctuations, so we need to find the location of the first peak in the angular power spectrum. Then all we need to do is determine the amount of baryonic matter to find the amount of non-baryonic matter.
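As an illustration of how strongly the angular-diameter distance depends on the expansion history, here is the special case of a flat, matter-only (Einstein-de Sitter) universe, for which a closed-form expression exists. This is a standard textbook formula, not something derived in this text:

```python
# Angular-diameter distance in a flat, matter-only (Einstein-de Sitter)
# universe: d_A(z) = (2c/H0) * (1 - 1/sqrt(1+z)) / (1+z).
c = 3.0e5    # km/s
H0 = 72.0    # km/s/Mpc

def d_A(z):
    return (2 * c / H0) * (1 - 1 / (1 + z) ** 0.5) / (1 + z)  # Mpc

# Counter-intuitively, d_A has a maximum (near z ~ 1.25 in this model):
# beyond it, more distant objects look *larger* again on the sky.
print(d_A(0.5), d_A(1.25), d_A(10.0))
```

This turnover is exactly why the apparent angular size of the horizon at decoupling is such a sensitive probe of the cosmological parameters.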

WMAP 2006 angular power spectrum
WMAP angular power spectrum. The location of the first peak gives us information about the total amount of matter in the Universe; the second peak about the total amount of baryonic matter. The amplitude of the first peak depends on the speed of sound of the photon-baryon fluid before decoupling, which in turn depends on η. Any structure on smaller scales than (to the right of) the first peak is caused by acoustic oscillations. (From NASA)

Acoustic oscillations

On smaller scales the fluctuations are dominated by acoustic oscillations. Before decoupling, baryonic matter and radiation behave as one photon-baryon fluid. When the fluid is perturbed a little, a sound wave starts to travel. Of course the characteristics of these oscillations depend heavily on the speed of sound of the fluid, which in turn depends on the total density and on the baryon-to-photon ratio η. When the photons and baryons decouple, a snapshot is taken of these oscillations, which yields information on these quantities. I will not go into any further detail.
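The text does not give the sound speed explicitly, but the standard textbook expression for a photon-baryon fluid is c_s = c / √(3(1 + R)), where R = (3/4) ρ_b/ρ_γ is the "baryon loading". A minimal sketch:

```python
c = 3.0e5  # km/s

def sound_speed(R):
    """Sound speed of the photon-baryon fluid; R = (3/4) rho_b / rho_gamma."""
    return c / (3 * (1 + R)) ** 0.5

print(sound_speed(0.0))  # pure radiation: c / sqrt(3), about 1.7e5 km/s
print(sound_speed(0.6))  # baryons make the fluid heavier and sound slower
```

This is how the baryon density leaves its imprint on the oscillations: more baryons mean a lower sound speed and a different peak pattern.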

Primordial Nucleosynthesis

A different and independent way to determine the amount of non-baryonic matter in the Universe is found in primordial nucleosynthesis. If this method gave a different answer than the previous one, some theory would have to be flawed.


When the Universe was still less than one second old, protons, neutrons, electrons, photons and neutrinos could freely interact.

\[\begin{array}{rcl} n + \nu_e & \rightleftharpoons & p + e^-\\ n + e^+ & \rightleftharpoons & p + \bar{\nu}_e \end{array}\]

These reactions are controlled by the weak force. For weak interactions to occur one needs very high density and temperature, which was the case in the very early Universe. Protons and neutrons were in equilibrium, their numbers following Boltzmann's law:

\[\frac{n_n}{n_p} = \exp\left[-\frac{\Delta mc^2}{kT}\right]\]

But then the densities dropped, and the weak interaction failed to keep up the equilibrium. This is the neutrino freeze-out: basically the moment the neutrinos decoupled from the baryons. The ratio of neutrons to protons froze at ∼ 0.2. However, free neutrons are unstable, decaying with a lifetime of 887 seconds, so the ratio of neutrons to protons started to drop. Had nothing else happened soon, the neutrons would have disappeared (until stars formed). Luckily, at the same time nucleosynthesis kicked in.
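Plugging numbers into the Boltzmann factor reproduces the quoted ratio of ∼ 0.2. The freeze-out temperature kT ≈ 0.8 MeV and the neutron-proton mass difference Δmc² = 1.293 MeV are standard values that I am supplying; the text does not give them:

```python
import math

dm_c2 = 1.293      # MeV, neutron-proton mass difference
kT_freeze = 0.8    # MeV, assumed freeze-out temperature

n_over_p = math.exp(-dm_c2 / kT_freeze)
print(round(n_over_p, 2))   # -> 0.2, the frozen ratio quoted above
```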


binding energy per nucleon as a function of the number of nucleons in a nucleus
Binding energy per nucleon as a function of the number of nucleons in a nucleus. Note the peak where He lies. (From NASA)

While a naked neutron is slightly unstable, one enclosed in an atomic nucleus is not. A neutron may fuse with a proton to form deuterium:

\[p + n \rightarrow D + \gamma\]

The rate of this reaction depends on the temperature and pressure of the fluid, and it has to compete with the decay of single neutrons. Once enough deuterium is formed, it takes several steps to fuse it into tritium, 3He and 4He. Then it gets a little harder, and a tiny bit of lithium is formed. At some moment the density of deuterium gets too low to support any more nucleosynthesis, and no new deuterium is made because the Universe has run fresh out of single neutrons. There are two fractions we need to take into account to know the density of deuterium: that of deuterium to hydrogen, and that of all nuclei (baryons) to non-baryonic matter. After all, we know the total density from our cosmological models. Suppose there is an overwhelming amount of non-baryonic matter; then the deuterium abundance may still be large, but its density is too low to support any nucleosynthesis. On the other hand, if there is a lot of baryonic matter, the deuterium abundance must drop to stop the fusion. There are also dependencies of the baryonic matter density on the abundances of the other primordial elements, but these are not as pretty as in the case of deuterium. One needs elaborate relativistic computer models to calculate the full monty.
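A quick corollary: if essentially every neutron that survives decay ends up bound in 4He, the primordial helium mass fraction follows directly from the neutron-to-proton ratio. The value n/p ≈ 1/7 at the onset of nucleosynthesis (the freeze-out ratio of ∼ 0.2 further reduced by neutron decay) is an assumed textbook number:

```python
n_over_p = 1 / 7   # assumed ratio at the onset of nucleosynthesis

# Each 4He nucleus holds 2 neutrons and 2 protons, so the mass locked
# into helium is twice the neutron mass density.
Y = 2 * n_over_p / (1 + n_over_p)
print(round(Y, 2))   # -> 0.25
```

This is the familiar result that roughly a quarter of the baryonic mass in the Universe is primordial helium.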

primordial abundances against the baryon-to-photon ratio η
Element abundances plotted against the density of baryonic matter relative to the photon density, η. (From NASA)

This is a plot against η, the ratio of the number of baryons to the number of photons. The current density of photons is easy to measure from the CMB (almost all photons are CMB photons). Of these abundances, deuterium is the easiest to measure: there is no other known mechanism that produces deuterium, and it is a stable isotope, contrary to 4He, which is also produced in stars. Lithium has a much lower density and is harder to detect. Deuterium also has the nicest dependence on η, introducing fewer errors.

Detecting deuterium

Quasars are very distant objects, and their light travels for a good portion of the age of the Universe to get here. The light was originally emitted around the Ly-α line at 1215.67 Å, but as it travels, the cosmological expansion shifts it towards the red. On its way to the observer, light that has redshifted to 1215.67 Å may be absorbed by neutral hydrogen. This absorption line is then redshifted further, and when the ray hits another cloud of hydrogen, photons that originally had even more energy will have redshifted to 1216 Å, forming another absorption line, and so on. Thus the Lyman-alpha forest is made.

quasar spectrum with a nice Ly-α forest
Spectrum of a quasar showing many Ly-α absorption lines.

Deuterium is detected in the spectra of quasars. It behaves almost like normal hydrogen, but its absorption lines are shifted just a little, to 1215.36 Å. The D/H fraction can be extracted from a quasar spectrum by detailed analysis of the line shapes.
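To get a feeling for how small this isotope shift is, it can be expressed as a velocity offset via v/c = Δλ/λ, using the wavelengths quoted above:

```python
c = 3.0e5          # km/s
lam_H = 1215.67    # Angstrom, hydrogen Ly-alpha
lam_D = 1215.36    # Angstrom, deuterium Ly-alpha (value used in the text)

v_shift = c * (lam_H - lam_D) / lam_H   # non-relativistic Doppler formula
print(v_shift)     # km/s: a shift of only a few tens of km/s
```

A shift this small sits well inside the velocity widths of typical absorbers, which is why the detailed line-shape analysis mentioned above is necessary.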



The latest results of WMAP give the following values:
WMAP 2006 results (WMAP only)

These values can be found on the Legacy Archive for Microwave Background Analysis at NASA. If we use these numbers to calculate the fraction of non-baryonic matter in the Universe, we get 1 − (Ωb/Ωm) = 1 − (0.02/0.13) ≈ 85% (the unrounded WMAP values give closer to 82%).
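The arithmetic of that last step, as a one-liner (with the rounded values it comes out near 85%; the unrounded WMAP numbers land closer to 82%):

```python
# Non-baryonic fraction of the matter, from the rounded WMAP density
# parameters; the h^2 factors in the published values cancel in the ratio.
omega_b = 0.02   # baryonic matter density (rounded)
omega_m = 0.13   # total matter density (rounded)

frac = 1 - omega_b / omega_m
print(f"{frac:.0%}")   # -> 85%
```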

Deuterium abundances

The results from Kirkman et al. give a value for Ωbh² of 0.0214 ± 0.0020. This agrees amazingly well with the results from WMAP. To find the fraction of dark matter we still need the total amount of matter found with other measurements (among which WMAP). But the fact remains that the values found for the total amount of baryonic matter in the Universe agree between two independent methods.

Future missions

It doesn't stop here. Little is still known about the nature of dark matter (see "What is the nature of dark matter"), and a lot of theoretical work remains to be done. Next year the Planck satellite will be launched, which will look at the CMB in more detail (including its polarisation). Future gravitational wave observatories might also shed some light on this dark issue.


The amounts of baryonic matter measured with the two described methods match extremely well. This is strong evidence that the standard theories of cosmology work. An alternative theory would be hard pressed to produce the same results.


by Johan Hidding, Kapteyn Institute, RuG.