From: Tony Tyson

Submitted: Wed, 27 Nov 2002 18:16:28 -0500 (EST)

Message number: 24

Subject: memo on LSST weak lensing

LSST Weak Lensing Cosmology              SWG November 2002

The mission of our lensing LSST SWG team is to enumerate the
gravitational weak lensing science drivers, estimate the
state of that field by 2010, produce a list of weak lensing
science goals, determine LSST instrument and operations
requirements to meet those goals, and suggest and carry out
design phase studies of the feasibility of the LSST system
meeting the corresponding data requirements. For each of the
weak lens driven LSST system specifications, we also must
come up with an estimate of the marginal science utility of
improving that specification. Each of the SWG work groups
has similar tasks.

This first memo summarizes the work on LSST weak lens (WL)
requirements which has already been done.  The intent is to
open discussion on the next steps. For the studies to date
we have taken the PSF to be the current LLNL optical design
telescope PSF convolved, for now, with Gaussians of FWHM
ranging over 0.4-0.8 arcsec. For later
studies we will have to consider uncorrected aberrated PSFs
due to residual jitter in the telescope mirror AO system,
rotator errors, etc. For sampling we should assume the
reference design of 0.2 arcsec pixels, which should
critically sample the median seeing at the best sites.  Once
the Pan-STARRS optics is defined we should also consider how
that system will perform under our WL goals. The outline of
this memo is 1) a list of relevant system parameters for WL
and/or whose control will minimize systematic error, 2) WL
science drivers, 3) a guess at where WL cosmology will be in
ten years, 4)  the unique advance in WL science that LSST
would deliver, and 5) an initial guess at LSST system
performance required to reach those goals.


Relevant system parameters

One important goal is to set requirements for realistic
system performance based on the science requirements.   The
following is a list (probably not exhaustive) of the
instrument, telescope, filter, and operations specs for
which we may want to set limits on systematic and
statistical errors on some timescale. Many of them will
have to be visited by the other SWG work groups.

1.   telescope PSF ellipticity and variation over FOV (DC)
2.   telescope PSF ellipticity jitter spectrum
3.   focus error over FOV and between exposures
4.   unrecoverable PSF variations with stellar magnitude
5.   delivered PSF FWHM (telescope + site)
6.   astrometric standards density and required astrometry
7.   pixel dynamic range (magnitude range per exposure)
8.   pixel non-linearity and non-linearity jitter
9.   pixel cross-talk spectrum
10.  filter set  (photo-z)
11.  detector QE vs wavelength; need for high QE at the extremes.
12.  filter in which most observing is done for shear
13.  optimal trade of areal coverage vs depth
14.  optimal depths vs filter
15.  optimal observing algorithm  (pre-survey?, interleaved?)
16.  method of rotator chopping of shear errors; rotator specs
17.  optimal number of field revisits to minimize shear errors
18.  sky patch revisit pace constraint on exposure time
19.  minimum system etendue from survey pace/depth and noise
20.  seeing constraints on exposure time (PSF stars)
21.  non-Gaussian seeing spectrum constraints on exposure time
22.  photometric zero point calibration error
23.  optimal sky tiling (with overlap) constraints on slew
     settling time
24.  maximum zenith angle (LSST will not have an ADC) vs
     wavelength


Some of these sources of systematic error are familiar
lessons learned from our previous and on-going WL surveys.
The good news is that LSST offers the opportunity for
unprecedented  control of these systematics.  It is likely,
for example, that our WL aberration spec will drive the LSST
optics error budget constraint.  Control of PSF systematics
is paramount.  The best current WL surveys control
systematics, after PSF rounding, at roughly the 0.001 shear
level.  Much of the problem is due to the fact that existing
telescopes were not designed to minimize aberrations or time
variations in telescope PSF due to mirror motion under
gravity.  For example, 10% PSF ellipticity on some exposures
is not uncommon. The observed PSF ellipticity distributions
for single exposures are different for each telescope. The
VLT and Subaru show a similar e*exp(-e) functional form, with
means of 6% and 3% respectively, while the NOAO 4m
telescopes have a broadened distribution with means
wandering from 3% to 7%. We can expect and will require LSST to
do better.  Some of the problem is inescapable: at high
latitudes the number density of stars sets a limit to the
highest spatial frequencies in the position dependent model
PSF used for shear bias removal. We should aim to be limited
on an individual exposure purely by that high spatial
frequency systematic, and attempt to discover it by
comparing E and B modes across repeated exposures of the
same FOV under different conditions.


WL science drivers

WL directly measures shear, and hence mass over-densities on
scales up to half the survey size, and is complementary to
other probes of cosmology. In the context of current
cosmological models, WL error ellipsoids are nearly
orthogonal to those of SNe in the  Omega_L - Omega_M plane,
and WL shear surveys vs redshift can break the sigma_8
Omega_M degeneracy and constrain the dark energy equation of
state w.  Cosmic shear and mass cluster counts N(M,z) taken
together can sharply constrain w.  At z<1  N(M,z) is
proportional to the product of the comoving volume and the
lens kernel, both of which are sensitive to the cosmology.
In current models most of the effects of dark energy occur
for lenses at z < 0.8. In fact, models that are
degenerate in the CMB may be distinguished via WL. Other
cosmological parameters affect WL shear, and are usually
marginalized over.  A very large statistical WL data set
could in principle determine many of these parameters at
once; this is a driver for a very large survey compared to
those now contemplated.  We should consider doing a cost
benefit analysis of such a joint solution vs area and depth.
Generally, volume wins; i.e. for a given survey time and
throughput, it is better to go wide than deep.  One must of
course go deep enough to get a complete sample of source
galaxies out to the large redshifts required. This evolution
to all-sky precision data parallels that of the CMB history.

What if the current models are wrong?   One possibility
would be CDM + WDM, with a much smaller dark energy
component.  One would have to form early compact structures
in a different way.   More generally, the convergence power
spectrum may differ from that predicted by XCDM on large
angular scales. Perhaps there are low density collapsing
structures on much larger scales than cold high density
halos. We would not detect this in the current crop of WL
surveys or in those contemplated for the near future.
Bottom-up plus top-down structure formation may be
consistent with the data in some MDM model.  For this
reason, and to get a firm handle on cosmic variance, it will
be necessary to survey fields which are many times larger
than the largest over-dense structures of a given total
mass.


Where will WL cosmology be in ten years?

Several WL surveys will have been completed in this decade.
The Deep Lens Survey, a 4 band photo-z survey to 26th mag
over 28 square degrees, should be completed in 2004 and the
analysis and interpretation completed by 2006.  Seven 2x2
deg fields will yield shear correlations out to ~1 degree
and counts of mass clusters out to z ~ 0.8.  The DLS data
are world public.  The Megacam CFHT Legacy Survey should
begin in 2003-4 and should be complete and possibly analyzed
by the end of the decade.  The WL part of this survey will
cover up to 170 square degrees with color-z (ugriz) in
several 6x6 deg fields with depth comparable to the DLS.
These surveys will break the Omega_M sigma_8 degeneracy
independent of other probes, will pin down our location (in
current models) in the Omega_L Omega_M plane to perhaps 
10%, and will catalog about 2000 massive clusters out to
z~0.8.   From XCDM simulations by J. Hennawi (future memo),
this should result in an independent determination of w to
about 20% accuracy. However, larger surveys will be required
to adequately address cosmic variance.

There likely will be smaller (~degree) areas surveyed more
deeply, such as a Subaru survey and an HST ACS survey.  If
complemented by NIR ultradeep photometry, such pencil beam
surveys will usefully constrain the time development of dark
matter structure at z > 1. Part of the contribution to the
cluster counts in the wide surveys, particularly at z > 1,
comes from this growth function, and these deeper surveys
will serve to calibrate the growth contribution to N(M,z).


LSST's contribution to WL cosmology

LSST's unique potential contributions to WL cosmology stem
from (1) deep multiband coverage of 30,000 square degrees,
and (2) at least a factor of ten lower shear systematics. The
exciting cosmological opportunities in surveying such a
large volume down to very low shear come from the ability to
detect large low density structures and advancing beyond
cosmography to constraining models for the physics of dark
energy.   There is a lot to be gained in going from 20%
error to 2% in w. For example, current models for dark
energy can be constructed with very different physics which
have equations of state differing by a few percent.

Achieving this from measurements of the z dependence of the
mass function will require roughly 5 z-bins, 10 mass bins,
and at least 10 fields on the sky. Based on N-body XCDM
simulations, this would require about 10,000 clusters in the
most massive bins for each field: over 200,000 mass clusters
distributed up to z=1.   This requires at least 20,000
square degrees in a shear survey down to about 26th R mag
equivalent in at least 5 optical bands.  This is the subject
of a following memo. We would want to go fainter in the band
(say R) in which shear is measured and select only the best
PSF images and combine shear data over many images to reduce
systematics.  The required minimum shear level is a function
of angular scale. What are the science requirements?  Even
though the dominant effect is the strong cosmological
dependence on the comoving volume, at the percent level
precision required for w, the mass function (for XCDM
theories) is universal only if we measure mass within a
rather large radius of about 3 Mpc/h.  If we are to
understand the "second parameter" dependence on the mass
function, and our selection function, we will have to be
complete down to 10^14 M_solar and out to 3 Mpc/h over the
lens redshift range of 0.2 - 1, or a shear 3-sigma floor of
about 0.001 (higher at z = 0.3) on scales of 10 arcminutes.
The required shear floor on smaller scales is higher: for
example, a 700 km/sec cluster at z = 0.3 produces a shear for
z=1 sources of 0.04 at 2 arcmin radius. Thankfully, the shear
error from PSF systematics has the opposite dependence on
scale (more below).
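As a rough consistency check on the 0.04 figure, the singular-isothermal-sphere shear gamma(theta) = theta_E/(2 theta) can be evaluated numerically. This is a sketch, not the memo's calculation; the flat LCDM background with Omega_M = 0.3 is my assumption.

```python
import math

# Consistency check on the quoted 0.04 shear: a singular isothermal sphere
# (SIS) with sigma_v = 700 km/s at z = 0.3 lensing z = 1 sources.
# Assumption (not from the memo): flat LCDM with Omega_M = 0.3.
C_KM_S = 299792.458
OMEGA_M = 0.3

def dc(z, n=2000):
    """Dimensionless comoving distance: integral of dz'/E(z') from 0 to z."""
    dz = z / n
    return sum(dz / math.sqrt(OMEGA_M * (1 + (i + 0.5) * dz) ** 3
                              + (1 - OMEGA_M)) for i in range(n))

def sis_shear(sigma_v_kms, z_lens, z_src, theta_arcmin):
    """SIS shear gamma = theta_E / (2 theta); in a flat universe the
    distance ratio D_ls/D_s reduces to (dc_s - dc_l)/dc_s."""
    ratio = (dc(z_src) - dc(z_lens)) / dc(z_src)
    theta_e_rad = 4 * math.pi * (sigma_v_kms / C_KM_S) ** 2 * ratio
    theta_e_arcmin = math.degrees(theta_e_rad) * 60.0
    return theta_e_arcmin / (2.0 * theta_arcmin)

gamma = sis_shear(700.0, 0.3, 1.0, 2.0)
print(f"SIS shear at 2 arcmin: {gamma:.3f}")  # ~0.04, matching the text
```

The distance ratio is the only cosmology-dependent piece; reasonable changes to the assumed parameters move the result by tens of percent at most.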

The spectrum of over-densities at very large angular scale
is a useful diagnostic. A survey for large low density
structures will have to rely on shear measurements below the
0.001 level. Source density at 26th R mag is more than
sufficient, given the large source count in the shear
integral around each over-density. To escape detection in
current surveys, such structures would have to have a peak
shear below 0.02 and a core size larger than 1 Mpc. To
significantly contribute to Omega_M their warm dark matter
would have to extend beyond 10 Mpc radius, where the shear
would be 0.002 or lower.  Systematic errors in the final
catalogs would have to be controlled at the 0.0001 level.  A
CDM + WDM simulation should be done in order to come up with
more reliable predicted shear. Even in the case of CDM the
weak lens convergence spectrum at low l is useful. The LCDM
convergence power spectrum falls by a factor of about 100
from l = 1000 to l = 10, with a change in slope at l = 30.
The region from l = 10 to 30 will be interesting; the slope in
this region could reveal multi component dark matter.  Wayne
Hu estimated the statistical error due only to source
ellipticity noise in a 10,000 sq.deg LSST survey to be
adequate for a slope measurement in this l = 10 - 30 region.
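That conclusion can be illustrated with a simple Gaussian mode-counting estimate of the sample-variance-limited band-power error. This is only a sketch, assuming full mode recovery over the survey fraction of sky; source-ellipticity shot noise is neglected, consistent with the statement above.

```python
import math

# Sketch: sample-variance-limited fractional error on a convergence band
# power, counting Gaussian modes N = f_sky * sum(2l + 1) over the band.
# Source-ellipticity shot noise is neglected (stated above to be subdominant).
FULL_SKY_SQ_DEG = 41253.0

def band_power_fractional_error(l_min, l_max, area_sq_deg):
    f_sky = area_sq_deg / FULL_SKY_SQ_DEG
    n_modes = f_sky * sum(2 * l + 1 for l in range(l_min, l_max + 1))
    return math.sqrt(2.0 / n_modes)

err = band_power_fractional_error(10, 30, 10000.0)
print(f"l = 10-30 band, 10,000 sq.deg: {100 * err:.0f}% fractional error")
```

This gives roughly a 10% band-power error, so the l = 10-30 slope is measurable in principle; a 30,000 sq.deg survey would tighten it by a further factor of sqrt(3).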

A related opportunity is cosmic variance.  For each model
there is a relation between the predicted shear signal in
angle redshift space and the predicted cosmic variance.  It
is likely that the history of mass structure formation is
more complicated than envisioned and depends on more
complicated physics. The outliers are important, just as
they are for clusters of galaxies, and non-Gaussianity is an
issue. LSST all-sky shear coverage is needed for any
precision measure of cosmic variance.  For example if we
need contiguous 30 degree wide fields to study the
convergence spectrum at low l, then we will need many of
those 1000 sq.deg fields for a precision measure of
cosmic variance. In a word, the cosmic variance may be at
variance with simple cosmological models.

Each of the cosmological probes constrains a degenerate mix
of Omega_DE and Omega_M.  CMB, SNe, and z=0 structure
(including 2-D cosmic shear) each produce highly elongated
error ellipsoids. While the WL error ellipsoid in Omega_DE
Omega_M space will be determined to perhaps 10% precision
prior to LSST (via normalization to z=0 sigma_8), an all-sky
WL cosmic shear survey vs source redshift could shrink this
elongated ellipsoid to a small region.  As the source
redshift is raised the error ellipsoid rotates counter-
clockwise. The maximum likelihood solution, using only
cosmic shear vs z, will be a small region with error
comparable to that of the width of the CMB ellipsoid.

Strong cluster lensing is another important probe. With
1-10 kpc resolution in mass one can probe the nature of dark
matter.  While the angular resolution of weak lens mass
reconstruction is limited to about 1 arcminute by the
density of non-overlapping source galaxies, one can obtain
much higher resolution in strong lensing in special cases.
If a mass cluster produces multiple images of a single
resolved source then the resolution of the mass map obtained
by parametric inversion can be as high as the source size
divided by the magnification.  The key is many images of a
source distributed over the lens plane.  The required lens-
source alignment is rare: only one good example of such a
cluster is known. A posteriori calculations of the
probability are in the 0.001 range, curiously close to the
QSO-galaxy lensing rate. What is needed is an all sky search
for mass clusters with multiple arcs. LSST will do this
survey, and the 0.1 arcsec resolution follow-up optical
imaging can then be done with a pointed observation by a
space telescope.

There is another more philosophical motivation for pursuing
high precision WL cosmology.  The other cosmological probes
are metric based (standard candle or standard meter stick)
and get at mass-energy indirectly.  If dark energy is a
manifestation of something radically new in spacetime
gravity, a probe which is metric-less and which directly
measures spacetime gravity may have advantages.  It is
likely that such an advantage would be apparent only in the
precision LSST WL data.   In summary, the detailed LSST era
WL science requirements will have to be worked out by our
group in the next few months.


LSST system performance requirements

This is a major activity for the SWG WL group and will be
the focus of most of our work.  Without definitive science
requirements it is premature to specify the LSST system
performance. However, in the spirit of starting the debate,
here are some guesstimates of the system performance
requirements to meet the above goals, based on what we know
from the current crop of surveys:

What about the optimal observing strategy for LSST weak
lensing and what are the fundamental limits?  Here is one
possibility.  Most of the observing can be done in one
filter, perhaps R. Bad exposures (poor seeing, etc.) are not
used in the co-added image stack for shear measurement.
Less exposure in the other filters is required for the
photometry for color-z. Two factors limit the angular
resolution and precision of weak lens mass reconstructions:
the number density of PSF calibration stars, and the number
density of uncrowded source galaxies.  Both of these factors
are PSF dependent.  Imaging mid-latitude fields optimizes
the number of stars for PSF fitting. A low order polynomial
fit is made to the position dependent PSF across the FOV and
is used in constructing a PSF rounding filter. This is the
overall strategy we have used in the DLS.   As expected, we
have found that uncorrected shear systematics are minimized
in image stacks if the number of component images is large.
What is the per-exposure shear systematic error requirement
for LSST vs angular scale?  This is a function of the PSF
star density, the delivered PSF ellipticity distribution,
and the number of images of each FOV (to chop and assess the
PSF systematics and to go deeper into the source galaxy
population).  On scales smaller than the mean PSF star
separation, two effects drive PSF shear systematics:
unmodeled detector-focal plane errors and unmodeled seeing
induced PSF shear. The first we can characterize by repeatedly
imaging star fields at various defocus settings.

To get an idea of what is going to be possible, I scale from
our current survey. On scales of 10 arcminutes and larger we
find that PSF rounding (convolution with an off diagonal
matrix) can reduce large-scale systematic shear by a factor
of 50 in a stack of 20 images.  There is a strong advantage
to more images.  Using the 4m telescopes as an example, we
find in our DLS R imaging of a 47 deg galactic latitude
field in 0.8 arcsec FWHM seeing that the stellar ellipticity
variance (calculated from the 3 sigma span between error
limits of both components of the star-star ellipticity
correlation) is: e_sys = 0.002 on 1' scales, 0.0008 on 2'
scales, 0.0005 on 8' scales, and 0.0003 on 30' scales.  This
is in a PSF rounded stack of 20 images, each of which has
been PSF rounded, and is averaged over all the 1620 stars
with R=18-22 mag in a 1600 sq.arcmin field. Thus, for
example, a single annular bin of radius 10 arcmin and width
3 arcmin centered on a putative lens has PSF ellipticity
systematics at the 0.001 level.  This is equivalent to a
shear systematic on that scale of 0.0005. The corresponding
shear systematic level per bin at smaller radii rises fast:
gamma_sys = 0.005 at 2 arcmin, and 0.02 at 1 arcmin.
Although we are basically limited by the number density of
faint stars, using many more than 20 images will help
control and average this systematic.  But the main benefit
of a stack of 200 images will come from getting better
source shear measurements.  These systematics in the 4m
telescope data are larger than the science requirements
mentioned above. LSST's control of delivered PSF systematic
ellipticity will change this.
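The arithmetic connecting the quoted stellar-ellipticity systematics to shear systematics uses the usual gamma ~ e/2 conversion. The factor-of-ten LSST improvement applied below is an assumed scaling, not a measurement:

```python
# Quoted DLS stellar-ellipticity systematics (3-sigma, 20-image PSF-rounded
# stack) per angular scale, converted to shear via gamma ~ e/2, with an
# assumed factor-of-ten LSST improvement applied for comparison.
DLS_E_SYS = {1: 0.002, 2: 0.0008, 8: 0.0005, 30: 0.0003}  # arcmin -> e_sys
LSST_FACTOR = 10.0  # assumed improvement in delivered PSF ellipticity

shear_sys = {}
for scale in sorted(DLS_E_SYS):
    gamma = DLS_E_SYS[scale] / 2.0  # ellipticity-to-shear conversion
    shear_sys[scale] = gamma
    print(f"{scale:3d}'  e_sys = {DLS_E_SYS[scale]:.4f}  "
          f"gamma_sys = {gamma:.5f}  LSST est. ~ {gamma / LSST_FACTOR:.6f}")
```

Note that a single annular bin, as in the 10 arcmin example above, sits somewhat higher than these correlation-averaged values.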

We should expect that LSST will have delivered PSF
ellipticity a factor of ten lower than the 4m on these
angular scales (we should try to do even better).  Scaling
from our current survey, the above science goal of 0.0001
shear could be reached in a stack of 200 selected excellent
PSF  images, each of which has been convolved with a PSF
rounding filter. This is a combination of better PSF
systematic error and deeper source integration. A factor of
up to 2 better seeing for LSST will help with the
identification of PSF stars and will result in a decrease in
this PSF error via fewer misidentifications.  It will also
allow us to move perhaps one mag fainter in obtaining our
PSF star list, doubling the number density of PSF stars.
Estimates of the combined effect put the PSF ellipticity
systematics, for the above example of a single 10' annular
bin, at 0.0001 (3-sigma).

Other requirements from previous workshops: Maximal QE
detectors and five filters, with two at the red end for hi-z
photo-z: grizy (the y filter is at 1 micron).  A moderate
depth pre-survey using six filters (ugrizy) would help a
number of projects. Most of the observing in the r filter.
Observe only at modest airmass.  Focus controlled at the 10
micron p-p level to limit PSF astigmatism.  Stepped detector
modules around the perimeter of the imager for PSF and focus
measurement, with feedback to active de-center/tilt of
optics.   Image headers should contain the measured defocus.
Delivered PSF for the images selected for the shear
measurement less than 0.5 arcsec FWHM (more on this in the
next memo). At least 200 selected r images at 24 mag (5
sigma) per FOV implies perhaps 500 r images per FOV.

One lesson we have learned is that one must control the
shear systematics and photometry across adjacent subfields
if we want to get reliable shear power on large angular
scales.  So for LSST: Tile the sky in overlapping exposures
for photometry, shear, and astrometry control across FOV
boundaries. With a 7 sq.deg circular imager, this implies
5.7 sq.deg per hexagonal tile.  If we must cover 20,000
sq.deg to full depth in less than 8 years, this is over
220,000 r exposures per year or about 1200 exposures per
night, assuming an average of 15 usable nights per month
including weather and moon. At 80% efficiency (70% is the
record), that corresponds to a 17-second cycle, including
camera read and telescope re-point, for a mean 7-hour night.
This leads to an etendue minimum requirement of about 200
sq.m sq.deg for 5e read + thermal noise per exposure.
The range of PSF star magnitudes per exposure is 19 - 23 R
mag.
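The pace numbers above follow from simple arithmetic; here is a sketch, taking the 5.7 sq.deg tile as a hexagon inscribed in the 7 sq.deg circular field and the other inputs as stated:

```python
import math

# Survey-pace arithmetic from the text: 20,000 sq.deg, ~500 r exposures per
# field over 8 years, 15 usable nights/month, 7-hour nights at 80% efficiency.
# The tile area assumes a hexagon inscribed in the 7 sq.deg circular FOV.
circle_area = 7.0                               # sq.deg, circular imager
r = math.sqrt(circle_area / math.pi)            # field radius, deg
hex_area = (3.0 * math.sqrt(3.0) / 2.0) * r**2  # inscribed hexagon, sq.deg

survey_area = 20000.0                           # sq.deg to full depth
exposures_per_field = 500                       # ~200 selected of ~500 taken
years = 8
tiles = survey_area / hex_area
exp_per_year = tiles * exposures_per_field / years
exp_per_night = exp_per_year / (15 * 12)        # ~15 usable nights per month
cycle = 7 * 3600 * 0.80 / exp_per_night         # seconds, incl. read + re-point

print(f"tile area       : {hex_area:.1f} sq.deg")
print(f"exposures/year  : {exp_per_year:.0f}")
print(f"exposures/night : {exp_per_night:.0f}")
print(f"cycle time      : {cycle:.0f} s")
```

These figures are consistent with the roughly 220,000 exposures per year, 1200 exposures per night, and 17 s cycle quoted above.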

I have not covered all the items in the parameter list.
Chuck Claver is working on a model of the system in which he
will be producing sample delivered PSFs over the 3 deg FOV,
including effects of AO jitter and defocus.   But as a first
step we took the DC delivered PSF variations over the field
in 0.5" seeing and convolved it with a typical faint HDF
galaxy at 27 R mag: no significant shear systematic over the
field.  It remains to be seen if this can be maintained
under dynamic conditions.  The next memo will report on a
study I did of the effects of seeing on weak lens LSST shear
measurements. After we weak lensers on the SWG have had
some discussion of all the above, we should each choose an
LSST WL requirements problem to work on.


LSST Mailing List Server
This is message 24 in the lsst-general archive:
http://www.astro.princeton.edu/~dss/LSST/lsst-general/msg.24.html
The index is at http://www.astro.princeton.edu/~dss/LSST/lsst-general/INDEX.html