Subject: Minutes of Tucson LSST meeting, March 17-18

From: strauss@astro.princeton.edu

Submitted: Tue, 25 Mar 2003 10:16:03 -0500 (EST)

Message number: 102

			March 17-18, 2003
			LSST SWG meeting, Tucson, Arizona

Attending:
  Gary Bernstein
  Todd Boroson
  Ted Bowell
  Chuck Claver
  Andy Connolly
  Kem Cook
  Daniel Eisenstein
  Peter Garnavich
  Richard Green
  Al Harris
  Lee Holloway
  Zeljko Ivezic
  Nick Kaiser
  Steve Larson
  Tod Lauer
  Jeremy Mould
  Knut Olsen
  Ron Probst
  Abi Saha
  David Shaw
  Chris Smith
  Michael Strauss
  Chris Stubbs
  Jon Thaler
  Tony Tyson
  Sidney Wolff
  Dennis Zaritsky

This meeting was an opportunity for the SWG to work on fleshing out
the science case for LSST, and to give explicit science drivers for
the technical aspects of the telescope.  Michael Strauss had
distributed a list of desiderata to consider for each science program,
which Sidney Wolff expanded upon in her introductory remarks (see
below).  Chuck Claver described his work on the error budget for the
telescope's image quality (Chuck, can you make the diagrams you showed
available on the web?).  Lynn Seppala's optical design gives a spot
size (80% encircled energy) of 0.2" radius.  When one adds in
realistic surface and alignment errors, vibration, finite pixel size,
etc., this goes up to 0.38", with no single item being the long pole
in the tent.
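
  As a sanity check on how such budgets roll up, here is a minimal
sketch that adds error terms in quadrature.  The individual terms
below are hypothetical placeholders, not Chuck's actual numbers; only
the 0.2" design value and the ~0.38" total come from his presentation.

    import math

    # Illustrative quadrature roll-up of an image-quality error budget.
    # Term values are placeholders, not the actual LSST budget.
    terms = {
        "optical design (80% EE radius)": 0.20,
        "surface figure errors":          0.20,
        "alignment errors":               0.15,
        "vibration/tracking":             0.13,
        "finite pixel size":              0.15,
    }
    total = math.sqrt(sum(v ** 2 for v in terms.values()))
    for name, v in terms.items():
        print(f'{name:32s} {v:4.2f}"')
    print(f'{"quadrature total":32s} {total:4.2f}"')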

  Given the distribution of free-air seeing measured from three years
of site testing at Cerro Pachon, one can determine the distribution of
delivered image quality from such a system, as a function of zenith
angle.  Motivated in part by <a
href=http://www.phys.washington.edu/~stubbs/lsstdocs/LSSTMemo2003_001.pdf>Chris
Stubbs' suggestion of a larger field of view</a>, Chuck and Lynn have
examined the image quality for field diameters of 3.5 and 4 degrees.
At the zenith, for example, the median (50th percentile) delivered
seeing at Pachon goes from 0.65" to 0.73" in moving from the 3.5 to
the 4 degree field of view.  Of course, the larger field may introduce
higher-order terms in the image quality, such as ellipticity, which
could make life difficult for the weak lensing folk.
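
  For reference, a toy version of this calculation, assuming the
free-air seeing degrades as airmass^0.6 (the standard Kolmogorov
scaling) and adds in quadrature with a fixed system term; the numbers
are illustrative, and Chuck and Lynn's analysis is more detailed.

    import math

    # Toy delivered image quality vs. zenith angle: free-air seeing
    # scaled by airmass**0.6, added in quadrature with a fixed system
    # term.  The 0.55" free-air median is an assumed placeholder; the
    # 0.38" system term is the budget total quoted above.
    def delivered_fwhm(free_air, system, zenith_deg):
        airmass = 1.0 / math.cos(math.radians(zenith_deg))
        return math.hypot(free_air * airmass ** 0.6, system)

    for z in (0, 30, 45, 60):
        print(f'z = {z:2d} deg: {delivered_fwhm(0.55, 0.38, z):.2f}"')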

  Chris Stubbs suggested that there is a tradeoff between field of
view and image quality; for any given science project, one needs a
figure of merit that folds in both (see the illustrative sketch below).
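
  No such figure of merit was agreed on at the meeting, but purely as
an illustration of the tradeoff, one candidate is survey speed for
point sources, scaling as etendue (collecting area times field solid
angle) over the square of the delivered seeing; the 8.4 m aperture
below is an assumption.

    import math

    # Toy survey-speed figure of merit: etendue / seeing**2.  The
    # aperture and the seeing**-2 scaling are illustrative assumptions.
    def survey_speed(aperture_m, fov_deg, seeing_arcsec):
        area = math.pi * (aperture_m / 2.0) ** 2   # collecting area, m^2
        omega = math.pi * (fov_deg / 2.0) ** 2     # field solid angle, deg^2
        return area * omega / seeing_arcsec ** 2

    print(survey_speed(8.4, 3.5, 0.65))   # 3.5 deg field, sharper images
    print(survey_speed(8.4, 4.0, 0.73))   # 4.0 deg field, degraded images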

  Abi Saha presented a simulator he has developed for the LSST; a
series of fields are given priorities which include information on
zenith angle, the time since the field was last observed, positions
of the Sun and the moon, and so on.  This will be very useful for
testing cadence models for LSST, and determining whether they meet
specific science goals.  It is still in progress; he needs to add a
model for the weather, the ability to observe in different filters,
further hooks for setting field priorities for specific science
drivers, and so on. 
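
  A minimal sketch of the kind of priority scoring such a simulator
performs; the weighting function and numbers here are made up for
illustration, and are not the rules in Abi's code.

    import math

    def angular_sep(ra1, dec1, ra2, dec2):
        # Great-circle separation in degrees (inputs in degrees).
        r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
        c = (math.sin(d1) * math.sin(d2)
             + math.cos(d1) * math.cos(d2) * math.cos(r1 - r2))
        return math.degrees(math.acos(max(-1.0, min(1.0, c))))

    def score(field, now_mjd, moon):
        # Higher score = observe sooner.  Weights are illustrative only.
        airmass = 1.0 / math.cos(math.radians(field["zenith_deg"]))
        staleness = now_mjd - field["last_mjd"]      # days since last visit
        moon_sep = angular_sep(field["ra"], field["dec"],
                               moon["ra"], moon["dec"])
        return staleness - 2.0 * (airmass - 1.0) + 0.05 * moon_sep

    fields = [
        {"ra": 150.0, "dec": -30.0, "zenith_deg": 20.0, "last_mjd": 52720.0},
        {"ra": 210.0, "dec": -10.0, "zenith_deg": 55.0, "last_mjd": 52715.0},
    ]
    moon = {"ra": 90.0, "dec": 20.0}
    best = max(fields, key=lambda f: score(f, 52723.0, moon))
    print("next field: ra =", best["ra"], "dec =", best["dec"])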

  Nick Kaiser pointed out that Pan-STARRS has found that a single
  observing mode (which Abi's simulator currently uses) is not
  adequate for all science tasks.  Whether LSST can get away with a
  single observing mode is a question we have not yet tackled
  adequately.

  We spent most of the first day in a series of four groups, each
tasked with exploring a different science area: asteroid science, the
variable universe, weak lensing, and stellar populations/astrometry.
At 3 PM, we reconvened, and heard reports from these panels.

  Their task was as follows (modified from the <a href=http://astro.princeton.edu/~dss/LSST/lsst-general/msg.76.html>minutes</a>
of the February SWG phonecon):

  1- Describe the science goal for LSST in this area in some
quantitative detail.  This should also include a discussion of where
the field might be in 8-10 years.

  2- List the requirements on the instrument and the data management
system to accomplish the goal (more on this below). 

  3- Come up with a specific observing plan assuming that your program
could use all the telescope time it needed, over a several-year
baseline.  For this, assume that the telescope has the ability to
reach R = 24.0 for a 5 sigma detection of point sources in 20 sec,
with a seven square degree field of view.

  4- Consider what fraction of the science goals would be reached with
the following strawman observing plan (which we can and should explore
further), using two different modes:
	     
   <a
href=http://www.astro.Princeton.EDU/~ivezic/talks/AAS201lsst.ps>Ivezic's
cadence</a>, which gives an interlocking set of fields observed twice
in two bands (r and another band) in 15-minute chunks, going to
roughly 24th magnitude in each chunk.  Repeating this gives a very
deep exposure in r, and somewhat less deep exposures in 3 other bands
(for the sake of argument, let's suppose they are g, i, and z).  50%
of the LSST time would be taken up in this mode.

  Deep pointings of ~1 hour (made up of many 20-second exposures,
without offsetting the telescope), going to r=26, repeated ~4 times
per year in each of 2 bands (say r and i), in a region centered on the
ecliptic.  This covers a total of 5000 square degrees.
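
  As a sanity check on the r=26 figure: for background-limited
stacking, N co-added exposures gain 1.25 log10(N) magnitudes over a
single exposure.  The idealized estimate below ignores overheads and
systematics, which is why real stacks fall short of it.

    import math

    # Idealized coadd depth for the deep-pointing mode: one hour of
    # 20-second exposures, background-limited.  Real stacks fall short
    # of this ideal; the strawman quotes r ~ 26.
    single_depth = 24.0          # 5-sigma depth of one 20 s exposure
    n = 3600 // 20               # number of exposures in one hour
    print(f"ideal stacked depth: r = {single_depth + 1.25 * math.log10(n):.1f}")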

  On item (2) above, what are the desiderata?  (A sketch of one way
to record these follows the list.)

       The area of sky imaged at any given time.
       The total area of sky to be covered.
       The depth and dynamic range needed in a single exposure.
       The depth and dynamic range needed in stacked exposures.
       Length of individual exposures.
       Requirements on slew time.
       The requirements on seeing, PSF, and pixel size: uniformity of
          PSF, aberrations of PSF, all as a function of wavelength.
       The filters needed, and on what cadence.
       The need, if any, to stack the data.
       The photometric accuracy needed (both relative and absolute).
       The astrometric accuracy needed (both relative and absolute).
       Tails of the astrometric and photometric error distributions.
       The cadence of observations needed (very different for moving
          objects and, e.g., distant galaxies).  Should the cadence
          be dynamic?
       Requirements on sky darkness and photometricity.
       Requirements on the speed of data reduction needed, and the
          nature of the measured quantities.
       Auxiliary data needed (e.g., follow-up spectroscopy,
          observations at non-optical wavebands, and/or a priori
          calibrating data).
       Specialized data analysis tools needed to carry out the
          science.
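
  One way to make these desiderata actionable, sketched below, is a
common record for each science group to fill in.  The field names
simply mirror the list above; this is not an agreed schema.

    from dataclasses import dataclass, field

    # Hypothetical requirements record mirroring the desiderata list.
    @dataclass
    class ScienceRequirements:
        program: str
        total_area_deg2: float                 # total sky coverage
        single_visit_depth_mag: float          # 5-sigma, point sources
        coadd_depth_mag: float
        filters: list = field(default_factory=list)
        cadence_days: float = 1.0
        dynamic_cadence: bool = False
        photometric_accuracy_mag: float = 0.02
        astrometric_accuracy_mas: float = 100.0

    # Example entry; the numbers are placeholders, not agreed values.
    nea = ScienceRequirements("NEA search", 20000.0, 24.0, 24.0,
                              filters=["r", "i"], cadence_days=3.5)
    print(nea)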

*****************The Variable Universe************************

Chris Stubbs led the discussion.  Variables can be divided into
periodic and aperiodic (GRB afterglows, microlensing, supernovae,
novae, etc.), and further classified by the timescale and amplitude of
their variability.  Note that periodic variables need not be sampled
at the Nyquist rate; the MACHO experience was that period-folding
gives superb light curves even with quite sparse sampling (see the
illustration below).  The important thing is to properly sample the
timescale/amplitude space.
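
  A small illustration of the period-folding point, using a synthetic
light curve (the period, amplitude, and sampling below are made up):

    import numpy as np

    # Fold a sparsely, irregularly sampled light curve on a known
    # period: far below the Nyquist rate, yet the folded curve is
    # clean.  All numbers are synthetic, purely for illustration.
    rng = np.random.default_rng(0)
    period = 0.731                              # days (made up)
    t = np.sort(rng.uniform(0.0, 365.0, 150))   # ~150 visits in a year
    mag = (15.0 + 0.3 * np.sin(2.0 * np.pi * t / period)
           + rng.normal(0.0, 0.02, t.size))
    phase = (t % period) / period               # fold on the period
    order = np.argsort(phase)
    print(np.column_stack([phase[order], mag[order]])[:5])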

  LSST can have an important synergy with other missions, including
  LISA: finding the optical counterparts to gravitational wave sources
  (which include close binary stars and merging supermassive black
  holes).  LISA is sensitive to variability on timescales from 1
  second to 3 hours, a nice match to LSST.

  Among the topics discussed:
   -If we expect the community to follow up LSST variable object
  discoveries, we cannot have a proprietary period on the data; let's
  make it public as soon as possible.

   -Looking for planetary transits requires millimag differential
    photometry.  This led to a lively discussion of what the telescope
    can deliver.  All agreed that 1-2% ``absolute'' photometry should
    be doable, where ``absolute'' refers to the fact that it is
    fundamentally difficult to put standard stars on an absolute flux
    scale in erg/s/cm^2/A.  Zeljko Ivezic in particular pointed out
    that with 1% photometry, one can separate giants from dwarfs using
    their broad-band colors (cf. <a href=http://xxx.lanl.gov/abs/astro-ph/0211562>Helmi et al.</a>).  Doing
    interesting variable object science with LSST will require not
    only good photometry in the mean, but also an error distribution
    that is accurately Gaussian in the tails; this is not obvious.

  The variability folks are eager to observe under any conditions,
  including when the moon is up and when there is cirrus.  ``As long
  as no rain is falling on the primary...''

  There was a general consensus that the cosmological supernova game
  will be largely over by the time LSST comes on line, and therefore
  there is no strong need for follow-up of more than a tiny fraction
  of the supernovae that the telescope would discover.  An interesting
  exception is strongly lensed supernovae (perhaps 1/1000?); knowing
  the light curves of supernovae allows one to determine the
  magnification directly, thus understanding the lensing geometry in
  detail.

*********************Stellar Populations/Astrometry*****************

Dave Monet led the discussion.  The discussion was perhaps a bit
premature, as a separate LSST meeting on stellar populations science
was to take place immediately following this one.

  This group identified three science projects for which LSST is
  particularly well suited:

  -The structure of the Galactic halo, both photometrically and via
   proper motions.  To get to the horizontal branch at 100 kpc, one
   has to go to roughly 23rd magnitude.  To do proper motions to 10
   km/s at 10 kpc, one needs to get to 1-2 mas/year.  

  -The white dwarf luminosity function.  This involves identifying
   white dwarfs from their colors and parallaxes to 100 pc, and would
   be part of a complete census of essentially all stars to 100 pc
   (limited to an extent at the bright end due to the saturation
   limits of the telescope; there was a suggestion of carrying out a
   full-sky survey, perhaps once per month to get parallax, using an r
   filter with a neutral density filter on top of it to get as bright
   as 13th magnitude).  

    Along the way, this would yield a complete census of L and T
   dwarfs, via z-band parallaxes, within the immediate solar
   neighborhood (to 10 pc).

  -Using astrometric wiggles to look for companions to stars.  

  All this requires an a priori survey of the static universe, in a
  series of filters (including u).  No specific recommendation was
  made on the full filter complement.

  These science goals mostly drive relative astrometry requirements.
  The one area that requires true absolute astrometry is getting
  orbits for asteroids: NEO's require a frame defined to 100 mas,
  while KBO's could make use of a considerably better reference frame.

  Another science goal suggested was to find young stars (and
  therefore trace star formation) using variability as a measure of
  stellar age.  This needs quantification: how low an amplitude of
  variability does one need to be able to measure?

***********************Weak Lensing*********************

Tony Tyson led the discussion on this.  The main science driver is to
study the mass distribution as a function of cosmic time.  With this,
the dark energy equation of state parameter w can be measured to of
order 1%, a capability that will *not* otherwise be in place a decade
from now.

  Given cosmic variance on large scales and shot noise on small
  scales, one wants to limit systematics from uncorrected components
  of the shear at the level of 10^{-4}, on scales of 10 arcmin and
  larger.  This corresponds to a noise in the ellipticity of the PSF
  of 1% (uncorrected PSF ellipticity enters the shear correlations
  roughly quadratically, so e ~ 1% gives e^2 ~ 10^{-4}), due to jitter
  in the telescope, etc.  In order to control systematics, one needs
  heavily overlapping fields.

  Another approach to w is counting clusters via their lensing
  signature; this requires huge numbers of clusters, and pushes for a
  3 pi steradian survey.

  Other requirements discussed: seeing of 0.6-0.7'' (images with
  sufficiently worse seeing would not be used).  Using galaxies at z=3
  as the background puts a requirement on the number of such objects
  that are resolved per square degree.

  Multi-band photometry is needed in order to get photo-z's; something
  like B, R, i, z, and perhaps y.  Calibrating such a survey requires
  a priori spectroscopic redshifts for several thousand (or tens of
  thousands) of such galaxies; such surveys are likely to have been
  done by then.

  We finished the day with a brief discussion of follow-up surveys.
  Is LSST its own follow-up telescope?  No, not for sufficiently rare
  objects, which require near-IR, spectroscopy, or densely sampled
  light curves.  

  We need a design reference mission!

*******************************Wednesday***************************

  We continued the presentations from the working groups.  Al Harris
discussed the NEA (near-Earth asteroid) search problem.  He started
with some general comments:
  The NEA folks are asking for roughly half a dozen visits to a given
field per month.  While NEA's are best found at opposition, the subset
of these that are Potentially Hazardous Asteroids (PHA's)
systematically avoids opposition.  PHA's are defined as those objects
whose orbits come within 0.05 AU of the orbit of the Earth (this
distance is called the Minimum Orbital Intersection Distance, or
MOID).  Ted Bowell has been carrying out simulations of the
distribution of PHA's on the sky; they tend to bunch up close to the
Sun, and are concentrated close to the ecliptic.  Thus one wants to
survey the ecliptic, +/- 20 degrees, in a region +/- 120 degrees from
opposition.

  Al emphasized that the estimate of the hazard from NEA's has been
dropping rapidly; this is because the fraction of the > 1 km objects
remaining undiscovered has been shrinking, and because the estimate of
the damage caused by a major tsunami has gone down dramatically.  On
the former, roughly 90% of the >1 km objects should be discovered by
the time LSST is on the air.  The best estimates of the tsunami danger
as a function of impact diameter show no real cutoff in risk, so there
is no obvious diameter threshold (at, say, 100 meters) above which to
set goals for LSST.

  As for requirements, the NEO folks want:
   Absolute astrometry to 100 mas.
   Whatever image quality they can get; the smaller the PSF, the
	deeper they go.
   An object moving at 3 degrees per day (a typical upper limit for
        NEAs) moves about 0.13 arcsec per second, and so will smear
	appreciably in 10 seconds; they don't want exposure times much
	longer than that.
   Exposures in as wide a band as possible, ideally r or i.
   High photometric accuracy is *not* needed.  Useful data can be
 	taken in non-photometric and/or moony conditions.
   LSST is its own follow-up telescope, so no follow-up is needed on
	other telescopes.
   
  Ted Bowell then gave a presentation on his work on cadence for
PHA's.  He has carried out simulations of NEA's based on Morbidelli,
Jedicke et al.'s models of the population.  He showed that with LSST's
0.1 arcsec astrometry, the orbits can be tied down well enough to
recover the objects into the future (a year or more) with three visits
spaced roughly 3 days apart.  In particular, this is adequate to
ensure an error on the MOID of less than 0.01 AU, which is what you
want for determining whether an object needs to be followed up
further.  (This is good for a century or so; the MOID typically
changes by 0.02 AU per century.)

  The exact recommended cadence is still somewhat up in the air, but
requires a 'CR split', and then a revisit on a timescale of something
like 20 minutes in order to determine a local proper motion vector.
This should be repeated 3-4 days later, and again 3-4 days after that,
which ties things down pretty well (a sketch of the pattern follows).
The orbits are pretty insensitive to the '3-4 days' numbers.
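
  A sketch of that visit pattern; the one-minute CR-split separation
and the exact night offsets are placeholders.

    from datetime import datetime, timedelta

    # NEA discovery cadence as described above: a back-to-back 'CR
    # split' pair, a revisit ~20 minutes later for a local motion
    # vector, with the pattern repeated twice more at 3-4 day
    # intervals.  Spacings are placeholders, not a fixed prescription.
    def nea_visits(first_visit):
        visits = []
        for night in (0, 3, 7):                        # nights 0, +3, +7
            start = first_visit + timedelta(days=night)
            visits += [start,                          # CR split, exp. 1
                       start + timedelta(minutes=1),   # CR split, exp. 2
                       start + timedelta(minutes=20)]  # motion vector
        return visits

    for v in nea_visits(datetime(2003, 3, 17, 2, 0)):
        print(v.isoformat())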

  He then examined how long a given PHA remains visible.  As it
orbits, it can become too faint, go too far south, or get too close to
the Sun.  Luckily, he showed that the typical PHA (indeed, 97% of
PHA's with D > 200 m) stays visible for more than 30 days, appreciably
longer than the few-day arc described above.
 
  There was a general feeling (which needs to be checked properly)
that Daniel Eisenstein's <a href=http://astro.princeton.edu/~dss/LSST/lsst-general/msg.73.html>cadence</a> is compatible with what
Ted described. 

  Gary Bernstein then described what is needed to do KBO science.
This is a different regime; "bright" KBO's, those to 24th mag, are
interesting, but going 2-3 mags fainter would yield very large numbers
of these objects (10^6?) for detailed dynamical and population
studies.

  So what he needs is exposures of an hour or so (to r=27), roughly 3
times over a year or so, to get orbits.  Observations in 2 bands (r
and i?) allow colors to be measured (KBO's have featureless spectra,
and therefore one color is adequate).  If 10% of LSST time is devoted
to this, one can sparse-sample the ecliptic plane +/- 20 degrees,
where most known KBO's reside. 

  These 1 hour integrations would be divided up into many 30-sec
integrations, so this is a great way to sample the variable universe
on those timescales. 

  Gary has posted these thoughts more formally at <a
href=http://astro.princeton.edu/~dss/LSST/lsst-general/msg.100.html>LSST-general
#100</a>. 

  The other people who want to go faint are the weak lensing folk.
There is little point in going much fainter than 26.5; fainter
galaxies tend to be too small to be useful for lensing.  It is unclear
whether the KBO program and the weak lensing programs fit well
together. 

**************************Miscellaneous; summing up***************

  Chris Stubbs made a strong call for having no proprietary period on
the LSST data.  We (mostly) agreed that this was a good thing, but
needed clarification on what exactly this means.  Does this mean that
LSST should commit to making the images available immediately?  What
about the catalogs?  What commitment do we have to the quality of
these catalogs?  Chris will think about these things, and put
something down in writing to clarify this.  (Editor's note: a first
draft of this can be found <a href=http://astro.princeton.edu/~dss/LSST/lsst-general/msg.101.html>here</a>.)

  Each of the working groups was charged with putting down on paper a
summary of their deliberations, working towards a summary science case
and requirements.  It was clear that in some cases, there remain
appreciable lists of to-do's; these need to be written out as well.
These should be posted to lsst-general, even early drafts.

  We finished the meeting with a discussion of whether it all fits
together.  We have to go through the numbers carefully, but there was
definitely the feeling that the telescope is somewhat oversubscribed.
The idea of having a "pre-survey", in which the "static universe" is
surveyed over 3 pi ster in a series of filters to something like 25th
mag, was resurrected; this then can be used as a template against
which to look for variable objects.   The stellar populations folks
also want to go fairly deep over 3 pi ster in a variety of bands, the
weak lensing folks need 5-6 bands deep (although not over the entire
sky), KBO's need deep exposures... This is all before we get to what
we think of as the main LSST observing mode, namely 10-20 sec
exposures rapidly covering the entire sky in 2 filters.  We were not
able to put together a final plan that satisfies all these desires.
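
  A crude placeholder for that careful accounting: tally the time
fractions actually quoted at the meeting (50% for the Ivezic cadence,
10% for KBO deep fields) and see what is left for everything else.

    # Crude time-budget tally.  Only the 50% and 10% figures were
    # quoted; the deep ecliptic pointings, the pre-survey, and the
    # rest still need numbers before we know whether it all fits.
    requests = {"Ivezic cadence": 0.50, "KBO deep fields": 0.10}
    committed = sum(requests.values())
    print(f"committed: {committed:.0%}, remaining: {1.0 - committed:.0%}")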


