Subject: Minutes of January 12/13 LSST SWG meeting

From: strauss@astro.princeton.edu

Submitted: Mon, 24 Jan 2005 09:19:01 -0500 (EST)

Message number: 295

	Minutes of LSST SWG meeting, January 12-13, 2005
	Held at the San Diego AAS meeting

Minutes by Michael Strauss, with input from Richard Green. 

Summary and action items:
  We focussed on synergy and complementarity between Pan-STARRS and
  LSST, and concluded that while many of the issues can be expressed
  purely as a matter of etendue, many others depend critically on
  detailed design issues, which require real work to understand.  
    Our next goal is to go through the LST science drivers one by one,
  to understand the extent to which they will be addressed by Pan-STARRS and
  other missions before LST sees first light.  Assignments to lead
  this effort were as follows:
	NEAs: Al Harris
	KBOs: Gary Bernstein
	Galactic Structure: Dave Monet
	Variability: Zeljko Ivezic
	Supernovae: Phil Pinto
	Weak Lensing: Daniel Eisenstein and Tony Tyson

************************

Attending, January 12: Mould, Rockosi, Wolff, Monet, Lupton, Kaiser,
Stubbs, Pinto, Green, Strauss, Tonry, Garnavich, Chambers, Ivezic,
Harris

Attending, January 13: Strauss, Harris, Chambers, Green, Tyson,
Sweeney, Ivezic, Eisenstein, Pinto, Kahn, Lupton, Monet

Due to last-minute scheduling conflicts and changes, we ended up
meeting on two successive days rather than one, in order to make sure
the maximum number of people could attend.  Apologies again for the
confusion!  The theme of this meeting, and indeed the focus of much of
the SWG's efforts now, is the relationship between the LST and the
Pan-STARRS 4 project.

 Before going on, a note on nomenclature.  In our DRM document, we
awkwardly referred to the 'LSST' to mean any system delivering an
etendue of order 250 deg^2 m^2, while the '8.4m LSST' referred to the
specific monolithic telescope design with that etendue.  It is now
politically correct, when referring to the generic notion of a
large-etendue system, to use the term 'Large Survey Telescope', or
LST.  Now the term 'LSST' can consistently refer to the 8.4m telescope
that the LSST Corporation is planning to build.  Hopefully this will
avoid further confusion.

  Pan-STARRS 4 (PS 4) is of course the University of Hawaii project to
build four 1.8m telescopes, each with a 3-degree field of view.  It is
currently scheduled to see first light in 2008.  Pan-STARRS 1 is an
initial prototype system, being put on the summit of Haleakala on
Maui, which is planned to see first light in early 2006.  With an
etendue of order 50 deg^2 m^2, PS 4 is certainly not an implementation
of LST per se, having of order 1/6 the etendue of the LSST.  However,
it will be able to fulfill some fraction of the LST science goals (see
the documents at
http://pan-starrs.ifa.hawaii.edu/project/science/precodr.html), and
the SWG now has the job to clarify what that fraction is.  For some
science cases, simply being able to cover the sky 6 times faster, or to
go a factor of ~sqrt(6) fainter in flux (i.e., ~1 mag deeper), is the
whole story.  But some science output is not directly proportional to
etendue.  For example, one cannot make up for a smaller aperture simply
by integrating longer on moving objects, and faint enough KBOs (which
move ~1" per hour) are simply out of the reach of Pan-STARRS.  That is,
there sometimes is a
threshold in etendue (or telescope size), below which a given science
goal is unreachable (in 10 years of operation; we all agreed that LST
science drivers are unlikely to be valid for much longer than that).
Another example is the discovery of extremely rare objects, e.g.,
strongly lensed supernovae, which are simply undiscoverable below a
certain etendue (although for this specific example, it was suggested
that the way to go is to monitor a large number of known clusters with
a big telescope and modest field of view).  Other science goals, for
example, the inventory of Earth-crossing asteroids above a certain
size threshold, might largely be done by PS4 before LST sees first
light, as described in our DRM document.  What we need to do is go
science case by science case, and try to quantify each of these
things.
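
  As a purely illustrative back-of-envelope check on those two points
(my own numbers, not anything computed at the meeting), the sketch
below works out the depth gain implied by a factor-of-6 etendue
advantage in the background-limited regime, and the exposure time at
which trailing begins to erode the gain from integrating longer on a
slowly moving KBO.  The seeing value is an assumption chosen only for
illustration.

    from math import log10, sqrt

    # Depth gain from a factor-of-6 etendue advantage, assuming
    # background-limited imaging and the same sky coverage per unit
    # time: S/N at fixed flux scales as sqrt(6), so the limiting flux
    # improves by sqrt(6), i.e. about 1 magnitude.
    depth_gain_mag = 2.5 * log10(sqrt(6.0))          # ~0.97 mag

    # Rough trailing limit for a slow mover: once the object drifts by
    # about one seeing FWHM during the exposure, integrating longer
    # stops paying off at the same rate (numbers are illustrative only).
    kbo_rate_arcsec_per_hr = 1.0                     # typical KBO motion
    seeing_fwhm_arcsec = 0.7                         # assumed image quality
    max_useful_exposure_hr = seeing_fwhm_arcsec / kbo_rate_arcsec_per_hr

    print(depth_gain_mag, max_useful_exposure_hr)    # ~0.97 mag, ~0.7 hr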

  Before getting into the specific science drivers, there was a strong
sense in the group that not only are Pan-STARRS and LSST strongly
complementary, but that this group really can help promote both
projects by stressing that complementarity.  For example, being able
to test LSST software on existing Pan-STARRS data would make a lot of
sense.  There is a great deal of overlap between the two projects,
especially in science goals, software, data distribution, and
photometric and astrometric calibration, and communication between the
two projects (as was exemplified by top people from both projects at
this meeting) is healthy and should be encouraged.  With full funding
not yet available for either project, it is more important than ever
that all possible synergies between the projects be explored.

  Speaking of funding, it was pointed out that the nominal budgets for
LSST and Pan-STARRS (including operations) were roughly in the ratio of
their respective etendues, i.e., about 6:1.  We were surprised by
this, given, e.g., the very different costs the two projects assume
for their respective cameras.  These budgets don't allow for synergy
between the two projects, and if some of the software development
costs can be shared between the two, there is real savings to be had
here.  The Pan-STARRS folks emphasized that they would have a much
better idea on the costs for their cameras over the next year, as PS1
comes together and construction on PS4 progresses.

  With that said, here is a summary of the science drivers discussed:

****************

  Near-Earth Objects: This subject has gained recent prominence again,
as the tragedy of the Indian Ocean tsunami has reminded people how
devastating an ocean impact might be.  Pan-STARRS is planning a
focussed survey for NEAs, using a white-light (i.e., very broad)
filter, concentrating on the 'sweet spots' (i.e., pointing down the
ecliptic) for an hour or two just after dusk and before dawn.  These
necessitate pushing to high airmass (up to 2.5), and, with the broad
filter, require the use of an atmospheric dispersion corrector (ADC).
It was pointed out that an ADC is much easier to put on a 1.8m
telescope than an 8.4m, and indeed, the LSST doesn't currently plan an
ADC.  Al Harris and Nick Kaiser have independently carried out
calculations of how complete Pan-STARRS will be.  They need to
cross-check their results, but more-or-less agree that Pan-STARRS
should be ~80% complete for >300 meter objects after 10 years of
operation (and I think Al estimated that a 300-meter oceanic impact
would set up a tsunami roughly as bad as that in the Indian Ocean).
To the extent this is the case, the LSST may well be in the happy
position of having to worry less about the NEA aspect of its mission,
and adjust its observing cadence, filter choice, and observing time
accordingly.  In particular, the "Universal Cadence" described in the
Appendix to our DRM document is largely driven by the NEA problem (and
resolving the confusion from main-belt asteroids), and it may well be
rethought in this context.  Al has taken this further, asking how the
completeness of the NEA survey scales, assuming, e.g., an LST focussed
purely on the NEA program, or an LST that does the sweet spot program
plus a routine program (i.e., one capable of doing all the other LST
science goals) that is limited to airmass < 1.5.  He found that the
latter works very well, and suggests a particularly good synergy
between LSST and Pan-STARRS.  Al was given the task of writing this up
further, and scoping the problem out.

  Daniel Eisenstein stressed that the NEA search was perhaps the only
driver that really pushed hard for repeated imaging of the entire available
sky as often as possible.  If the NEA pressure is taken off by
Pan-STARRS, perhaps the right way to proceed would be to get full-sky
imaging once a month, and then spend the rest of the month with much
faster cadence of, say, 10% of the sky (which is better for SN,
various types of variables, etc.).  Through the year, the 10% of the
sky on which one would concentrate would of course rotate around,
yielding close to uniform full-sky coverage at the end of the survey
(indeed, at the end of one year).

  Speaking of cadence, Kem Cook has been working hard on a
sophisticated simulator/observing planner for LSST, which, given
priorities from a series of science programs, and a history of
weather/seeing from a given site, decides which field to observe next.
One could imagine incorporating a strategy such as the one Daniel
suggested, and seeing what the planner does with it.  One could also
use this simulator to explore the derivative of science output with
respect to etendue.  The Pan-STARRS folks are eager to try the
simulator out for
their system as well.  One could similarly take PS4 as a given, and
re-optimize the rules of the simulator, given the existence of PS4
data.  Eventually, it would be great to incorporate the existence of
both surveys into the software, and optimize the science over the sum
of the two projects.
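
  To make the kind of logic such a planner embodies concrete, here is a
deliberately simplified greedy-scheduler sketch; it is not a
description of Kem Cook's actual simulator, and the field names,
weights, and scoring terms are invented for illustration.  At each
decision point it scores every candidate field from its science
priority, the time since its last visit, and its current airmass, and
observes the best one.

    # Hypothetical greedy observing planner (illustration only).
    from dataclasses import dataclass

    @dataclass
    class Field:
        name: str
        priority: float            # science weight from the program mix
        airmass: float             # current airmass (would come from an ephemeris)
        last_visit: float = -1e9   # time of last observation, in hours

    def score(f: Field, now: float) -> float:
        """Higher is better: reward priority and staleness, penalize airmass."""
        staleness = now - f.last_visit
        return f.priority * staleness / f.airmass**2

    def next_field(fields: list[Field], now: float) -> Field:
        best = max(fields, key=lambda f: score(f, now))
        best.last_visit = now
        return best

    # Toy usage: two visible fields, one pointing chosen per time step.
    fields = [Field("ecliptic sweet spot", priority=2.0, airmass=2.3),
              Field("routine survey field", priority=1.0, airmass=1.1)]
    for t_hr in (0.00, 0.01, 0.02):
        print(t_hr, next_field(fields, t_hr).name)

A real planner would of course also fold in weather, seeing history,
slew time, and the moon, which is exactly what the simulator is meant
to handle.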

**********************

Kuiper belt objects: Our KBO pundit, Gary Bernstein, was unable to
make the meeting.  I will contact him and ask him to scope the problem
(again with the emphasis on what Pan-STARRS will accomplish, and what
LSST can do that Pan-STARRS cannot).  It was pointed out that going
faint in KBOs allows one to go to *smaller* objects, in particular
to the point where erosion dominates.  This is important, because it
is thought to be these objects which we're seeing (indirectly) in dust
disks like that of Beta Pic.  


************************

Galactic structure studies: Sadly, we only discussed this topic for a
short time before the end of the meeting; here are some of the issues
which we touched upon.  Dave Monet emphasized that for proper motion
studies, observing the entire sky roughly once per month was good
enough; both LSST and Pan-STARRS will do this (albeit to different
depths).  If one is looking for astrometric wiggles due to planets,
then the cadence with which one visits a given (large) area of sky
sets the minimum orbital time to which one is sensitive to these
wiggles.  There are important questions here about how astrometric
errors will scale with aperture size, which PS1 will go a long way to
address.  Finally, the depth that one goes to in a full-sky survey
certainly scales with etendue, and we need to quantify this in terms
of the low-luminosity end of the stellar populations that each project
can probe.

***********************

Variable objects: Here again the discussion was brief.  One can make a
generic argument that the volume of (sky area) x (number of filters) x
(depth) x (cadence) parameter space one can cover with a survey scales
with etendue.  One potential real
advantage of the Pan-STARRS approach for variable objects is to get
simultaneous colors (if a different filter is put in front of each
camera).  This could prove invaluable for understanding the physical
nature of objects that vary on fast (minutes) timescales.
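
  As a minimal sketch of that generic scaling argument (all numbers
below are invented placeholders, not LSST or Pan-STARRS design
values): the total observing time a survey can deliver is divided
among sky area, filters, and per-visit exposure time, so at fixed
depth the number of visits per field per filter scales directly with
the field of view, and hence with etendue at fixed aperture.

    # Back-of-envelope survey budget, illustrating the trade among sky
    # area, number of filters, cadence, and depth (placeholder numbers).
    def visits_per_field(fov_deg2, survey_area_deg2, n_filters,
                         exposure_s, overhead_s, hours_per_year, years):
        """Mean number of visits per field, per filter, over the survey."""
        total_s = hours_per_year * 3600.0 * years
        visits_total = total_s / (exposure_s + overhead_s)
        pointings_per_pass = survey_area_deg2 / fov_deg2
        return visits_total / (pointings_per_pass * n_filters)

    # Halving the field of view (i.e. the etendue at fixed aperture and
    # fixed single-visit depth) halves the visits available per field.
    print(visits_per_field(7.0, 20000, 6, 30, 5, 3000, 10))   # ~180
    print(visits_per_field(3.5, 20000, 6, 30, 5, 3000, 10))   # ~90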

  Another point, which neither Pan-STARRS nor LSST would address, is
the value of having aperture distributed in longitude around the
Earth, for 24-hour synoptic coverage of variable objects.

  Another issue that came up was related to searching for very rare
types of quickly varying objects, 'rare flashers'.  There was a
consensus that one could find such objects equally well by repeated
observations over a relatively small fraction of the sky at any given
time as over the full sky (a la Daniel Eisenstein's cadence suggestion
above); a manifestation of the ergodic hypothesis.

*************************

  Supernovae:

  The first thing I learned from the discussion was that spectroscopy
is much less important for SN physics than I had previously thought.
Spectroscopic features come from the SN photosphere, which is due to
the material in the outer "8 teaspoons" (in Phil Pinto's memorable
phrase) of the exploding star, while the full light curve is due to
the physics of the star as a whole.  LST will of course be a light
curve factory. (Spectroscopy of course remains important for
determining redshifts, and the adequacy of photometric redshifts for
cosmological studies has not yet been explored thoroughly).  As
discussed in the DRM document, even an etendue of 250-300 deg^2 m^2 is not
adequate to get good cadence SN light curves in all filters at high
redshift as part of the routine survey; we proposed there a special
campaign of longer exposures to yield this (the routine survey will do
an adequate job for SN up to z~0.7).  In Pan-STARRS, the problem is
correspondingly worse, and they plan a focussed campaign on SN over a
limited area of sky, 1300 deg^2 (indeed, their design reference
mission refers, to a much greater extent than for LSST, to a whole
series of different observing modes).  Full-sky coverage for SN
doesn't have an obvious strong science driver (although perhaps
directional dependence in w would suggest otherwise; see below), and
the relationship between PS4 and LSST comes down to sheer numbers of
objects.  Phil emphasized the importance in supernova physics of
finding new and unanticipated types of supernovae; here, the larger
one's survey (i.e., the larger the etendue), the larger one's
sensitivity to rare finds.  To use SN for cosmology, one wants to make
sure that there are enough objects in each bin of redshift and
supernova type to reach (and presumably explore) the systematics floor
(perhaps caused in part by the unusual types of SN mentioned above).
The problem, of course, is that we do not yet have a clear idea of
what that floor is, making it difficult to estimate a priori how many
SN with excellent light curves are "enough".

  Phil Pinto will explore these ideas further for the SWG.

*************************

  Weak lensing:

  Like supernovae above, weak lensing is a study where the systematics
floor comes into play.  But here, the systematics are not
astrophysical in nature, but rather are due to the telescope system:
namely uncorrected and/or unmeasured anisotropies in the PSF.  Then
followed a fairly detailed discussion of where these systematics come
from, how they can be measured, and which system (LSST or PS4) would
be better at handling them.  It was pointed out, for example, that
PS4, which is f/4 rather than ~f/1, was much less sensitive than LSST
to collimation/mirror cell problems, which of course cause systematic
problems in the PSF.  The LSST side argues that a very large number of
exposures at a variety of rotator angles will average over many of
these problems.  The PS4 side responds that it gets its large number
of exposures by observing each field four times (once per telescope)
at each shot.  The LSST side then points out that this means
correspondingly more optics that needs to be controlled.  It was clear from
this discussion (and others) that there is much more to the question
of which project can do the science than simply etendue, and there was
a strong desire in the group to do some real, quantitative,
engineering studies of the different designs with these PSF issues in
mind.

  Nick Kaiser said that if systematics were not an issue, PS4 would be
able to do a superb job on cosmological parameters from weak lensing.
However, several people emphasized the need for independent, and
higher statistical significance, confirmations of whatever weak
lensing results come out of PS4.  Moreover, statistics are an issue
not just for the weak lensing itself, but also for the photo-zs needed
to determine where the galaxies are: if you don't go faint, your
photo-zs will be poor.

  Tony Tyson described work using images from Subaru to understand the
nature of PSF systematics from a modern 8-meter telescope.  He showed
results of a simulation, including a model for the systematics and the
way they scale down with multiple exposures, for the error on w from
one of the weak lensing statistical measures (not sure which one).
This is the sort of calculation which is particularly valuable for
this comparison, although again it all comes down to the details of
the model assumed for systematics.

A related question: what are the science drivers for full-sky coverage
in weak lensing?  In standard cosmologies, there is little
cosmological handle in the measurement of the lensing shear spectrum
at small l (<100).  The more sky that is covered to a given depth, the
better one's statistics, of course, and one is open to real surprises
(like angular dependence of w).
