Minutes of the Telecon of the LSST/SWG, December 9, 2002

Attending: Al Harris, Jeremy Mould, Sidney Wolff, Tony Tyson, Dave Monet, Gary Bernstein, Nick Kaiser, Peter Garnavich, Alan Stern, Fiona Harrison, Andy Connolly, Kem Cook, David Morrison, Michael Strauss, Mike Shara, Daniel Eisenstein, Jon Thaler (forgive me if we missed your name!)

Michael Strauss opened the telecon by asking Nick Kaiser to summarize the two recent PanSTARRS meetings. The first meeting dealt with the pipelines for PanSTARRS; the second focused on developing a Design Reference Mission (DRM), with emphasis on the asteroid problem. There was general consensus in favor of an all-sky multi-color survey in standard passbands, followed by narrow-field surveys to deeper magnitudes. There was not much demand for UV observations, and transient science was not strongly represented. Nick plans to make minutes and summaries of these meetings available to the SWG when they are prepared; in particular, he hopes to get out a conceptual design document by mid-January.

Several key players in developing NEA observing strategies attended the PanSTARRS meetings (including our own Al Harris). There was substantial discussion of observing strategies that use "sweet spots" for searching for NEAs. The sweet spots are ahead of and behind the Earth in its orbit, roughly 60-90 degrees from the longitude of the Sun and within 10 degrees of the ecliptic. These areas of the sky are observable only within 2-3 hours of dawn and dusk and lie at relatively high airmass (~2), so they are obviously not optimal for deep sky surveying. Modeling to compare the efficiency of various observing strategies, and the time each needs to reach a given level of completeness, continues; the different modelers appear to be in good contact and are comparing their results.

Harris reported that if one focuses only on the sweet spots (a total of 600 square degrees of sky), it would take 30-40 years to reach completeness for NEAs down to 300 meters. On the other hand, he pointed out that a full-sky NEA survey would find of order 50% of the 50-meter objects (i.e., comparable to Tunguska) over the 10 years of the survey: far from complete, but plenty for studying these objects scientifically. Harris also noted that recovery of asteroids would do well with a uniform cadence of observing a given area of sky once every five days, with an opportunity roughly once per month to observe a given area on two successive days.

The possibility of surveying selected areas of the sky, together with the asteroid searchers' preference for observing in a broad band (i.e., broader than the standard filters), raises the question of whether the LSST survey should carry out different programs sequentially: e.g., searching for asteroids near dawn and dusk and doing other surveys through the middle of the night. Monet noted that the asteroid sweet spots are well suited to parallax measurements; however, broadband measurements at relatively high airmass would introduce problems with atmospheric refraction.

Harris then described the debate currently in progress about the risk from asteroids smaller than 1 km (larger impacts are considered to be civilization-destroying events). We have adopted the goal of a complete NEA catalog down to 300 m, but there is nothing special about that value, and there is no obvious knee in the risk-versus-size plot there. The standard concern about 300-m-class impactors is the tsunamis they would generate if they were to hit the deep ocean.
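To put the numbers in perspective, here is a back-of-envelope sketch (my arithmetic, not part of the discussion) applying the order-of-magnitude scaling that the next paragraph summarizes:

    # Back-of-envelope only: wavelength of an impact-generated wave, using
    # wavelength ~ crater diameter ~ 10 x impactor diameter (the scaling
    # quoted in the discussion).
    for impactor_m in (50, 300, 1000):
        crater_m = 10 * impactor_m
        wavelength_km = crater_m / 1000
        print(f"{impactor_m:5d} m impactor -> wave wavelength ~ {wavelength_km:g} km")
    # Gives ~0.5, ~3, and ~10 km, versus thousands of km for earthquake-generated
    # tsunamis -- which is the crux of the argument summarized below.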
Among the various researchers there appears to be good agreement about what happens at the impact site in the deep ocean and about the wavelength of the wave generated: the wavelength is comparable to the size of the crater produced by the impact, which is of order 10 times the linear size of the impactor. There is substantial disagreement about what happens when the waves reach shallow water and whether or not they break in such a way as to mitigate the damage. Jay Melosh, for example, was quoted as saying that the extent of damage from a breaking wave depends strongly on its wavelength; impact-generated waves have much shorter wavelengths (up to 10 km) than earthquake-generated waves (thousands of km), implying that they will not be nearly as serious as previously thought. Reconciling the tsunami calculations will be important input for the NASA committee currently assessing NEA hazards, whose report is due in late spring of 2003. Engaging oceanographers to help resolve the differences seemed a fruitful strategy.

Dave Monet also attended the PanSTARRS meeting. He stressed that a great deal of work remains to prepare for both PanSTARRS and LSST in terms of synthesizing the results of existing astrometric catalogs. PanSTARRS will saturate at magnitude 15-16, which is about where existing catalogs (such as UCAC) start to lose their precision. There will therefore be much to do during the early stages of the new surveys to establish a grid of non-moving objects to serve as references. Since all stars move at the level of LSST accuracy (i.e., tens of milliarcsec over 5-10 years), galaxies or quasars may have to be used (although this may not be a serious issue for, e.g., weak lensing work). This will be a challenge at low Galactic latitudes! Other work that remains is to characterize the astrometric behavior of the Orthogonal Transfer CCDs that PanSTARRS will use, and to fully understand the degree-scale coherent astrometric offsets caused by atmospheric refraction; for both, Dave is waiting for data from the folks in Hawaii to analyze. Monet also stressed the tremendous astrometric science that PanSTARRS and LSST could do, including looking for astrometric wiggles from unseen companions. He emphasized the difficulty of what is involved: for every star, one wants to solve simultaneously for proper motion, parallax, wiggles, and atmospheric refraction.

Tony Tyson amplified on his e-mail (lsst-general #24) concerning the requirements for weak lensing science. He stressed the importance of controlling the PSF and of characterizing the jitter of the telescope by building a model that includes all the servo systems, etc. Drawing on his experience with current 4-m telescopes, he felt that a factor of 10 improvement in the control of systematics is readily achievable for a modern telescope designed with the appropriate set of requirements. Michael Strauss asked whether a factor of 10 is sufficient, and whether there is any cliff where the relationship between scientific return and cost/engineering feasibility changes slope; this will have to be determined as the design is developed. There was some discussion of whether a decrease in systematic effects by a factor of 10 necessarily translates into a factor of 10 improvement in one's ability to measure shear. For example, what is the difference between measuring the PSF shear as 10% +/- 1% and as 0% +/- 1%?
The point is that if the PSF shear is 10%, it is unlikely to be accurately spatially uniform, and there will be some scale below which one cannot measure it, giving an irreducible systematic error. If one has confidence that the systematics are closer to zero, these irreducible errors are likely to be much smaller (Tony, is that a correct way to explain it?).

Tyson briefly described simulations based on the Hubble Deep Fields, exploring the sensitivity of weak lensing measurements to seeing. He found a fairly steady loss of lensing sensitivity as the seeing worsens beyond 0.5". One advantage of going deep with the LSST is that one measures galaxy shapes at quite faint outer isophotes, at radii of 1" or more, where seeing has less of an effect. Gary Bernstein has developed an optimal filter for measuring galaxy shapes, in which one fits a matched filter of an elliptical Gaussian to each image. Tony also claimed (for reasons I, Michael, am still a bit unclear on) that it is important to measure the PSF shear bias on individual exposures (as opposed to the final stacked image); this requires a large A Omega, so that each exposure goes deep enough to provide enough PSF stars for the measurement.

Andy Connolly reported that he is running simulations with various filter sets to determine the optimum choice for photometric redshifts. He is looking at standard filter sets along with double-wide filters. Using double-wide filters would again introduce problems with atmospheric refraction, decrease the hour angle over which observations are possible, etc. One real possibility is the inclusion of a Y filter, centered at 1 micron (i.e., beyond the z filter), which could provide an important lever arm for photo-z's.

We briefly discussed the rationale behind the minimum exposure time (10 seconds?) for variability/transient surveys. Tony Tyson asked whether one could set such a timescale by considering the relationship between the mass, luminosity, and crossing time of a black hole accreting at the Eddington rate at a given redshift; Fiona Harrison will think through the numbers. We also briefly discussed the use of color and spectral information to characterize variable/transient objects; it is clear that variability in a single photometric band is not adequate to identify objects unambiguously, a point Fiona already made strongly at the Princeton meeting.

If the density on the sky of interesting variable objects is more than one per pointing (roughly 7 square degrees), then LSST automatically becomes its own light-curve follow-up telescope. If interesting transients occur less frequently than that, then we can consider deliberately returning to fields with recognized transients in order to build up their light curves. For example, Mike Shara described the potential for identifying novae in stars that have been stripped from galaxies. About 300 per year should be observable with LSST in extragalactic space out to the distance of M87 (corresponding to 24th magnitude); this is roughly one every 100 square degrees. Good sampling of the light curves is desirable (once per night over a month), and some limited multi-color information may be required to distinguish novae from orphan afterglows. Because these objects are standard candles, this would allow one to trace the intergalactic stellar distribution. Mike will write up this science in more detail; it is a good test case for a well-defined transient science driver.
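As a quick sanity check (again my arithmetic, not part of the discussion), one can compare the surface density Shara quoted with the roughly 7 square degree pointing mentioned above:

    # Are stripped-star novae common enough on the sky for the main survey to
    # build their light curves automatically, i.e., more than one per pointing?
    nova_density = 1.0 / 100.0   # "roughly one every 100 square degrees" (per year)
    pointing_area = 7.0          # square degrees per pointing, as quoted above
    per_pointing = nova_density * pointing_area
    print(f"~{per_pointing:.2f} novae per pointing")   # ~0.07
    # Even counting a full year's worth of novae at once, this is well below the
    # one-per-pointing threshold, so deliberate returns to fields with recognized
    # novae would be needed to sample their light curves once per night.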
*Dwarf* novae in our own Galaxy could be recognized in a survey going only to 21st magnitude; this would vastly increase our knowledge of these objects.

This type of science raises two generic questions for developing a DRM. Are there certain events (e.g., something that goes from invisibility to x magnitudes above invisibility) that should trigger frequent returns to a specific field? And what is the requirement for contemporaneous color information? Both types of observations would necessarily sacrifice sky coverage for more complete characterization of specific fields. As the survey continues, our knowledge of the various types of interesting transient objects will grow, and the objects we decide are worth following up will change; we certainly need to keep the observing strategy flexible.

There was no report from the KBO or supernova groups, and a written report on transients is being developed.

Minutes by Sidney Wolff and Michael Strauss