Subject: LSST WL dialog

Michael,

Thanks for your response. This is useful, since my memo was informal and intended really only for weak lensers. Here are some answers to your questions. I'm sure others will want to comment too.

< You have assumed Gaussian seeing in the simulations that you've
< done. As you know, real seeing is more "interesting" than that; the
< SDSS, for example, has modeled it as the sum of two Gaussians, plus a
< power-law tail. It will be interesting to see what effect this has on
< shape measurements of galaxies. Any thoughts on this?

There are two issues: shear dilution and shear bias. Regarding shear dilution correction, Gary and others have worked on this; it is important for accurate correction for the seeing (the correction is a multiplier of the observed shear) and depends on the seeing and the star magnitude. However, the purpose of the simulations was to calculate the first-order effects of seeing on LSST weak shear measurement. For that purpose a Gaussian PSF is OK. As mentioned, in the future we must use a more realistic PSF if we simulate effects which depend on the tail. The first-order rounding of a resolved galaxy does not depend on the tail. I can repeat the HST HDF simulations for any PSF. I will post a writeup of these simulations to lsst-general this weekend. The need for them was identified at one of the Tucson LSST workshops.

The shear bias introduced by an elliptical PSF is far more dangerous and hard to control. This is the source of all our systematics and must be adequately modeled in our simulations. We have not done that yet for LSST. When we do, we will have to use realistic PSFs; the tail of a non-circular PSF has interesting effects. Good reading: astro-ph/0107431
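For anyone who wants to reproduce the first-order rounding effect, here is a minimal sketch (my own illustration, not the actual simulation code; the galaxy and seeing widths are placeholder values): it convolves an elliptical Gaussian "galaxy" with a circular Gaussian PSF and measures the ellipticity dilution from second moments.

    import numpy as np

    n = 129  # odd grid so the center falls on a pixel
    y, x = np.mgrid[:n, :n] - n // 2

    def gaussian(sx, sy):
        """Elliptical Gaussian centered on the grid."""
        return np.exp(-0.5 * ((x / sx) ** 2 + (y / sy) ** 2))

    def ellipticity(img):
        """e = (Qxx - Qyy)/(Qxx + Qyy) from centroided second moments."""
        t = img.sum()
        xc, yc = (img * x).sum() / t, (img * y).sum() / t
        qxx = (img * (x - xc) ** 2).sum() / t
        qyy = (img * (y - yc) ** 2).sum() / t
        return (qxx - qyy) / (qxx + qyy)

    gal = gaussian(3.0, 2.0)   # toy galaxy, intrinsic e ~ 0.385
    psf = gaussian(2.0, 2.0)   # circular Gaussian "seeing" (assumed width)

    # FFT convolution; for Gaussians the second moments simply add,
    # so the observed ellipticity is diluted by Q_gal/(Q_gal + Q_psf).
    obs = np.real(np.fft.ifft2(np.fft.fft2(gal) *
                               np.fft.fft2(np.fft.ifftshift(psf))))

    print("intrinsic e:", round(ellipticity(gal), 3))  # ~0.385
    print("observed  e:", round(ellipticity(obs), 3))  # ~0.238

The recovered dilution factor is just the multiplier of the observed shear referred to above; note that for this first-order calculation the Gaussian PSF suffices, since nothing here depends on the PSF tail.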
< I'm not sure I understand quite all the items listed in your list of
< requirements to be considered. But a few comments. In particular, as
< you mention later in the document, we'll be doing the shape
< measurements off the stacked images, perhaps several hundred for a
< given spot on the sky. Anisotropies and spatial variations of the PSF
< should be averaged over to a certain extent in this stacking; this
< will allow us presumably to relax any criteria on the uniformity and
< isotropy of the PSF in a single exposure.

Exactly. I was perhaps unclear. I was scaling from what we now get with this process to what I think we could do with LSST if we reduced the systematics in individual exposures by a factor of 10. That is, if the WL science requirements for LSST are ten times smaller shear, then we must reduce the systematics in each image by a factor of 10. This can be done through a combination of better optical performance, more exposures, and rotating the camera (which we do not do currently). See more on this below.

< You have an item (#6) having to do with astrometry. Can you explain
< that further? How do astrometric errors feed into errors on shear?

Faint field stars are used for finding the position-dependent PSF on each image. The astrometry has to be good enough to find these stars on each exposure, i.e. not too stringent. If they are crowded then their PSF is most likely compromised. Astrometry also enters in the auto-detection of USNO standards for the initial distortion-correction fits. Again, not too stringent. My guess is that 0.1 arcsec at 3 sigma should do.

< You mention that existing 4-meter telescopes show substantial PSF
< ellipticity (several percent), and say that this reflects the fact
< that these telescopes were not designed for such precision work. What
< in fact is the leading cause of this ellipticity? Improper mirror
< support? Jitter in the guiding? Poor tracking? All of the above? I
< agree in principle that if we put our minds to it, we can minimize
< these effects in the LSST, but we need to know exactly what we're
< fighting against.

Chuck will have some comments here. Generally it is all of the above, plus position-dependent astigmatism introduced by the combination of defocus and non-flatness of the CCDs. Our experience is that wild excursions (the tails will kill you) of anything can introduce PSF ellipticity, sometimes position-dependent. That is the reason, even with the much improved and controlled optics of LSST, to discard bad exposures. We do that now, and it really helps. It is also one reason not to have long exposures. (i.e. imagine what we could do if we could excise the bad moments within a 100 sec exposure.) We must take a systems approach to this with the LSST. That is where the jitter requirements come from.

*********Science Drivers*****************

< This may all exist in the literature (and some references would be
< useful), but let me see if I understand the gist of your discussion
< here. There are two separate weak-lensing studies that can be done
< with LSST-like data: the power spectrum of the cosmic shear, and
< direct counts of (presumably virialized) overdensities. The former is
< presumably done on larger scales than the latter. The former is
< directly predictable from linear theory (if one knows one's selection
< function), while the latter can be predicted using a combination of
< some version of Press-Schechter, and N-body simulations. Is this
< correct? It seems that most of the discussion here (and also in Joe
< Hennawi's work) has been focused on the latter; is it the more
< powerful, or simply better-understood of the two?

Right. Wayne Hu's and Uros Seljak's papers discuss the convergence power spectrum (see 0012087), Joe Mohr et al. have been addressing cluster counting (see 0112502 and 0103049), and Joe Hennawi's simulations for DLS and LSST have been focusing on the summed effects, i.e. ray tracing through full structures, virialized and otherwise (see 0203061 for an analytic treatment). For a general dark energy LSST WL discussion see 0209632. Also 0210134 for a discussion of the effects of the non-virialized tail.

Generally, cluster counting is less demanding than cosmic shear in terms of shear error, because you are surveying for compact cores of approx 10^14 M_solar virialized clusters, which create shears in excess of 0.02 at several arcmin radius. As I pointed out, however, we will have to understand the mass function better with LSST, and that will require going down to less massive clusters, with a correspondingly more stringent shear systematics requirement. And what if there also exists a family of much less compact mass concentrations? The power spectrum of convergence, particularly in the low-l region, requires measuring much smaller shears, and that is why more effort must be put into that measurement. Systematics identification via E-B decomposition is one tool. Overlapping fields is an operational tool for large-scale shear systematics control.
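As a back-of-the-envelope check on the 0.02 figure (my own illustration; the velocity dispersion and distance ratio are assumed values typical of such a cluster, not numbers from the memo): model the cluster as a singular isothermal sphere, for which the tangential shear is gamma(theta) = theta_E/(2 theta).

    import numpy as np

    c_km_s = 2.998e5      # speed of light [km/s]

    # Assumed values: a ~1e14 M_sun cluster has sigma_v ~ 600 km/s;
    # D_ls/D_s ~ 0.5 is typical for a z~0.3 lens and z~1 sources.
    sigma_v = 600.0       # velocity dispersion [km/s]
    dls_over_ds = 0.5

    # SIS Einstein radius: theta_E = 4*pi*(sigma_v/c)^2 * D_ls/D_s [rad]
    theta_e_rad = 4.0 * np.pi * (sigma_v / c_km_s) ** 2 * dls_over_ds
    theta_e_arcsec = np.degrees(theta_e_rad) * 3600.0   # ~5 arcsec

    # Tangential shear of an SIS: gamma(theta) = theta_E / (2*theta)
    for theta_arcmin in (1.0, 2.0, 5.0):
        gamma = theta_e_arcsec / (2.0 * theta_arcmin * 60.0)
        print(f"theta = {theta_arcmin:4.1f} arcmin: gamma ~ {gamma:.3f}")

This gives gamma ~ 0.04 at 1 arcmin and ~0.02 at 2 arcmin, consistent with the statement above; the low-l convergence power spectrum involves shears an order of magnitude or more below this.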
< In this context then, you make reference to error ellipsoids in the
< Omega_L/Omega_M plane (or should it be the w/Omega_M plane? The full
< relevant parameter space is not completely clear to me, what with the
< MAP data about to arrive...). Exactly which of the above analyses
< gives rise to these constraints is unclear.

Both planes are relevant, and each sets its own shear requirements. The Omega_L - Omega_M plane will be explored before LSST, but LSST can contribute precision. The science question there is whether WL agrees with SN + CMB. WL vs z can supply independent constraints. The Omega_L - w plane is more interesting and is where the physics is. The error ellipsoid for cluster counts is orthogonal to those for CMB and SN (see 0112502, fig 5). CMB alone will not constrain w usefully. This is where we require better than a few percent precision in w, to distinguish QCDM from a variant of LCDM which is degenerate in CMB.

< In this context, there is reference to the relevant depths that are
< sensitive to various effects, especially w. It is stated that the
< dark energy manifests itself most strongly for z<0.8; is this simply a
< statement that at higher redshifts, Omega_M approaches 1, and all
< models become degenerate?

No. Comoving volume is most sensitive to w (for a given Omega_m), more sensitive than luminosity or angular diameter distance. So is the lensing kernel. The product is what we measure in cluster counts. This peaks at z < 1 for z_source < 2. See astro-ph/0209632.
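To see the volume sensitivity numerically, here is a small sketch (my own illustration, not from the memo; it assumes a flat universe with Omega_m = 0.3, H0 = 70, and constant w, and omits the lensing-kernel factor discussed above):

    import numpy as np
    from scipy.integrate import quad

    c = 2.998e5      # km/s
    H0 = 70.0        # km/s/Mpc (assumed)
    Om = 0.3         # assumed; flat, so Omega_DE = 1 - Om

    def E(z, w):
        """Dimensionless Hubble parameter for flat wCDM, constant w."""
        return np.sqrt(Om * (1 + z) ** 3 +
                       (1 - Om) * (1 + z) ** (3 * (1 + w)))

    def dV_dz_dOmega(z, w):
        """Comoving volume element per unit z per steradian [Mpc^3/sr]."""
        Dc, _ = quad(lambda zz: c / (H0 * E(zz, w)), 0.0, z)
        return Dc ** 2 * c / (H0 * E(z, w))

    for z in (0.2, 0.5, 0.8, 1.2, 2.0):
        r = dV_dz_dOmega(z, -1.0) / dV_dz_dOmega(z, -0.8)
        print(f"z = {z:3.1f}: dV(w=-1)/dV(w=-0.8) = {r:.3f}")

The ratio pulls away from unity with redshift at the several-percent level; folding in the lensing kernel for z_source < 2 is what pushes the net sensitivity of cluster counts to peak at z < 1.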
< In this context, there is discussion of a lensing survey to 26th
< magnitude. What exactly does this magnitude limit mean? One of
< course can't carry out detailed shape measurements for objects with
< 5-sigma detections, so I am guessing that this number refers to a
< faintest galaxy at which the shape can be measured. Is this right?
< Can one translate this into something like a 5-sigma limit for point
< sources (which can be related to exposure time given the seeing, the
< telescope aperture, and sky brightness).

Since we are not going much deeper than current surveys (rather we are going much wider), we can scale from what we know. High ellipticity precision is useless for an individual source, which has an intrinsic ellipticity error of 0.3. We use many such sources to average down this random error (it falls roughly as 0.3/sqrt(N) for N galaxies). The required galaxy mag limit was obtained by scaling from what we now have, assuming 0.5 arcsec seeing. (next memo)

***************Weak lensing in 10 years*************************

< The estimate of where the lensing field will be in ten years doesn't
< mention VISTA. I do know that they plan to start with a near-IR
< survey, but isn't it likely that on the 10-year timescale, they will
< expand to wide-field optical imaging as well?

I didn't mention projects which would not be analyzed by the end of the decade. I talked with Jim Emerson. I think they would be lucky to have an optical camera follow-on to their IR camera by 2007, given what they are planning to do in the IR. Any optical survey would run past the end of the decade. In any case an LSST with etendue of 260 would be over ten times faster.

*************LSST's contribution************************

< You mention that we will want to go to 26th mag in 5 bands for
< photometric redshifts, and then deeper in one band. Does one gain
< anything for shape measurements if one co-adds the data in different
< bands? These are independent photons, and at least for early-type
< galaxies, substructure and color gradients are unlikely to cause much
< trouble.

We have explored this. Surely LSST will have better imaging in all bands. But our strategy has been to observe only in R when the seeing is good, so we find that co-adding bands helps little. But if a lot of multiband observing is done with LSST for some other reason then you are right, I suspect, and we would gain by combining the shear measured in all bands. Color-z requires less overall exposure than good shear measurement. Although we have checked to see that our mass maps are the same in various bands, if we spend most of the time in a red band then we can use the best of those data for the shear measurement.

< You say that for the optical shape measurements, you want just to
< co-add the best-seeing images. I've heard Nick Kaiser quoted
< second-hand that the optimal thing to do is to convolve each image by
< its effective seeing, then co-add, then deconvolve with the coaddition
< of the seeing kernels. This then doesn't involve throwing away any
< data, but I've never seen a demonstration of its efficacy.

This is sort of what we were discussing above. Nick is right, assuming the other bands or images are not of drastically poorer image quality and we can trust their shear. We want to use all the data if we can. Such a weighting scheme is optimal for detection and photometry, but if you know some particular images have a systematic shear error then it is best not to use them for the shear calculation.
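Here is a toy demonstration of the scheme as quoted (my own reading of it, with Gaussian PSFs and a noiseless smooth scene as placeholders): re-convolve each exposure by its own PSF, co-add, then divide out the coadded kernel sum_i |P_i|^2 in Fourier space.

    import numpy as np

    n = 129
    y, x = np.mgrid[:n, :n] - n // 2

    def gauss(cx, cy, sigma):
        return np.exp(-0.5 * ((x - cx) ** 2 + (y - cy) ** 2) / sigma ** 2)

    # Smooth toy sky: two extended blobs (placeholder scene)
    sky = gauss(-20.0, -10.0, 2.0) + 0.5 * gauss(15.0, 20.0, 2.5)

    # Per-exposure PSFs of different widths (assumed Gaussian seeing)
    psfs = [gauss(0.0, 0.0, s) for s in (1.5, 2.0, 3.0)]
    psfs = [p / p.sum() for p in psfs]

    P = [np.fft.fft2(np.fft.ifftshift(p)) for p in psfs]
    F = np.fft.fft2(sky)
    images = [np.real(np.fft.ifft2(F * Pk)) for Pk in P]  # noiseless exposures

    # Re-convolve each image by its own PSF (a matched filter), co-add,
    # then deconvolve by the coadded kernel where it is numerically safe.
    num = sum(np.fft.fft2(im) * np.conj(Pk) for im, Pk in zip(images, P))
    ker = sum(np.abs(Pk) ** 2 for Pk in P)
    recon = np.real(np.fft.ifft2(np.where(ker > 1e-8, num / ker, 0.0)))

    print("max |recon - sky|:", np.abs(recon - sky).max())  # tiny

In this noiseless limit the reconstruction is essentially exact and no data are thrown away. With noise, though, the division amplifies noise at spatial frequencies where every PSF is small, and the best-seeing exposures dominate the kernel at high frequencies anyway; that, plus the systematic-shear worry above, is why in practice we still down-weight or reject the worst images.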
< You say that the spectrum of over-densities at very large angular
< scales is a useful diagnostic. I lost you; are you speaking of the
< power spectrum of fluctuations (see my questions above)? Or are you
< speaking of virialized structures? I lost you here. In the
< following paragraph, you talk about measurement of cosmic variance;
< aren't you simply talking about the measurement of P(k) of the shear
< signal on the largest scales? You do make reference to a non-Gaussian
< signature, but I am not sure I understand what you are referring to
< here.

I was saying that if there were also some much larger and more diffuse mass structures, they would have escaped our attention in current shear surveys. See 0012087 and 0210134. I am imagining that nature is more complicated than our current CDM model. If so, these things are not the same as cosmic variance.

< When you talk about strong lensing, you say that there is only one
< system in which multiple arcs from a single background galaxy is
< known; I assume it is the one you've worked on with Ed Turner and Wes
< Colley (I don't remember its telephone number). But there are *lots*
< of clusters now with substantial arc systems (Abell 370 and 2218 being
< among the classics), even if none is as clean as the one you had in
< mind. Aren't those also useful for this purpose?

Yes, 0024+1654. But no, these other classic multi-arc clusters are not as useful. To get a unique solution we need many images of the same source. Or, to be fair, we just haven't figured out how to do the strong-lens inversion for cases where there are single images of many sources; that is a different optimization problem. Surely in some of those clusters there are multiple images of individual sources, but we have not figured out how to separate them from single images of other sources without ultra-faint spectroscopy. Nothing like having a morphologically distinct source image.

*********************System requirements********************

< I remain unclear on some of the technicalities of what you write
< here. I don't know what 'unmodeled detector-focal place errors' are,
< which you say are a major effect in PSF shear systematics. I also did
< not understand the statement that "the main benefit of a stack of 200
< images will come from getting better source shear measurements." I
< lost you; what is a 'source shear measurement'?

(1) That should have read 'unmodeled detector-focal plane errors' (i.e. the errors between the focal "plane" and the detector surface). These errors can be modeled to some accuracy. With LSST at f/1.2 we will have to do this to approx 3 microns. With enough stars in many exposures that should be possible.

(2) We must measure the shear of thousands of source galaxies without adding ellipticity noise of more than about 10% of their intrinsic ellipticity of 0.3 per source galaxy, and without introducing any systematic ellipticity over the whole sample larger than the specified floor. The large stack of selected excellent images allows you to control the latter, via averaging over uncorrected shear errors (uncorrelated per-exposure errors average down roughly as 1/sqrt(N), a factor of ~14 for N = 200) and also via the ability, in a large data set, to uncover systematic trends (like shear error proportional to some telescope parameter). Algorithms will have to be written to automate this.

< You then say that the LSST should have PSF ellipticity a factor of
< 10 below that of the 4m. See my question above; what exactly is
< driving the ellipticity on the 4m, and how can we be confident that we
< can get rid of these systematics on the LSST?

See above answer. Chuck Claver, Jim Burge, and I have been working on this, and it appears that it will be possible to control LSST optics decenter, tilt, and system defocus to the level required in a redundant way. Look for an engineering memo on this soon. It was to have been an SPIE paper.

TT
December 5 2002