The Large Synoptic Survey Telescope (LSST) is under construction in Chile. To ensure the delivered system meets the science goals, the project defines a set of performance metrics and constantly monitors system performance by evaluating the metrics against their requirements. In this paper, we describe the latest updates to the comprehensive tool set we have developed for evaluating LSST system performance, which we collectively refer to as the LSST integrated model, and recent work on utilizing these tools for system verification. We also broaden our set of performance metrics and introduce an integrated-étendue-based metrics framework, which is useful not only for system verification but also for mitigation and optimization. Most of the major metrics currently being monitored fit under this framework, including image quality, system throughput, and the single-visit point-source 5σ detection limit. We also monitor the Point Spread Function (PSF) ellipticity, which is not part of this metrics framework but is an output of the integrated model.
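For concreteness, the single-visit point-source 5σ depth mentioned above is commonly evaluated with the scaling relation used in LSST design studies (Ivezić et al. 2008); the Python sketch below implements that relation. The numerical inputs are illustrative placeholders, not the project's official band constants.

```python
import math

def m5_depth(c_m, m_sky, theta_eff, t_vis, k_m, airmass):
    """Single-visit 5-sigma point-source depth, following the standard
    LSST scaling relation (Ivezic et al. 2008 style).

    c_m       -- band-dependent system throughput constant (assumed value)
    m_sky     -- sky brightness [mag / arcsec^2]
    theta_eff -- effective seeing FWHM [arcsec]
    t_vis     -- visit exposure time [s]
    k_m       -- atmospheric extinction coefficient [mag / airmass]
    airmass   -- airmass of the observation
    """
    return (c_m
            + 0.50 * (m_sky - 21.0)
            + 2.5 * math.log10(0.7 / theta_eff)
            + 1.25 * math.log10(t_vis / 30.0)
            - k_m * (airmass - 1.0))

# Illustrative r-band inputs (placeholders, not official project values):
print(round(m5_depth(c_m=24.4, m_sky=21.2, theta_eff=0.8,
                     t_vis=30.0, k_m=0.10, airmass=1.2), 2))
```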
To optimize the observing strategy of a large survey such as the LSST, one needs an accurate model of the night-sky emission spectrum across a range of atmospheric conditions and from the near-UV to the near-IR. We have used the ESO SkyCalc Sky Model Calculator to construct a library of template spectra for the Chilean night sky. The ESO model includes emission from the upper and lower atmosphere, scattered starlight, scattered moonlight, and zodiacal light. We have then extended the ESO templates with an empirical fit to the twilight sky emission as measured by a Canon all-sky camera installed at the LSST site. With the ESO templates and our twilight model, we can quickly interpolate to any sky position and date and return the full sky spectrum or surface brightness magnitudes in the LSST filter system. Comparing our model to all-sky observations, we find typical residual RMS values of 0.2-0.3 magnitudes per square arcsecond.
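The interpolation step can be illustrated with a minimal sketch: a toy template library tabulated on a single airmass axis (the real library also spans lunar and twilight conditions), interpolated linearly and then synthesized through a crude r-band box filter. All curves, the grid, and the omitted photometric zeropoint are placeholder assumptions.

```python
import numpy as np

# Toy template library: sky spectra tabulated on a coarse airmass grid.
# The real library also spans moon conditions and the twilight model;
# a single axis keeps this sketch short.
wave = np.linspace(300.0, 1100.0, 801)              # wavelength [nm]
airmass_grid = np.array([1.0, 1.5, 2.0, 2.5])
templates = np.array([x * np.exp(-(wave - 700.0) ** 2 / 2.0e5)
                      for x in airmass_grid])       # placeholder spectra

def sky_spectrum(airmass):
    """Linearly interpolate the template library to an arbitrary airmass."""
    i = int(np.clip(np.searchsorted(airmass_grid, airmass) - 1,
                    0, len(airmass_grid) - 2))
    f = (airmass - airmass_grid[i]) / (airmass_grid[i + 1] - airmass_grid[i])
    return (1.0 - f) * templates[i] + f * templates[i + 1]

def surface_brightness(spectrum, bandpass):
    """Synthesize a toy broadband surface brightness (zeropoint omitted)."""
    flux = np.trapz(spectrum * bandpass, wave) / np.trapz(bandpass, wave)
    return -2.5 * np.log10(flux)

r_band = ((wave > 552.0) & (wave < 691.0)).astype(float)  # crude r-band box
print(surface_brightness(sky_spectrum(1.3), r_band))
```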
KEYWORDS: Large Synoptic Survey Telescope, Device simulation, Stars, Galactic astronomy, Data modeling, Photons, Systems modeling, Solar system, Sensors, Atmospheric modeling
The LSST will, over a 10-year period, produce a multi-color, multi-epoch survey of more than
18000 square degrees of the southern sky. It will generate a multi-petabyte archive of images and
catalogs of astrophysical sources from which a wide variety of high-precision statistical studies can
be undertaken. To accomplish these goals, the LSST project has developed a suite of modeling and
simulation tools for use in validating that the design and the as-delivered components of the LSST
system will yield data products with the required statistical properties. In this paper we describe the
development and use of the LSST simulation framework, including the generation of simulated
catalogs and images for targeted trade studies, simulations of the observing cadence of the LSST, the
creation of large-scale simulations that test the procedures for data calibration, and use of end-to-end
image simulations to evaluate the performance of the system as a whole.
We describe the Metrics Analysis Framework (MAF), an open-source Python framework developed to provide a user-friendly, customizable, easily extensible set of tools for analyzing data sets. MAF is part of the Large Synoptic Survey Telescope (LSST) Simulations effort. Its initial goal is to provide a tool for evaluating LSST Operations Simulation (OpSim) simulated surveys to help understand the effects of telescope scheduling on survey performance; however, MAF can be applied to a much wider range of datasets. The building blocks of the framework are Metrics (algorithms to analyze a given quantity of data), Slicers (which subdivide the overall data set into smaller data slices relevant to each Metric), and Database classes (which access the dataset and read data into memory). We describe how these building blocks work together and provide an example of using MAF to evaluate different dithering strategies. We also outline how users can write their own custom Metrics and use these within the framework.
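To make the Metric/Slicer pattern concrete, here is a self-contained sketch. The class names and the fiveSigmaDepth column echo MAF's conventions but are simplified stand-ins rather than the actual MAF API; the coadded-depth combination shown is a standard formula.

```python
import numpy as np

class Metric:
    """Schematic base class: a Metric reduces one data slice to a value."""
    def run(self, data_slice):
        raise NotImplementedError

class Coadded5SigmaDepth(Metric):
    """Example custom metric: coadded depth from per-visit 5-sigma depths,
    using the usual 1.25*log10(sum 10^(0.8*m5)) combination."""
    def run(self, data_slice):
        m5 = data_slice["fiveSigmaDepth"]
        return 1.25 * np.log10(np.sum(10.0 ** (0.8 * m5)))

class SkyPixelSlicer:
    """Schematic slicer: hand each sky pixel's visits to the metric."""
    def __init__(self, pixel_ids):
        self.pixel_ids = pixel_ids
    def slices(self, visits):
        for pid in np.unique(self.pixel_ids):
            yield pid, visits[self.pixel_ids == pid]

# Usage: evaluate the metric over every slice to build a sky map of values.
visits = np.array([(24.5,), (24.7,), (24.2,), (24.9,)],
                  dtype=[("fiveSigmaDepth", float)])
slicer = SkyPixelSlicer(np.array([0, 0, 1, 1]))
metric = Coadded5SigmaDepth()
print({pid: round(metric.run(s), 2) for pid, s in slicer.slices(visits)})
```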
The Large Synoptic Survey Telescope will record approximately 2.5×10^6 images over a 10-year interval, using six
optical filters, with a wide variety of cadences on time scales of seconds to years. The observing program will be of
such complexity that it can be realized only with heavily automated scheduling. The LSST OpSim team has devised a
schedule simulator to support development of that capability. This paper addresses the complex problem of how to
measure the success of a schedule simulation for realization of science objectives. Tools called Merit Functions evaluate
the patterns and other properties of scheduled image acquisitions.
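The flavor of a Merit Function is easy to convey with a toy example: score one field's visit history by how closely its revisit gaps match a target interval. The Gaussian scoring below is purely illustrative and is not the project's actual merit definition.

```python
import numpy as np

def cadence_merit(visit_mjds, target_gap_days=3.5, tolerance_days=1.0):
    """Toy merit function: score how closely the gaps between successive
    visits to a field match a target revisit interval. Returns a value in
    [0, 1], with 1.0 meaning every gap hits the target exactly.

    The functional form (a Gaussian penalty on each gap) is illustrative;
    it is not the scoring used by the LSST merit-function tools.
    """
    gaps = np.diff(np.sort(visit_mjds))
    if gaps.size == 0:
        return 0.0
    penalties = np.exp(-0.5 * ((gaps - target_gap_days) / tolerance_days) ** 2)
    return float(np.mean(penalties))

# One field observed on an irregular schedule over two weeks:
print(cadence_merit([59000.1, 59003.2, 59007.9, 59010.6, 59014.1]))
```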
Extracting science from the LSST data stream requires a detailed knowledge of the properties of the LSST catalogs and
images (from their detection limits to the accuracy of the calibration to how well galaxy shapes can be characterized).
These properties will depend on many of the LSST components including the design of the telescope, the conditions
under which the data are taken and the overall survey strategy. To understand how these components impact the nature
of the LSST data, the simulations group is developing a framework for high-fidelity simulations that scale to the volume
of data expected from the LSST. This framework comprises galaxy, stellar and solar system catalogs designed to match
the depths and properties of the LSST (to r=28), transient and moving sources, and image simulations that ray-trace the
photons from above the atmosphere through the optics and to the camera. We describe here the state of the current
simulation framework and its computational challenges.
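The photon-level approach can be caricatured in a few lines: draw a Poisson number of photons for a source of given magnitude, scatter each photon with a Gaussian stand-in for the atmosphere-plus-optics PSF, and bin the photons onto pixels. The zeropoint and PSF here are placeholder assumptions; the production simulator samples full SEDs and traces every optical surface.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_star_image(mag, m_zp=28.0, fwhm_arcsec=0.7, pixel_scale=0.2,
                        stamp=32):
    """Toy photon-by-photon image of a point source: draw the expected
    number of detected photons from the magnitude, scatter each photon by
    a Gaussian 'atmosphere + optics' PSF, and accumulate onto a pixel grid.
    The zeropoint m_zp (magnitude giving one photon per visit) and the
    Gaussian PSF are placeholder assumptions.
    """
    n_photons = rng.poisson(10.0 ** (-0.4 * (mag - m_zp)))
    sigma_pix = fwhm_arcsec / 2.355 / pixel_scale
    xy = rng.normal(loc=stamp / 2.0, scale=sigma_pix, size=(n_photons, 2))
    image, _, _ = np.histogram2d(xy[:, 0], xy[:, 1],
                                 bins=stamp, range=[[0, stamp], [0, stamp]])
    return image

img = simulate_star_image(mag=22.0)
print(img.sum(), img.max())
```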
We present an innovative method for photometric calibration of massive survey data that will be applied to the
Large Synoptic Survey Telescope (LSST). LSST will be a wide-field ground-based system designed to obtain
imaging data in six broad photometric bands (ugrizy, 320-1050 nm). Each sky position will be observed multiple
times, with about a hundred or more observations per band collected over the main survey area (20,000 sq.deg.)
during the anticipated 10 years of operations. Photometric zeropoints are required to be stable in time to 0.5%
(rms), and uniform across the survey area to better than 1% (rms). The large number of measurements of
each object taken during the survey allows identification of isolated non-variable sources, and forms the basis
for LSST's global self-calibration method. Inspired by SDSS's uber-calibration procedure, the self-calibration
determines zeropoints by requiring that repeated measurements of non-variable stars must be self-consistent when
corrected for variations in atmospheric and instrumental bandpass shapes. This requirement constrains both the
instrument throughput and atmospheric extinction. The atmospheric and instrumental bandpass shapes will
be explicitly measured using auxiliary instrumentation. We describe the algorithm used, with special emphasis
on the challenges of controlling systematic errors and on how such an approach interacts with the design of
the survey, and we discuss ongoing simulations of its performance.
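The heart of the self-calibration can be sketched as the linear least-squares problem m_obs = m_star + zp_exp, solved below by alternating averages. This is a deliberately simplified stand-in for the sparse global solve of an ubercal-style pipeline; the gray-extinction and bandpass-shape terms described above are omitted.

```python
import numpy as np

def self_calibrate(obs_mag, star_id, exp_id, n_iter=50):
    """Solve m_obs = m_star + zp_exp in a least-squares sense by
    alternating averages. A simplified stand-in for the sparse global
    solve of an ubercal-style pipeline; every star and exposure index
    is assumed to appear at least once.
    """
    n_star, n_exp = star_id.max() + 1, exp_id.max() + 1
    m_star, zp = np.zeros(n_star), np.zeros(n_exp)
    for _ in range(n_iter):
        # Best-fit star magnitudes given the current zeropoints:
        m_star = (np.bincount(star_id, obs_mag - zp[exp_id], n_star)
                  / np.bincount(star_id, minlength=n_star))
        # Best-fit zeropoints given the current star magnitudes:
        zp = (np.bincount(exp_id, obs_mag - m_star[star_id], n_exp)
              / np.bincount(exp_id, minlength=n_exp))
        zp -= zp.mean()  # pin down the degenerate overall offset
    return m_star, zp

# Toy usage: two exposures of the same two stars, 0.1 mag apart in zeropoint.
obs = np.array([20.0, 21.0, 20.1, 21.1])
stars = np.array([0, 1, 0, 1])
exps = np.array([0, 0, 1, 1])
m_star, zp = self_calibrate(obs, stars, exps)
print(zp)   # -> approximately [-0.05, +0.05]
```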
The Large Synoptic Survey Telescope (LSST) will continuously image the entire sky visible from Cerro Pachón
in northern Chile every 3-4 nights throughout the year. The LSST will provide data for a broad range of science
investigations that require better than 1% photometric precision across the sky (repeatability and uniformity)
and a similar accuracy of measured broadband color. The fast and persistent cadence of the LSST survey
will significantly improve the temporal sampling rate with which celestial events and motions are tracked. To
achieve these goals, and to optimally utilize the observing calendar, it will be necessary to obtain excellent
photometric calibration of data taken over a wide range of observing conditions, even those not normally
considered "photometric". To achieve this, it will be necessary to routinely and accurately measure the full
optical passband that includes the atmosphere as well as the instrumental telescope and camera system. The
LSST mountain facility will include a new monochromatic dome illumination projector system to measure the
detailed wavelength dependence of the instrumental passband for each channel in the system. The facility will
also include an auxiliary spectroscopic telescope dedicated to measurement of atmospheric transparency at all
locations in the sky during LSST observing. In this paper, we describe these systems and present laboratory
and observational data that illustrate their performance.
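A short sketch shows why the per-channel passband measurement matters: the full optical passband is the product of the instrumental response and the atmospheric transmission, and a change in the atmosphere shifts the synthetic magnitude of a red source by an amount these measurements let one correct. All curves below are toy placeholders.

```python
import numpy as np

wave = np.linspace(320.0, 1050.0, 731)   # the LSST ugrizy range [nm]

# Placeholder curves standing in for the two measured components:
# a monochromatic-projector scan of one instrumental channel, and an
# atmospheric transmission spectrum from the auxiliary telescope.
inst = np.where((wave > 552.0) & (wave < 691.0), 0.85, 0.0)  # toy r channel
atmo = 0.98 * np.exp(-0.1 * (wave / 550.0) ** -4)            # toy Rayleigh-like

total = inst * atmo   # the full optical passband for this channel

def synthetic_mag_shift(sed, band_a, band_b):
    """Magnitude shift of a source between two passband realizations
    (e.g. two nights' atmospheres) -- the quantity that per-channel
    passband measurements allow one to correct."""
    fa = np.trapz(sed * band_a * wave, wave) / np.trapz(band_a * wave, wave)
    fb = np.trapz(sed * band_b * wave, wave) / np.trapz(band_b * wave, wave)
    return -2.5 * np.log10(fa / fb)

red_sed = (wave / 550.0) ** 2            # toy red source spectrum
print(synthetic_mag_shift(red_sed, total, inst * atmo ** 2))
```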
A survey program with multiple science goals will be driven by multiple technical requirements. On a ground-based
telescope, the variability of conditions introduces yet greater complexity. For a program that must be largely autonomous
with minimal dwell time for efficiency, it may be quite difficult to foresee the achievable performance. Furthermore,
scheduling will likely involve self-referential constraints, and appropriate optimization tools may not be available. The
LSST project faces these issues, and has designed and implemented an approach to performance analysis in its
Operations Simulator and associated post-processing packages. The Simulator has allowed the project to present detailed
performance predictions firmly grounded in the engineering design and measured site conditions. At present, the
Simulator is in regular use for engineering studies and science evaluation, and planning is underway for evolution to an
operations scheduling tool. We will describe the LSST experience, emphasizing the objectives, the accomplishments and
the lessons learned.
Science studies made with the Large Synoptic Survey Telescope will reach systematic limits in nearly all cases, and requirements for accurate photometric measurements are particularly challenging. The rapid cadence and pace of the LSST survey will be exploited to monitor the stability and uniformity of photometric data using celestial sources. A new technique using a tunable laser is being developed to calibrate the wavelength dependence of the total telescope and camera system throughput. Spectroscopic measurements of atmospheric extinction and emission will be made continuously, allowing the broad-band optical flux observed in the instrument to be corrected to flux at the top of the atmosphere. Calibrations with celestial sources will be compared to instrumental and atmospheric calibrations.
The SDSS project has taken 5-band data covering approximately 3000 square degrees, amounting to roughly 4 TB of data. This has been processed through a set of image-processing pipelines, and the resulting catalogues of about 100 million objects have been used for a number of scientific projects.
We discuss our software infrastructure and outline the architecture of the SDSS image-processing pipelines. To process this volume of data, the pipelines have to be robust and reasonably fast; because we have been interested in looking for rare objects, the number of outliers due to deficiencies in the data and bugs in the software must be small. We have found that writing the codes has been one of the harder and more expensive aspects of the entire survey.
KEYWORDS: Asteroids, Observatories, Space telescopes, Chemical elements, Contamination, Statistical analysis, Databases, Solar system, Photometry, Spectroscopy
We announce the first public release of the SDSS Moving Object Catalog, containing SDSS observations of 58,117 asteroids. The catalog lists astrometric and photometric data for moving objects observed prior to Dec 15, 2001, and also includes orbital elements for 10,592 previously known objects. We confirm that asteroid dynamical families, defined as clusters in orbital-parameter space, also strongly segregate in color space. Their distinctive optical colors indicate that the variations in chemical composition within a family are much smaller than the compositional differences between families, and they strongly support earlier suggestions that asteroids belonging to a particular family have a common origin.
The Sloan Digital Sky Survey (SDSS) imaging camera saw first light in May 1998, and has been in regular operation since the start of the survey in April 2000. We review here key elements in the design of the instrument driven by the specific goals of the survey, and discuss some of the operational issues involved in keeping the instrument ready to observe at all times and in monitoring its performance. We present data on the mechanical and photometric stability of the camera, using on-sky survey data as collected and processed to date.