Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784001 (2010) https://doi.org/10.1117/12.881075
This PDF file contains the Front Matter associated with the SPIE Proceedings volume 7840, including Title page, Copyright information, Table of Contents, Conference Committee listing, and Introduction.
Digital Earth Frameworks, Models, and Spatial Data Infrastructure
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784002 (2010) https://doi.org/10.1117/12.872294
The geospatial framework is an important foundation of Digital Earth. It consists of a spatial reference framework and geospatial data, provides a positional reference for spatial and non-spatial data, and enables the integration and sharing of many kinds of information sources. This article first introduces current methods of organizing spatial data, the most essential part of the geospatial framework, and, after analyzing their characteristics, summarizes their respective advantages and disadvantages. To manage geospatial framework data more effectively, the article then proposes a mixed-database organization model that integrates advanced database technologies such as RDBMS and a spatial database engine. On the basis of this database and the data partitioning described earlier in the paper, it organizes spatial data, including DLG, DOM, DEM, and thematic data, by layer, block, and map sheet, taking urban planning data as an example, and thereby achieves integrated management of these data. Finally, the proposed methods are implemented with a related data organization example that verifies their reliability and feasibility.
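The layer-and-block organization described above amounts to a key scheme: each dataset type (DLG, DOM, DEM, thematic) forms a layer, and each layer is cut into fixed-size blocks whose row/column index forms the storage key. A minimal sketch of that idea follows; the key format and block size are illustrative assumptions, not the paper's actual scheme.

```python
def tile_key(layer, x, y, block_size):
    """Storage key for layer-divided, block-divided spatial data:
    the layer name plus the row/column of the block containing (x, y).
    Key format and block size are illustrative assumptions."""
    col = int(x // block_size)
    row = int(y // block_size)
    return f"{layer}/{row}_{col}"
```

A spatial database engine over an RDBMS can then store and retrieve each block under such a key, which is what makes integrated management of heterogeneous layers possible.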
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784003 (2010) https://doi.org/10.1117/12.872295
Because it is tightly coupled to map projection, the planar data model cannot represent meso- and large-scale objects that span projection zones and spheres: map projection introduces data fissures, geometric deformation, and metric inaccuracy. Although the Global Discrete Grid data model can solve these problems, it is confined to the Earth's surface and cannot extend above or below it. The Spheroid Degenerated Octree Grid (SDOG), however, extends the grid to the whole spheroid space and establishes a new spatial reference framework for a 3D meso- to large-scale data model for Digital Earth and Earth system science. It can represent geographic objects that span projection zones and Earth-system objects that span spheres, and it can serve as a substitute for the current data model. Owing to its multi-hierarchy property, it can represent spatial objects in multi-resolution grids, with finer grids used for more precise representation.
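The regular case of octree subdivision over spheroid space can be sketched by bisecting the radial, latitudinal, and longitudinal extents of a cell. SDOG's defining feature, degenerating (merging) cells near the poles and the sphere's centre so that cell sizes stay comparable, is omitted here for brevity; this is only the non-degenerate case.

```python
def subdivide(cell):
    """Split one spherical-shell cell, given as (r, lat, lon) ranges,
    into 8 children by bisecting each dimension. This is the regular
    octree case only; SDOG's pole/centre degeneration is not modeled."""
    (r0, r1), (a0, a1), (o0, o1) = cell
    rm, am, om = (r0 + r1) / 2, (a0 + a1) / 2, (o0 + o1) / 2
    return [((ra, rb), (aa, ab), (oa, ob))
            for ra, rb in ((r0, rm), (rm, r1))
            for aa, ab in ((a0, am), (am, a1))
            for oa, ob in ((o0, om), (om, o1))]
```

Recursive application of this split yields the multi-resolution hierarchy the abstract refers to: each level halves the cell extent in every dimension.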
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784004 (2010) https://doi.org/10.1117/12.872296
Although Digital Earth (DE) has been adopted in nature conservation, its application in geoheritage conservation is still rather limited. Geoheritage sites are usually distributed over extensive areas, which poses great problems for practical conservation; Digital Earth is a useful tool for systematic spatial accounting and data management. The aim of this article is to apply DE to geoconservation. The Hexigten Global Geopark (HGG) in Inner Mongolia, China, which covers an area of 1783.58 km2, was chosen as the study area. The paper is composed of five sections. In the first section, we briefly review the origin of geoheritage and the development of geoconservation worldwide, and present the justifications for adopting DE. In the second and third sections, building on DE as a comprehensive, massively distributed geographic information and knowledge organization system, we develop a theoretical framework that can be applied to geoheritage surveying, resource appraisal, geoconservation planning, research, and public outreach. In the fourth section, the DE platforms Google Earth and Skyline Globe Pro are used as tools for geoheritage surveying and zoning in HGG.
Finally, we conclude that DE can so far be applied to geoheritage conservation only to a limited extent; however, features such as the VR Earth, geo-libraries, and digital atlases have great potential for geoconservation.
Min Li, Shihua Li, Xiaofang Liu, Fu Wang, Hong Jin
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784005 (2010) https://doi.org/10.1117/12.872297
Digital Earth rendering applications such as Google Earth and World Wind allow us to explore real information about the Earth's surface. Showing the surface in diverse detail requires high-resolution satellite imagery, but such imagery can be costly to obtain and is sometimes unavailable for the sites of interest. In this paper, we present a method that combines example-based super-resolution techniques with the image analogies framework to improve the visual quality of satellite images. Co-registered high-resolution and low-resolution satellite images of the same site serve as example pairs that form a super-resolution filter. The filter effectively improves the resolution of low-resolution satellite images; moreover, it preserves the coherence of the images and improves the performance of Digital Earth applications as well. The proposed method has been tested on World Wind, and the experimental results show its effectiveness.
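In example-based super-resolution of this kind, co-registered low/high-resolution patch pairs act as a learned filter: each input low-resolution patch is replaced by (or blended with) the high-resolution counterpart of its nearest training example. The following is a minimal sketch of that lookup idea using sum-of-squared-differences matching, not the paper's actual image-analogies implementation.

```python
def train_filter(lo_patches, hi_patches):
    """Example pairs: co-registered low-res / high-res patches of the
    same site, flattened to plain lists of pixel values."""
    return list(zip(lo_patches, hi_patches))

def enhance(patch, pairs):
    """Return the high-res half of the training pair whose low-res half
    is nearest to the input patch (sum of squared differences)."""
    def ssd(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(pairs, key=lambda p: ssd(p[0], patch))[1]
```

A real system would match overlapping patches and enforce consistency between neighbours, which is what the image analogies framework contributes.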
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784006 (2010) https://doi.org/10.1117/12.872298
The transition of rural and agricultural management from a divisional to an integrated mode has highlighted the importance of data integration and sharing. Current data are mostly collected by individual departments to satisfy their own needs, with little consideration of wider potential uses. This leads to great differences in data format, semantics, and precision even within the same area, which is a significant barrier to constructing an integrated rural spatial information system that supports integrated management and decision-making. Considering the rural cadastral management system and postal zones, this paper designs a rural address geocoding method based on rural cadastral parcels. It puts forward a geocoding standard consisting of an absolute position code, a relative position code, and an extended code; designs a rural geocoding database model together with address collection and update models; and then, based on the rural address geocoding model, proposes a data model for rural agricultural resource management. The results show that address coding based on the postal code is stable and easy to memorize, that two-dimensional coding based on direction and distance is easy to locate and memorize, and that the extended code enhances the extensibility and flexibility of the address geocoding.
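A three-part code of the kind described (absolute position from the postal code, relative position from direction and distance, plus an optional extension) might be composed as follows. The field widths and separators are hypothetical, since the paper's exact standard is not given here.

```python
def build_geocode(postal_code, direction_deg, distance_m, extension=""):
    """Compose a rural address geocode from its three parts:
    - absolute position code: the postal code (stable, memorable)
    - relative position code: bearing (0-359 deg) and distance from
      the parcel anchor, zero-padded
    - extended code: optional suffix for extensibility
    Field widths and separators are illustrative assumptions."""
    rel = f"{direction_deg:03d}{distance_m:04d}"
    code = f"{postal_code}-{rel}"
    return f"{code}-{extension}" if extension else code
```

The fixed-width relative part keeps codes sortable and easy to parse, which is one reason direction/distance coding is easy to locate from.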
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784007 (2010) https://doi.org/10.1117/12.872301
Global GIS (G2IS) is a system that supports the processing of massive data and direct global manipulation on a global grid based on a spheroid or ellipsoid surface. This paper presents a Global GIS architecture based on the global subdivision grid (GSG) that takes advantage of computer cluster theory, space-time integration technology, and virtual reality technology. The architecture is composed of five layers: a data storage layer, a data representation layer, a network and cluster layer, a data management layer, and a data application layer. Within this architecture, a four-level protocol framework and a three-layer data management pattern are designed for the organization, management, and publication of spatial information. The three core supporting technologies, computer cluster theory, space-time integration, and virtual reality, and their application patterns in Global GIS are introduced in detail. The ideas presented here point to an important development direction for GIS.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784008 (2010) https://doi.org/10.1117/12.872304
The information underlying a Digital Basin includes various kinds of water resource information, spatial information about the river basin, and the corresponding socioeconomic statistics. These data are scattered across different administrative authorities and are large in volume, heterogeneous, and multi-dimensional. China's administrative departments, however, are separate, and simply copying the original data will never lead to substantial progress toward a unified Digital Basin. This paper analyzes the sharing patterns of Digital Basin information and investigates the possibility of sharing basin information with current technology. The author then proposes a specific approach and carries out an experiment on sharing Digital Basin information with data from the Qinhuai River basin to demonstrate the correctness and effectiveness of the proposed method.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784009 (2010) https://doi.org/10.1117/12.872305
Spatial information technologies, especially Digital Earth, have been developed for about 50 years and have been applied successfully in areas such as city management, land-use monitoring, digital cities, and even global change. The field of geology and mining also needs Digital Earth technology to clarify how different strata and geological structures lie underground and where mineral deposits are located. Because of the complexity of the geological domain, there have been few reports of successful applications of Digital Earth technology in this area. In this paper, we put forward a new method that integrates all kinds of geological and mining data and uses spatial information technology to develop a 3D geological and mining data sharing platform. On this platform, users can examine the location and development of different strata, and can also search for particular features, locate them, and view them in a 3D model. The platform changes how traditional geological data are accessed and used.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400A (2010) https://doi.org/10.1117/12.871969
This paper reevaluates the role of Spatial Data Infrastructure (SDI) as an initiative that can facilitate disaster management by providing a better way of managing spatial data. SDI comprises technological and non-technological aspects of sharing and exploiting geo-information resources. The results of the research show that a prototype emergency spatial data sharing system built on an SDI framework can assist disaster managers by improving the efficiency and quality of data for decision-making and by increasing the effectiveness of data collection and sharing activities during the management of emergency spatial data.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400B (2010) https://doi.org/10.1117/12.871970
Real-time, realistic visualization of the Earth's terrain based on global discrete grids is currently a subject of considerable attention. In this paper, a multi-resolution visualization model of the global terrain based on the Degenerate Quadtree Grid (DQG) is proposed. Our approach starts from the DQG partition method, and an encoding scheme for the corresponding grids is introduced briefly. Next, a viewpoint-dependent multiple-level-of-detail model is presented, in which a terrain-simplification criterion based on the viewpoint and the roughness of each degenerate quadtree node is developed. A fast display strategy based on DQG is then discussed; it comprises two parts, frustum culling and dynamic loading of the data. Finally, experiments and analysis are carried out with the global terrain dataset GTOPO30. The results show that (1) the global DEM based on DQG is seamless, hierarchical, and regular over the whole Earth; (2) the volume of global DEM data can be reduced significantly; and (3) the visual quality of the global terrain is not lost through model simplification: the rendered results are smooth and acceptable.
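A viewpoint-and-roughness simplification criterion of the kind described can be sketched as a level selector: nodes closer to the viewpoint, or covering rougher terrain, are subdivided to finer levels. The threshold formula and constants below are illustrative assumptions, not the paper's actual criterion.

```python
import math

def select_level(distance, roughness, max_level=12, base=2.0, k=1.0):
    """Pick a grid subdivision level for one node: smaller viewpoint
    distance and higher node roughness give a finer (higher) level.
    The formula and constants are illustrative assumptions."""
    # Rough terrain effectively shrinks the distance threshold,
    # forcing earlier refinement.
    effective = max(distance / (base * (1.0 + k * roughness)), 1.0)
    level = int(max_level - math.log2(effective))
    return max(0, min(max_level, level))
```

During rendering, each node is refined until its selected level is reached, after which frustum culling and dynamic data loading decide what is actually drawn.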
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400C (2010) https://doi.org/10.1117/12.872269
Place names are signs of geographic entities, and a database of them, a digital gazetteer, is an increasingly important form of geographic information, so the construction and application of digital gazetteers are growing research areas. Significant progress has been made in their development, but some vital issues still require further work: (1) places and their related attributes inevitably change over time, yet few gazetteer services model temporal ranges; (2) current gazetteers do not normally hold historical information; (3) the relationships between place-name entries are seldom considered in most existing digital gazetteers; and (4) the geographic footprints currently used in gazetteers are usually confined to simple representations. In this paper, we propose a spatio-temporal data model for administrative-division place names, which account for a significant and large proportion of all place names. We take Xiamen City, located on the coast of Fujian Province in southeastern China, as a case with which to describe our model. The model considers spatio-temporal changes and the relationships between gazetteer entries, and the footprints used are multi-scale patches adapted to the hierarchical administrative system. Accordingly, our model can provide an important reference for digitizing gazetteers and, further, for implementing Digital Earth.
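A gazetteer entry under a model with temporal ranges, hierarchical relationships, and multi-scale footprints might look like the following sketch. The field names and the query helper are hypothetical, not the paper's schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GazetteerEntry:
    """One administrative place-name record with a validity range,
    a link to its parent division, and a multi-scale footprint.
    Field names are illustrative assumptions."""
    name: str
    valid_from: int                  # year the name/extent took effect
    valid_to: Optional[int] = None   # None = still current
    parent: Optional[str] = None     # enclosing administrative division
    footprint: List[str] = field(default_factory=list)  # patch ids

def entries_valid_in(entries, year):
    """Temporal query: entries whose validity range covers the year."""
    return [e for e in entries
            if e.valid_from <= year
            and (e.valid_to is None or year <= e.valid_to)]
```

Storing explicit validity ranges is what lets such a gazetteer answer historical queries, addressing issues (1) and (2) above.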
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400D (2010) https://doi.org/10.1117/12.872310
Soil moisture is one of the most important parameters in hydrology and meteorology, as well as in many agricultural sciences. Based on the water absorption curve, this paper proposes that the reflectance of MODIS bands 6 and 7 can be used to monitor soil moisture content, and compares the approach with the Temperature Vegetation Drought Index (TVDI). Through field investigation of a desertified area of Xinjiang, China, we confirm that there is a good negative correlation between MODIS band 7 reflectance and surface humidity. The results show that band 7 reflectance is an effective means of monitoring the soil moisture of desertified areas at a large scale.
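The reported negative correlation between band 7 reflectance and surface humidity is the kind of relationship one would quantify with a Pearson correlation coefficient over paired field and image samples; a minimal implementation for checking such a relationship:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples;
    r near -1 indicates the strong negative correlation reported
    between band 7 reflectance and surface humidity."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```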
Dirk Hoffmeister, Andreas Bolten, Constanze Curdt, Guido Waldhoff, Georg Bareth
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400E (2010) https://doi.org/10.1117/12.872315
The interdisciplinary Transregional Collaborative Research Center 32 (CRC/TR 32) works on exchange processes between soil, vegetation, and the adjacent atmospheric boundary layer (SVA). Within this research project, a terrestrial laser scanning sensor is used in a multitemporal approach to determine agricultural plant parameters. In contrast to other studies with phase-change or optical-probe sensors, time-of-flight measurements are used. On three dates in 2008, a sugar beet field (4.3 ha) in western Germany was surveyed with a terrestrial laser scanner (Riegl LMS-Z420i). The point clouds are georeferenced, trimmed, and compared with official elevation data. The estimated plant parameters are (i) surface-model comparisons between different crop surfaces, (ii) crop volumes, and (iii) soil roughness parameters for SVA modelling. The results show that the estimation of these parameters is possible and that the method should be validated and extended.
Chong Du, Jun Xu, Jing Zhang, Wangli Si, Bao Liu, Dapeng Zhang
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400F (2010) https://doi.org/10.1117/12.872318
The description of a spatial relation reflects human cognition of spatial objects. It is affected not only by topology and metric but also by geographic semantics, such as the categories of geographic entities and their contexts. Current research on the linguistic aspects of spatial relations mostly focuses on natural-language formalization, the parsing of query sentences, and natural-language query interfaces. However, geographic objects are not simply geometric points, lines, or polygons; to obtain answers to spatial-relation queries that accord with human cognition, geographic semantics must be taken into account. In this paper, the functions of natural-language spatial terms are designed based on previous work on natural-language formalization and on human-subject tests. The paper then builds a geographic knowledge base on a geographic ontology, using Protégé, to discriminate geographic semantics. Finally, using the knowledge in this knowledge base, a prototype query system is implemented on a GIS platform.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400G (2010) https://doi.org/10.1117/12.872321
The proposed climatic index, named the C value, is the coefficient of the third correlative equation, which characterizes the dryness (or wetness) of a climate; the third correlative equation deals with the heat and water balance related to evaporation. In this article, the C value, mean temperature, and summer temperature are combined to predict the distribution of vegetation zones worldwide. The overall impression from examining the resulting vegetation map is that the location and distribution of the world's vegetation zones are predicted fairly well. A comparison between the predicted vegetation map and the Holdridge life zones map, based on Kappa statistics, indicates very significant agreement for ice/polar desert and desert. Agreement is also significant for tundra, boreal forest, temperate mixed and deciduous forest, temperate steppe, subtropical mixed and deciduous forest, subtropical xerophytic woods/shrubs, tropical rain forest, tropical seasonal forest, tropical savanna, and tropical thorn woods/shrubs, even though much larger areas of tundra and tropical thorn woods/shrubs were predicted than appear on the Holdridge life zones map. The results show that the C value correlates strongly with vegetation distribution and, as a climatic index, can be used for bioclimatic mapping at the global scale.
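Kappa agreement between two categorical maps, such as the predicted vegetation map and the Holdridge life zones map, is computed from their confusion matrix; a minimal implementation of Cohen's kappa:

```python
def cohens_kappa(confusion):
    """Cohen's kappa for an n x n confusion matrix
    (rows: predicted class, columns: reference class).
    1.0 = perfect agreement, 0.0 = chance-level agreement."""
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(n)) / total
    # Chance agreement from the row and column marginals.
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(n)
    ) / total ** 2
    return (observed - expected) / (1 - expected)
```

Per-category agreement, as reported above, is obtained the same way from a 2x2 matrix of "this category vs. everything else".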
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400H (2010) https://doi.org/10.1117/12.872322
This article reviews the development of Geographic Information System (GIS) platform technology over the past decade and looks ahead to the future directions of the GIS industry. The influence of these directions on future GIS applications is also discussed, as a reference for GIS developers and end users in selecting a suitable and promising GIS platform.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400I (2010) https://doi.org/10.1117/12.873492
Recent years have witnessed the emergence of Virtual Globe technology, which increasingly exhibits powerful features and capabilities. However, the current technical architecture for geovisualization is still the traditional data-viewer mode, i.e., KML plus geobrowser. Current KML is basically an encoding format for wrapping static snapshots of information frozen at discrete time points, and a geobrowser is essentially a data renderer for geovisualization. In the real world, spatio-temporal objects and elements possess specific semantics, application logic, and operational rules, natural or social, which need to be considered and executed when the corresponding data are integrated or visualized in a visual geocontext. Yet there is currently no way to express and execute such application logic and control rules within the existing geobrowsing architecture. This paper proposes a novel architecture by introducing a new mechanism, DKML, and implementing a DKML-supporting prototype geobrowser. Programming scripts embedded within KML files can express application logic, control conditions, situation-aware analysis utilities, and special functionality, so as to achieve intelligent, controllable, and logic-conformant geovisualization and to flexibly extend and customize the DKML-supporting geobrowser. Benefiting from the mechanism developed in this research, geobrowsers can truly evolve into powerful multi-purpose GeoWeb platforms with promising potential.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400J (2010) https://doi.org/10.1117/12.872323
We present a methodology for analyzing aggregate cell-phone network activity that allows the presence of people in geographical areas to be estimated at the scale of the borough. The methodology is based on hourly counts of GSM network data handled by a major Dutch telecom operator over the metropolitan area of Amsterdam, and is structured in four steps. First, we build an allocation function that indicates how to redistribute the per-antenna statistics to each borough. Second, we analyze the trends in the temporal signatures of the boroughs and select the clusters with minimum inter-cluster variation. Then we analyze the correlation of two demographic measures against network data during the selected temporal clusters. Finally, we show that location updates are a good proxy for estimating the presence of people, and that their variations over time reflect major trends in social dynamics. We also discuss how the analysis of GSM network activity can provide unprecedented insight into how crowds move in the city, and we outline future work whose outcomes, we believe, have the potential to address real-world problems such as crowd management, support for special events, and the analysis of urban dynamics and urban policies.
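The first step, an allocation function that redistributes per-antenna statistics to boroughs, amounts to a weighted sum. The weight structure below (the fraction of each antenna's coverage falling in each borough) is an assumption about the form of such a function, not the operator's actual data.

```python
def allocate(antenna_counts, weights):
    """Redistribute per-antenna hourly counts to boroughs.
    weights[antenna][borough] is the (assumed) fraction of that
    antenna's coverage area lying inside the borough."""
    boroughs = {}
    for antenna, count in antenna_counts.items():
        for borough, w in weights[antenna].items():
            boroughs[borough] = boroughs.get(borough, 0.0) + count * w
    return boroughs
```

Applied hour by hour, this yields the per-borough temporal signatures that the later clustering and correlation steps operate on.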
Christopher D. Elvidge, Benjamin T. Tuttle, Paul C. Sutton
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400K (2010) https://doi.org/10.1117/12.872324
We have developed a web-based interface for collecting surface cover type data using gridded point counts on displays of high-spatial-resolution color satellite imagery available in Google Earth. The system is designed to let a distributed set of analysts contribute gridded point counts to a common database. Our application of the system is to develop a calibration for estimating the density of constructed surface areas worldwide at 1 km2 resolution, based on the brightness of satellite-observed lights and on population counts. The system has been used to collect a test data set and a preliminary calibration for estimating the density of constructed surfaces. We believe the web-based system could be useful for research projects and analyses that require the collection of surface cover type data from diverse locations.
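The gridded point-count protocol yields a per-cell density estimate as the fraction of sample points that the analyst labeled as constructed surface; the label vocabulary here is hypothetical.

```python
def constructed_fraction(labels):
    """Density of constructed surfaces in one grid cell, estimated as
    the fraction of gridded sample points labeled 'constructed'.
    Label names are illustrative assumptions."""
    return sum(1 for label in labels if label == "constructed") / len(labels)
```

These per-cell fractions become the dependent variable that nighttime-light brightness and population counts are calibrated against.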
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400L (2010) https://doi.org/10.1117/12.872326
New commercial high-resolution synthetic aperture radar (SAR) satellite systems offer new possibilities for radar remote
sensing. SAR images are difficult to understand and the interpretation of SAR images requires theoretical knowledge and
practical experience. Fast and interactive SAR simulators can assist the SAR image interpretation training and they are
also valuable tools for the SAR image interpretation itself. Using Graphics Processing Units (GPU), it is possible to
simulate SAR images in real time. Single-bounce reflections can be simulated using the rasterization approach. For
real-time double-bounce simulation, a GPU-based ray-tracing approach can be used. In this approach, the GPU is used
as a stream processor and the texture information is interpreted as data elements instead of pixel elements. In this way, a
real-time simulation, including single- and double-bounce reflections, can be achieved even for complex scenes.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400M (2010) https://doi.org/10.1117/12.872328
Observations and research results summarized in the IPCC Fourth Assessment Report indicate that the earth's climate
system has undergone notable change over the last 50 years, the main character of which is global warming moving well
outside the range of natural variability. Earth observation satellites can detect global changes from space platforms and
obtain the key sensitive factors that depict these changes. Taking snow cover as a case, this paper reviews the principles,
methods, techniques and instruments for monitoring snow change, and analyzes the current capabilities and limitations
of earth observation satellites. Finally, it proposes two possible solutions to the main problems of monitoring snow
cover. The first is to develop new-generation sensors with hyperspectral capability and high spatial and temporal
resolution. The second is to introduce continuous snow observations from various sources into dynamic process models
through four-dimensional data assimilation. Furthermore, data assimilation can ingest advanced remote sensing data on
snow from the new-generation sensors to be launched, creating continuous, higher-precision and space-time-consistent
snow cover products. This provides scientific and reliable evidence for strategy makers and helps us better predict,
adapt to and mitigate warming.
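The four-dimensional assimilation advocated here is elaborate, but its core update step can be illustrated with a scalar optimal-interpolation sketch. The numbers and variable names below are illustrative, not the paper's system: a model background and a satellite observation are blended, weighted by their error variances.

```python
# Minimal scalar data-assimilation update (a standard optimal-interpolation
# step, shown for illustration): the analysis blends a model background
# with an observation, weighted inversely by their error variances.

def analysis_update(background, obs, var_b, var_o):
    """Return the analysis value and its error variance."""
    gain = var_b / (var_b + var_o)            # Kalman gain
    x_a = background + gain * (obs - background)
    var_a = (1.0 - gain) * var_b
    return x_a, var_a

# Example: model predicts 40% snow cover, satellite observes 60%;
# the observation is trusted more (smaller error variance).
x_a, var_a = analysis_update(40.0, 60.0, var_b=4.0, var_o=1.0)
# gain = 0.8, so the analysis is 56.0 with reduced variance 0.8
```

A full 4-D system applies this idea over space and time with a dynamic model carrying information between observation times; the weighting logic is the same.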
Web-based Services, System Design, and Algorithms for Digital Earth
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400N (2010) https://doi.org/10.1117/12.872669
Honghe National Natural Reserve (HNNR) in the Sanjiang Plain, Northeast China, is selected for this study due to its
status as an important international wetland and the threat it faces from serious ecological degradation. Various
multispectral remote sensing images, such as TM, SPOT and QuickBird, are used for both landscape classification
mapping and wetland biomass mapping. Wetland plants within the HNNR are classified into 6 ecosystem types. Digital
techniques of hydrological analysis and GIS spatial analysis based on a fine DEM (1:10,000), integrated with remote
sensing image interpretation, are applied to generate biophysical information such as the spatial pattern of surface water
within the study area. Field work and historical statistical data are used for digital model validation and calibration.
Ecological variables from temporal images, coupled with the validated biophysical information on wetland habitat, are
used to comprehensively assess marsh wetland degradation, and a geo-information technical framework is integrated
for assessing the biophysical information, showing the ecological degradation of wetlands and analyzing its driving
forces. Our study indicates that 54 percent of the marsh wetland within HNNR has been lost through degradation into
meadow wetland over the past 30 years. The loss of suitable wetness in the marsh wetland habitat causes this
degradation. By applying Digital Earth theories and techniques, this research provides a useful case study for efficient
protection of wetland resources and scientific ecological monitoring.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400O (2010) https://doi.org/10.1117/12.872670
With the fast development of urban informatization, digital cities and digital traffic, cross-sectoral and
inter-departmental cooperation is increasingly needed. It is difficult for GIS system construction to break through
information fragmentation and realize the integration, sharing and interoperability of heterogeneous, multi-source
geospatial information. Service Oriented Architecture (SOA) is an effective solution that provides data access and
application services through Web Services. The combination of GIS and SOA can improve GIS business agility and
change traditional GIS design, development and application patterns, promoting the conversion of visible GISystems
into invisible GIServices. Based on the construction of a dynamic transportation data exchange GIService and a
dynamic path planning GIService, and their integrated application in a WebGIS transportation information service
system, this paper presents the overall structure of a transportation information resources integration platform, which
realizes transportation information resources integration, traffic block information submission, information exchange,
traffic information query, dynamic traffic information publishing, dynamic best path query, etc. The application results
prove that it is a good solution for sharing transportation information resources, and that it changes the traditional GIS
design, development and application patterns.
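A GIService of this kind is typically consumed through OGC-standard HTTP requests. As a hedged sketch (the endpoint URL and layer name below are hypothetical; the query parameters follow the standard WMS 1.1.1 GetMap request), a thin client can assemble a map request like this:

```python
# Build a standard OGC WMS 1.1.1 GetMap request URL; the endpoint and
# layer name are hypothetical placeholders, not from the paper.
from urllib.parse import urlencode

def getmap_url(endpoint, layer, bbox, width=512, height=512):
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),   # minx,miny,maxx,maxy
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return endpoint + "?" + urlencode(params)

url = getmap_url("http://example.org/wms", "traffic:roads",
                 (116.0, 39.5, 116.8, 40.2))
```

Because every parameter is standardized, any SOA client (a portal, a desktop GIS, another service) can consume the same GIService without bespoke integration code, which is the interoperability argument the abstract makes.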
YanMing Gao, HaiHong Wang, Bo Wang, Lifen Yang, Zhen Shi
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400P (2010) https://doi.org/10.1117/12.872672
A novel submarine pipeline routing (SPR) planning method that finds an optimal pipeline route automatically is
proposed, based on spatial analysis in a geographical information system (GIS) and dynamic programming theory. By
analyzing the effects of engineering geology conditions, ocean environmental conditions and ocean exploitation
conditions around the submarine pipeline system, an analytical model is provided to plan pipelines that improve
pipeline stability and reduce costs. Dynamic programming is used to find the optimal route under a performance index.
The spatial database is designed and the GIS-aided submarine pipeline system is developed. A worked example verifies
the proposed method.
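The dynamic-programming step can be illustrated on a rasterized cost surface. This is a simplified stand-in for the paper's model, with made-up costs: each cell holds a combined hazard/cost score derived from the geological and environmental conditions, the pipeline advances one row at a time, and the recurrence keeps the cheapest accumulated cost per column.

```python
# Illustrative dynamic-programming routing on a cost grid (simplified
# stand-in for the paper's analytical model): find the minimum total
# cost of a route from the top row to the bottom row, moving to the
# same column or an adjacent column at each step.

def min_cost_route(cost):
    rows, cols = len(cost), len(cost[0])
    best = list(cost[0])                       # accumulated cost per column
    for r in range(1, rows):
        new = []
        for c in range(cols):
            lo, hi = max(0, c - 1), min(cols, c + 2)
            new.append(cost[r][c] + min(best[lo:hi]))
        best = new
    return min(best)

grid = [
    [1, 9, 9],
    [9, 1, 9],
    [9, 9, 1],
]
# the diagonal route 1 -> 1 -> 1 costs 3
```

Backtracking the chosen column at each row (omitted for brevity) recovers the route itself rather than just its cost.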
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400Q (2010) https://doi.org/10.1117/12.872675
In recent years national parks worldwide have introduced online virtual tourism, through which potential visitors can
search for tourist information. Most virtual tourism websites are a simulation of an existing location, usually composed
of panoramic images, a sequence of hyperlinked still or video images, and/or virtual models of the actual location. As
opposed to actual tourism, a virtual tour is typically accessed on a personal computer or an interactive kiosk. Using
modern Digital Earth techniques such as high resolution satellite images, precise GPS coordinates and powerful 3D
WebGIS, however, it is possible to create more realistic scenic models that present natural terrain and man-made
constructions in greater detail. This article explains how to create an online scientific reality tourist guide for the
Jinguashi Gold Ecological Park at Jinguashi in northern Taiwan, China. This project uses high-resolution Formosat 2
satellite images and digital aerial images in conjunction with DTM to create a highly realistic simulation of terrain, with
3DMAX used to add man-made constructions and vegetation. Using this 3D Geodatabase model in conjunction with
INET 3D WebGIS software, we have found that the Digital Earth concept can greatly improve and expand the
presentation of traditional online virtual tours.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400R (2010) https://doi.org/10.1117/12.872676
Web services technology is emerging as a new approach for providing access to heterogeneous computation resources
and integration of distributed scientific applications. The Open GIS Consortium (OGC) has established a series of
standards guiding the creation of Web services that provide or operate on geographic information, but the standards do
not show how to integrate these services. A portal, by contrast, is a Web-based application framework that provides
single, integrated access to applications and can integrate distributed information and applications. In this paper, we
test building a Web-services-oriented portal to carry out integration and interoperability of geographic information. A
framework has been designed based on an open source portal development tool named GridSphere, and some portlets
invoking geographic Web services have been created and deployed. The results of the experiment indicate that the
solution is feasible and that its operation is simple and convenient.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400S (2010) https://doi.org/10.1117/12.872679
The paper discusses the position of GIS in geography as a school subject, especially at German schools. It points out
that students need only simple GIS functions in order to explore digital atlases or web-based data viewers.
Furthermore, it is widely accepted that learning achievements improve if students work in a self-directed, exploratory
way with information they have produced themselves. These two arguments led to the development of the
WebMapping tool "kartografix_school". It allows users to generate maps with new, individually defined content on the
internet. For that purpose the tool contains generalized outlines of all countries of the world as well as of the German
states. As these boundaries are given, users can assign new attribute data to the geo-objects, and these data are
transferred to a graphic presentation. Users can define the classification and the colour of each class, and can change
and update all information (data as well as the number of classes, class definitions and colours) at any time. Moreover,
"kartografix_school" offers the possibility to produce maps composed of two layers. All data are stored on a server
located at the University of Osnabrück. "kartografix_school" is integrated in an e-Learning environment.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400T (2010) https://doi.org/10.1117/12.872682
The proposed system integrates GPS / pseudolite / IMU and a thermal camera in order to autonomously process
imagery by identifying, extracting and tracking forest fires or hot spots. The airborne detection platform, the
graph-based algorithms and the signal processing framework are analyzed in detail; in particular, the rules of the
decision function are expressed in terms of fuzzy logic, an appropriate method for expressing imprecise knowledge.
The membership functions and the weights of the rules are fixed through a supervised learning process. The perception
system in this paper is based on a network of sensorial stations and central stations. The sensorial stations collect data,
including infrared and visual images and meteorological information; the central stations exchange data to perform
distributed analysis. Experimental results show that the detection system's working procedure is sound and that it can
accurately output detection alarms and compute infrared oscillations.
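A fuzzy decision rule of the kind described can be sketched as follows. The membership shapes, thresholds and weights here are purely illustrative, not the paper's supervised-learned values: triangular memberships turn crisp measurements into degrees of truth, and a weighted combination yields an alarm score.

```python
# Hedged sketch of a fuzzy hot-spot decision rule (illustrative shapes
# and weights, not the learned values from the paper).

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def alarm_score(ir_intensity, temp_rise, w_ir=0.6, w_temp=0.4):
    mu_ir = tri(ir_intensity, 50, 200, 255)    # "high IR intensity"
    mu_temp = tri(temp_rise, 5, 30, 60)        # "rapid temperature rise"
    return w_ir * mu_ir + w_temp * mu_temp     # weighted rule aggregation

score = alarm_score(200, 30)   # both memberships are at their peak
```

In a trained system, supervised learning would adjust the breakpoints of `tri` and the weights so that the score separates true fires from warm clutter.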
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400U (2010) https://doi.org/10.1117/12.872683
In traditional electronic government GIS (E-gov GIS), spatial data evaluation, examination and approval are handled
by individuals, and the results are shared among collaborators in asynchronous mode. In order to improve the
collaborative ability of E-gov GIS, a message-based synchronized cooperative GIS system (MSCGIS) is proposed in
this paper. MSCGIS abstracts collaborators' GIS operations and encapsulates them into GIS command messages,
which are then passed to and executed by the related collaborators. Based on messaging, MSCGIS can realize
synchronized group cooperation in GIS. Some key issues are investigated in detail, such as the design scheme of
MSCGIS, the XML-based encoding specification of the GIS command message, and the interface and collaborative
process of the prototype system. In short, the construction idea of MSCGIS is to share GIS functions by passing
collaborators' operations, rather than sharing spatial data among collaborators as in traditional modes.
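Encoding an operation as an XML command message might look like the following sketch. The element and attribute names are hypothetical; the paper defines its own encoding specification:

```python
# Illustrative round trip of a GIS command message (hypothetical element
# names): a collaborator's operation is serialized to XML, passed to
# other clients, and decoded for replay there.
import xml.etree.ElementTree as ET

def encode_command(sender, operation, **params):
    msg = ET.Element("GISCommand", sender=sender, operation=operation)
    for name, value in params.items():
        ET.SubElement(msg, "param", name=name).text = str(value)
    return ET.tostring(msg, encoding="unicode")

def decode_command(xml_text):
    msg = ET.fromstring(xml_text)
    params = {p.get("name"): p.text for p in msg.findall("param")}
    return msg.get("sender"), msg.get("operation"), params

wire = encode_command("userA", "zoom", scale="1:10000")
sender, op, params = decode_command(wire)
```

Because only the small command travels over the wire, every client re-executes the operation locally against its own copy of the data, which is the system's alternative to shipping spatial data itself.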
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400V (2010) https://doi.org/10.1117/12.872687
Emergencies are incidents that threaten public safety, health and welfare. Many disastrous emergency events that
happened in recent years have drawn great attention to more effective Emergency Response Systems (ERS). ERS need to
integrate various kinds of information to support quick emergency response. Digital Earth can solve data interoperation
and information integration problems in emergency response. This paper aims to establish the system architecture for
quick emergency response based on relevant principles and technologies in the domain of Digital Earth. First, this paper
analyzes the system requirements of ERS in terms of information integration, fast data access, timeliness and information
updating, etc. Second, this paper explores the useful principles and technologies in Digital Earth and discusses how to
incorporate them into the architecture of ERS. More attention is paid to Open Geospatial Consortium's Sensor Web
Enablement (SWE) information standards. Furthermore, Service Oriented Architecture (SOA) and Location-Based
Services (LBS) are also reviewed and the "From Sensor to User" application pattern in emergency response is put
forward. Finally, a system architecture based on Digital Earth is proposed for ERS.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400W (2010) https://doi.org/10.1117/12.872688
As satellite remote sensing information has gradually become an important data source and plays an increasingly
important role across many domains, this paper presents an in-depth study of global satellite remote sensing image
data scheduling and publishing mechanisms. It also designs and implements a grid-based management and publication
engine for satellite image data. To serve simultaneous requests from clients with different access interfaces, the
web-based, efficient, general-purpose publication system designed here is presented at the end.
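Grid-based management of global imagery usually rests on a tile-addressing scheme. The following is the common slippy-map tiling used by many web mapping engines, shown as a general illustration rather than the specific scheme of this paper: a longitude/latitude pair maps to the (x, y) tile covering it at zoom level z, so any requested region resolves to a small set of grid cells.

```python
# Standard Web Mercator slippy-map tile indexing (a common scheme,
# illustrated here; not necessarily the paper's exact grid).
import math

def lonlat_to_tile(lon, lat, zoom):
    n = 2 ** zoom                              # tiles per axis at this zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_r = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_r)) / math.pi) / 2.0 * n)
    return x, y

# Example: which zoom-10 tile covers Beijing (116.4 E, 39.9 N)?
tile = lonlat_to_tile(116.4, 39.9, 10)
```

With such an index, the publication engine can route each client request straight to the stored tiles it needs instead of scanning whole scenes.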
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400X (2010) https://doi.org/10.1117/12.872689
An intelligent Synthetic Aperture Radar simulation system can be used to optimize the design of SAR system parameters
and to select the optimum SAR data acquisition mode. Previous research mainly focused on simulating the geometric
characteristics of SAR images, lacking radiometric consideration in flat areas because of the complexity of the
problem. The popular geometric model of the Range-Doppler equations cannot be applied to a SAR sensor before
launch, as it relies on many parameters contained in the original SAR data. In this paper we develop a new simulation
system based on a simplified geometric model and statistical radar scattering models for different thematic contents. It
can generate simulated SAR image products at different bands, polarizations, incidence angles and resolutions,
according to the user's needs. As an experiment, a simulation example of ENVISAT ASAR is compared with the real data collected, to
demonstrate the utility and correctness of the system.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400Y (2010) https://doi.org/10.1117/12.872691
Product Archive System (PAS), as a background system, is the core part of the Product Archive and Distribution
System (PADS) which is the center for data management of the Ground Application System of HY-1B satellite hosted
by the National Satellite Ocean Application Service of China. PAS integrates a series of up-to-date methods and
technologies, such as a suitable data transmittal mode, flexible configuration files and log information, in order to give
the system several desirable characteristics, such as ease of maintenance, stability and minimal complexity. This paper
describes seven major components of the PAS (Network Communicator module, File Collector module, File Copy
module, Task Collector module, Metadata Extractor module, Product data Archive module, Metadata catalogue import
module) and some of the unique features of the system, as well as the technical problems encountered and resolved.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78400Z (2010) https://doi.org/10.1117/12.872693
Disaster prevention and mitigation have received more and more attention from the Chinese government alongside
national economic development in recent years. Traditional disaster management exhibits problems such as chaotic
data management, a low level of informatization and poor data sharing. To improve the information capability of
disaster management, the Meteorological Disaster Management and Assessment System (MDMAS) was developed
and is introduced in this paper. MDMAS uses a three-tier C/S architecture comprising an application layer, a data
layer and a service layer. Current functions of MDMAS include typhoon and rainstorm assessment, disaster data query
and statistics, and automatic cartography for disaster management. The typhoon and rainstorm assessment models can
be used in both pre-disaster and post-disaster assessment. Automatic cartography is implemented with ArcGIS
Geoprocessing and ModelBuilder. In practice, MDMAS has been used to provide warning information, disaster
assessments and service products. MDMAS is an efficient tool for meteorological disaster management and
assessment, and it can provide decision support for disaster prevention and mitigation.
Claudia Spinetti, Laura Colini, M. Fabrizia Buongiorno, Fawzi Doumaz, Valerio Lombardo, Massimo Musacchio, M. Ilaria Pannaccione Apa
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784010 (2010) https://doi.org/10.1117/12.872699
The availability of EO satellites in recent decades has offered the possibility to integrate ground surveillance with
satellite-derived information, increasing knowledge of the territorial situation and phenomena characteristics during
natural disasters. All member states of the European Union are affected by at least one major natural hazard, such as
floods, fires, windstorms, earthquakes, volcanoes, landslides and rapid vertical ground displacements, and also by risks
related to man-made activities such as chemical and nuclear accidents. These risks can be mitigated through better
prevention and preparedness, within a multi-risk joint effort of all actors in risk management and the integration of
societal needs.
In this framework, the EC FP6 Preview-Eurorisk project aims at developing new geo-information services for
atmospheric, geophysical and man-made risk management at the European level; the End-Users of these services are
the Civil Defence Agencies of the partner countries.
In the Geophysical Cluster dedicated to earthquake and volcano risks, a prototype system to support end-users (i.e.
national Civil Protections) has been developed. The service separates the natural phenomena into 3 main phases: early
warning, crisis management and post-crisis.
The service prototype provides easy and rapid access to asset mapping, monitoring, forecasting and awareness of risk,
as well as damage assessment at European, regional and local levels, according to the operative necessities of the
End-Users.
The system product chain consists of: inquiring the satellite data archives; extracting the information/parameters from
Earth Observation data using already developed scientific modules; and producing numerical values or geo-coded
thematic maps as products archived in the database system.
The End-User interface consists of a Web-GIS system where products, in vector or raster format, are visualized and
distributed according to the specific emergency phase.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784011 (2010) https://doi.org/10.1117/12.872700
Currently, the conflict between vehicles and roads is becoming increasingly serious; how to apply advanced
technology to obtain traffic information quickly and accurately has become key to upgrading the level of
transportation management and services. Obtaining dynamic traffic information rapidly from low-altitude aircraft is an
important expansion of conventional technology: it is low-cost and suitable for collecting a wide range of traffic
information. This paper uses a low-altitude airship as the platform; several sensors (such as GPS, a CCD camera, a
video encoder and COFDM wireless transmission equipment) are integrated into the aircraft to compose a
low-altitude remote sensing platform that obtains high-definition traffic video data. For this video, the paper proposes
a vehicle detection method that works against a complex and varying background. The method is capable of
accurately detecting moving and static vehicles on the road in real time without any supplementary information.
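The basic motion cue behind such detectors can be shown with frame differencing. This is a simplified sketch only; the paper's method additionally handles static vehicles and changing backgrounds, which plain differencing cannot:

```python
# Simplified frame-differencing motion cue (illustrative only): pixels
# whose brightness changes between consecutive frames beyond a threshold
# are flagged as possible moving vehicles.

def moving_mask(prev_frame, frame, threshold=25):
    """Frames are 2-D lists of grey values; returns a 0/1 motion mask."""
    return [
        [1 if abs(a - b) > threshold else 0 for a, b in zip(row_p, row_c)]
        for row_p, row_c in zip(prev_frame, frame)
    ]

prev = [[10, 10, 10],
        [10, 10, 10]]
curr = [[10, 200, 10],
        [10, 200, 10]]   # a bright vehicle enters the middle column
mask = moving_mask(prev, curr)
# only the middle column is flagged
```

A practical system would follow this with morphological cleanup and blob grouping, and would maintain a background model so that stopped vehicles remain detectable.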
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784012 (2010) https://doi.org/10.1117/12.872701
Terrain structure lines are lines that indicate significant topographic features of the terrain. They are widely used in the
fields of surveying and mapping, GIS, topographic representation and engineering design. Digitized contour data
contains the information of these structure lines implicitly. In this paper, the authors investigate the problem of
extracting terrain structure lines and discuss the existing extraction methods. After analyzing the existing methods
theoretically, they conclude that it is very important to make full use of the data's information when extracting terrain
structure lines, and put forward a new, concise and practical algorithm for automatically extracting terrain lines from
digital terrain data. The new algorithm extracts terrain feature points by dividing digital contour lines into sections,
identifies and classifies the character points, and finally extracts ridge and valley lines. The algorithm combines the
geometric and physical characteristics of ridge and valley lines. Experimental results show that the extracted ridges
and valleys conform to the terrain, proving that the new algorithm is effective and reliable for extracting terrain
structure lines.
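One common ingredient of contour-based feature extraction, the detection of sharp bends where ridge or valley lines cross a contour, can be sketched via turning angles. This is an illustrative fragment, not the authors' exact algorithm:

```python
# Illustrative feature-point cue on a digitized contour line: compute the
# turning angle at each interior vertex; sharp turns are candidate terrain
# feature points where ridge or valley lines may cross the contour.
import math

def turning_angles(points):
    """points: list of (x, y); returns the turn angle in degrees per interior vertex."""
    angles = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        h1 = math.atan2(y1 - y0, x1 - x0)      # heading into the vertex
        h2 = math.atan2(y2 - y1, x2 - x1)      # heading out of the vertex
        d = math.degrees(h2 - h1)
        angles.append(abs((d + 180) % 360 - 180))   # wrap to [0, 180]
    return angles

contour = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]  # a right-angle bend
sharp = [a > 45 for a in turning_angles(contour)]
```

Chaining the flagged points across successive contour lines, from one elevation to the next, is what then assembles the ridge and valley lines.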
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784013 (2010) https://doi.org/10.1117/12.872702
In recent years, the Rational Function Model (RFM) has become more and more popular because of its generality,
high accuracy and simplicity, and it has gradually replaced the traditional rigorous physical models. The classical form
of the Rational Function Model is expressed as ratios of polynomials whose terms' maximum power is limited to 3.
Although the maximum power of each term is limited, a third-order polynomial in the model has 20 terms, so the total
number of terms in one RFM can be up to 80, which entails heavy computation. Meanwhile, 39 control points are
needed to solve the RFM, and it is difficult to collect so many control points manually. This paper therefore aims to
simplify the RFM. The paper is divided into three parts. First, the development of the RFM is introduced and its
advantages and disadvantages are discussed. Second, the classical method for solving the RFM, the iterative least
squares solution, is described in detail, and three algorithms for overcoming the RFM's ill-posedness are introduced.
Finally, experimental results demonstrate that it is feasible to simplify the RFM.
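The term counts quoted above can be made concrete. In the standard RPC formulation (variable names below are generic), each polynomial collects all monomials P^i L^j H^k with i + j + k ≤ 3, which gives exactly 20 terms, and each image coordinate is a ratio of two such polynomials, so a full model with four polynomials carries up to 80 coefficients:

```python
# Enumerate the 20 monomials of one third-order RFM polynomial and
# evaluate a polynomial as a coefficient-weighted sum of them (generic
# RPC-style formulation for illustration).
from itertools import product

def cubic_terms(P, L, H):
    """Return the 20 monomials P^i L^j H^k with i + j + k <= 3."""
    return [P**i * L**j * H**k
            for i, j, k in product(range(4), repeat=3) if i + j + k <= 3]

def poly(coeffs, P, L, H):
    return sum(c * t for c, t in zip(coeffs, cubic_terms(P, L, H)))

n_terms = len(cubic_terms(0.1, 0.2, 0.3))   # 20 terms per polynomial
# one image coordinate = poly(num_coeffs, ...) / poly(den_coeffs, ...)
```

Fixing the constant term of each denominator to 1 leaves 78 unknowns, and since each control point contributes two equations (row and column), 39 control points suffice, which matches the figure in the abstract.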
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784014 (2010) https://doi.org/10.1117/12.872703
A multi-source database was established to advance the informatization of the geological disposal of high-level radioactive waste; the integration of multi-dimensional, multi-source information and its application depend on computer software and hardware. Based on an analysis of the data resources in the Beishan area, Gansu Province, and combined with GIS technologies and methods, this paper discusses technical approaches for managing, fully sharing, and rapidly retrieving the information resources of this area using the open-source GDAL library and a quadtree algorithm, covering in particular the characteristics of the existing data resources, the theory of the spatial data retrieval algorithm, and the design and implementation of the programs.
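A quadtree retrieves points inside a rectangular window without scanning the whole dataset. A minimal, self-contained sketch of the idea (node capacity and the point set are illustrative; the paper's GDAL-based implementation is not reproduced here):

```python
# Minimal point-quadtree sketch for fast rectangular retrieval; points
# are (x, y) tuples, and the capacity of 4 is an arbitrary toy value.
class QuadTree:
    def __init__(self, x0, y0, x1, y1, capacity=4):
        self.bounds = (x0, y0, x1, y1)
        self.capacity = capacity
        self.points = []
        self.children = None

    def _contains(self, p):
        x0, y0, x1, y1 = self.bounds
        return x0 <= p[0] < x1 and y0 <= p[1] < y1

    def insert(self, p):
        if not self._contains(p):
            return False
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append(p)
                return True
            self._split()
        return any(c.insert(p) for c in self.children)

    def _split(self):
        x0, y0, x1, y1 = self.bounds
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [QuadTree(x0, y0, mx, my), QuadTree(mx, y0, x1, my),
                         QuadTree(x0, my, mx, y1), QuadTree(mx, my, x1, y1)]
        for q in self.points:          # push existing points down one level
            any(c.insert(q) for c in self.children)
        self.points = []

    def query(self, qx0, qy0, qx1, qy1):
        x0, y0, x1, y1 = self.bounds
        if qx1 <= x0 or qx0 >= x1 or qy1 <= y0 or qy0 >= y1:
            return []                  # window misses this node entirely
        hits = [p for p in self.points
                if qx0 <= p[0] < qx1 and qy0 <= p[1] < qy1]
        if self.children:
            for c in self.children:
                hits.extend(c.query(qx0, qy0, qx1, qy1))
        return hits

tree = QuadTree(0, 0, 100, 100)
for p in [(10, 10), (20, 80), (55, 55), (90, 5), (60, 60)]:
    tree.insert(p)
found = tree.query(50, 50, 70, 70)
```

Only the subtrees whose bounds intersect the query window are visited, which is the source of the speed-up over a linear scan.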
Hongsheng Li, Yingjie Wang, Qingsheng Guo, Jiafu Han
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784015 (2010) https://doi.org/10.1117/12.872704
Digital earth is a virtual representation of our planet and a data integration platform which aims at harnessing multisource,
multi-resolution, multi-format spatial data. This paper introduces a research framework integrating progressive cartographic generalization and transmission of vector data. Progressive cartographic generalization provides multi-resolution data from coarse to fine, as maps at key scales plus the increments between them, which is not available in the traditional generalization framework. Based on the progressive simplification algorithm, the building polygons are
triangulated into meshes and encoded according to the simplification sequence of two basic operations, edge collapse
and vertex split. The map data at key scales and encoded increments between them are stored in a multi-resolution file.
As the client submits requests to the server, the coarsest map is transmitted first and then the increments. After data
decoding and mesh refinement the building polygons with more details will be visualized. Progressive generalization and
transmission of building polygons is demonstrated in the paper.
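The coarse-map-plus-increments idea can be illustrated with a toy vertex-removal scheme; this is a deliberate simplification of the triangulated edge-collapse/vertex-split encoding described above (the vertex chosen for removal is arbitrary here, not error-driven):

```python
# Sketch of coarse-first transmission: simplify an outline while logging
# reversible "vertex split" increments, then refine by replaying them.
def simplify(polygon, keep):
    """Drop vertices down to `keep`, logging (index, vertex) increments."""
    poly = list(polygon)
    increments = []
    while len(poly) > keep:
        # a real generalization algorithm would pick by geometric error;
        # here we just take the middle vertex for illustration
        i = len(poly) // 2
        increments.append((i, poly.pop(i)))
    return poly, list(reversed(increments))   # replay in reverse order

def refine(coarse, increments, upto=None):
    poly = list(coarse)
    for i, v in increments[:upto]:
        poly.insert(i, v)   # vertex split: re-insert the stored vertex
    return poly

outline = [(0, 0), (4, 0), (4, 1), (5, 1), (5, 3), (0, 3)]
coarse, incs = simplify(outline, keep=4)       # transmitted first
restored = refine(coarse, incs)                # after all increments arrive
```

Applying only a prefix of the increments yields an intermediate level of detail, which is exactly what a client renders while the rest is still in transit.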
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784016 (2010) https://doi.org/10.1117/12.872838
Current routing systems in GIS software mostly provide routes that allow the users to navigate between source and
destination points in 2 dimensions. This paper describes the development of a web-based 3D routing system for a
university campus using Open Source Software (OSS) and Open Specifications (OS). The system uses the advantages of
interoperability and allows the integration and extension of different system components. A data model is described, and the process of creating it and migrating the data stored in DXF architectural drawings to it is explained. The paper also discusses the architecture and interaction of the different prototype components, such as the 3D viewer, database, and programming languages. Furthermore, it describes customized tools that were developed to provide users with a simple interface for interacting with the system through a standard Internet browser.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784017 (2010) https://doi.org/10.1117/12.872839
The intertidal zone is one of the most dynamic areas on Earth, and conducting topographic surveys there is very difficult; it is particularly hard to obtain elevation information when the tidal flat is a mud flat. In this article, we selected the Jiuduansha (Jiuduan Shoal) tidal flat as a test area to demonstrate tidal-flat elevation estimation based on a fractional Brownian motion model. Jiuduansha is a large shoal with an extensive mud flat located in the Changjiang River Estuary. Two Landsat TM images and two CBERS CCD images acquired in different seasons of 2008 were processed, and waterlines were extracted from the images of different dates according to the tidal conditions. Considering that the water surface is actually curved, we dissected the waterlines into waterside points and assigned an elevation value to every point by interpolating the tidal level data recorded at nearby tidal observation stations at approximately the times when the satellite images were acquired. With the elevation data of the waterside points and digital sea chart depth data, a digital elevation model (DEM) was constructed using the fractional Brownian motion (fBm) model with a midpoint displacement algorithm implemented in the Matlab toolbox. Finally, the model was validated quantitatively against 17 ground survey positions on the tidal flat measured in November 2008. The simulation results show a good visual effect and high precision; the root mean square error relative to the ground survey is 0.155 m.
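The midpoint displacement recursion behind the fBm surface can be sketched in one dimension (the 2-D diamond-square variant used for a DEM follows the same pattern; the roughness parameter and scale below are illustrative, not the paper's values):

```python
import random

# One-dimensional midpoint-displacement sketch of the fBm idea: each level
# inserts midpoints perturbed by Gaussian noise whose scale shrinks by
# 2^(-H) per level (H = 0.5 is an illustrative roughness choice).
def midpoint_displacement(levels, h=0.5, scale=1.0, seed=42):
    rng = random.Random(seed)
    profile = [0.0, 0.0]                 # elevations at the two endpoints
    for _ in range(levels):
        nxt = []
        for a, b in zip(profile, profile[1:]):
            mid = (a + b) / 2 + rng.gauss(0.0, scale)
            nxt.extend([a, mid])
        nxt.append(profile[-1])
        profile = nxt
        scale *= 2 ** (-h)               # reduce displacement per level
    return profile

dem_profile = midpoint_displacement(levels=5)   # 2**5 + 1 = 33 samples
```

In the 2-D case the same recursion runs over a grid, with the known waterside-point elevations acting as fixed anchors instead of the two endpoints here.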
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784018 (2010) https://doi.org/10.1117/12.872840
For large, complex, multi-dimensional marine field data, the general representation and analysis methods based on 2D GIS cannot meet the requirements of ocean research. For multi-dimensional marine sampling data of sea water properties (ARGO sampling data, ship-measured data, etc.), three-dimensional interpolation and visual analysis methods can be used to reveal the distribution of sea water properties (such as temperature and salinity), and this is an effective way to detect regional anomalies in marine phenomena. To handle three-dimensional sampling data of sea water properties, this paper developed a three-dimensional volume visualization and analysis GIS component with OpenGL and C++. Three visualization analysis methods are designed and presented: three-dimensional volume visualization, three-dimensional slicing and cutting analysis, and three-dimensional contour surface analysis. An example test is conducted, and the results show that the proposed methods can be used effectively for ocean data analysis.
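The slicing analysis reduces to extracting lower-dimensional cross-sections from a gridded scalar field; a minimal sketch on a synthetic temperature-like grid (the component itself is OpenGL/C++, so this shows only the data-side idea):

```python
# Extract a constant-depth slice and a vertical section from a gridded
# field stored as field[z][y][x]; the synthetic values are illustrative.
NZ, NY, NX = 4, 5, 6
field = [[[z * 100 + y * 10 + x for x in range(NX)]
          for y in range(NY)] for z in range(NZ)]

def horizontal_slice(field, z):
    """Constant-depth slice (a map view at one level)."""
    return field[z]

def vertical_section(field, x):
    """Vertical cut along a fixed x column (depth vs. y)."""
    return [[layer[y][x] for y in range(len(field[0]))] for layer in field]

slice_z1 = horizontal_slice(field, 1)
section_x2 = vertical_section(field, 2)
```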
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784019 (2010) https://doi.org/10.1117/12.872841
In 2005, Ningbo Design Research Institute of Mapping & Surveying started the development of concepts and an
implementation of Virtual Reality Ningbo System (VRNS). VRNS is being developed under the digital city
technological framework and well supported by computing advances, space technologies, and commercial innovations. It
has become the best solution for integrating, managing, presenting, and distributing complex city information. VRNS is
not only a 3D-GIS launch project but also a technological innovation. The traditional domain of surveying and mapping has changed greatly in Ningbo, and geo-information systems are developing towards more realistic, three-dimensional, Service-Oriented-Architecture-based systems.
The VRNS uses technology such as 3D modeling, user interface design, view scene modeling, real-time rendering and
interactive roaming under a virtual environment. Two applications of VRNS already being used are for city planning and
high-rise buildings' security management. The final purpose is to develop VRNS into a powerful public information platform on which heterogeneous city information resources can be shared.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401A (2010) https://doi.org/10.1117/12.872843
Nowadays, the development of Digital Earth has had a tremendous impact on all aspects of social life, and a series of influential projects around Digital Earth has been carried out by people from different backgrounds. Meanwhile, the underlying technology has been maturing steadily with the development of Google Earth. However, because of limited network speed, users still have many difficulties constructing and operating 3D scenes on the Web. This paper aims to provide a more convenient method for client users to interact with 3D models, and thus to promote the popularization of Digital Earth technology and apply it in many aspects of social life. In this research, Ajax was utilized as a newly emerged network technology, since it provides a good solution for developing Web-based virtual reality applications. This paper analyzes the principles and key technologies of Ajax and explores, through a case study, how Ajax can be used to construct interactive virtual reality scenes, in order to increase the access speed of virtual scenes on Web pages and to improve the authenticity, interactivity, and extensibility of virtual scenes. The whole process of virtual model construction is introduced and an efficient way to achieve interactive scenes is proposed. The results show that combining the SAI (Scene Access Interface) method with Ajax technology can effectively save network bandwidth and enhance the user experience, laying a foundation for the development of Web-based virtual reality technology and the popularization of Digital Earth technology.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401B (2010) https://doi.org/10.1117/12.872844
The rapid advances in information technologies, e.g., network, graphics processing, and virtual world, have provided
challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic
environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments that support geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographic visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, and explores the integration of realistic terrain and other geographic objects and phenomena of the natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation for constructing a mirror world or a sandbox model of the Earth's landscape and geographic environment. The capabilities of interaction and collaboration on
geographic information are discussed as well. Further virtual geographic applications can be developed based on the
foundation work of realistic terrain visualization in virtual environments.
Jian Wang, Fengxiang Jin, Xiangwei Zhao, Yunling Li
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401C (2010) https://doi.org/10.1117/12.872845
A 3D landscape system is an urgent demand of the Digital City. Because of dense urban construction and complex topography, capturing 3D spatial information is very difficult, and traditional manual modeling methods cannot complete large-scale 3D modeling. In this paper, 3D laser scanning is introduced to capture urban spatial information rapidly. Data processing methods of range-based filtering and 2D-Delaunay-based filtering are presented, and the technology and methods of modeling on the point cloud are discussed. Finally, as an example, the 3D landscape models of a campus are introduced in detail.
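The range filter mentioned above can be sketched as a simple distance gate on the raw scan (the thresholds and scanner position are illustrative; the 2D-Delaunay-based filtering would then run on the surviving points):

```python
from math import sqrt

# Keep only scan points whose distance from the scanner lies in a
# plausible band, discarding near-range noise and far-range outliers.
def range_filter(points, scanner, r_min, r_max):
    sx, sy, sz = scanner
    kept = []
    for x, y, z in points:
        r = sqrt((x - sx) ** 2 + (y - sy) ** 2 + (z - sz) ** 2)
        if r_min <= r <= r_max:
            kept.append((x, y, z))
    return kept

cloud = [(0.1, 0.0, 0.0),      # too close: internal reflection noise
         (5.0, 0.0, 1.0),      # plausible building point
         (200.0, 50.0, 3.0)]   # beyond the scanner's reliable range
clean = range_filter(cloud, scanner=(0, 0, 0), r_min=1.0, r_max=100.0)
```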
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401D (2010) https://doi.org/10.1117/12.872847
A method is described for identifying the release regions of Asian dust using the wind velocity near the surface and a long-range inverse transport model that traces the wind field backward from positions where Asian dust was observed. The spatio-temporal concentration distribution of dust clouds over East Asia was computed for a case in which Asian dust clouds were observed in Japan from April 1 to April 2, 2007. In this case, the released mass flux of Asian dust in the source regions was determined such that the simulated concentration of Asian dust closely matches the concentration of suspended particulate matter (SPM) measured at various places in Japan. To better understand the paths along which sand dust particles are transported to Japan, the time variation of the concentration distribution of Asian dust clouds and the wind field at 950 hPa from 22:00 on March 30 to 15:00 on April 4, 2007 was animated at one-hour intervals in Google Earth.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401E (2010) https://doi.org/10.1117/12.872849
In this paper we focus on the application of transferable, object-based image analysis algorithms for dwelling extraction
in a camp for internally displaced people (IDP) in Darfur, Sudan along with innovative means for scientific visualisation
of the results. Three very high spatial resolution satellite images (QuickBird: 2002, 2004, 2008) were used for: (1)
extracting different types of dwellings and (2) calculating and visualizing added-value products such as dwelling density
and camp structure. The results were visualized on virtual globes (Google Earth and ArcGIS Explorer) as analytical 3D views, with the derived values transformed into the third dimension (z-value). Data formats depend on the virtual globe software and include KML/KMZ (Keyhole Markup Language) and ESRI 3D shapefiles streamed as an ArcGIS Server-based globe service. In addition, means for improving the overall performance of automated dwelling extraction using grid computing techniques are discussed using examples from a similar study.
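Transforming an attribute such as dwelling density into a KML z-value can be sketched as extruding each analysis cell to a proportional height (the coordinates, scale factor, and placemark layout here are illustrative, not the study's actual products):

```python
# Build an extruded KML polygon whose height encodes dwelling density;
# the lon/lat ring, cell name, and metres_per_unit scale are invented.
def density_polygon_kml(name, ring_lonlat, density, metres_per_unit=10.0):
    h = density * metres_per_unit
    coords = " ".join(f"{lon},{lat},{h}" for lon, lat in ring_lonlat)
    return (
        f"<Placemark><name>{name}</name><Polygon>"
        f"<extrude>1</extrude><altitudeMode>relativeToGround</altitudeMode>"
        f"<outerBoundaryIs><LinearRing><coordinates>{coords}</coordinates>"
        f"</LinearRing></outerBoundaryIs></Polygon></Placemark>"
    )

kml = density_polygon_kml("cell_042",
                          [(24.8, 13.4), (24.81, 13.4), (24.81, 13.41),
                           (24.8, 13.41), (24.8, 13.4)],
                          density=7.5)
```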
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401F (2010) https://doi.org/10.1117/12.872850
Modeling the world has been done in various forms and formats, from traditional 2D maps, with their well-established rules and conventions, to globes that model the Earth closer to reality. With advances in computer graphics and computing resources, modeling a digitally interactive 3D Earth has become achievable. This paper describes ESRI's 3D visualization systems that could be employed to create such a digital globe. In particular, emphasis is given to how such a system could incorporate analytical and modeling capabilities, along with a discussion of its data delivery and consumption paradigm. Where appropriate, solutions in the form of workflows and software design approaches are presented to alleviate some of the challenges such systems currently face. The paper concludes with a forward-looking assessment of the opportunities such systems promise and the unique position they occupy for the advancement of the geographic sciences as a whole and digital earth modeling in particular.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401G (2010) https://doi.org/10.1117/12.872851
This paper presents a study simulating brightness temperature (BT) between 1 and 100 GHz over the Tibetan Plateau using a coupled land-canopy-atmosphere model under clear-sky conditions. The simulations were compared with Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E) measurements from May 1, 2003. A sensitivity study was also carried out to assess the relative contributions of the main parameters (particularly surface roughness and vegetation water content). Differences between simulated and measured BT were analyzed, discriminating possible issues linked either to the radiative transfer model or to the land surface parameters.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401H (2010) https://doi.org/10.1117/12.872946
High spatial resolution satellite imagery has been widely used in mapping, environmental monitoring, disaster management, and city planning because of its favorable visual effects, rich texture information, accurate positioning, etc. Traditional classification methods designed for medium- and low-resolution satellite data have proved unsuitable for high-resolution image processing. Object-oriented classification can resist the salt-and-pepper effect because it operates on patches of spectrally similar pixels produced by image segmentation. In this paper, a hierarchical framework based on the idea of stratified classification is proposed and applied to urban land cover mapping. This stratified framework integrates object-oriented multi-scale segmentation and the quantification of image object features; the segmentation scale parameter is the key factor in building the framework. Scottsdale, Arizona, USA, is selected as the study area because of its diverse spatial features. The overall accuracy of the land cover classification is 82.58%, the Kappa coefficient is 0.80, and the user's accuracies of most land-cover classes exceed 85%. The study is carried out using the object-oriented image analysis software Definiens Developer 7.0, which can be integrated with other spatial data in vector-based geographical information system (GIS) environments.
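The reported figures are standard confusion-matrix statistics; a short sketch computing overall accuracy and Cohen's Kappa (the matrix below is invented, not the study's data):

```python
# Overall accuracy = diagonal / total; Kappa = (po - pe) / (1 - pe),
# where pe is the agreement expected by chance from row/column totals.
def accuracy_and_kappa(cm):
    n = sum(sum(row) for row in cm)
    diag = sum(cm[i][i] for i in range(len(cm)))
    po = diag / n                                    # observed agreement
    pe = sum(sum(cm[i]) * sum(r[i] for r in cm)      # chance agreement
             for i in range(len(cm))) / (n * n)
    return po, (po - pe) / (1 - pe)

cm = [[45, 3, 2],        # illustrative 3-class confusion matrix:
      [4, 38, 3],        # rows = reference, columns = classified
      [1, 2, 42]]
oa, kappa = accuracy_and_kappa(cm)
```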
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401I (2010) https://doi.org/10.1117/12.872947
Shape is an important visual feature of very high-resolution satellite data. Fourier Descriptors (FDs) are introduced in this paper as a method to extract and represent the shape features of objects in IKONOS imagery, and a 5-dimensional (5-D) feature vector is proposed as a shape parameter. A classification model was established based on the K-means clustering algorithm, with the 5-D feature vector taken as a discrimination variable together with the mean gray values. The results show that when the shape feature vector is included in the classification model, the overall classification accuracy is 82.4%, with a producer's accuracy of 84.6% for roads. This confirms that FDs are a feasible way to represent the shape features of remotely sensed imagery.
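A common FD construction takes the DFT of the object boundary read as a complex sequence and normalizes the coefficient magnitudes for scale invariance; a stdlib-only sketch (the paper's exact 5-D vector definition is not reproduced here, so this is only the general technique):

```python
import cmath

# Treat the boundary as a complex sequence z = x + iy, take its DFT, and
# normalize magnitudes by the fundamental coefficient; keeping a few
# low-frequency terms yields a compact, scale-invariant shape vector.
def fourier_descriptors(boundary, n_keep=5):
    z = [complex(x, y) for x, y in boundary]
    n = len(z)
    coeffs = [sum(z[k] * cmath.exp(-2j * cmath.pi * u * k / n)
                  for k in range(n)) / n for u in range(n)]
    mags = [abs(c) for c in coeffs]
    ref = mags[1] if mags[1] > 1e-12 else 1.0   # scale normalization
    return [m / ref for m in mags[1:1 + n_keep]]

# an 8-point square boundary traced counter-clockwise
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
fd = fourier_descriptors(square)
```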
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401J (2010) https://doi.org/10.1117/12.872948
Grey system theory has recently emerged as a powerful tool for image processing, analysis, and understanding, in tasks such as image compression, denoising, edge detection, information hiding, object recognition and classification, image retrieval, and image fusion. However, these applications are mainly based on the technologies of the grey model and grey relational analysis. The grey difference information principle is one of the six basic principles of grey system theory: all information involves difference, that is, difference is information. So far, however, studies in image engineering based on this principle are rare, even though image noise manifests exactly as a difference in image information. Images are often corrupted by noise, which harms subsequent processing. Although some classical filters have successfully been used on grayscale images to remove impulsive noise, their extension to color images is not straightforward: the main difficulty is that an order must be defined to sort the color vectors. In this paper, the grey difference information principle is applied to a novel color image denoising strategy. First, the grey difference information principle is introduced in detail, including the difference information sequence and the difference information measure. Second, the basic idea of color image denoising based on the difference information principle is proposed, in which the pixel difference information sequences are established according to the filter template types and the properties of the image noise. The method selects a suitable pixel, the one with the minimum difference information synthesis measure in the predefined template window, to replace the noisy pixel. Finally, experiments compare the novel method with classical filters. The experimental results demonstrate that the proposed method outperforms the conventional mean filter in removing impulsive noise from color images. This successful application indicates that the grey difference information principle is as feasible and effective for color image processing as the grey model and grey relational analysis technologies.
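Our reading of the selection rule, sketched for grayscale (for color images the absolute differences would become vector distances); the 3x3 template and summed-difference measure below are an illustrative interpretation, not the paper's exact formulation:

```python
# Within each 3x3 window, replace the centre by the window pixel whose
# summed absolute difference to the other window pixels is smallest --
# an impulsive outlier has a large summed difference and is never chosen.
def grey_difference_filter(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # borders left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = [img[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = min(win, key=lambda v: sum(abs(v - u) for u in win))
    return out

noisy = [[10, 10, 10, 10],
         [10, 255, 10, 10],   # impulsive noise at (1, 1)
         [10, 10, 10, 10],
         [10, 10, 10, 10]]
denoised = grey_difference_filter(noisy)
```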
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401K (2010) https://doi.org/10.1117/12.872949
Many problems exist in the management and application of multi-beam sounding data, such as massive numbers of survey points, large data volumes, difficulty in managing and integrating data from different survey regions, and inefficient multi-scale display and gross-error detection. To address these problems, new methods for the effective management and quality checking of massive multi-beam sounding data are presented in this paper. Three key techniques are discussed in detail: the organization and management of massive multi-beam sounding data, fast visualization, and quality checking. Based on these theories and algorithms, a system named MMSIMS (Massive Multi-beam Sounding Information Management System) has been developed and successfully used in the Bohai waterway multi-beam sounding project.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401L (2010) https://doi.org/10.1117/12.872950
A method for extracting major urban roads from IKONOS imagery is proposed. The texture features of the image were first analyzed at three levels. The first level calculated the Mahalanobis distance between test pixels and training pixels. The second level calculated the Bhattacharyya distance between the distribution of the pixels in the training area and that of the pixels within a 3×3 window in the test area. The third level employed co-occurrence matrices over a texture cube built around each pixel, and then the Bhattacharyya distance was used again. The processed results were thresholded and thinned, respectively. With the assistance of the geometric characteristics of roads, the three resultant images corresponding to the three levels were evaluated using fuzzy mathematics for their likelihood of belonging to roads and then merged. A knowledge-based algorithm was used to link the segmented roads, and the result was finally optimized by polynomial fitting. The experiment shows that the proposed method can effectively extract major urban roads from high-resolution imagery such as IKONOS.
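The first-level measure is the classical Mahalanobis distance between a test pixel and the training statistics; a two-band, stdlib-only sketch with invented numbers:

```python
# Squared Mahalanobis distance d^2 = dx^T * Sigma^{-1} * dx for a 2-band
# pixel, inverting the 2x2 covariance matrix directly.
def mahalanobis2(x, mean, cov):
    dx = (x[0] - mean[0], x[1] - mean[1])
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    inv = ((cov[1][1] / det, -cov[0][1] / det),
           (-cov[1][0] / det, cov[0][0] / det))
    return (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
            + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))

mean = (120.0, 80.0)                 # training-pixel band means (invented)
cov = ((25.0, 0.0), (0.0, 16.0))     # training covariance (diagonal toy)
d2 = mahalanobis2((130.0, 84.0), mean, cov)
```

Unlike Euclidean distance, the covariance weighting makes the measure insensitive to the differing variances of the two bands.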
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401M (2010) https://doi.org/10.1117/12.872951
Keyhole Markup Language (KML) is an XML-based spatial data description language for modeling and storing geographic features such as points, lines, images, polygons, and models for display in earth browsers. KML is very convenient for spatial data description; however, it lacks support for describing terrain information, which is of great value in geology, hydrology, natural disaster monitoring and many other fields. In view of this shortcoming, we extend KML and present the Distributed Spatial Data Markup Language (DSDML). DSDML can describe DEM and raster data of large size and supports customized terrain information applications. This paper mainly investigates how to describe terrain data using DSDML and implement its visualization, including designing a data organization model using the quad-tree and pyramid models, utilizing externally linked DEM datasets for large-volume terrain description and DEM tiles for small-volume terrain description, realizing a DSDML data parser based on the SAX model, and building a terrain buffer pool for terrain visualization to obtain a high-performance mechanism. We conducted experiments on a three-dimensional visualization platform with different data sizes, and the results demonstrate that DSDML is efficient and effective for terrain description and visualization.
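The quad-tree/pyramid data organization described above can be sketched in a few lines. This is an illustrative assumption, not part of DSDML itself: a common layout with 2^(L+1) x 2^L tiles at pyramid level L, where each tile splits into four children at the next level; the function names are hypothetical.

```python
def tile_index(lon, lat, level):
    """Return (col, row) of the tile containing (lon, lat) at a pyramid level."""
    cols, rows = 2 ** (level + 1), 2 ** level
    col = min(int((lon + 180.0) / 360.0 * cols), cols - 1)
    row = min(int((90.0 - lat) / 180.0 * rows), rows - 1)
    return col, row

def children(col, row):
    """Quad-tree children of a tile at the next finer level."""
    return [(2 * col + dc, 2 * row + dr) for dr in (0, 1) for dc in (0, 1)]
```

A renderer would walk down this tree, loading externally linked DEM tiles only for the quadrants currently in view.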
Di Zhao, Xuesheng Zhao, Shigang Shan, Liangjun Yao
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401N (2010) https://doi.org/10.1117/12.872952
Wavelets have proven to be an exceedingly powerful and highly efficient tool for fast computational algorithms in image data analysis and compression. Traditionally, classically constructed wavelets are applied to infinite Euclidean domains (such as the real line R and the plane R2). In this paper, a spherical wavelet for discrete DEM data based on the sphere is proposed. Firstly, a discrete biorthogonal spherical wavelet with custom properties is constructed with the lifting scheme, based on the wavelet toolbox in Matlab. Then, decomposition and reconstruction algorithms are proposed for efficient computation, and the related wavelet coefficients are obtained. Finally, images of different precision are displayed and analyzed at different percentages of wavelet coefficients. The efficiency of this spherical wavelet algorithm is tested using GTOPO30 DEM data, and the results show that at the same precision the spherical wavelet algorithm requires a smaller storage volume.
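The lifting scheme mentioned above builds wavelets from simple predict and update steps. As a minimal one-dimensional (not spherical) illustration, assuming Haar-like predict/update steps chosen for brevity rather than the paper's biorthogonal construction:

```python
def lift_forward(signal):
    """One level of a simple lifting transform: split, predict, update."""
    even, odd = signal[0::2], signal[1::2]
    detail = [o - e for e, o in zip(even, odd)]           # predict odd from even
    approx = [e + d / 2.0 for e, d in zip(even, detail)]  # update even with detail
    return approx, detail

def lift_inverse(approx, detail):
    """Exact inverse: undo update, undo predict, merge."""
    even = [a - d / 2.0 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out
```

Compression comes from discarding small `detail` coefficients before reconstruction, which is what "different percentages of wavelet coefficients" refers to.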
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401O (2010) https://doi.org/10.1117/12.872953
High-quality spatially referenced population information plays an important role in many socio-demographic fields. This paper focuses on a grid transformation method for population data that combines geographic factors with simulated township boundary adjustment. Given the location, area and census data of each town-level administrative unit and the national base map (1:25000) of China, 1 km x 1 km gridded population data can be acquired after interpolation and several adjustments. Besides the adjustment based on geographical factors such as topography, transport (roads), rivers and settlements, a new adjustment method based on township boundary simulation and the total population of each town is proposed. The Voronoi polygon of each town point is generated and rasterized into 1 km x 1 km grids. Considering the area of each township and the boundary line of the corresponding county, a collapse or expansion process is applied to the rasterized Voronoi polygons pixel by pixel, iteratively minimizing the difference between the total count of grid cells sharing a township ID and the announced area of that township. After boundary simulation, the gridded population data are adjusted based on town-level census data. The study indicates that the proposed method can produce a fine-grained gridded population surface and is demonstrated to be an effective method for transforming census population data into regular grids.
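The rasterized-Voronoi step can be illustrated with a brute-force nearest-town assignment over the grid; the coordinate convention and function names are hypothetical, and the paper's iterative collapse/expansion refinement against announced township areas is not shown.

```python
def rasterize_voronoi(towns, ncols, nrows):
    """Assign each grid cell to its nearest town point (rasterized Voronoi).
    towns: {town_id: (x, y)} in grid units; cell centers at (c+0.5, r+0.5)."""
    grid = [[None] * ncols for _ in range(nrows)]
    for r in range(nrows):
        for c in range(ncols):
            cx, cy = c + 0.5, r + 0.5
            grid[r][c] = min(
                towns,
                key=lambda t: (towns[t][0] - cx) ** 2 + (towns[t][1] - cy) ** 2,
            )
    return grid
```

Each township's population would then be distributed among the cells carrying its ID, weighted by the geographic factors listed above.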
Guangcai Xu, Yong Pang, Lingling Yuan, Mingyang Li, Tian Fu
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401P (2010) https://doi.org/10.1117/12.872954
Full-waveform lidar systems have been shown to have great potential in forest-related applications. Such systems digitize and record the entire backscattered signal of each emitted pulse, giving end users more control over raw data management and interpretation than traditional discrete-return lidar data. In forest areas especially, waveform data provide more detailed information and open new opportunities for point cloud classification based on waveform characteristics.
In this study, full-waveform data were collected by a Riegl LMS-Q560 system with a point density of 1.4 points/m2 in the Dayekou Watershed (DYK), Gansu province, China. These small-footprint airborne full-waveform lidar data were used to extract statistical information (i.e. echo half-width, amplitude and intensity) for different targets such as grass, shrub, forest and bare areas, and to classify the typical targets in the test field. A non-linear least squares method was adopted to fit a series of Gaussian pulses to decompose the raw waveform data, after which the attributes of each pulse, including peak location, half-width, amplitude and intensity, were calculated. In general, different objects respond to the emitted pulse differently, which is reflected in the three attributes described above. The decomposed waveform data were transformed into 3D points with several related attributes. Field survey information and high-resolution multispectral images from the same period were used to determine the specific location and extent of the different feature areas (forest, bare land, grassland, construction), from which the statistics of the three attributes were computed for the corresponding regions in the decomposed waveform data. The results show that the three statistical characteristics of the different targets differ to some extent, demonstrating their potential for point cloud classification.
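The Gaussian decomposition above models each echo by the three attributes extracted: amplitude, peak location and half-width (FWHM). A minimal sketch of the pulse model follows; the non-linear least squares fit of these parameters to the raw waveform, which the paper performs, is not reproduced here.

```python
import math

def gaussian_pulse(t, amplitude, peak, half_width):
    """Single Gaussian echo; half_width is the full width at half maximum."""
    sigma = half_width / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return amplitude * math.exp(-((t - peak) ** 2) / (2.0 * sigma ** 2))

def waveform(t, pulses):
    """Decomposed waveform: a sum of Gaussian echoes (amplitude, peak, fwhm)."""
    return sum(gaussian_pulse(t, *p) for p in pulses)
```

By construction the model evaluates to the amplitude at the peak location and to half the amplitude at half_width/2 on either side of it.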
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401Q (2010) https://doi.org/10.1117/12.872955
This paper makes a general comparison of different spatial interpolation methods using marine temperature and salinity (T-S) data from the Tianjin coastal waters of the Bohai Bay as a case study. It discusses four spatial interpolation methods: Inverse Distance Weighting (IDW), Radial Basis Function (RBF), Ordinary Kriging and Universal Kriging. Marine temperature and salinity from 15 stations during the summer and winter cruises of 2006 in Tianjin coastal waters are employed. Each of the four interpolation methods is carried out on 14 stations and validated against the one remaining station. The criteria (Mean Error, Mean Absolute Error, and Root Mean Square Error) demonstrate that the Ordinary Kriging and IDW methods are both suitable for marine temperature sampled during the summer and winter cruises of 2006 and for marine salinity sampled during the summer cruise of 2006, while the IDW method is appropriate for marine salinity during the winter cruise of 2006. Overall, the Simple Kriging method is slightly superior to IDW, the Universal Kriging method is inferior to Simple Kriging and IDW, and the RBF method performs worst in the Tianjin coastal waters of the Bohai Bay.
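For reference, IDW, which the comparison finds suitable for most of the variables above, can be written in a few lines. This is a generic sketch: the power of 2 is a common default, not a value taken from the paper.

```python
def idw(sample_points, target, power=2.0):
    """Inverse Distance Weighting at `target` from [(x, y, value), ...]."""
    num = den = 0.0
    for x, y, v in sample_points:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0.0:
            return v  # exact hit on a station
        w = d2 ** (-power / 2.0)  # weight = 1 / distance**power
        num += w * v
        den += w
    return num / den
```

Cross-validation as in the paper simply leaves one station out, interpolates at its location from the other 14, and accumulates the error criteria.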
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401R (2010) https://doi.org/10.1117/12.872957
Automatic registration of multi-source remote sensing images is a challenging research focus. This paper proposes a robust and accurate method for multi-source remote sensing image registration. The proposed method is a two-step process consisting of pre-registration and fine-tuning registration. Firstly, the method detects matching points with the Scale Invariant Feature Transform (SIFT) algorithm and pre-registers the input image using these points according to a polynomial model. As a result, the input image is transformed to the same spatial pixel size and reference coordinate system as the reference image. Secondly, a large number of feature points are detected in the input image with the Harris corner detector, and tie point pairs are found rapidly by correlation coefficient within a small search window in the reference image. Erroneous tie point pairs are pruned by Baarda's data snooping method. Finally, both the reference image and the input image are divided into triangular regions by constructing a Triangulated Irregular Network (TIN) from the selected tie point pairs, and an affine transformation is applied to rectify each triangular facet of the TIN. Experiments demonstrate that the proposed method achieves precise registration.
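The per-facet rectification step solves a 6-parameter affine transformation from the three tie-point pairs at a triangle's vertices. A sketch of the usual closed-form solution, with illustrative names:

```python
def affine_from_triangle(src, dst):
    """Solve the 6-parameter affine mapping src -> dst from three tie points.
    Returns (a, b, c, d, e, f) with x' = a*x + b*y + c, y' = d*x + e*y + f."""
    (x1, y1), (x2, y2), (x3, y3) = src
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)  # zero iff collinear
    coeffs = []
    for i in (0, 1):  # solve separately for the x' and y' equations
        u1, u2, u3 = dst[0][i], dst[1][i], dst[2][i]
        p = ((u2 - u1) * (y3 - y1) - (u3 - u1) * (y2 - y1)) / det
        q = ((u3 - u1) * (x2 - x1) - (u2 - u1) * (x3 - x1)) / det
        coeffs += [p, q, u1 - p * x1 - q * y1]
    return tuple(coeffs)
```

Because adjacent triangles share edges and vertices, the piecewise-affine warp is continuous across the whole TIN.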
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401S (2010) https://doi.org/10.1117/12.872958
It is important to locate and optimize atmospheric environmental monitoring points in mid-scale regions. Taking Hubei province as a case study, this paper first analyzes the weather, climate conditions, terrain features and economic status of the area, and proposes the idea of air environmental impact division, which requires land classification. The study area is then classified into three feature types from MODIS data with GIS software, and the original air quality monitoring points are optimized by means of fuzzy clustering. The results show that the optimized points are equivalent to the present monitoring points with only minor changes. These methods can help locate atmospheric environmental monitoring points in other areas with rough or complex land surfaces.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401T (2010) https://doi.org/10.1117/12.872959
This work proposes a feature extraction strategy for each land cover class using a hybrid classification method on multi-date ASTER data. To enable an effective comparison among multi-date images, the Multivariate Alteration Detection (MAD) transformation was applied for data homogenization, reducing noise due to local atmospheric conditions and sensor characteristics. Different feature identification procedures, both spectral and object-based, were then implemented to overcome misclassification among classes with similar spectral responses. Lastly, a post-classification comparison was performed on the multi-date ASTER-derived land cover (LC) maps to evaluate the effects of change in the study area. Most existing methods, when used in multi-date analysis, do not consider data homogenization in change detection to reduce such noise due to local atmospheric conditions and sensor characteristics.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401U (2010) https://doi.org/10.1117/12.872960
With the star tracker and Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS), the SPOT5 satellite point positioning error is lower than 50 m. In this paper, two terrain types, mountainous areas and plain areas, are defined by terrain gradient. Ground check points are chosen on these terrains with both geometric error and elevation error lower than 1 m. The UTM coordinates of each point are calculated with both the Rigorous Sensor Model and the Simple Polynomial Direct Location Model, and the differences between the computed and surveyed coordinates are discussed. According to the experimental results, the two models each have advantages in different circumstances. In mountainous areas, the Rigorous Sensor Model achieves higher location accuracy. In plain areas, the two models locate with the same accuracy, but the Simple Polynomial Direct Location Model performs better in terms of speed and simplicity.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401V (2010) https://doi.org/10.1117/12.872961
The urban fringe lies in the transitional region between urban and rural areas. Defining the urban fringe area and studying how it changes are beneficial to urban planning and to the readjustment of land use structure. Taking Landsat TM images as basic information and using the theories of Shannon entropy and the land use degree comprehensive index, methods for defining the urban fringe area of Beijing are discussed, and the fringe is then delineated using these two methods. The results show that the urban fringe area of Beijing includes part of the urban district and also a small part of the rural district. It is distributed as a ring that extends outward irregularly, especially to the northwest and southeast.
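The Shannon entropy used above measures how mixed the land-use composition of an area is, which is exactly what distinguishes the fringe from purely urban or purely rural zones. A minimal sketch, with the natural-log base assumed since the abstract does not specify one:

```python
import math

def shannon_entropy(shares):
    """Shannon entropy of land-use type proportions (shares sum to 1).
    Higher values indicate a more mixed landscape, as in the urban fringe."""
    return -sum(p * math.log(p) for p in shares if p > 0.0)
```

A single dominant land use gives entropy 0, while an even mix of types maximizes it, so thresholding the entropy surface is one way to delineate the fringe.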
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401W (2010) https://doi.org/10.1117/12.872962
Updating geospatial data has recently become important work in related fields. Constantly changing geospatial data pose problems for geospatial databases at all scales, both in satisfying the representation condition and in reasoning about new objects. We propose an incremental updating strategy and method for geospatial data based on granular computing, to solve these problems in both static and dynamic conditions. We point out that proper representation of geospatial data at a given scale cannot be achieved unless the original data of the geospatial objects satisfy the representation condition. With granular computing, we can implement the representation condition, from which new geospatial data can be inferred. In addition, we illustrate the method with a case study.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401X (2010) https://doi.org/10.1117/12.872963
Extracting object features from high-resolution remote sensing images remains an open problem, although the topic has been intensively investigated and many methods have been proposed. This paper focuses on modern urban roads in four steps, namely imagery pre-processing, threshold calculation, feature extraction for straight and curved lines, and target reconstruction. From this perspective, a new semi-automatic approach based on phase classification is proposed. Firstly, the basic road network is obtained from high-resolution remote sensing images by grey-level mathematical morphology and the Canny algorithm. Secondly, road information is accurately extracted by means of the "grey" parameters, which vary among different road models according to the theory of phase-based classification. Thirdly, the proposed method can also be employed to extract elevated urban highways, especially their curved sections. The experimental results demonstrate that the proposed extraction method obtains reasonable results.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401Y (2010) https://doi.org/10.1117/12.872964
Focusing on the fusion of multispectral (MS) and panchromatic (Pan) images of the same scene, a novel image fusion method is proposed based on the nonsubsampled contourlet transform (NSCT) and the human visual system (HVS). The most common traditional fusion methods are the IHS, PCA and Brovey transforms, which can introduce spectral distortion. To avoid this problem, the wavelet transform has often been used for image fusion in recent years, but it can capture only limited directional information. Compared with the wavelet and other transforms, the contourlet transform offers multi-scale analysis, time-frequency localization and multiple directions; however, because the contourlet transform lacks translation invariance, this paper uses the nonsubsampled contourlet transform. The basic procedure consists of four steps. Firstly, the NSCT is performed on the Pan image and on the intensity component I of the MS image obtained by the IHS transform, yielding the low-frequency subband and highpass directional coefficients of each image. Secondly, a new fusion rule based on the HVS is applied: the corresponding low-frequency and highpass components are divided into blocks, the contrast variance of every block is calculated, and an adaptive threshold is selected to construct the new low-frequency and highpass components; the blocks with higher contrast variance are chosen. Thirdly, the new intensity component Inew with high spatial resolution is obtained by performing the inverse NSCT on the resulting coefficients. Finally, the inverse IHS transform using the Inew component is performed to obtain the new fused multispectral image. According to the quantitative evaluation criteria, the proposed method effectively preserves spectral information, improves the spatial information of the fused image, and outperforms the traditional IHS, PCA, Brovey, wavelet and contourlet methods.
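The intensity substitution at the heart of IHS-style fusion can be sketched in additive form. This is a simplified stand-in using a simple-average intensity, not the NSCT/HVS-based rule the paper proposes:

```python
def ihs_intensity(r, g, b):
    """Intensity component of the IHS transform (simple average form)."""
    return (r + g + b) / 3.0

def substitute_intensity(r, g, b, new_i):
    """Replace the intensity with a fused high-resolution intensity while
    preserving the chromatic differences (additive IHS fusion form)."""
    delta = new_i - ihs_intensity(r, g, b)
    return r + delta, g + delta, b + delta
```

Because only the intensity is shifted, the band differences (and hence hue/saturation in this simple model) are unchanged, which is why component substitution can inject Pan detail while limiting spectral distortion.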
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 78401Z (2010) https://doi.org/10.1117/12.872965
This paper introduces a case study of a socio-economic statistical spatio-temporal database. The database system serves rural socio-economic statistical work and combines statistical tables, spatial data, search algorithms and a maintenance interface. Administrative codes are the medium joining spatial data and attribute data, and are also the keys for database queries. By storing change information in the database, the system can reflect changes in administrative divisions. The main issues of the database design, namely how to record and query these changes and how to process statistical data under the rules governing administrative division changes, required a large amount of research work. To address these problems, a series of management and analysis tools has been developed to process socio-economic statistical data under administrative division changes. A search algorithm for the spatio-temporal database ensures the comparability of results acquired by forward-sequence and reverse-sequence temporal queries under complex spatial changes in the administrative divisions. According to the spatial changes, the search algorithm mainly translates temporal series statistical data into standard-format data matched to the benchmark year. The algorithm controls the query process through recursion over the table of administrative code changes, which is composed of a multi-way tree structure and a doubly linked list and records the relationships between upper- and lower-level administrative units. These search algorithms and metadata storage structures constitute the spatio-temporal database, serving the spatial analysis of statistical data. The comparability problem mentioned above is well solved by this approach, and the system provides a set of functions including specialization of statistical data, temporal queries, automatically updated spatial data, and a maintenance interface.
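The recursive translation of administrative codes to the benchmark year through the multi-way tree of recorded changes can be sketched as follows; the data layout and code values are hypothetical, and splits are modeled simply as one old code mapping to several successor codes.

```python
def map_to_benchmark(code, changes):
    """Follow recorded administrative-division changes until reaching codes
    valid in the benchmark year. `changes` maps an old code to its list of
    successor codes (a multi-way tree); codes absent from it are unchanged."""
    if code not in changes:
        return [code]
    result = []
    for successor in changes[code]:
        result.extend(map_to_benchmark(successor, changes))
    return result
```

Statistical values recorded under an old code would then be reallocated across the returned benchmark-year codes, which is what makes forward and reverse temporal query results comparable.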
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784020 (2010) https://doi.org/10.1117/12.872966
Currently, the trend in international surveying and mapping is shifting from map production to integrated geospatial information services, such as the GOS of the U.S. Under this circumstance, surveying and mapping in China is inevitably shifting from 4D product services to services centered on NCGISPC (the National Common Geospatial Information Service Platform of China). Although the State Bureau of Surveying and Mapping of China has already provided a great quantity of geospatial information services to various lines of business, such as emergency and disaster management, transportation, water resources and agriculture, the shortcomings of the traditional service mode are increasingly obvious, owing to the emerging requirements of e-government construction, the remarkable development of IT, and the online geospatial service demands of various lines of business. NCGISPC, which aims to provide authoritative online one-stop geospatial information services and APIs for further development by government, business and the public, is now the strategic core of the SBSM (State Bureau of Surveying and Mapping of China). This paper examines the paradigm shift that NCGISPC brings, using a SWOT (Strength, Weakness, Opportunity and Threat) analysis compared with the service mode based on 4D products. Though NCGISPC is still at an early stage, it represents the future service mode of geospatial information in China and will surely have a great impact not only on the construction of digital China, but also on the way everyone uses geospatial information services.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784021 (2010) https://doi.org/10.1117/12.872968
Geometric correction of imagery is a basic application of remote sensing technology, and its precision directly affects the accuracy and reliability of downstream applications. The accuracy of geometric correction depends on many factors, including the correction model used, the accuracy of the reference map, the number of ground control points (GCPs) and their spatial distribution, and the resampling method. An ETM+ image of the Kunming Dianchi Lake Basin and 1:50000 geographical maps were used to compare different correction methods. The results showed that: (1) correction errors were more than one pixel, and sometimes several pixels, when the polynomial model was used; correction accuracy was not stable when the Delaunay model was used; and correction errors were less than one pixel when the collinearity equation was used. (2) With 6, 9, 25 and 35 GCPs selected randomly for geometric correction using the polynomial model, the best result was obtained with 25 GCPs. (3) The image corrected with nearest-neighbor resampling had better contrast and a faster resampling rate than those corrected with the cubic convolution and bilinear models, but the continuity of pixel gray values was not very good; the image corrected with cubic convolution had the worst contrast and the longest computation time. According to the above results, the best result was obtained by resampling with the bilinear method.
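Bilinear resampling, the method found best overall in (3), interpolates each output pixel from its four nearest input neighbors. A minimal sketch with illustrative names (edge pixels clamped rather than padded):

```python
def bilinear(grid, x, y):
    """Bilinear resampling of `grid` (rows of pixel values) at fractional
    image coordinates (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(grid[0]) - 1)
    y1 = min(y0 + 1, len(grid) - 1)
    fx, fy = x - x0, y - y0
    top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx  # blend along x, top row
    bot = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx  # blend along x, bottom row
    return top * (1 - fy) + bot * fy                   # blend along y
```

The smooth blending explains the trade-off reported above: gray values are continuous, at the cost of slightly lower contrast than nearest-neighbor resampling.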
Linyan Bai, Yong Xue, Chunxiang Cao, Jianzhong Feng, Hao Zhang, Jie Guang, Ying Wang, Yingjie Li, Linlu Mei, et al.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784022 (2010) https://doi.org/10.1117/12.872969
Atmospheric aerosol, particulate matter suspended in the air, exists in a variety of forms such as dust, fume and mist. It deeply affects climate and the land surface environment at both regional and global scales and, furthermore, strongly influences human health. To monitor it effectively, many atmospheric aerosol observation networks have been set up around the world and provide associated information services, such as the well-known Aerosol Robotic Network (AERONET) and the Canadian Sunphotometer Network (AeroCan). For large-scale atmospheric aerosol monitoring, using satellite remote sensing data to retrieve aerosol optical depth is an available and effective approach, and special types of instruments aboard operating satellites are nowadays applied to obtain the remote sensing data needed for atmospheric aerosol retrieval. However, real-time or near-real-time monitoring of atmospheric aerosol has not yet been accomplished; retrievals using Fengyun-2 VISSR data can resolve this problem to a certain extent, especially over China. In this paper, the authors have developed a new retrieval model to retrieve aerosol optical depth using Fengyun-2 satellite data obtained by the VISSR instruments aboard FY-2C and FY-2D. A series of aerosol optical depth distribution maps with high time resolution was obtained, which is helpful for understanding the formation mechanism, transport, influence and control of atmospheric aerosol.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784023 (2010) https://doi.org/10.1117/12.872970
In this study we present new satellite-based maps of the growing season of northern areas. The maps show trends and mean dates of the onset and length of the growing season at different scales north of 50° N. For the whole circumpolar area we use the GIMMS-NDVI satellite dataset for the 1982 to 2006 period, and for the Nordic countries we use MODIS-NDVI satellite data for the 2000 to 2007 period. The circumpolar maps are not as accurate as the ones covering the Nordic countries, owing to the lack of ancillary environmental geo-data that can be included in the mapping process; this is a particular problem for the Russian part of the circumpolar north. The resulting growing season maps are useful in a broad range of ecological and climatic change studies. Changes in the timing of the growing season are sensitive bio-indicators of climate change in northern areas, and these changes crucially affect primary industries, such as agriculture, animal husbandry and forestry, as well as the population dynamics of wild mammals and birds. The onset-of-growing-season maps are also useful for improving pollen forecasts, and the maps can be used to improve global change models.
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784024 (2010) https://doi.org/10.1117/12.872971
Algal blooms are of concern to any coastal region, especially one in close proximity to a city. This became particularly evident in Qingdao, in Shandong province of China, when it experienced an unexpected algal bloom covering major parts of its shore in June 2008. This study uses Aqua MODIS monthly composite images of chlorophyll concentration, an indicator of algal productivity, alongside the corresponding Aqua MODIS sea surface temperature images. Because the images are readily available, the variability of chlorophyll concentration with temperature in the region is mapped to see how the algal bloom varies spatially, to ascertain the extent of the predominant regions of high concentration, and to determine whether any temporal association can be established prior to the bloom. The methodology uses SeaDAS and open-source GIS for spatial analysis and image processing. The results provide insights into the seasonal pattern of chlorophyll distribution and sea surface temperature prior to the bloom.
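The mapping of chlorophyll variability against temperature described above can be framed, in one simple form, as a per-pixel Pearson correlation between the two monthly image stacks. The function below is a minimal sketch of that framing, assuming the composites have been co-registered into NumPy arrays of shape (months, rows, cols); it is not taken from the paper, which performs its analysis in SeaDAS and GIS tools.

```python
import numpy as np

def pixelwise_correlation(chl, sst):
    """Pearson correlation between monthly chlorophyll and SST
    stacks, computed independently for each pixel.

    chl, sst: arrays of shape (months, rows, cols).
    Returns an array of shape (rows, cols) with values in [-1, 1].
    """
    chl = chl - chl.mean(axis=0)     # remove each pixel's temporal mean
    sst = sst - sst.mean(axis=0)
    num = (chl * sst).sum(axis=0)
    den = np.sqrt((chl ** 2).sum(axis=0) * (sst ** 2).sum(axis=0))
    return num / den

# Tiny synthetic demo: chlorophyll rising linearly with SST
months = np.arange(6, dtype=float)
chl = months[:, None, None] * np.ones((1, 2, 2))
sst = 3.0 * chl + 280.0
print(pixelwise_correlation(chl, sst))   # near +1 everywhere
```

Thresholding or classifying the resulting correlation map would highlight regions where chlorophyll tracks temperature most strongly, one way of "ascertaining the extent of predominant regions" ahead of a bloom.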
Proceedings Volume Sixth International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, 784025 (2010) https://doi.org/10.1117/12.872972
In comparison with polar-orbiting satellites, geostationary satellites have a higher temporal resolution and a wider field of view, spanning eleven time zones (one image covers about one third of the Earth's surface). In a geostationary satellite panoramic image taken at a single point in time, the brightness temperatures of different zones cannot represent the surface thermal radiation at that same point in time, because each zone receives different solar illumination. It is therefore necessary to calibrate the brightness temperatures of the different zones with respect to a common point in time. This study proposes a model for calibrating the brightness-temperature differences in geostationary satellite imagery that are caused by time-zone differences. A total of 16 curves for four positions in four stages are derived through sample statistics of brightness temperature from 5-day synthetic data for four time zones (zones 4, 6, 8, and 9). The four stages span January-March (winter), April-June (spring), July-September (summer), and October-December (autumn). Three correction situations, with correction formulas based on the curve changes, are able to largely eliminate the brightness-temperature rises or drops caused by time-zone differences.
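The correction scheme, stripped to its essentials, amounts to subtracting a zone- and season-dependent offset so that all zones become comparable with a reference zone. The sketch below assumes that structure; the offset values, the choice of zone 8 as reference, and the quarter-based seasonal lookup are placeholders, since the paper's 16 statistical curves are not reproduced here.

```python
# Hypothetical per-zone seasonal offsets in kelvin, indexed by quarter
# (0: Jan-Mar, 1: Apr-Jun, 2: Jul-Sep, 3: Oct-Dec). In the paper these
# would come from the 5-day-composite statistics for zones 4, 6, 8, 9.
OFFSETS = {
    4: [1.2, 0.8, 0.5, 1.0],
    6: [0.6, 0.4, 0.2, 0.5],
    8: [0.0, 0.0, 0.0, 0.0],   # reference zone: no correction
    9: [-0.3, -0.2, -0.1, -0.3],
}

def correct_bt(bt, zone, month):
    """Adjust a brightness temperature (K) for a given time zone so it
    is comparable with the reference zone at a common observation time.
    """
    quarter = (month - 1) // 3
    return bt - OFFSETS[zone][quarter]

print(correct_bt(280.0, 4, 2))   # winter correction for zone 4
```

In practice the correction would be applied pixel by pixel over the full disk image, with the zone index derived from each pixel's longitude.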