Loss of texture information and color distortion have long been two key issues that degrade the quality of fused images. In recent years, with the development of remote sensing payloads, the gap between the spectral ranges of panchromatic and multi-spectral images has grown larger, making both problems more prominent in true-color fusion. On the one hand, the energy distributions of water and vegetation differ greatly between the near-infrared and visible spectral ranges, so color distortion is concentrated in water and vegetation areas: vegetation receives ample energy while water areas receive very little. On the other hand, multi-spectral devices have poor anti-saturation and anti-dispersion characteristics, which often leads to the loss of texture information in images. In addition, the limited energy reaching the multi-spectral sensor restricts the recognition of texture in shadowed areas of the fused image. Based on an analysis of the shortcomings of existing fusion methods, this paper proposes a pan-sharpening fusion optimization method based on the pyramid model. The method first uses the spectral relationship between the multi-spectral and panchromatic images to build a basic fusion model; then, to prevent the "ill-conditioned equation" phenomenon from appearing during fusion, unequal conditional equations are introduced into the basic fusion model to form simultaneous equations, avoiding color distortion and invalid data in the fusion results.
Secondly, to suppress edge blurring in the fused image caused by saturation overflow, we calculate the ratio of the panchromatic image to the up-sampled multi-spectral image and use it to compensate for the deficiency of the previous fusion model, generating fused images with high clarity and high spectral fidelity.
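The ratio-based compensation step described above can be sketched as a Brovey-style modulation: each up-sampled multi-spectral band is scaled by the ratio of the panchromatic image to an intensity proxy. This is a minimal illustration, not the paper's exact formulation; the function name, the band-average intensity proxy, the epsilon guard, and the [0, 1] clipping range are all assumptions.

```python
import numpy as np

def ratio_sharpen(pan, ms_up, eps=1e-6):
    """Inject PAN detail into each up-sampled MS band via a ratio image.

    pan   : 2-D array (H, W), panchromatic image, values in [0, 1]
    ms_up : 3-D array (bands, H, W), MS image up-sampled to PAN resolution
    """
    intensity = ms_up.mean(axis=0)       # simple band average as intensity proxy
    ratio = pan / (intensity + eps)      # PAN / up-sampled-MS ratio image
    fused = ms_up * ratio                # modulate each band by the ratio
    return np.clip(fused, 0.0, 1.0)      # guard against saturation overflow
```

When the panchromatic image already matches the multi-spectral intensity, the ratio is close to one and the bands pass through nearly unchanged, which is the desired spectral-fidelity behavior.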
Traditional approaches to remote sensing image segmentation are mature in certain respects, but the emergence of spaceborne continuously observing satellite video creates a new demand for moving-object detection from this data source. In the field of computer vision, deep learning has achieved outstanding performance on general imagery. In this research, a deep-learning-based method is introduced and several modifications are made to its processing steps. The Faster Convolutional Neural Network (Faster CNN) algorithm is selected as the base pipeline, and a conditional random field is used to generate finer-detail proposals. After additional training iterations, the ship-extraction results on remote sensing satellite video data are compared with those of the original Faster CNN method, demonstrating improved target detection across different tests.
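The comparison between the refined proposals and the baseline detections is typically scored with intersection-over-union between predicted and reference ship boxes. A minimal sketch of that metric follows; the (x1, y1, x2, y2) box convention and function name are assumptions for illustration, not taken from the paper.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # overlap rectangle corners
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection is usually counted as correct when its IoU with a reference box exceeds a threshold such as 0.5, which is how "improved target detection output" can be quantified across tests.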
In the field of remote sensing image processing, image segmentation is a preliminary step for subsequent analysis, semi-automatic human interpretation, and fully automatic machine recognition and learning. Since 2000, object-oriented remote sensing image processing and its underlying ideas have prevailed; the core of the approach is the Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm. This paper focuses on studying and improving that algorithm: it analyzes existing segmentation algorithms and selects the watershed algorithm as the optimal initialization. The algorithm is then modified by adjusting an area parameter, and further by combining the area parameter with a heterogeneity parameter. Several experiments are carried out to show that the modified FNEA algorithm achieves better segmentation results than a traditional pixel-based method (an FCM algorithm based on neighborhood information) and the plain combination of FNEA and watershed.
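FNEA-style region merging is driven by the area-weighted increase in heterogeneity that a merge would cause: two adjacent segments are merged only when this cost stays below a scale parameter. The sketch below shows the spectral part of that criterion for a single band, using standard deviation as the heterogeneity measure; the function name and single-band simplification are assumptions, and the shape-heterogeneity term of the full criterion is omitted for brevity.

```python
import numpy as np

def merge_cost(seg1, seg2):
    """Area-weighted increase in spectral heterogeneity (std. dev.)
    incurred by merging two segments, in the spirit of the FNEA criterion.

    seg1, seg2 : 1-D arrays of pixel values belonging to each segment
    """
    merged = np.concatenate([seg1, seg2])
    n1, n2, nm = len(seg1), len(seg2), len(merged)
    # cost = n_merged * h_merged - (n1 * h1 + n2 * h2)
    return nm * merged.std() - (n1 * seg1.std() + n2 * seg2.std())
```

Merging two spectrally identical segments costs nothing, while merging dissimilar segments is penalized in proportion to their combined size, which is what lets the scale parameter control average object size.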
In this paper, a method for individual tree shape modeling and canopy coverage delineation is provided for high-density airborne LiDAR data. Three basic 3-D canopy shape models are introduced as fundamental assumptions, and an iterative algorithm for computing the tree canopy window is implemented. A prototype test on simulated forest point data visually shows a valid result; a real mixed-forest LiDAR dataset is then used for experiments. The outputs and a statistical analysis reveal that the proposed method yields an effective and distinguishable delineation of different tree canopy coverage.
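Idealized 3-D crown shape models of the kind the abstract refers to can be expressed as a canopy surface height that tapers with horizontal distance from the stem. The sketch below uses three common idealizations (cone, paraboloid, hemisphere); the specific shape names, parameterization, and function signature are illustrative assumptions, not necessarily the three models chosen in the paper.

```python
import math

def crown_height(r, apex_h, crown_depth, crown_radius, shape="cone"):
    """Canopy surface height at horizontal distance r from the stem axis
    for three idealized crown shapes.

    apex_h       : height of the tree top
    crown_depth  : vertical extent of the crown (apex to crown base)
    crown_radius : horizontal extent of the crown
    """
    t = min(r / crown_radius, 1.0)            # normalized radial distance
    if shape == "cone":                        # linear taper from apex to base
        return apex_h - crown_depth * t
    if shape == "paraboloid":                  # quadratic taper
        return apex_h - crown_depth * t ** 2
    if shape == "hemisphere":                  # circular profile
        return apex_h - crown_depth * (1.0 - math.sqrt(max(1.0 - t ** 2, 0.0)))
    raise ValueError(f"unknown shape: {shape}")
```

Fitting one of these surfaces to the LiDAR points around a detected tree top gives a residual that can drive an iterative canopy-window calculation: points below the fitted surface at radius r are assigned to that crown, and the window grows until the fit degrades.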