Visualizing stream surfaces in three-dimensional flow fields is a popular flow visualization method because it depicts flow structures with better depth cues than rendering a large number of streamlines. Computing stream surfaces accurately, however, is non-trivial: the result is sensitive to multiple factors, such as the accuracy of numerical integration, the placement of sampling seeds, and the tessellation of sample points into high-quality polygonal meshes. To date, multiple stream surface generation algorithms exist, but verifying and evaluating the quality of the resulting stream surfaces remains an open area of research. In this paper we address this issue: we propose four stream surface verification metrics and apply them to study different aspects of the stream surface generation process, including the choice of algorithm, the placement and initial density of the seeding curve, and the choice of algorithm parameters, to reach meaningful conclusions.
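As one illustration of the numerical-integration step such evaluations depend on, the sketch below advects a seed curve through a flow field with fourth-order Runge-Kutta steps to form a stream surface mesh. The analytic flow field, step size, and seed placement are all assumptions made for this example, not the paper's setup or algorithms.

```python
import numpy as np

def velocity(p):
    # Hypothetical analytic flow field (not from the paper): a gentle swirl
    # about the z-axis with a constant upward drift.
    x, y, z = p
    return np.array([-y, x, 0.2])

def rk4_step(p, h):
    # One fourth-order Runge-Kutta step of the streamline ODE dp/dt = v(p).
    k1 = velocity(p)
    k2 = velocity(p + 0.5 * h * k1)
    k3 = velocity(p + 0.5 * h * k2)
    k4 = velocity(p + h * k3)
    return p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def stream_surface(seed_curve, steps, h):
    # Advect every seed point; rows of the returned array are time lines,
    # columns are streamlines, and together they form a quad mesh that a
    # tessellation stage would turn into the final polygonal surface.
    rows = [np.array(seed_curve, dtype=float)]
    for _ in range(steps):
        rows.append(np.array([rk4_step(p, h) for p in rows[-1]]))
    return np.stack(rows)

seeds = [(1.0 + 0.1 * i, 0.0, 0.0) for i in range(5)]
surf = stream_surface(seeds, steps=20, h=0.05)
print(surf.shape)  # (21, 5, 3)
```

The quality of the resulting mesh depends on exactly the factors the abstract lists: the integrator's order and step size, the density of the seed curve, and how the sample grid is tessellated.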
As growth in dataset sizes continues to exceed growth in available bandwidth, new solutions are needed to facilitate efficient visual analysis workflows. Remote visualization can enable the colocation of visual analysis compute resources with simulation compute resources, reducing the impact of bandwidth constraints. While there are many off-the-shelf solutions available for general remoting needs, there is substantial room for improvement in the interactivity they offer, and none focus on supporting stereo remote visualization with programmable error bounds. We propose a novel system enabling efficient compression of stereo video streams using standard codecs that can be integrated with existing remoting solutions, while at the same time offering error constraints that provide users with fidelity guarantees. By taking advantage of interocular coherence, the flexibility permitted by error constraints, and knowledge of scene depth and camera information, our system offers improved remote visualization frame rates.
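The interocular-coherence idea can be illustrated with a minimal depth-based warp: predict the right-eye image from the left-eye image using per-pixel disparity derived from depth and camera parameters, so that only the small residual needs to go to the standard codec. The pinhole disparity model and nearest-pixel forward warp below are assumptions for illustration, not the system's actual warp or error-bounding machinery.

```python
import numpy as np

def predict_right_from_left(left, depth, baseline, focal):
    # Exploit interocular coherence: shift each left-eye pixel by its
    # disparity d = focal * baseline / depth toward the right eye. The
    # right-eye frame can then be encoded as (prediction + residual),
    # and an error constraint would bound that residual's magnitude.
    h, w = left.shape
    right = np.zeros_like(left)          # disoccluded pixels stay empty
    disp = np.rint(focal * baseline / depth).astype(int)
    for y in range(h):
        for x in range(w):
            xr = x - disp[y, x]
            if 0 <= xr < w:
                right[y, xr] = left[y, x]
    return right
```

In practice the warp would run per-frame on the GPU and the residual stream is what benefits from the programmable error bounds.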
KEYWORDS: Visualization, Distortion, 3D displays, Information fusion, Information visualization, Video, Data processing, Mirrors, Data hiding, Visual analytics
Geospatial data are often visualized as 2D cartographic maps with interactive display of detail on-demand. Integration of
the 2D map, which represents high level information, with the location-specific detailed information is a key design issue in
geovisualization. Solutions include multiple linked displays around the map, which can impose cognitive load on the user as the number of links grows, and separate overlaid windowed displays, which cause occlusion of the map. In this paper,
we present a self-adaptive technique which reveals the hidden layers of information in a single display, but minimizes
occlusion of the 2D map. The proposed technique creates extra screen space by invoking controlled deformation of the
2D map. We extend our method to allow simultaneous display of multiple windows at different map locations. Since our
technique is not dependent on the type of information to display, we expect it to be useful to both general users and scientists. Case studies are provided in the paper to demonstrate the utility of the method in occlusion management and
visual exploration.
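A minimal sketch of the controlled-deformation idea, assuming a simple radial falloff (the paper's actual deformation model is not specified here): map vertices near a focus point are pushed outward to open screen space for a detail window, while vertices beyond a cutoff radius are left untouched, keeping the distortion local.

```python
import numpy as np

def deform(points, focus, radius, strength):
    # Push map vertices radially away from `focus`, opening a roughly
    # circular hole for an overlay window. The displacement falls off
    # linearly to zero at distance `radius`, so the rest of the 2D map
    # stays undistorted and occlusion of the map is minimized.
    points = np.asarray(points, dtype=float)
    d = points - focus
    dist = np.linalg.norm(d, axis=1, keepdims=True)
    w = np.clip(1.0 - dist / radius, 0.0, 1.0)   # 1 at focus, 0 at the rim
    return points + strength * w * d / np.maximum(dist, 1e-9)
```

Multiple simultaneous windows, as in the extended method, would simply apply one such displacement field per focus location.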
KEYWORDS: Human-machine interfaces, Particles, Visualization, 3D modeling, 3D visualizations, Inspection, Data modeling, Visual analytics, 3D displays, Algorithm development
While there have been intensive efforts in developing better 3D flow visualization techniques, little attention has
been paid to the design of better user interfaces and more effective data exploration workflows. In this paper, we
propose a novel graph-based user interface called Flow Web to enable more systematic explorations of 3D flow
data.
The Flow Web is a node-link graph that is constructed to highlight the essential flow structures where a
node represents a region in the field and a link connects two nodes if there exist particles traveling between
the regions. The direction of an edge implies the flow path, and the weight of an edge indicates the number of
particles traveling through the connected nodes. Hierarchical flow webs are created by splitting or merging nodes
and edges to allow for easy understanding of the underlying flow structures. To draw the Flow Web, we adopt
force based graph drawing algorithms to minimize edge crossings, and use a hierarchical layout to facilitate the
study of flow patterns step by step. The Flow Web also supports user queries to the properties of nodes and
links. Examples of queries for node properties include degree, complexity, and associated physical attributes such as velocity magnitude. Queries for edges include weights, flow path lengths, the existence of cycles, and so on. It is also possible to combine multiple queries using operators such as and, or, and not. The Flow Web
supports several types of user interactions. For instance, the user can select nodes from the subgraph returned
by a query and inspect them in greater detail at different levels of the hierarchy.
There are multiple advantages of using the graph-based user interface. One is that the user can identify
regions of interest much more easily since, unlike inspecting 3D regions, there is very little occlusion. It is also
much more convenient for the user to query statistical information about the nodes and links at different levels of
detail. With the Flow Web, it becomes easier for the user to log and track the progress of data exploration which
is crucial for exploring large data sets. We demonstrate how to construct and draw the Flow Web effectively,
and how to query the Flow Web to retrieve useful information from the data. Case studies are provided to
demonstrate the exploration process.
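The graph construction described above can be sketched as follows; here `region_of` and the particle paths are hypothetical stand-ins for the real region decomposition and particle tracing, and the query shown is just one example of an edge-weight filter.

```python
from collections import defaultdict

def build_flow_web(particle_paths, region_of):
    # Nodes are region ids; a directed edge (a, b) exists if some particle
    # travels from region a to region b, and its weight counts how many
    # particles make that transition.
    weights = defaultdict(int)
    for path in particle_paths:
        regions = [region_of(p) for p in path]
        for a, b in zip(regions, regions[1:]):
            if a != b:
                weights[(a, b)] += 1
    return dict(weights)

def query_edges(web, min_weight):
    # Example edge query: keep only links carrying at least
    # `min_weight` particles.
    return {e: w for e, w in web.items() if w >= min_weight}
```

Node queries (degree, attached physical attributes) and boolean combinations of queries would filter the same structure, and merging nodes of this graph yields the coarser levels of the hierarchical Flow Web.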
Rendering large amounts of data results in cluttered visualizations, and it is difficult for a user to pick out regions of interest from contextual data, especially in the presence of occlusion. We incorporate animation into visualization, adding positional motion and opacity change as highlighting mechanisms. By leveraging knowledge of motion perception, we help the user visually single out selected data by rendering it with animation. Our framework for adding animation is the animation transfer function, which maps data values and an animation frame index to a changing visual property. The animation transfer function describes animations for user-selected regions of interest. In addition to the framework, we describe the implementation of animations as a modification of the rendering pipeline. The animation rendering pipeline allows us to easily incorporate animations into existing software- and hardware-based volume renderers.
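A minimal sketch of such an animation transfer function under stated assumptions (the pulse shape, frame count, and returned properties are illustrative choices, not the paper's exact mapping):

```python
import math

def animation_tf(value, frame, selected, n_frames=30):
    # Hypothetical animation transfer function: maps a data value and an
    # animation frame index to a pair of changing visual properties
    # (opacity, vertical offset). Selected values pulse in opacity and bob
    # in position; context data stays static and dim.
    if not selected(value):
        return 0.15, 0.0                      # dim, motionless context data
    phase = 2.0 * math.pi * frame / n_frames
    opacity = 0.6 + 0.4 * (0.5 + 0.5 * math.sin(phase))   # opacity pulse
    y_offset = 2.0 * math.sin(phase)          # positional motion highlight
    return opacity, y_offset
```

A renderer would evaluate this per sample per frame, which is why the technique slots into the rendering pipeline rather than into the data itself.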
Existing texture advection techniques produce unsatisfactory rendering results when there is a discrepancy between the resolution of the flow field and that of the output image. This is because many existing texture advection techniques, such as Line Integral Convolution (LIC), are inherently not view-dependent; that is, the resolution of the output textures depends only on the resolution of the input field, not on the resolution of the output image. When the resolution of the flow field after projection is much higher than the screen resolution, aliasing occurs unless the flow textures are appropriately filtered through expensive post-processing. Conversely, when the resolution of the flow field is much lower than the screen resolution, a blocky or blurred appearance results because the flow texture does not have enough samples. In this paper we present a view-dependent multiresolution flow texture advection method for structured rectilinear and curvilinear meshes. Our algorithm is based on a novel intermediate representation of the flow field, called the trace slice, which allows us to compute the flow texture at a desired resolution interactively based on the run-time viewing parameters. As the user zooms in and out of the field, the resolution of the resulting flow texture adapts automatically, so that enough flow detail is presented while aliasing is avoided. Our implementation utilizes mipmapping and the programmable GPUs of modern graphics hardware.
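The view-dependent resolution choice can be illustrated with a simple mipmap-style level selection, assuming each level halves the flow-texture resolution. This is a sketch of the resolution policy only; the trace-slice representation itself is not shown.

```python
import math

def mip_level(field_res, projected_px, max_level):
    # Choose the flow-texture resolution level so that roughly one flow
    # sample maps to one output pixel: level 0 is full resolution and each
    # level halves it, as in a mipmap pyramid. Zooming out raises the
    # level (avoiding aliasing); zooming in lowers it toward level 0, but
    # never sharper than the data itself.
    if projected_px <= 0:
        return max_level
    level = math.log2(max(field_res / projected_px, 1.0))
    return min(int(round(level)), max_level)
```

On the GPU the same decision falls out of hardware mipmap sampling driven by the run-time viewing parameters.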
We present an interactive visualization technique for spatial probability density function data. These are datasets that represent a spatial collection of random variables, and contain a number of possible outcomes for each random variable. It is impractical to visualize all the information at each spatial location as it will quickly lead to a cluttered image. We advocate the use of hierarchical clustering as a means of summarizing the information, and also as a tool to bring out meaningful spatial structures in the datasets. For clustering, we discuss a distance function which preserves the spatial correlation present in these datasets. To create an informative visualization of the clusters, we introduce a scheme of colors and patterns to represent statistical properties of the clusters.
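A hedged sketch of such a clustering distance, assuming scalar outcomes and a total-variation histogram term; the paper's actual distance function differs in detail, and the `alpha` blend is an illustrative way to preserve spatial coherence.

```python
import numpy as np

def pdf_distance(loc_a, samples_a, loc_b, samples_b,
                 alpha=0.5, bins=16, value_range=(0.0, 1.0)):
    # Hypothetical distance for clustering spatial PDF data: blend the
    # dissimilarity of the two locations' outcome distributions with their
    # spatial separation, so hierarchical clustering groups locations that
    # are both statistically similar and spatially close.
    ha, _ = np.histogram(samples_a, bins=bins, range=value_range, density=True)
    hb, _ = np.histogram(samples_b, bins=bins, range=value_range, density=True)
    bin_width = (value_range[1] - value_range[0]) / bins
    stat = 0.5 * np.abs(ha - hb).sum() * bin_width   # total variation, in [0, 1]
    spatial = np.linalg.norm(np.asarray(loc_a) - np.asarray(loc_b))
    return alpha * stat + (1.0 - alpha) * spatial
```

Feeding a matrix of such pairwise distances to standard agglomerative clustering yields the hierarchy that the visualization then summarizes with its color-and-pattern scheme.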