One of the most common ways of capturing wide field-of-view scenes is by recording panoramic videos. Using an array of cameras with limited overlap between corresponding images, one can generate good panorama images. Using the panorama, several immersive display options can be explored. There is a twofold synchronization problem associated with such a system. The first is temporal synchronization, but this challenge can easily be handled by using a common triggering solution to control the shutters of the cameras. The other synchronization challenge is automatic exposure synchronization, which does not have a straightforward solution, especially in a wide-area scenario where the light conditions are uncontrolled, as in the case of an open, outdoor football stadium. In this paper, we present the challenges and approaches for creating a completely automatic real-time panoramic capture system with a particular focus on the camera settings. One of the main challenges in building such a system is that there is no common area of the pitch, visible to all the cameras, that can be used for metering the light in order to find appropriate camera parameters. One approach we tested is to use the green color of the field grass. Such an approach provided us with acceptable results only in limited light conditions. A second approach was devised where the overlapping areas between adjacent cameras are exploited, thus creating pairs of perfectly matched video streams. However, some disparity still existed between different pairs. We finally developed an approach where the time between two temporal frames is exploited to communicate the exposures among the cameras, with which we achieve a perfectly synchronized array. An analysis of the system and some experimental results are presented in this paper. In summary, a pilot-camera approach running in auto-exposure mode and then distributing the used exposure values to the other cameras seems to give the best visual results.
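The pilot-camera approach summarized above can be sketched roughly as follows. All class and method names here are hypothetical illustrations (the paper's actual camera API is not shown): one camera meters the scene in auto-exposure mode, and its settings are pushed to the remaining cameras in the idle time between two frame triggers.

```python
# Hedged sketch of a pilot-camera exposure-synchronization scheme.
# Camera, Exposure and sync_array are illustrative names, not the
# authors' real interfaces.

from dataclasses import dataclass

@dataclass
class Exposure:
    shutter_us: int   # shutter time in microseconds
    gain_db: float    # analog gain in dB

class Camera:
    def __init__(self, cam_id: int):
        self.cam_id = cam_id
        self.exposure = Exposure(shutter_us=8000, gain_db=0.0)

    def apply(self, exposure: Exposure) -> None:
        # A real system would write these values over the camera link here.
        self.exposure = exposure

def sync_array(pilot: Camera, others: list[Camera]) -> None:
    """Distribute the pilot's auto-metered exposure to every other camera.

    Intended to run between two frame triggers, so that all cameras
    capture the next frame with identical settings.
    """
    for cam in others:
        cam.apply(pilot.exposure)

pilot = Camera(0)
array = [Camera(i) for i in range(1, 5)]
pilot.exposure = Exposure(shutter_us=4000, gain_db=2.0)  # pilot re-metered
sync_array(pilot, array)
assert all(cam.exposure == pilot.exposure for cam in array)
```

Because the shutters are already triggered by a common signal, piggybacking the exposure values on that control channel gives every camera identical settings for the next frame.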
Distributed interactive applications tend to have stringent latency requirements, and some may have high bandwidth demands. Many of them also have very dynamic user groups for which all-to-all communication is needed. In online multiplayer games, for example, such groups are determined through region-of-interest management in the application. We have investigated a variety of group management approaches for overlay networks in earlier work and shown that several useful tree heuristics exist. However, these heuristics require full knowledge of all overlay link latencies. Since this is not scalable, we investigate the effects that latency estimation techniques have on the quality of overlay tree constructions. We do this by evaluating one example of our group management approaches in PlanetLab and examining how latency estimation techniques influence its quality. Specifically, we investigate how two well-known latency estimation techniques, Vivaldi and Netvigator, affect the quality of tree building.
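For readers unfamiliar with the first of the two techniques named above, a minimal textbook version of a Vivaldi-style coordinate update looks like this; it is a generic sketch, not the exact implementation evaluated in the paper.

```python
# Vivaldi assigns each node a synthetic coordinate; the distance between
# two coordinates predicts the RTT between the nodes, so only a few real
# measurements are needed instead of all-to-all probing.
import math
import random

def vivaldi_update(own, peer, measured_rtt, delta=0.25):
    """Move `own` coordinate so its distance to `peer` better matches RTT."""
    dist = math.dist(own, peer)
    error = measured_rtt - dist          # positive: nodes are "too close"
    if dist > 1e-9:
        direction = [(o - p) / dist for o, p in zip(own, peer)]
    else:
        direction = [random.gauss(0, 1) for _ in own]  # avoid a zero vector
    return [o + delta * error * d for o, d in zip(own, direction)]

def estimate_latency(a, b):
    """Predicted RTT between two nodes is their coordinate distance."""
    return math.dist(a, b)

# Repeated updates pull the predicted distance toward the measured RTT.
own, peer = [0.0, 0.0], [10.0, 0.0]
for _ in range(50):
    own = vivaldi_update(own, peer, measured_rtt=40.0)
print(round(estimate_latency(own, peer)))  # prints 40
```

Tree-building heuristics can then query `estimate_latency` for any node pair without ever having measured that particular overlay link.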
Content distribution networks (CDNs) are a popular service for the dissemination of multimedia content over wide areas. The existence of a centralized administrative structure makes them attractive for the commercial distribution of high-quality content. By sharing resources, service providers can implement their services more efficiently than a single content provider that establishes its own distribution structure. An efficient operation requires cost estimations that allow service providers to determine the dimensioning of their infrastructure and the placement of content in the system. In the case of video streaming, distribution mechanisms that exploit multicast, segmented delivery and out-of-order delivery can be applied to merge streams and reduce resource consumption. Several applicable stream merging mechanisms exist in the literature. We examine three such mechanisms, namely patching, gleaning and prefix caching, in a hierarchically organized CDN. We show that a co-optimization of movie placement and stream merging mechanism has an undesirable effect on quality by delivering highly popular movies over longer distances than less popular ones. We explore and compare two approaches for overcoming this problem by qualifying the placement optimization with additional conditions. We find that in this case, straightforward sorting is a good solution.
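The first of the three mechanisms, patching, can be illustrated with a small sketch; the function name and the simplified cost model are assumptions for illustration only.

```python
# Patching: a client arriving shortly after a full-length multicast has
# started joins that multicast for the remainder of the movie and
# receives only the missed prefix as a short unicast "patch" stream.

def patching_cost(arrival, last_full_start, movie_len, threshold):
    """Return (server-seconds sent for this client, new full stream?).

    If the client arrives within `threshold` seconds of the last full
    multicast, it shares that stream and needs only a patch of length
    `arrival - last_full_start`; otherwise a new full stream is started.
    """
    offset = arrival - last_full_start
    if 0 <= offset <= threshold:
        return offset, False            # patch only; full stream is shared
    return movie_len, True              # start a fresh full multicast

# A client arriving 30 s after the full stream of a 5400 s movie started
# needs only a 30 s patch.
print(patching_cost(arrival=130, last_full_start=100,
                    movie_len=5400, threshold=600))  # prints (30, False)
```

Gleaning and prefix caching refine this idea further by involving caches in the hierarchy, but the core saving comes from the same stream-sharing principle.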
KEYWORDS: Video, Internet, Control systems, OSLO, Multimedia, Local area networks, Computing systems, Network architectures, Video coding, Computer programming
This paper investigates an architecture and implementation for the use of a TCP-friendly protocol in a scalable video distribution system for hierarchically encoded layered video. The design supports a variety of heterogeneous clients, because recent developments have shown that access network and client capabilities differ widely in today's Internet. The distribution system presented here consists of video servers, proxy caches and clients that make use of TCP-friendly rate control (TFRC) to perform congestion-controlled streaming of layer-encoded video. The data transfer protocol of the system is RTP compliant, yet it integrates protocol elements for congestion control with protocol elements for retransmission, which is necessary for lossless transfer of content into proxy caches. The control protocol RTSP is used to negotiate capabilities, such as support for congestion control or retransmission.
Through tests performed with our experimental platform, both in the lab and over the Internet, we show that congestion-controlled streaming of layer-encoded video through proxy caches is a valid means of supporting heterogeneous clients. We show that filtering layers depending on the TFRC-controlled permissible bandwidth allows the preferred delivery of the most relevant layers to end systems while additional layers can be delivered to the cache server. We also experiment with uncontrolled delivery from the proxy cache to the client, which may result in random loss and wasted bandwidth but also higher goodput, and compare the two approaches.
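The layer-filtering step described above can be sketched as a greedy fill against the TFRC-permitted rate; the function name and the example layer rates are illustrative assumptions.

```python
# Given the permissible bandwidth reported by TFRC, forward as many
# layers (base layer first) as fit within that rate.

def select_layers(layer_rates_kbps, permissible_kbps):
    """Return indices of the layers that fit into the TFRC-allowed rate.

    Layers are cumulative: layer i is useful only if layers 0..i-1 are
    also delivered, so we fill greedily from the base layer upward.
    """
    chosen, used = [], 0
    for i, rate in enumerate(layer_rates_kbps):
        if used + rate > permissible_kbps:
            break                 # higher layers would exceed the fair share
        chosen.append(i)
        used += rate
    return chosen

# Base layer plus three enhancement layers at 256 kbps each; TFRC
# currently allows 700 kbps, so only the two lowest layers pass.
print(select_layers([256, 256, 256, 256], 700))  # prints [0, 1]
```

Because TFRC re-estimates the permissible rate continuously, the filter is re-evaluated per interval, so the layer count tracks the congestion state of the path.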
In contrast to classical assumptions in Video-on-Demand (VoD) research, the main requirements for VoD in the Internet are adaptiveness, support of heterogeneity, and, last but not least, high scalability. Hierarchically layered video encoding is particularly well suited to providing adaptiveness and heterogeneity support for video streaming. A distributed caching architecture is key to a scalable VoD solution in the Internet. Thus, the combination of caching and layered video streaming is promising for an Internet VoD system, yet it raises new issues and challenges, e.g., how to keep layered transmissions TCP-friendly. In this paper, we investigate one of these issues in particular: how can a TCP-friendly transmission exploit its fair share of network resources, given that the coarse granularity of layer-encoded video inhibits an exact adaptation to actual transmission rates? We present a new technique that makes use of retransmissions of missing segments of a cached layered video to claim the fair share within a TCP-friendly session. Based on simulative experiments, we show the potential and applicability of this technique, which we call fair share claiming. Moreover, we devise a design for the integration of fair share claiming into streaming applications that are supported by caching.
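A minimal sketch of the fair-share-claiming idea, under the simplifying assumption of fixed per-segment rates and a single scheduling round (all names are illustrative, not the paper's API):

```python
# The granularity of layered video leaves a gap between the TFRC fair
# share and the rate of the layers actually streamed; that gap is filled
# with retransmissions of segments missing from the cached copy.

def claim_fair_share(fair_share_kbps, streamed_layer_kbps,
                     missing_segments, segment_kbps):
    """Pick missing segments to retransmit inside the unused fair share."""
    leftover = fair_share_kbps - sum(streamed_layer_kbps)
    n = max(0, int(leftover // segment_kbps))  # segments that fit the gap
    return missing_segments[:n]

# TFRC allows 800 kbps but the streamed layers use only 640 kbps; the
# remaining 160 kbps carry two 64 kbps retransmitted segments this round.
print(claim_fair_share(800, [256, 256, 128],
                       ["seg1", "seg2", "seg3"], 64))  # prints ['seg1', 'seg2']
```

The session thus stays TCP-friendly (it never exceeds the TFRC rate) while the cache's copy of the video is progressively completed.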
Internet video-on-demand (VoD) today streams videos directly from servers to clients, because re-distribution infrastructures are not yet established. Intranet solutions exist but are typically managed centrally. Caching may eliminate these management needs; however, existing web caching strategies are not applicable because they were designed for different conditions. We propose movie distribution by means of caching and study its feasibility from the service providers' point of view. We introduce the combination of our reliable multicast protocol LCRTP for caching hierarchies with our enhancement of the patching technique for bandwidth-friendly True VoD, without depending on network resource guarantees.
One major problem of using multimedia material in lecturing is the trade-off between currency of the content and quality of the presentations. Content needs frequent refreshment, but high-quality presentations cannot be authored by the individual teacher alone at the required rate. Several past and current projects have had the goal of developing so-called learning archives, a variation of digital libraries. On demand, these deliver material with limited structure to students. For lecturing, these systems provide a service just as insufficient as the unreliable WWW. Based on our system HyNoDe [HYN97], we address these issues in our distributed media server built of 'medianodes.' We add content management that addresses teachers' needs and provide guaranteed service for connected as well as disconnected operation of their presentation systems. Medianode aims at a scenario for non-real-time, shared creation and modification of presentations and presentation elements. It provides user authentication, administrative roles and authorization mechanisms. It requires an understanding of consistency, versioning and alternative content tailored to lecturing. To allow for predictable presentation quality, medianode provides application-level QoS supporting alternative media and alternative presentations. Viable presentation tracks are dynamically generated based on user requests, user profiles and hardware profiles. For machines that are removed from the system according to a schedule, the system guarantees availability of consistent, complete tracks of selected presentations at disconnect time. In this paper, we present the scope of the medianode project and afterwards its architecture, following the realization steps.
Today's interactive television systems use proprietary communication protocols and interchange formats. To provide interoperability at the application level, the next generation of interactive television systems will be based on standardized communication protocols and monomedia and multimedia formats. This paper presents the Globally Accessible Services (GLASS) system, a prototype interactive television system based on the Multimedia and Hypermedia Expert Group (MHEG) standard. After a brief introduction to MHEG as the multimedia interchange format between application server and set-top box in interactive television systems, the GLASS clients and servers are described, and an example scenario for navigation in the GLASS system is provided.